Developers-5.4.0
Unlimited Release
December 2009
Updated November 7, 2013
William J. Bohnhoff
Radiation Transport Department
Keith R. Dalbey
Mission Analysis and Simulation Department
John P. Eddy
System Readiness and Sustainment Technologies Department
Kenneth T. Hu
Validation and Uncertainty Quantification Department
Dena M. Vigil
Multiphysics Simulation Technologies Department
Abstract
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers.
This report serves as a developers manual for the Dakota software and describes the Dakota class hierarchies
and their interrelationships. It derives directly from annotation of the source code and provides detailed class
documentation, including all member functions and attributes.
8 Namespace Index
8.1 Namespace List
9 Hierarchical Index
9.1 Class Hierarchy
10 Class Index
10.1 Class List
11 File Index
11.1 File List
12 Namespace Documentation
12.1 Dakota Namespace Reference
12.2 SIM Namespace Reference
Author:
Brian M. Adams, Lara E. Bauman, William J. Bohnhoff, Keith R. Dalbey, John P. Eddy, Mohamed S. Ebeida,
Michael S. Eldred, Patricia D. Hough, Kenneth T. Hu, John D. Jakeman, Laura P. Swiler, J. Adam Stephens,
Dena M. Vigil
1.1 Introduction
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible, extensible interface between analysis codes and iteration methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, stochastic expansion, and interval estimation methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study capabilities. (Solution verification and Bayesian approaches are also in development.) These capabilities may be used on their own or as components within advanced algorithms such as surrogate-based optimization, mixed integer nonlinear programming, mixed aleatory-epistemic uncertainty quantification, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible problem-solving environment for design and performance analysis of computational models on high performance computers.
The Developers Manual focuses on documentation of Dakota design principles and class structures; it derives
principally from annotated source code. For information on input command syntax, refer to the Reference
Manual, and for more details on Dakota features and capabilities, refer to the Users Manual.
an optimization iterator and would employ truth models layered within surrogate models. Thus, iterators and
models provide both stand-alone capabilities as well as building blocks for more sophisticated studies.
A model contains a set of variables, an interface, and a set of responses, and the iterator operates on the model
to map the variables into responses using the interface. Each of these components is a flexible abstraction with a
variety of specializations for supporting different types of iterative studies. In a Dakota input file, the user specifies
these components through strategy, method, model, variables, interface, and responses keyword specifications.
The use of class hierarchies provides a mechanism for extensibility in Dakota components. In each of the various
class hierarchies, adding a new capability typically involves deriving a new class and providing a set of virtual
function redefinitions. These redefinitions define the coding portions specific to the new derived class, with the
common portions already defined at the base class. Thus, with a small amount of new code, the existing facilities
can be extended, reused, and leveraged for new purposes. The following sections tour Dakota’s class organization.
1.2.1 Strategies
• SingleMethodStrategy: the simplest strategy. A single iterator is run on a single model to perform a single
study.
• HybridStrategy: hybrid minimization using a set of iterators employing a corresponding set of models
of varying fidelity. Coordination approaches among the iterators include collaborative, embedded, and
sequential approaches, as embodied in the CollaborativeHybridStrategy, EmbeddedHybridStrategy, and
SequentialHybridStrategy derived classes.
• ConcurrentStrategy: two similar algorithms are available: (1) multi-start iteration from several different
starting points, and (2) pareto set optimization for several different multiobjective weightings. Employs
a single iterator with a single model, but runs multiple instances of the iterator concurrently for different
settings within the model.
1.2.2 Iterators
Class hierarchy: Iterator. Iterator implementations may choose to split operations up into run-time phases as
described in Understanding Iterator Flow.
The iterator hierarchy contains a variety of iterative algorithms for optimization, uncertainty quantification, nonlinear least squares, design of experiments, and parameter studies. The hierarchy is divided into Minimizer and Analyzer branches. The Minimizer classes address optimization and deterministic calibration and are grouped into:
• Optimization: Optimizer provides a base class for the DOTOptimizer, CONMINOptimizer, NPSOLOptimizer, NLPQLPOptimizer, NonlinearCGOptimizer, and SNLLOptimizer gradient-based optimization libraries and the APPSOptimizer (supported by APPSEvalMgr for function evaluations), COLINOptimizer (supported by COLINApplication for function evaluations), JEGAOptimizer, and NCSUOptimizer nongradient-based optimization methods and libraries.
• Parameter estimation: LeastSq provides a base class for NL2SOLLeastSq, a least-squares solver based
on NL2SOL, SNLLLeastSq, a Gauss-Newton least-squares solver, and NLSSOLLeastSq, an SQP-based
least-squares solver.
• Surrogate-based minimization (both optimization and nonlinear least squares): SurrBasedMinimizer provides a base class for SurrBasedLocalMinimizer, SurrBasedGlobalMinimizer, and EffGlobalMinimizer. The surrogate-based local and global methods employ a single iterator with any of the available SurrogateModel capabilities (local, multipoint, or global data fits or hierarchical approximations) and perform a sequence of approximate optimizations, each involving build, optimize, and verify steps. The efficient global method, on the other hand, hard-wires a recursion involving Gaussian process surrogate models coupled with the DIRECT global optimizer to maximize an expected improvement function.
• Uncertainty quantification: NonD provides a base class for the non-deterministic methods NonDSampling, NonDReliability (reliability analysis), NonDExpansion (stochastic expansion methods), NonDInterval (interval-based epistemic methods), NonDCalibration (nondeterministic calibration), and EfficientSubspaceMethod (a prototype input-space dimension reduction method for UQ).
– NonDSampling is further specialized with the NonDLHSSampling class for Latin hypercube and Monte Carlo sampling, the NonDIncremLHSSampling class for incremental Latin hypercube sampling, NonDAdaptImpSampling for multimodal adaptive importance sampling, and NonDGPImpSampling for Gaussian process-based importance sampling.
– NonDReliability is further specialized with local and global methods (NonDLocalReliability and
NonDGlobalReliability).
– NonDExpansion includes specializations for generalized polynomial chaos (NonDPolynomialChaos) and stochastic collocation (NonDStochCollocation) and is supported by the NonDIntegration helper class, which supplies cubature, tensor-product quadrature, and Smolyak sparse grid methods (NonDCubature, NonDQuadrature, and NonDSparseGrid).
– NonDCalibration provides a base class for nondeterministic calibration methods with specialization to
Bayesian calibration in NonDBayesCalibration. Specific Bayesian calibration implementations exist
in NonDGPMSABayesCalibration and NonDQUESOBayesCalibration.
– NonDInterval provides a base class for epistemic interval-based UQ methods. Three interval analysis approaches are provided: LHS (NonDLHSInterval), efficient global optimization (NonDGlobalInterval), and local optimization (NonDLocalInterval). Each of these three has specializations for single interval (NonDLHSSingleInterval, NonDGlobalSingleInterval, NonDLocalSingleInterval) and Dempster-Shafer Theory of Evidence (NonDLHSEvidence, NonDGlobalEvidence, NonDLocalEvidence) approaches.
• Parameter studies and design of experiments: PStudyDACE provides a base class for ParamStudy, which provides capabilities for directed parameter space interrogation; PSUADEDesignCompExp, which provides access to the Morris One-At-a-Time (MOAT) method for parameter screening; and DDACEDesignCompExp and FSUDesignCompExp, which provide for parameter space exploration through design and analysis of computer experiments. NonDLHSSampling from the uncertainty quantification branch also supports design of experiments when in active all variables mode.
• Solution verification studies: Verification provides a base class for RichExtrapVerification (verification via
Richardson extrapolation) and other solution verification methods in development.
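For reference, the expected improvement function noted under surrogate-based minimization above is commonly written in the standard form from the EGO literature (the exact variant implemented in EffGlobalMinimizer may differ), where the Gaussian process supplies a predictive mean and standard deviation at each candidate point:

```latex
% f_min: best objective value observed so far;
% \hat{\mu}(x), \hat{\sigma}(x): Gaussian process mean and standard deviation;
% \Phi, \phi: standard normal CDF and PDF.
\mathrm{EI}(x) \;=\; \bigl(f_{\min} - \hat{\mu}(x)\bigr)\,\Phi(z)
              \;+\; \hat{\sigma}(x)\,\phi(z),
\qquad
z \;=\; \frac{f_{\min} - \hat{\mu}(x)}{\hat{\sigma}(x)}
```

The first term rewards predicted improvement over the incumbent; the second rewards predictive uncertainty, which is what drives the global exploration behavior of the method.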
1.2.3 Models
The model classes are responsible for mapping variables into responses when an iterator makes a function evaluation request. There are several types of models, some supporting sub-iterators and sub-models for enabling layered and nested relationships. When sub-models are used, they may be of arbitrary type so that a variety of recursions are supported.
• SingleModel: variables are mapped into responses using a single Interface object. No sub-iterators or
sub-models are used.
• SurrogateModel: variables are mapped into responses using an approximation. The approximation is built
and/or corrected using data from a sub-model (the truth model) and the data may be obtained using a sub-
iterator (a design of experiments iterator). SurrogateModel has two derived classes: DataFitSurrModel
for data fit surrogates and HierarchSurrModel for hierarchical models of varying fidelity. The relationship
of the sub-iterators and sub-models is considered to be "layered" since they are not used as part of every
response evaluation on the top level model, but rather used periodically in surrogate update and verification
steps.
• NestedModel: variables are mapped into responses using a combination of an optional Interface and a sub-
iterator/sub-model pair. The relationship of the sub-iterators and sub-models is considered to be "nested"
since they are used to perform a complete iterative study as part of every response evaluation on the top
level model.
• RecastModel: recasts the inputs and outputs of a sub-model for the purposes of variable transformations (e.g., variable scaling, transformations to standardized random variables) and problem reformulation (e.g., multiobjective optimization, response scaling, augmented Lagrangian merit functions, expected improvement).
1.2.4 Variables
• MixedVariables: domain type distinctions are retained, such that separate continuous, discrete integer, and
discrete real domain types are managed. This is the default Variable perspective, and draws its name from
"mixed continuous-discrete" optimization.
• RelaxedVariables: domain types are combined through relaxation of discrete constraints; i.e., continuous and discrete variables are merged into continuous arrays through relaxation of integrality (for discrete integer ranges) or set membership (for discrete integer or discrete real sets) requirements. The branch and bound minimizer is the only method using this approach at present.
Whereas domain types are defined based on the derived Variables class selection, the selection of active variable
types is handled within each of these derived classes using variable views. These permit different algorithms to
work on different subsets of variables. Data shared among Variables instances is stored in SharedVariablesData.
For details on managing variables, see Working with Variable Containers and Views.
The Constraints hierarchy manages bound, linear, and nonlinear constraints and utilizes the same specializations
for managing bounds on the variables (see MixedVarConstraints and RelaxedVarConstraints).
1.2.5 Interfaces
• SysCallApplicInterface: the simulation is invoked using a system call (the C function system()). Asynchronous invocation utilizes a background system call. Utilizes the CommandShell utility.
• ForkApplicInterface: the simulation is invoked using a fork (the fork/exec/wait family of functions). Asynchronous invocation utilizes a nonblocking fork.
• SpawnApplicInterface: for Windows, fork is replaced by spawn. Asynchronous invocation utilizes a nonblocking spawn.
ForkApplicInterface and SpawnApplicInterface inherit from ProcessHandleApplicInterface, while SysCallApplicInterface and ProcessHandleApplicInterface in turn inherit from ProcessApplicInterface. A semi-intrusive approach is also supported by:
• DirectApplicInterface: the simulation is linked into the Dakota executable and is invoked using a procedure call. Asynchronous invocations will utilize nonblocking threads (a capability not yet available). Specializations of the direct interface are implemented in MatlabInterface, PythonInterface, ScilabInterface, and (for built-in testers) TestDriverInterface, while examples of plugin interfaces for library mode in serial and parallel, respectively, are included in SerialDirectApplicInterface and ParallelDirectApplicInterface.
Scheduling of jobs for asynchronous local, message passing, and hybrid parallelism approaches is performed in
the ApplicationInterface class, with job initiation and job capture specifics implemented in the derived classes.
In the approximation case, global, multipoint, or local data fit approximations to simulation code response data can be built and used as surrogates for the actual, expensive simulation. The interface class providing this capability is:
• ApproximationInterface: builds an approximation using data from a truth model and then employs the approximation for mapping variables to responses. It is an essential component within the DataFitSurrModel capability described above in Models. This class contains an array of Approximation objects, one per response function, which support a variety of approximation types using the different Approximation derived classes. These include SurfpackApproximation (provides kriging, MARS, moving least squares, neural network, polynomial regression, and radial basis functions), GaussProcApproximation (Gaussian process models), PecosApproximation (multivariate orthogonal and Lagrange interpolation polynomials from Pecos), TANA3Approximation (two-point adaptive nonlinearity approximation), and TaylorApproximation (local Taylor series).
1.2.6 Responses
Class: Response.
The Response class provides an abstract data representation of response functions and their first and second
derivatives (gradient vectors and Hessian matrices). These response functions can be interpreted as objective
functions and constraints (optimization data set), residual functions and constraints (least squares data set), or
generic response functions (uncertainty quantification data set). This class is not currently part of a class hierarchy,
since the abstraction has been sufficiently general and has not required specialization.
1.3 Services
A variety of services are provided in Dakota for parallel computing, failure capturing, restart, graphics, etc. An
overview of the classes and member functions involved in performing these services is included below.
• Multilevel parallel computing: Dakota supports multiple levels of nested parallelism. A strategy can manage concurrent iterators, each of which manages concurrent function evaluations, each of which manages concurrent analyses executing on multiple processors. Partitioning of these levels with MPI communicators is managed in ParallelLibrary, and scheduling routines for the levels are part of Strategy, ApplicationInterface, and ForkApplicInterface.
• Parsing: Dakota employs the NIDR parser (New Input Deck Reader) to retrieve information from user input files. Parsing options are processed in CommandLineHandler, and parsing occurs in ProblemDescDB::manage_inputs(), called from main.cpp. NIDR uses the keyword handlers in the NIDRProblemDescDB derived class to populate data within the ProblemDescDB base class, which maintains a DataStrategy specification and lists of DataMethod, DataModel, DataVariables, DataInterface, and DataResponses specifications. Procedures for modifying the parsing subsystem are described in Instructions for Modifying Dakota's Input Specification.
• Failure capturing: simulation failures can be trapped and managed using exception handling in ApplicationInterface and its derived classes.
• Restart: Dakota maintains a record of all function evaluations both in memory (for capturing any duplication) and on the file system (for restarting runs). Restart options are processed in CommandLineHandler and retrieved in ParallelLibrary::specify_outputs_restart(); restart file management occurs in ParallelLibrary::manage_outputs_restart(); and restart file insertions occur in ApplicationInterface. The dakota_restart_util executable, built from restart_util.cpp, provides a variety of services for interrogating, converting, repairing, concatenating, and post-processing restart files.
• Memory management: Dakota employs the techniques of reference counting and representation sharing through the use of letter-envelope and handle-body idioms (Coplien, "Advanced C++"). The former idiom provides for memory efficiency and enhanced polymorphism in the following class hierarchies: Strategy, Iterator, Model, Variables, Constraints, Interface, ProblemDescDB, and Approximation. The latter idiom provides for memory efficiency in data-intensive classes which do not involve a class hierarchy; the Response and parser data (DataStrategy, DataMethod, DataModel, DataVariables, DataInterface, and DataResponses) classes use this idiom. When managing reference-counted data containers (e.g., Variables or Response objects), it is important to properly manage shallow and deep copies, to allow for both efficiency and data independence as needed in a particular context.
• Graphics and Output: Dakota provides 2D iteration history graphics using Motif widgets. Graphics data
can also be cataloged in a tabular data file for post-processing with 3rd party tools such as Matlab, Tecplot,
etc. These capabilities are encapsulated within the Graphics class. An experimental results database is
implemented in ResultsManager and ResultsDBAny.
• Coding Style Guidelines and Conventions - coding practices used by the Dakota development team.
• Instructions for Modifying Dakota’s Input Specification - how to interact with NIDR and the associated
Dakota classes.
• Interfacing with Dakota as a Library - embed Dakota as a service within your application.
• Understanding Iterator Flow - explanation of the full granularity of steps in Iterator execution.
• Performing Function Evaluations - an overview of the classes and member functions involved in performing
function evaluations synchronously or asynchronously.
• Working with Variable Containers and Views - discussion of data storage for variables and explanation of
active and inactive views of this data.
2.1 Introduction
Common code development practices can be extremely useful in multiple developer environments. Particular
styles for code components lead to improved readability of the code and can provide important visual cues to
other developers. Much of this recommended practices document is borrowed from the CUBIT mesh generation
project, which in turn borrows its recommended practices from other projects, yielding some consistency across
Sandia projects. While not strict requirements, these guidelines suggest a best-practices starting point for coding
in Dakota.
Class names should be composed of two or more descriptive words, with the first character of each word capitalized, e.g.:
class ClassName;
Class member variables should be composed of two or more descriptive words, with the first character of the
second and succeeding words capitalized, e.g.:
double classMemberVariable;
Temporary (i.e. local) variables are lower case, with underscores separating words in a multiple word temporary
variable, e.g.:
int temporary_variable;
Constants (i.e. parameters) and enumeration values are upper case, with underscores separating words, e.g.:
const int MAX_ITERATIONS = 100;
Function names are lower case, with underscores separating words, e.g.:
int function_name();
There is no need to distinguish between member and non-member functions by style, as this distinction is usually clear by context. This style convention allows member function names which set and return the value of a similarly-named private member variable, e.g.:
int memberVariable;
void member_variable(int a) { // set
memberVariable = a;
}
int member_variable() const { // get
return memberVariable;
}
In cases where the data to be set or returned is more than a few bytes, it is highly desirable to employ const
references to avoid unnecessary copying, e.g.:
Note that it is not necessary to always accept the returned data as a const reference. If it is desired to be able to change this data, then accepting the result as a new variable will generate a copy, e.g.:
2.2.3 Miscellaneous
Appearance of typedefs to redefine or alias basic types is isolated to a few header files (data_types.h, template_defs.h), so that issues like program precision can be changed by changing a few lines of typedefs rather than many lines of code, e.g.:
xemacs is the preferred source code editor, as it has C++ modes for enhancing readability through color (turn
on "Syntax highlighting"). Other helpful features include "Paren highlighting" for matching parentheses and the
"New Frame" utility to have more than one window operating on the same set of files (note that this is still the
same edit session, so all windows are synchronized with each other). Window width should be set to 80 internal
columns, which can be accomplished by manual resizing, or preferably, using the following alias in your shell
resource file (e.g., .cshrc):
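The alias itself did not survive extraction; one csh-style possibility consistent with the description (the width of 81 is from the text, while the height and alias form are illustrative) is:

```shell
# .cshrc: open xemacs with an external width of 81 (80 internal columns)
alias xemacs 'xemacs -geometry 81x60'
```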
where an external width of 81 gives 80 columns internal to the window and the desired height of the window
will vary depending on monitor size. This window width imposes a coding standard since you should avoid line
wrapping by continuing anything over 80 columns onto the next line.
Indenting increments are 2 spaces per indent and comments are aligned with the code they describe, e.g.:
cout << "Numerical gradients using " << finiteDiffStepSize*100. << "%"
<< finiteDiffType << " differences\nto be calculated by the "
<< methodSource << " finite difference routine." << endl;
Lastly, #ifdef’s are not indented (to make use of syntax highlighting in xemacs).
• with the introduction of the Dakota namespace, base classes which previously utilized prepended Dakota
identifiers can now safely omit the identifiers. However, since file names do not have namespace protection
from name collisions, they retain the prepended Dakota identifier. For example, a class previously named
DakotaModel which resided in DakotaModel.cpp/hpp, is now Dakota::Model (class Model in namespace
Dakota) residing in the same filenames. The retention of the previous filenames reduces the possibility of
multiple instances of a Model.hpp causing problems. Derived classes (e.g., NestedModel) do not require a
prepended Dakota identifier for either the class or file names.
• in a few cases, it is convenient to maintain several closely related classes in a single file, in which case the file
name may reflect the top level class or some generalization of the set of classes (e.g., DakotaResponse.[CH]
files contain Dakota::Response and Dakota::ResponseRep classes, and DakotaBinStream.[CH] files contain
the Dakota::BiStream and Dakota::BoStream classes).
The type of file is determined by one of the four file name extensions listed below:
• .hpp A class header file ends in the suffix .hpp. The header file provides the class declaration. This file does
not contain code for implementing the methods, except for the case of inline functions. Inline functions are
to be placed at the bottom of the file with the keyword inline preceding the function name.
• .cpp A class implementation file ends in the suffix .cpp. An implementation file contains the definitions of
the members of the class.
• .h A header file ends in the suffix .h. The header file contains information usually associated with proce-
dures. Defined constants, data structures and function prototypes are typical elements of this file.
• .c A procedure file ends in the suffix .c. The procedure file contains the actual procedures.
These tools are no longer used, so remaining comment blocks of this type are informational only and will not
appear in the documentation generated by doxygen.
• Lines should be kept to less than 80 chars per line where possible.
• Wrapped lines may be indented two spaces or aligned with prior lines.
• For ease of viewing and correctness checking in Emacs, a customization file is available:
http://www.cmake.org/CMakeDocs/cmake-mode.el
These variable naming conventions are especially important for those that ultimately become preprocessor defines
and affect compilation of source files.
• Classic/core elements of the CMake language are set in lower_case, e.g., option, set, if, find_library.
• Static arguments to CMake functions and macros are set in UPPER_CASE, e.g. REQUIRED, NO_MODULE, QUIET.
• Minimize "global" variables, i.e., don’t use 2 variables with the same meaning when one will do the job.
• Feature toggling: when possible, use the "HAVE_<pkg/feature>" convention already in use by many
CMake-enabled TPLs, e.g.,
check_function_exists(system HAVE_SYSTEM)
if(HAVE_SYSTEM)
  add_definitions("-DHAVE_SYSTEM")
endif(HAVE_SYSTEM)
Existing uses of this convention can be located with grep, e.g.:
Dakota/packages/CMakeLists.txt:if(HAVE_CONMIN)
Dakota/packages/CMakeLists.txt:endif(HAVE_CONMIN)
• When a variable/preprocessor macro could result in name clashes beyond Dakota scope, e.g., for library_mode users, consider prefixing the "HAVE_<pkg>" name with DAKOTA_, e.g. DAKOTA_HAVE_MPI. Currently, MPI is the only use case for such a variable in Dakota, but many examples can be found in the CMake Modules source, e.g.
To modify Dakota's input specification (for maintenance or addition of new input syntax), specification maintenance mode must be enabled at Dakota configure time with the -DENABLE_SPEC_MAINT option, e.g.,
./cmake -DENABLE_SPEC_MAINT:BOOL=ON ..
This will enable regeneration of NIDR and Dakota components which must be updated following a spec change.
Warning:
• Do not skip this step. Attempts to modify the NIDR_keywds.hpp file in Dakota/src without using the NIDR table generator are very error-prone. Moreover, the input specification provides a reference to the allowable inputs of a particular executable and should be kept in sync with the parser files; modifying the parser files independently of the input specification creates, at a minimum, undocumented features.
• All keywords in dakota.input.nspec are lower case by convention. All user inputs are converted to lower
case by the parser prior to keyword match testing, resulting in case insensitive parsing.
• Since the NIDR parser allows abbreviation of keywords, you must avoid adding a keyword that could
be misinterpreted as an abbreviation for a different keyword within the same top-level keyword, such as
"strategy" and "method". For example, adding the keyword "expansion" within the method specification
would be a mistake if the keyword "expansion_factor" already was being used in this specification.
• The NIDR input is somewhat order-dependent, allowing the same keyword to be reused multiple times
in the specification. This often happens with aliases, such as lower_bounds, upper_bounds
and initial_point. Ambiguities are resolved by attaching a keyword to the most recently seen
context in which it could appear, if such exists, or to the first relevant context that subsequently comes
along in the input file. With the earlier IDR parser, non-exclusive specifications (those not in mutually
exclusive blocks) were required to be unique. That is why there are such aliases for initial_point
as cdv_initial_point and ddv_initial_point: so older input files can be used with no or
fewer changes.
order, and most entities in the section for a top-level keyword are also in alphabetical order. While not required, it
is probably good practice to maintain this structure, as it makes things easier to find.
Any integer, real, or string data associated with a keyword are provided to the keyword’s startfcn, whose second
argument is a pointer to a Values structure, defined in header file nidr.h.
Example 1: if you added the specification:
void NIDRProblemDescDB::
method_setting_start(const char *keyname, Values *val, void **g, void *v)
{ ... }
(and supplies a couple of default values to dm). The start functions for lower-level keywords within the method
keyword get access to dm through their g arguments. Here is an example:
void NIDRProblemDescDB::
method_str(const char *keyname, Values *val, void **g, void *v)
{
  (*(DataMethod**)g)->**(String DataMethod::**)v = *val->s;
}
In this example, v points to a pointer-to-member, and an assignment is made to one of the components of the DataMethod object pointed to by *g. The corresponding stopfcn for the top-level method keyword is
void NIDRProblemDescDB::
method_stop(const char *keyname, Values *val, void **g, void *v)
{
  DataMethod *p = *(DataMethod**)g;
  pDDBInstance->dataMethodList.insert(*p);
  delete p;
}
which copies the now populated DataMethod object to the right place and cleans up.
Example 2: if you added the specification
then method_RealL (defined in NIDRProblemDescDB.cpp) would be called as the startfcn, and methodCoeffs would be the name of a (currently nonexistent) component of DataMethod. The N_mdm macro is defined in NIDRProblemDescDB.cpp; among other things, it turns RealL into NIDRProblemDescDB::method_RealL. This function is used to process lists of REAL values for several keywords. By looking at the source, you can see that the list values are val->r[i] for 0 <= i < val->n.
The implementation of each of these functions contains tables of possible entry_name values and associated
pointer-to-member values. There is one table for each relevant top-level keyword, with the top-level keyword
omitted from the names in the table. Since binary search is used to look for names in these tables, each table must
be kept in alphabetical order of its entry names. For example,
...
else if ((L = Begins(entry_name, "model."))) {
if (dbRep->methodDBLocked)
Locked_db();
#define P &DataModelRep::
static KW<RealVector, DataModelRep> RVdmo[] = { // must be sorted
{"nested.primary_response_mapping", P primaryRespCoeffs},
{"nested.secondary_response_mapping", P secondaryRespCoeffs},
{"surrogate.kriging_conmin_seed", P krigingConminSeed},
{"surrogate.kriging_correlations", P krigingCorrelations},
{"surrogate.kriging_max_correlations", P krigingMaxCorrelations},
{"surrogate.kriging_min_correlations", P krigingMinCorrelations}};
#undef P
is the "model" portion of ProblemDescDB::get_rv(). Based on entry_name, it returns the relevant attribute from a
DataModel object. Since there may be multiple model specifications, the dataModelIter list iterator identifies
which node in the list of DataModel objects is used. In particular, dataModelList contains a list of all of the
data_model objects, one for each time a top-level model keyword was seen by the parser. The particular
model object used for the data retrieval is managed by dataModelIter, which is set in a set_db_list_nodes() operation that will not be described here.
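The table lookup itself can be sketched with std::lower_bound performing the binary search over a sorted array of name/pointer-to-member pairs. Class and attribute names below are illustrative stand-ins, not the real Dakota declarations:

```cpp
#include <algorithm>
#include <cstring>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical data class with two vector-valued attributes.
struct DataModelSketch {
    std::vector<double> primaryRespCoeffs;
    std::vector<double> secondaryRespCoeffs;
};

// Simplified analogue of a KW<...> table entry: a name plus a
// pointer-to-member that addresses the attribute.
struct KWSketch {
    const char* key;
    std::vector<double> DataModelSketch::* ptr;
};

// Sketch of a get_rv()-style lookup: binary-search the table for entry_name
// and return the addressed attribute. The table MUST be kept sorted.
const std::vector<double>& get_rv_sketch(const DataModelSketch& dm,
                                         const std::string& entry_name)
{
    static const KWSketch table[] = {  // must be sorted alphabetically
        {"nested.primary_response_mapping",   &DataModelSketch::primaryRespCoeffs},
        {"nested.secondary_response_mapping", &DataModelSketch::secondaryRespCoeffs}};
    const KWSketch* end = table + sizeof(table) / sizeof(table[0]);
    const KWSketch* it = std::lower_bound(
        table, end, entry_name,
        [](const KWSketch& kw, const std::string& name)
        { return std::strcmp(kw.key, name.c_str()) < 0; });
    if (it == end || entry_name != it->key)
        throw std::runtime_error("unknown entry_name: " + entry_name);
    return dm.*(it->ptr);
}
```

This is why the alphabetical-order requirement is strict: an unsorted table silently breaks the binary search rather than failing loudly.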
There may be multiple DataMethod, DataModel, DataVariables, DataInterface, and/or DataResponses objects.
However, only one strategy specification is currently allowed so a list of DataStrategy objects is not needed.
Rather, ProblemDescDB::strategySpec is the lone DataStrategy object.
To augment the get_<data_type>() functions, add table entries with new identifier strings and pointer-to-member
values that address the appropriate data attributes from the Data class object. The style for the identifier
strings is a top-down hierarchical description, with specification levels separated by periods and words separated
with underscores, e.g., "keyword.group_specification.individual_specification". Use the
dbRep->listIter->attribute syntax for variables, interface, responses, and method specifications. For
example, the method_setting example attribute would be added to get_drv() as:
{"method_name.method_setting", P methodSetting},
inserted at the beginning of the RVdmo array shown above (since the name in the existing first entry, i.e.,
"nested.primary_response_mapping", comes alphabetically after "method_name.method_setting").
Add a new attribute to the public data for each of the new specifications. Follow the style guide for class attribute
naming conventions (or mimic the existing code).
Define defaults for the new attributes in the constructor initialization list. Add the new attributes to the
assign() function for use by the copy constructor and assignment operator. Add the new attributes to the
write(MPIPackBuffer&), read(MPIUnpackBuffer&), and write(ostream&) functions, paying careful attention to
the use of a consistent ordering.
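The consistent-ordering requirement exists because the pack/unpack buffers carry only values in sequence, with no field names. A minimal sketch, using a std::stringstream in place of the MPIPackBuffer/MPIUnpackBuffer pair and illustrative attribute names:

```cpp
#include <sstream>
#include <string>

// Sketch of a Data class: defaults belong in the initializer list (here via
// in-class initializers), and write()/read() must traverse the attributes in
// the SAME order, since the buffer carries no field names.
struct DataSketch {
    int         maxIterations  = 100;               // scalar defaults
    double      convergenceTol = 1e-4;
    std::string methodName     = "optpp_q_newton";  // hypothetical default

    void write(std::ostream& os) const {   // pack: order A, B, C ...
        os << maxIterations << ' ' << convergenceTol << ' ' << methodName;
    }
    void read(std::istream& is) {          // unpack: same order A, B, C ...
        is >> maxIterations >> convergenceTol >> methodName;
    }
};
```

If a new attribute is added to write() but not read() (or inserted at different positions), every attribute after the mismatch is silently corrupted on unpack.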
passes the "interface.type" identifier string to the ProblemDescDB::get_string() retrieval function, which
returns the desired attribute from the active DataInterface object.
Warning:
Use of the get_<data_type>() functions is restricted to class constructors, since only in class constructors are the data list iterators (i.e., dataMethodIter, dataModelIter, dataVariablesIter, dataInterfaceIter, and dataResponsesIter) guaranteed to be set correctly. Outside of the constructors, the database list nodes will correspond to the last set operation, and may not return data from the desired list node.
This page explains the various phases comprising Iterator::run_iterator(). Prior to Iterator construction, when
command-line options are parsed, Boolean run mode flags corresponding to PRERUN, RUN, and POSTRUN are
set in ParallelLibrary. If the user didn’t specify any specific run modes, the default is for all three to be true (all
phases will execute).
Iterator is constructed.
When called, run_iterator() sequences:
• IF PRERUN, invoke pre_run(): virtual function; default no-op. Purpose: derived classes should implement pre_run() if they are able to generate all parameter sets (variables) at once, separate from run(). Derived implementations should call their nearest parent's pre_run(), typically before performing their own steps.
• IF PRERUN, invoke pre_output(): non-virtual function; if user requested, output variables to file.
• IF RUN, invoke virtual function run(). Purpose: at a minimum, evaluate parameter sets through computing responses; for iterators without pre/post capability, their entire implementation is in run() and this is a reasonable default for new Iterators.
• IF POSTRUN, invoke post_input(): virtual function; the default implementation only prints a helpful message about the mode. Purpose: derived iterators supporting post-run input from file must implement it to read the file and populate variables/responses (and possibly best points) appropriately. Implementations must check whether the user requested file input.
• IF POSTRUN, invoke post_run(): virtual function. Purpose: generate statistics / final results. Any analysis that can be done solely on tabular data read by post_input() can be done here. Derived re-implementations should call their nearest parent's post_run(), typically after performing their specific post-run activities.
Iterator is destructed.
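The sequencing above can be sketched as a minimal mock. The flag and trace members below are illustrative (the real mode flags live in ParallelLibrary, not Iterator), but the virtual/non-virtual split and phase order match the description:

```cpp
#include <string>
#include <vector>

// Mock of the run_iterator() phase sequencing: PRERUN/RUN/POSTRUN booleans
// gate the phases, virtual hooks stand in for the pre_run()/run()/post_run()
// family, and trace records the order of execution for illustration.
class IteratorSketch {
public:
    virtual ~IteratorSketch() = default;

    bool preRunFlag = true, runFlag = true, postRunFlag = true;
    std::vector<std::string> trace;  // records phase order

    void run_iterator() {
        if (preRunFlag)  { pre_run(); pre_output(); }
        if (runFlag)     { run(); }
        if (postRunFlag) { post_input(); post_run(); }
    }
protected:
    virtual void pre_run()    { trace.push_back("pre_run"); }
    void pre_output()         { trace.push_back("pre_output"); } // non-virtual
    virtual void run()        { trace.push_back("run"); }
    virtual void post_input() { trace.push_back("post_input"); }
    virtual void post_run()   { trace.push_back("post_run"); }
};
```

With runFlag cleared (e.g., user requested only pre-run and post-run modes), run_iterator() executes pre_run, pre_output, post_input, post_run and skips run entirely.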
5.1 Introduction
It is possible to link the Dakota toolkit into another application for use as an algorithm library. This section
describes facilities which permit this type of integration.
When compiling Dakota with CMake, files in Dakota/src (excepting the main.cpp, restart_util.cpp, and library_mode.cpp main programs) are compiled into libraries that get installed to CMAKE_INSTALL_PREFIX/lib. C/C++ code is in the library dakota_src, while Fortran code lives in the dakota_src_fortran library. Applications may link against these Dakota libraries by specifying appropriate include and
link directives. Depending on the configuration used when building this library, other libraries for the vendor
optimizers and vendor packages will also be needed to resolve Dakota symbols for DOT, NPSOL, OPT++, NCSUOpt, LHS, Teuchos, etc. Copies of these libraries are also placed in Dakota/lib. Refer to Linking against
the Dakota library for additional information.
Warning:
Users may interface to Dakota as a library within other software applications provided
that they abide by the terms of the GNU Lesser General Public License (LGPL). Refer to
http://www.gnu.org/licenses/lgpl.html or contact the Dakota team for additional in-
formation.
Attention:
The use of Dakota as an algorithm library should be distinguished from the linking of simulations within
Dakota using the direct application interface (see DirectApplicInterface). In the former, Dakota is providing
algorithm services to another software application, and in the latter, a linked simulation is providing anal-
ysis services to Dakota. It is not uncommon for these two capabilities to be used in combination, where a
simulation framework provides both the "front end" and the "back end" for Dakota.
interfaces. The file library_mode.cpp in Dakota/src provides example usage of these plug-ins within a mock
simulator program that demonstrates the required object instantiation syntax in combination with the three
problem database population approaches (input file parsing, data node insertion, and mixed mode). All of this
code may be compiled and tested by configuring Dakota using the --with-plugin option.
is replaced with
ParallelLibrary parallel_lib;
In the case of specifying restart files and output streams, the call to
parallel_lib.specify_outputs_restart(cmd_line_handler);
should be replaced with its overloaded form in order to pass the required information through the parameter list
parallel_lib.specify_outputs_restart(std_output_filename, std_error_filename,
read_restart_filename, write_restart_filename, stop_restart_evals);
where file names for standard output and error and restart read and write as well as the integer number of restart
evaluations are passed through the parameter list rather than read from the command line of the main Dakota
program. The definition of these attributes is performed elsewhere in the parent application (e.g., specified in the
parent application input file or GUI). In this function call, specify NULL for any files not in use, which will elicit
the desired subset of the following defaults: standard output and standard error are directed to the terminal, no
restart input, and restart output to file dakota.rst. The stop_restart_evals specification is an optional
parameter with a default of 0, which indicates that restart processing should process all records. If no overrides of
these defaults are intended, the call to specify_outputs_restart() may be omitted entirely.
With respect to alternate forms of ProblemDescDB::manage_inputs(), the following section describes different
approaches to populating data within Dakota’s problem description database. It is this database from which all
Dakota objects draw data upon instantiation.
The simplest approach to linking an application with the Dakota library is to rely on Dakota’s normal parsing
system to populate Dakota’s problem database (ProblemDescDB) through the reading of an input file. The disad-
vantage to this approach is the requirement for an additional input file beyond those already required by the parent
application.
In this approach, the main.cpp call to
problem_db.manage_inputs(cmd_line_handler);
is replaced with
problem_db.manage_inputs(dakota_input_file);
where the file name for the Dakota input is passed through the parameter list rather than read from the command
line of the main Dakota program. Again, the definition of the Dakota input file name is performed elsewhere in
the parent application (e.g., specified in the parent application input file or GUI). Refer to run_dakota_parse() in
library_mode.cpp for a complete example listing.
ProblemDescDB::manage_inputs() invokes ProblemDescDB::parse_inputs() (which in turn invokes
ProblemDescDB::check_input()), ProblemDescDB::broadcast(), and ProblemDescDB::post_process(), which
are lower level functions that will be important in the following two sections. Thus, the input file parsing
approach may employ a single coarse grain function to coordinate all aspects of problem database population,
whereas the two approaches to follow will use lower level functions to accomplish a finer grain of control.
This approach is more involved than the previous approach, but it allows the application to publish all needed
data to Dakota’s database directly, thereby eliminating the need for the parsing of a separate Dakota input file.
In this case, ProblemDescDB::manage_inputs() is not called. Rather, DataStrategy, DataMethod, DataModel,
DataVariables, DataInterface, and DataResponses objects are instantiated and populated with the desired problem
data. These objects are then published to the problem database using ProblemDescDB::insert_node(), e.g.:
The data objects are populated with their default values upon instantiation, so only the non-default values need to be specified. Refer to the DataStrategy, DataMethod, DataModel, DataVariables, DataInterface, and DataResponses class documentation and source code for lists of attributes and their defaults.
The default strategy is single_method, which runs a single iterator on a single model, and the default model is single, so it is not necessary to instantiate and publish a DataStrategy or DataModel object if advanced multi-component capabilities are not required. Rather, instantiation and insertion of a single DataMethod, DataVariables, DataInterface, and DataResponses object is sufficient for basic Dakota capabilities.
Once the data objects have been published to the ProblemDescDB object, calls to
problem_db.check_input();
problem_db.broadcast();
problem_db.post_process();
will perform basic database error checking, broadcast a packed MPI buffer of the specification data to other
processors, and post-process specification data to fill in vector defaults (scalar defaults are handled in the Data
class constructors), respectively. For parallel applications, processor rank 0 should be responsible for Data node
population and insertion and the call to ProblemDescDB::check_input(), and all processors should participate
in ProblemDescDB::broadcast() and ProblemDescDB::post_process(). Moreover, preserving the order shown
assures that large default vectors are not transmitted by MPI. Refer to run_dakota_data() in library_mode.cpp for
a complete example listing.
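The populate/insert/check/broadcast/post-process sequence can be sketched serially as follows. All class and member names are illustrative stand-ins for the real Dakota API, and the MPI broadcast is reduced to a placeholder comment:

```cpp
#include <list>
#include <stdexcept>
#include <string>

// Illustrative data object: defaults are set on construction, so callers
// override only the non-default values before insertion.
struct DataMethodSketch {
    std::string methodName = "optpp_q_newton";  // hypothetical default
    int maxIterations = 100;
};

// Serial mock of the problem database for the data-node insertion approach.
class ProblemDescDBSketch {
public:
    void insert_node(const DataMethodSketch& dm) { dataMethodList.push_back(dm); }
    void check_input() const {               // basic database error checking
        if (dataMethodList.empty())
            throw std::runtime_error("no method specification");
    }
    void broadcast()    { /* rank 0 would pack and MPI_Bcast the data here */ }
    void post_process() { /* vector defaults would be filled in here */ }

    std::list<DataMethodSketch> dataMethodList;
};
```

In a parallel application, only rank 0 would perform the population, insert_node(), and check_input() steps, while broadcast() and post_process() run on all ranks, in that order, so that default vectors are filled in after the (smaller) pre-default buffer has been transmitted.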
In the mixed mode approach, rather than calling
problem_db.manage_inputs(dakota_input_file);
as described in Input file parsing, we use the lower level function
problem_db.parse_inputs(dakota_input_file);
to provide a finer grain of control. The passed input file dakota_input_file must contain all required
inputs. Since vector data like variable values/bounds/tags, linear/nonlinear constraint coefficients/bounds, etc. are
optional, these potentially large vector specifications can be omitted from the input file. Only the variable/response
counts, e.g.:
method
linear_inequality_constraints = 500
variables
continuous_design = 1000
responses
objective_functions = 1
nonlinear_inequality_constraints = 100000
are required in this case. To update the data omissions from their defaults, one uses the ProblemDescDB::set()
family of overloaded functions, e.g.
where the string identifiers are the same identifiers used when pulling information from the database using one of the get_<data_type>() functions (refer to the source code of ProblemDescDB.cpp for a full list). However,
the supported ProblemDescDB::set() options are a restricted subset of the database attributes, focused on vector
inputs that can be large scale.
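The set() usage pattern, including the lock that guards it, can be sketched with a toy string-keyed store (this is not the real ProblemDescDB API; the identifier string and unlocking behavior are the point):

```cpp
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Toy database: vector attributes are addressed by hierarchical identifier
// strings, and set/get operations are refused until the list nodes have been
// set (which unlocks the corresponding portion of the database).
class MixedModeDBSketch {
public:
    void set_db_variables_node(const std::string& /*id*/) { locked_ = false; }

    void set(const std::string& key, const std::vector<double>& value) {
        if (locked_) throw std::runtime_error("db locked: set list nodes first");
        vectorData_[key] = value;
    }
    const std::vector<double>& get_rv(const std::string& key) const {
        if (locked_) throw std::runtime_error("db locked");
        return vectorData_.at(key);
    }
private:
    bool locked_ = true;   // locked until a list node is selected
    std::map<std::string, std::vector<double>> vectorData_;
};
```

The real set() family is similarly restricted to a subset of (potentially large) vector attributes, and attempting an update before the list nodes are set fails rather than silently targeting the wrong specification instance.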
If performing these updates within the constructor of a DirectApplicInterface extension/derivation (see Defining
the direct application interface), then this code is sufficient since the database is unlocked, the active list nodes of
the ProblemDescDB have been set for you, and the correct strategy/method/model/variables/interface/responses
specification instance will get updated. The difficulty in this case stems from the order of instantiation. Since
the Variables and Response instances are constructed in the base Model class, prior to construction of Interface
instances in derived Model classes, database information related to Variables and Response objects will have
already been extracted by the time the Interface constructor is invoked and the database update will not propagate.
Therefore, it is preferred to perform these operations at a higher level (e.g., within your main program), prior
to Strategy instantiation and execution, such that instantiation order is not an issue. However, in this case, it is
necessary to explicitly manage the list nodes of the ProblemDescDB using a specification instance identifier that
corresponds to an identifier from the input file, e.g.:
problem_db.set_db_variables_node("MY_VARIABLES_ID");
Dakota::RealVector drv(1000, 1.); // vector of length 1000, values initialized to 1.
problem_db.set("variables.continuous_design.initial_point", drv);
Alternatively, rather than setting just a single data node, all data nodes may be set using a method specification
identifier:
problem_db.set_db_list_nodes("MY_METHOD_ID");
since the method specification is responsible for identifying a model specification, which in turn identifies variables, interface, and responses specifications. If hardwiring specification identifiers is undesirable, then
problem_db.resolve_top_method();
can also be used to deduce the active method specification and set all list nodes based on it. This is most appro-
priate in the case where only single specifications exist for method/model/variables/interface/responses. In each
of these cases, setting list nodes unlocks the corresponding portions of the database, allowing set/get operations.
Once all direct database updates have been performed in this manner, calls to ProblemDescDB::broadcast() and
ProblemDescDB::post_process() should be used on all processors. The former will broadcast a packed MPI
buffer with the aggregated set of specification data from rank 0 to other processors, and the latter will post-
process specification data to fill in any vector defaults that have not yet been provided through either file parsing
or direct updates (Note: scalar defaults are handled in the Data class constructors). Refer to run_dakota_mixed()
in library_mode.cpp for a complete example listing.
Following strategy construction, all MPI communicator partitioning has been performed and the ParallelLibrary
instance may be interrogated for parallel configuration data. For example, the lowest level communicators in
Dakota’s multilevel parallel partitioning are the analysis communicators, which can be retrieved using:
These communicators can then be used for initializing parallel simulation instances, where the number of MPI
communicators in the array corresponds to one communicator per ParallelConfiguration instance.
5.6.1 Extension
The first approach involves extending the existing DirectApplicInterface class to support additional di-
rect simulation interfaces. In this case, a new simulation interface function can be added to
Dakota/src/DirectApplicInterface.[CH] for the simulation of interest. If the new function will not be a member
function, then the following prototype should be used in order to pass the required data:
If the new function will be a member function, then this can be simplified to
int sim();
since the data access can be performed through the DirectApplicInterface class attributes.
This simulation can then be added to the logic blocks in DirectApplicInterface::derived_map_ac(). In addition,
DirectApplicInterface::derived_map_if() and DirectApplicInterface::derived_map_of() can be extended to per-
form pre- and post-processing tasks if desired, but this is not required.
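The resulting dispatch can be sketched as follows, assuming a hypothetical driver name "my_sim" and simplified signatures (the real logic blocks and member data live in DirectApplicInterface.[CH]):

```cpp
#include <stdexcept>
#include <string>
#include <utility>

// Sketch of extending a derived_map_ac()-style dispatch: the logic block
// compares the analysis driver name and invokes the matching simulation.
// Names and signatures are illustrative, not the real Dakota declarations.
class DirectIfaceSketch {
public:
    explicit DirectIfaceSketch(std::string driver)
        : acDriver_(std::move(driver)) {}

    int derived_map_ac() {
        if (acDriver_ == "rosenbrock")
            return rosenbrock_sim();
        else if (acDriver_ == "my_sim")   // newly added logic block
            return sim();
        throw std::runtime_error("unknown analysis driver: " + acDriver_);
    }
protected:
    int rosenbrock_sim() { return 0; }
    // Member-function form: parameter/response data would be accessed
    // through class attributes rather than a long argument list.
    int sim() { return 0; }

    std::string acDriver_;  // stands in for the user's analysis driver name
};
```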
While this approach is the simplest, it has the disadvantage that the Dakota library may need to be recompiled
when the simulation or its direct interface is modified. If it is desirable to maintain the independence of the Dakota
library from the host application, then the following derivation approach should be employed.
5.6.2 Derivation
The second approach is to derive a new interface from DirectApplicInterface in order to redefine several virtual
functions. A typical derived class declaration might be
namespace SIM {
protected:
private:
// Data
}
} // namespace SIM
where the new derived class resides in the simulation’s namespace. Similar to the case of Ex-
tension, the DirectApplicInterface::derived_map_ac() function is the required redefinition, and
DirectApplicInterface::derived_map_if() and DirectApplicInterface::derived_map_of() are optional.
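A minimal sketch of this derivation pattern, with illustrative class names standing in for the real DirectApplicInterface hierarchy (the required/optional split of the virtual functions matches the description above):

```cpp
// Illustrative stand-in for the Dakota base class: derived_map_ac() is the
// required redefinition; the pre-/post-processing hooks default to no-ops.
namespace Dakota {
class DirectApplicIfaceSketch {
public:
    virtual ~DirectApplicIfaceSketch() = default;
    virtual int derived_map_if() { return 0; }  // optional redefinition
    virtual int derived_map_ac() = 0;           // required redefinition
    virtual int derived_map_of() { return 0; }  // optional redefinition
};
} // namespace Dakota

// The plug-in lives in the simulation's own namespace, keeping the Dakota
// library independent of the host application.
namespace SIM {
class SerialDirectIfaceSketch : public Dakota::DirectApplicIfaceSketch {
public:
    int derived_map_ac() override {
        ++evalCount;  // a real plug-in would run the simulation here
        return 0;
    }
    int evalCount = 0;  // private simulation data would live here
};
} // namespace SIM
```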
The new derived interface object (from namespace SIM) must now be plugged into the strategy. In the simplest
case of a single model and interface, one could use
from within the Dakota namespace. In a more advanced case of multiple models and multiple interface plug-ins,
one might use
In the case where the simulation interface instance should manage parallel simulations within the context of an
MPI communicator, one should pass in the relevant analysis communicator(s) to the derived constructor. For the
latter case of looping over a set of models, the simplest approach of passing a single analysis communicator would
use code similar to
Since Models may be used in multiple parallel contexts and may therefore have a set of parallel configurations, a
more general approach would extract and pass an array of analysis communicators to allow initialization for each
of the parallel configurations.
New derived direct interface instances inherit various attributes of use in configuring the simulation. In particu-
lar, the ApplicationInterface::parallelLib reference provides access to MPI communicator data (e.g., the analysis
communicators discussed in Instantiating the strategy), DirectApplicInterface::analysisDrivers provides the anal-
ysis driver names specified by the user in the input file, and DirectApplicInterface::analysisComponents provides
additional analysis component identifiers (such as mesh file names) provided by the user which can be used to
distinguish different instances of the same simulation interface. It is worth noting that inherited attributes that
are set as part of the parallel configuration (instead of being extracted from the ProblemDescDB) will be set to
their defaults following construction of the base class instance for the derived class plug-in. It is not until run-
time (i.e., within derived_map_if/derived_map_ac/derived_map_of) that the parallel configuration settings are
re-propagated to the plug-in instance. This is the reason that the analysis communicator should be passed in to
the constructor of a parallel plug-in, if the constructor will be responsible for parallel application initialization.
In the case of optimization, the final design is returned, and in the case of uncertainty quantification, the final
statistics are returned.
This section presumes Dakota has been configured with CMake, compiled, and installed to a CMAKE_INSTALL_PREFIX using 'make install' or equivalent. The Dakota libraries against which you must link will install to CMAKE_INSTALL_PREFIX/bin and CMAKE_INSTALL_PREFIX/lib. When running CMake,
Dakota and Dakota-included third-party libraries will be output as TPL LIBS, e.g.,
Note that depending on how you configured Dakota, some of the libraries may not be included (for example
NPSOL, DOT, NLPQL). Optional libraries like GSL (discouraged due to GPL license) may also be needed
if Dakota was configured with them. Check which appear in CMAKE_INSTALL_PREFIX/bin and CMAKE_INSTALL_PREFIX/lib.
Note that as of Dakota 5.2, -lnewmat is no longer required but additional Boost libraries are needed
(-lboost_regex -lboost_filesystem -lboost_system) as a result of migration from legacy
Dakota utilities to more modern Boost components.
You may also need funcadd0.o, -lfl, -lexpat, and, if linking with system-provided GSL, -lgslcblas.
The AMPL solver library may require -ldl. System compiler and math libraries may also need to be included.
If configuring with graphics, you will need to add -lDGraphics and system X libraries (partial list here):
We have experienced problems with the creation of libamplsolver.a on some platforms. Please use the
Dakota mailing lists for help with any problems.
Finally, it is important to use the same C++ compiler (possibly an MPI wrapper) for compiling Dakota and your
application and potentially include Dakota-related preprocessor defines as emitted by CMake during compilation
of Dakota. This ensures that the platform configuration settings are properly propagated.
5.11 Summary
To utilize the Dakota library within a parent software application, the basic steps of main.cpp and the order of
invocation of these steps should be mimicked from within the parent application. Of these steps, ParallelLibrary
instantiation, ProblemDescDB::manage_inputs() and ParallelLibrary::specify_outputs_restart() require the use of
overloaded forms in order to function in an environment without direct command line access and, potentially,
without file parsing. Additional optional steps not performed in main.cpp include the extension/derivation of the
direct interface and the retrieval of strategy results after a run.
Dakota’s library mode is now in production use within several Sandia and external simulation codes/frameworks.
Performing function evaluations is one of the most critical functions of the Dakota software. It can also be one of
the most complicated, as a variety of scheduling approaches and parallelism levels are supported. This complexity
manifests itself in the code through a series of cascaded member functions, from the top level model evaluation
functions, through various scheduling routines, to the low level details of performing a system call, fork, or direct
function invocation. This section provides an overview of the primary classes and member functions involved.
• This derived model class function directly or indirectly invokes Interface::map() in asynchronous mode,
which adds the job to a scheduling queue.
• Model::synchronize() or Model::synchronize_nowait() utilize Model::derived_synchronize() or
Model::derived_synchronize_nowait() for portions of the scheduling process specific to derived model
classes.
• These derived model class functions directly or indirectly invoke Interface::synch() or Interface::synch_nowait().
• For application interfaces, these interface synchronization functions are responsible for performing evalua-
tion scheduling in one of the following modes:
– asynchronous local mode (using ApplicationInterface::asynchronous_local_evaluations() or
ApplicationInterface::asynchronous_local_evaluations_nowait())
– message passing mode (using ApplicationInterface::self_schedule_evaluations() or ApplicationInterface::static_schedule_evaluations() on the iterator master and ApplicationInterface::serve_evaluations_synch() or ApplicationInterface::serve_evaluations_synch_peer() on the servers)
– hybrid mode (using ApplicationInterface::self_schedule_evaluations() or ApplicationInterface::static_schedule_evaluations() on the iterator master and ApplicationInterface::serve_evaluations_asynch() or ApplicationInterface::serve_evaluations_asynch_peer() on the servers)
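The asynchronous local scheduling idea (launch jobs up to the local concurrency limit, then synchronize on completions and backfill) can be sketched with a toy queue model. No actual forks or system calls occur here; a "completion" simply retires the oldest running job, and the job/concurrency values are illustrative:

```cpp
#include <deque>
#include <vector>

// Toy sketch of asynchronous local evaluation scheduling: fill available
// server slots from the queue, then retire one running job per pass and
// backfill, until both the queue and the running set are drained.
std::vector<int> schedule_local_sketch(std::deque<int> queue, int maxConcurrent)
{
    std::deque<int> running;
    std::vector<int> completed;
    while (!queue.empty() || !running.empty()) {
        // launch phase: start queued jobs up to the concurrency limit
        while (!queue.empty() &&
               static_cast<int>(running.size()) < maxConcurrent) {
            running.push_back(queue.front());
            queue.pop_front();
        }
        // synchronization phase: "wait" for one completion, then loop to
        // backfill the freed slot (real code polls forked/system-call jobs)
        completed.push_back(running.front());
        running.pop_front();
    }
    return completed;
}
```

The nowait variants differ in that they return any completions found so far instead of blocking until the whole queue drains.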
Variable views control the subset of variable types that are active and inactive within a particular iterative study.
For design optimization and uncertainty quantification (UQ), for example, the active variables view consists of
design or uncertain types, respectively, and any other variable types are carried along invisible to the iterative
algorithm being employed. For parameter studies and design of experiments, however, a variable subset view is
not imposed and all variables are active. Selected UQ methods can also be toggled into an "All" view using the
active all variables input specification. When not in an All view, finer gradations within the uncertain variable
sets are also relevant: probabilistic methods (reliability, stochastic expansion) view aleatory uncertain variables as
active, nonprobabilistic methods (interval, evidence) view epistemic uncertain variables as active, and a few UQ
methods (sampling) view both as active. In a more advanced NestedModel use case such as optimization under
uncertainty, design variables are active in the outer optimization context and the uncertain variables are active in
the inner UQ context, with an additional requirement on the inner UQ level to return derivatives with respect to
its "inactive" variables (i.e., the design variables) for use in the outer optimization loop.
For efficiency, contiguous arrays of data store variable information for each of the domain types (continuous,
discrete integer, and discrete real), but active and inactive views into them permit selecting subsets in a given
context. This management is encapsulated into the Variables and SharedVariablesData classes. This page clarifies
concepts of relaxed (formerly merged) vs. mixed, fine-grained vs. aggregated types, domain types, and views into
contiguous arrays.
We begin with an overview of the storage and management concept, for which the following two sections describe
the storage of variable values and meta-data about their organization, used in part to manage views. They are
intended to communicate rationale to maintainers of Variables and SharedVariablesData classes. The final section
provides a discussion of active and inactive views.
As described in the Main Page Variables, a Variables object manages variable types (design, aleatory uncertain,
epistemic uncertain, and state) and domain types (continuous, discrete integer, and discrete real) and supports
different approaches to either distinguishing among these types or aggregating them. Two techniques are used in
cooperation to accomplish this management: (1) class specialization (RelaxedVariables or MixedVariables) and
(2) views into contiguous variable arrays. The latter technique is used whenever it can satisfy the requirement,
with fallback to class specialization when it cannot. In particular, aggregation or separation of variable types
can be accomplished with views, but for aggregation or separation of variable domains, we must resort to class
specialization in order to relax discrete domain types. In this class specialization, a RelaxedVariables object
combines continuous and discrete types (relaxing integers to reals) whereas a MixedVariables object maintains
the integer/real distinction throughout.
The core data for a Variables instance is stored in a set of three contiguous arrays, corresponding to the domain
types: allContinuousVars, allDiscreteIntVars, and allDiscreteRealVars, unique to each Variables instance.
Within the core variable data arrays, data corresponding to different aggregated variable types are stored in se-
quence for each domain type:
Note there are currently no epistemic discrete variables. This domain type ordering (continuous, discrete integer,
discrete real) and aggregated variable type ordering (design, aleatory uncertain, epistemic uncertain, state) is
preserved whenever distinct types are flattened into single contiguous arrays. Note that the aleatory and epistemic
uncertain variables contain sub-types for different distributions (e.g., normal, uniform, histogram, poisson), and
discrete integer types include both integer ranges and integer set sub-types. All sub-types are ordered according
to their order of appearance in dakota.input.nspec.
When relaxing in RelaxedVariables, allContinuousVars also aggregates the discrete types, such that it contains ALL design, then ALL uncertain, then ALL state variables, each in aggregated type order; the allDiscreteIntVars and allDiscreteRealVars arrays are empty.
(since when relaxed, the continuous array may be storing data corresponding to discrete data).
Finally allContinuousIds stores the 1-based IDs of the variables stored in the allContinuousVars array, i.e., the
variable number of all the problem variables considered as a single contiguous set, in aggregate type order. For
relaxed (formerly merged) views, relaxedDiscreteIds stores the 1-based IDs of the variables which have been
relaxed into the continuous array.
These counts, types, and IDs are most commonly used within the Model classes for mappings between variables
objects at different levels of a model recursion. See, for example, the variable mappings in the NestedModel
constructor.
• continuous_variables(): returns the active view which might return all (ALL views) or a subset (DISTINCT
views) such as design, uncertain, only aleatory uncertain, etc.
• inactive_continuous_variables(): returns the inactive view, which is either a subset or empty
• all_continuous_variables(): returns the full vector allContinuousVars
and this pattern is followed for active/inactive/all access to discrete_int_variables() and discrete_real_variables()
as well as for labels, IDs, and types in SharedVariablesData and variable bounds in Constraints.
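The view concept can be sketched as an offset/length pair into the contiguous array. The counts below are illustrative, and the active accessor returns a copy for simplicity (the real code returns non-owning views into the shared arrays):

```cpp
#include <cstddef>
#include <vector>

// Sketch of views into a contiguous variable array: allContinuousVars holds
// all continuous variables in aggregate type order (design, aleatory
// uncertain, epistemic uncertain, state), and the active view is just an
// offset and count into that array.
class VariablesSketch {
public:
    VariablesSketch(std::vector<double> allCv,
                    std::size_t activeStart, std::size_t activeCount)
        : allContinuousVars_(std::move(allCv)),
          activeStart_(activeStart), activeCount_(activeCount) {}

    // active view (returned here as a copy of the subset)
    std::vector<double> continuous_variables() const {
        return std::vector<double>(
            allContinuousVars_.begin() + activeStart_,
            allContinuousVars_.begin() + activeStart_ + activeCount_);
    }
    // full contiguous array
    const std::vector<double>& all_continuous_variables() const {
        return allContinuousVars_;
    }
private:
    std::vector<double> allContinuousVars_;  // design | uncertain | state
    std::size_t activeStart_, activeCount_;
};
```

For example, with 2 design, 3 aleatory uncertain, and 1 state variable, a probabilistic UQ view would use offset 2 and count 3, activating only the uncertain subset while the design and state values are carried along untouched.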
Namespace Index
Class Index
Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
Evaluator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
Evaluator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
EvaluatorCreator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
ExperimentData . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
GetLongOpt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
CommandLineHandler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
ApplicationInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
DirectApplicInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
MatlabInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 526
PythonInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
ScilabInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
TestDriverInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
ParallelDirectApplicInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771
SerialDirectApplicInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
ProcessApplicInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 824
ProcessHandleApplicInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 829
ForkApplicInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
SpawnApplicInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
SysCallApplicInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
GridApplicInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
ApproximationInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Iterator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
Analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
NonD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
EfficientSubspaceMethod . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
NonDCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
NonDBayesCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
NonDDREAMBayesCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652
NonDGPMSABayesCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
NonDQUESOBayesCalibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
NonDExpansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
NonDPolynomialChaos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
NonDStochCollocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
NonDIntegration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
NonDCubature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
NonDQuadrature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
NonDSparseGrid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 748
NonDInterval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691
NonDGlobalInterval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 667
NonDGlobalEvidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 665
NonDGlobalSingleInterval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 674
NonDLHSInterval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
NonDLHSEvidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
NonDLHSSingleInterval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 701
NonDLocalInterval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
NonDLocalEvidence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
NonDLocalSingleInterval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720
NonDPOFDarts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722
NonDReliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
NonDGlobalReliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 671
NonDLocalReliability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
NonDSampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
NonDAdaptImpSampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
NonDAdaptiveSampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
NonDGPImpSampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 676
NonDIncremLHSSampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 684
NonDLHSSampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698
PStudyDACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 834
DDACEDesignCompExp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
FSUDesignCompExp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
ParamStudy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 796
PSUADEDesignCompExp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 837
Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003
RichExtrapVerification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 886
Minimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
LeastSq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
NL2SOLLeastSq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
NLSSOLLeastSq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
SNLLLeastSq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
Optimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
APPSOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
COLINOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
CONMINOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
DOTOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
JEGAOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
NCSUOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
NLPQLPOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
NomadOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
NonlinearCGOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
NPSOLOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
SNLLOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
SurrBasedMinimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 960
EffGlobalMinimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
SurrBasedGlobalMinimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949
SurrBasedLocalMinimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
NestedModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
RecastModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
SingleModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 911
SurrogateModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
DataFitSurrModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
HierarchSurrModel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
MPIPackBuffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
MPIUnpackBuffer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
NL2Res . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
NoDBBaseConstructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
ParallelConfiguration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
ParallelLevel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
ParallelLibrary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 777
ParamResponsePair . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 792
partial_prp_equality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 802
partial_prp_hash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803
ProblemDescDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 812
NIDRProblemDescDB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
RecastBaseConstructor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 844
Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
ResponseRep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 866
ResultsDBAny . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 874
ResultsEntry< StoredType > . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 877
ResultsID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
ResultsManager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
ResultsNames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
SensAnalysisGlobal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
SharedVariablesData . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 900
SharedVariablesDataRep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 905
SNLLBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 914
SNLLLeastSq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 917
SNLLOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
SOLBase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
NLSSOLLeastSq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
NPSOLOptimizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
Strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
ConcurrentStrategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
HybridStrategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
CollaborativeHybridStrategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
EmbeddedHybridStrategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
SequentialHybridStrategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
SingleMethodStrategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
TrackerHTTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
MixedVariables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
RelaxedVariables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
Class Index
ResultsID (Get a globally unique 1-based execution number for a given iterator name (combination of
methodName and methodID) for use in results DB. Each run_iterator call creates or increments
this count for its string identifier ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
ResultsManager (Results manager for iterator final data ) . . . . . . . . . . . . . . . . . . . . . . . . . 881
ResultsNames (List of valid names for iterator results ) . . . . . . . . . . . . . . . . . . . . . . . . . . 884
RichExtrapVerification (Class for Richardson extrapolation for code and solution verification ) . . . . . 886
ScilabInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
SensAnalysisGlobal (Utility class containing correlation calculations and variance-based decomposi-
tion ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
SequentialHybridStrategy (Strategy for sequential hybrid minimization using multiple optimization and
nonlinear least squares methods on multiple models of varying fidelity ) . . . . . . . . . . . . 893
SerialDirectApplicInterface (Sample derived interface class for testing serial simulator plug-ins using
assign_rep() ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
SharedVariablesData (Container class encapsulating variables data that can be shared among a set of
Variables instances ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 900
SharedVariablesDataRep (The representation of a SharedVariablesData instance. This representation,
or body, may be shared by multiple SharedVariablesData handle instances ) . . . . . . . . . . 905
SingleMethodStrategy (Simple fall-through strategy for running a single iterator on a single model ) . . 909
SingleModel (Derived model class which utilizes a single interface to map variables into responses ) . . 911
SNLLBase (Base class for OPT++ optimization and least squares methods ) . . . . . . . . . . . . . . . 914
SNLLLeastSq (Wrapper class for the OPT++ optimization library ) . . . . . . . . . . . . . . . . . . . . 917
SNLLOptimizer (Wrapper class for the OPT++ optimization library ) . . . . . . . . . . . . . . . . . . 922
SOLBase (Base class for Stanford SOL software ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930
SpawnApplicInterface (Derived application interface class which spawns simulation codes using
spawnvp ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
Strategy (Base class for the strategy class hierarchy ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
SurfpackApproximation (Derived approximation class for Surfpack approximation classes. Interface
between Surfpack and Dakota ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
SurrBasedGlobalMinimizer (The global surrogate-based minimizer which sequentially minimizes and
updates a global surrogate model without trust region controls ) . . . . . . . . . . . . . . . . . 949
SurrBasedLocalMinimizer (Class for provably-convergent local surrogate-based optimization and non-
linear least squares ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 951
SurrBasedMinimizer (Base class for local/global surrogate-based optimization/least squares ) . . . . . . 960
SurrogateModel (Base class for surrogate models (DataFitSurrModel and HierarchSurrModel) ) . . . . 966
SysCallApplicInterface (Derived application interface class which spawns simulation codes using sys-
tem calls ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
TANA3Approximation (Derived approximation class for TANA-3 two-point exponential approximation
(a multipoint approximation) ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 976
TaylorApproximation (Derived approximation class for first- or second-order Taylor series (a local ap-
proximation) ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
TestDriverInterface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 981
TrackerHTTP (TrackerHTTP: a usage tracking module that uses HTTP/HTTPS via the curl library ) . . 987
Variables (Base class for the variables class hierarchy ) . . . . . . . . . . . . . . . . . . . . . . . . . . 990
Verification (Base class for managing common aspects of verification studies ) . . . . . . . . . . . . . . 1003
File Index
Namespace Documentation
Classes
• class ApplicationInterface
Derived class within the interface class hierarchy for supporting interfaces to simulation codes.
• class ApproximationInterface
Derived class within the interface class hierarchy for supporting approximations to simulation-based results.
• class APPSEvalMgr
Evaluation manager class for APPSPACK.
• class APPSOptimizer
Wrapper class for APPSPACK.
• class COLINApplication
• class COLINOptimizer
Wrapper class for optimizers defined using COLIN.
• class CollaborativeHybridStrategy
Strategy for hybrid minimization using multiple collaborating optimization and nonlinear least squares methods.
• class GetLongOpt
GetLongOpt is a general command line utility from S. Manoharan (Advanced Computer Research Institute, Lyon,
France).
• class CommandLineHandler
• class CommandShell
Utility class which defines convenience operators for spawning processes with system calls.
• class ConcurrentStrategy
Strategy for multi-start iteration or Pareto set optimization.
• class CONMINOptimizer
Wrapper class for the CONMIN optimization library.
• struct BaseConstructor
Dummy struct for overloading letter-envelope constructors.
• struct NoDBBaseConstructor
Dummy struct for overloading constructors used in on-the-fly instantiations.
• struct RecastBaseConstructor
Dummy struct for overloading constructors used in on-the-fly Model instantiations.
• class ActiveSet
Container class for active set tracking information. Contains the active set request vector and the derivative vari-
ables vector.
• class Analyzer
Base class for NonD, DACE, and ParamStudy branches of the iterator hierarchy.
• class Approximation
Base class for the approximation class hierarchy.
• class BiStream
The binary input stream class. Overloads the >> operator for all data types.
• class BoStream
The binary output stream class. Overloads the << operator for all data types.
• class Constraints
Base class for the variable constraints class hierarchy.
• class Graphics
The Graphics class provides a single interface to 2D (motif) and 3D (PLPLOT) graphics as well as tabular cata-
loguing of data for post-processing with Matlab, Tecplot, etc.
• class Interface
Base class for the interface class hierarchy.
• class Iterator
Base class for the iterator class hierarchy.
• class LeastSq
Base class for the nonlinear least squares branch of the iterator hierarchy.
• class Minimizer
Base class for the optimizer and least squares branches of the iterator hierarchy.
• class Model
Base class for the model class hierarchy.
• class NonD
Base class for all nondeterministic iterators (the DAKOTA/UQ branch).
• class Optimizer
Base class for the optimizer branch of the iterator hierarchy.
• class PStudyDACE
Base class for managing common aspects of parameter studies and design of experiments methods.
• class ResponseRep
Container class for response functions and their derivatives. ResponseRep provides the body class.
• class Response
Container class for response functions and their derivatives. Response provides the handle class.
• class Strategy
Base class for the strategy class hierarchy.
• class Variables
Base class for the variables class hierarchy.
• class Verification
Base class for managing common aspects of verification studies.
• class DataFitSurrModel
Derived model class within the surrogate model branch for managing data fit surrogates (global and local).
• class DataInterface
Handle class for interface specification data.
• class DataMethodRep
Body class for method specification data.
• class DataMethod
• class DataModelRep
Body class for model specification data.
• class DataModel
Handle class for model specification data.
• class DataResponsesRep
Body class for responses specification data.
• class DataResponses
Handle class for responses specification data.
• class DataStrategyRep
Body class for strategy specification data.
• class DataStrategy
Handle class for strategy specification data.
• class DataVariablesRep
Body class for variables specification data.
• class DataVariables
Handle class for variables specification data.
• class DDACEDesignCompExp
Wrapper class for the DDACE design of experiments library.
• class DirectApplicInterface
Derived application interface class which spawns simulation codes and testers using direct procedure calls.
• class DiscrepancyCorrection
Base class for discrepancy corrections.
• class DOTOptimizer
Wrapper class for the DOT optimization library.
• class EffGlobalMinimizer
Implementation of Efficient Global Optimization/Least Squares algorithms.
• class EfficientSubspaceMethod
Efficient Subspace Method (ESM), as proposed by Hany S. Abdel-Khalik.
• class EmbeddedHybridStrategy
Strategy for closely-coupled hybrid minimization, typically involving the embedding of local search methods within
global search methods.
• class ExperimentData
• class ForkApplicInterface
Derived application interface class which spawns simulation codes using fork/execvp/waitpid.
• class FSUDesignCompExp
Wrapper class for the FSUDace QMC/CVT library.
• class GaussProcApproximation
Derived approximation class for Gaussian Process implementation.
• class GridApplicInterface
Derived application interface class which spawns simulation codes using grid services such as Condor or Globus.
• class HierarchSurrModel
Derived model class within the surrogate model branch for managing hierarchical surrogates (models of varying
fidelity).
• class HybridStrategy
Base class for hybrid minimization strategies.
• class JEGAOptimizer
A version of Dakota::Optimizer for instantiation of John Eddy’s Genetic Algorithms (JEGA).
• class MatlabInterface
• class MixedVarConstraints
Derived class within the Constraints hierarchy which separates continuous and discrete variables (no domain type
array merging).
• class MixedVariables
Derived class within the Variables hierarchy which separates continuous and discrete variables (no domain type
array merging).
• class MPIPackBuffer
Class for packing MPI message buffers.
• class MPIUnpackBuffer
Class for unpacking MPI message buffers.
• class NCSUOptimizer
Wrapper class for the NCSU DIRECT optimization library.
• class NestedModel
Derived model class which performs a complete sub-iterator execution within every evaluation of the model.
• class NIDRProblemDescDB
The derived input file database utilizing the new IDR parser.
• struct NL2Res
Auxiliary information passed to calcr and calcj via ur.
• class NL2SOLLeastSq
Wrapper class for the NL2SOL nonlinear least squares library.
• class NLPQLPOptimizer
Wrapper class for the NLPQLP optimization library, Version 2.0.
• class NLSSOLLeastSq
Wrapper class for the NLSSOL nonlinear least squares library.
• class NomadOptimizer
Wrapper class for NOMAD Optimizer.
• class NonDAdaptImpSampling
Class for the Adaptive Importance Sampling methods within DAKOTA.
• class NonDAdaptiveSampling
Class for testing various adaptive sampling methods using geometric, statistical, and topological information from
the surrogate.
• class NonDBayesCalibration
Base class for Bayesian inference: generates posterior distribution on model parameters given experimental data.
• class NonDCalibration
• class NonDCubature
Derived nondeterministic class that generates N-dimensional numerical cubature points for evaluation of expecta-
tion integrals.
• class NonDDREAMBayesCalibration
Bayesian inference using the DREAM approach.
• class NonDExpansion
Base class for polynomial chaos expansions (PCE) and stochastic collocation (SC).
• class NonDGlobalEvidence
Class for the Dempster-Shafer Evidence Theory methods within DAKOTA/UQ.
• class NonDGlobalInterval
Class for using global nongradient-based optimization approaches to calculate interval bounds for epistemic un-
certainty quantification.
• class NonDGlobalReliability
Class for global reliability methods within DAKOTA/UQ.
• class NonDGlobalSingleInterval
Class for using global nongradient-based optimization approaches to calculate interval bounds for epistemic un-
certainty quantification.
• class NonDGPImpSampling
Class for the Gaussian Process-based Importance Sampling method.
• class NonDGPMSABayesCalibration
Generates posterior distribution on model parameters given experiment data.
• class NonDIncremLHSSampling
Performs incremental LHS sampling for uncertainty quantification.
• class NonDIntegration
Derived nondeterministic class that generates N-dimensional numerical integration points for evaluation of expec-
tation integrals.
• class NonDInterval
Base class for interval-based methods within DAKOTA/UQ.
• class NonDLHSEvidence
Class for the Dempster-Shafer Evidence Theory methods within DAKOTA/UQ.
• class NonDLHSInterval
Class for the LHS-based interval methods within DAKOTA/UQ.
• class NonDLHSSampling
Performs LHS and Monte Carlo sampling for uncertainty quantification.
• class NonDLHSSingleInterval
Class for pure interval propagation using LHS.
• class NonDLocalEvidence
Class for the Dempster-Shafer Evidence Theory methods within DAKOTA/UQ.
• class NonDLocalInterval
Class for using local gradient-based optimization approaches to calculate interval bounds for epistemic uncertainty
quantification.
• class NonDLocalReliability
Class for the reliability methods within DAKOTA/UQ.
• class NonDLocalSingleInterval
Class for using local gradient-based optimization approaches to calculate interval bounds for epistemic uncertainty
quantification.
• class NonDPOFDarts
Base class for POF Dart methods within DAKOTA/UQ.
• class NonDPolynomialChaos
Nonintrusive polynomial chaos expansion approaches to uncertainty quantification.
• class NonDQuadrature
Derived nondeterministic class that generates N-dimensional numerical quadrature points for evaluation of expec-
tation integrals over uncorrelated standard normals/uniforms/exponentials/betas/gammas.
• class NonDQUESOBayesCalibration
Bayesian inference using the QUESO library from UT Austin.
• class NonDReliability
Base class for the reliability methods within DAKOTA/UQ.
• class NonDSampling
Base class for common code between NonDLHSSampling, NonDIncremLHSSampling, and NonDAdaptImpSam-
pling.
• class NonDSparseGrid
Derived nondeterministic class that generates N-dimensional Smolyak sparse grids for numerical evaluation of
expectation integrals over independent standard random variables.
• class NonDStochCollocation
Nonintrusive stochastic collocation approaches to uncertainty quantification.
• class NonlinearCGOptimizer
• class NPSOLOptimizer
Wrapper class for the NPSOL optimization library.
• class ParallelLevel
Container class for the data associated with a single level of communicator partitioning.
• class ParallelConfiguration
Container class for a set of ParallelLevel list iterators that collectively identify a particular multilevel parallel
configuration.
• class ParallelLibrary
Class for partitioning multiple levels of parallelism and managing message passing within these levels.
• class ParamResponsePair
Container class for a variables object, a response object, and an evaluation id.
• class ParamStudy
Class for vector, list, centered, and multidimensional parameter studies.
• class PecosApproximation
Derived approximation class for global basis polynomials.
• class ProblemDescDB
The database containing information parsed from the DAKOTA input file.
• class ProcessApplicInterface
Derived application interface class that spawns a simulation code using a separate process and communicates with
it through files.
• class ProcessHandleApplicInterface
Derived application interface class that spawns a simulation code using a separate process, receives a process
identifier, and communicates with the spawned process through files.
• struct partial_prp_hash
wrapper to delegate to the ParamResponsePair hash_value function
• struct partial_prp_equality
predicate for comparing ONLY the interfaceId and Vars attributes of PRPair
• class PSUADEDesignCompExp
Wrapper class for the PSUADE library.
• class PythonInterface
• class RecastModel
Derived model class which provides a thin wrapper around a sub-model in order to recast the form of its inputs
and/or outputs.
• class RelaxedVarConstraints
Derived class within the Constraints hierarchy which employs relaxation of discrete variables.
• class RelaxedVariables
Derived class within the Variables hierarchy which employs the relaxation of discrete variables.
• class ResultsDBAny
• class ResultsID
Get a globally unique 1-based execution number for a given iterator name (combination of methodName and
methodID) for use in results DB. Each run_iterator call creates or increments this count for its string identifier.
• class ResultsNames
List of valid names for iterator results.
• class ResultsManager
• class ResultsEntry
Class to manage in-core vs. file database lookups.
• class RichExtrapVerification
Class for Richardson extrapolation for code and solution verification.
• class ScilabInterface
• class SensAnalysisGlobal
Utility class containing correlation calculations and variance-based decomposition.
• class SequentialHybridStrategy
Strategy for sequential hybrid minimization using multiple optimization and nonlinear least squares methods on
multiple models of varying fidelity.
• class SharedVariablesDataRep
The representation of a SharedVariablesData instance. This representation, or body, may be shared by multiple
SharedVariablesData handle instances.
• class SharedVariablesData
Container class encapsulating variables data that can be shared among a set of Variables instances.
• class SingleMethodStrategy
Simple fall-through strategy for running a single iterator on a single model.
• class SingleModel
Derived model class which utilizes a single interface to map variables into responses.
• class SNLLBase
Base class for OPT++ optimization and least squares methods.
• class SNLLLeastSq
Wrapper class for the OPT++ optimization library.
• class SNLLOptimizer
Wrapper class for the OPT++ optimization library.
• class SOLBase
Base class for Stanford SOL software.
• class SpawnApplicInterface
Derived application interface class which spawns simulation codes using spawnvp.
• class SurfpackApproximation
Derived approximation class for Surfpack approximation classes. Interface between Surfpack and Dakota.
• class SurrBasedGlobalMinimizer
The global surrogate-based minimizer which sequentially minimizes and updates a global surrogate model without
trust region controls.
• class SurrBasedLocalMinimizer
Class for provably-convergent local surrogate-based optimization and nonlinear least squares.
• class SurrBasedMinimizer
Base class for local/global surrogate-based optimization/least squares.
• class SurrogateModel
Base class for surrogate models (DataFitSurrModel and HierarchSurrModel).
• class SysCallApplicInterface
Derived application interface class which spawns simulation codes using system calls.
• class TANA3Approximation
Derived approximation class for TANA-3 two-point exponential approximation (a multipoint approximation).
• class TaylorApproximation
Derived approximation class for first- or second-order Taylor series (a local approximation).
• class TestDriverInterface
• class TrackerHTTP
TrackerHTTP: a usage tracking module that uses HTTP/HTTPS via the curl library.
Typedefs
• typedef double Real
• typedef std::string String
• typedef Teuchos::SerialDenseVector< int, Real > RealVector
• typedef Teuchos::SerialDenseMatrix< int, Real > RealMatrix
• typedef Teuchos::SerialSymDenseMatrix< int, Real > RealSymMatrix
• typedef Teuchos::SerialDenseVector< int, int > IntVector
• typedef Teuchos::SerialDenseMatrix< int, int > IntMatrix
• typedef std::deque< bool > BoolDeque
• typedef boost::dynamic_bitset< unsigned long > BitArray
• typedef std::vector< BoolDeque > BoolDequeArray
• typedef std::vector< Real > RealArray
• typedef std::vector< RealArray > Real2DArray
• typedef std::vector< int > IntArray
• typedef std::vector< IntArray > Int2DArray
• typedef std::vector< short > ShortArray
• typedef std::vector< unsigned short > UShortArray
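Most of these aliases map directly onto standard containers; a minimal compilable sketch of the std-based subset (the Teuchos-backed RealVector/RealMatrix aliases require Trilinos, and BitArray requires Boost, so both are omitted here):

```cpp
#include <deque>
#include <string>
#include <vector>

// Sketch of the std-based Dakota typedefs; the Teuchos-backed vector and
// matrix aliases (RealVector, RealMatrix, ...) additionally need Trilinos.
typedef double Real;
typedef std::string String;
typedef std::deque<bool> BoolDeque;
typedef std::vector<BoolDeque> BoolDequeArray;
typedef std::vector<Real> RealArray;
typedef std::vector<RealArray> Real2DArray;
typedef std::vector<int> IntArray;
typedef std::vector<IntArray> Int2DArray;
typedef std::vector<short> ShortArray;
typedef std::vector<unsigned short> UShortArray;
```

A Real2DArray is therefore a vector of vectors of doubles, so a 2x3 grid is `Real2DArray grid(2, RealArray(3, 0.0));`.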
Enumerations
• enum {
COBYLA, DIRECT, EA, MS,
PS, SW, BETA }
• enum {
sFTW_F, sFTW_SL, sFTW_D, sFTW_DP,
sFTW_DNR, sFTW_O, sFTW_NS }
• enum {
sFTWret_OK, sFTWret_quit, sFTWret_skipdir, sFTWret_Follow,
sFTWret_mallocfailure }
• enum { OBJECTIVE, INEQUALITY_CONSTRAINT, EQUALITY_CONSTRAINT }
define algebraic function types
• enum {
SILENT_OUTPUT, QUIET_OUTPUT, NORMAL_OUTPUT, VERBOSE_OUTPUT,
DEBUG_OUTPUT }
• enum { STD_NORMAL_U, STD_UNIFORM_U, ASKEY_U, EXTENDED_U }
• enum { DEFAULT_INTERPOLANT, NODAL_INTERPOLANT, HIERARCHICAL_INTERPOLANT }
• enum { DEFAULT_COVARIANCE, NO_COVARIANCE, DIAGONAL_COVARIANCE, FULL_COVARIANCE }
• enum { NO_INT_REFINE, IS, AIS, MMAIS }
• enum { PROBABILITIES, RELIABILITIES, GEN_RELIABILITIES }
• enum { COMPONENT = 0, SYSTEM_SERIES, SYSTEM_PARALLEL }
• enum { CUMULATIVE, COMPLEMENTARY }
• enum { DEFAULT_LS = 0, SVD_LS, EQ_CON_LS }
• enum {
NO_EMULATOR, POLYNOMIAL_CHAOS, STOCHASTIC_COLLOCATION, GAUSSIAN_PROCESS, KRIGING }
• enum { IGNORE_RANKS, SET_RANKS, GET_RANKS, SET_GET_RANKS }
• enum {
UNCERTAIN, UNCERTAIN_UNIFORM, ALEATORY_UNCERTAIN, ALEATORY_UNCERTAIN_UNIFORM,
EPISTEMIC_UNCERTAIN, EPISTEMIC_UNCERTAIN_UNIFORM, ACTIVE, ACTIVE_UNIFORM,
ALL, ALL_UNIFORM }
• enum {
MV, AMV_X, AMV_U, AMV_PLUS_X,
AMV_PLUS_U, TANA_X, TANA_U, NO_APPROX }
• enum { BREITUNG, HOHENRACK, HONG }
• enum { EGRA_X, EGRA_U }
• enum { ORIGINAL_PRIMARY, SINGLE_OBJECTIVE, LAGRANGIAN_OBJECTIVE,
AUGMENTED_LAGRANGIAN_OBJECTIVE }
• enum { NO_CONSTRAINTS, LINEARIZED_CONSTRAINTS, ORIGINAL_CONSTRAINTS }
• enum { NO_RELAX, HOMOTOPY, COMPOSITE_STEP }
• enum { PENALTY_MERIT, ADAPTIVE_PENALTY_MERIT, LAGRANGIAN_MERIT,
AUGMENTED_LAGRANGIAN_MERIT }
• enum {
NO_SURROGATE = 0, UNCORRECTED_SURROGATE, AUTO_CORRECTED_SURROGATE,
BYPASS_SURROGATE,
MODEL_DISCREPANCY }
define special values for SurrogateModel::responseMode
• enum var_t {
VAR_x1, VAR_x2, VAR_x3, VAR_b,
VAR_h, VAR_P, VAR_M, VAR_Y,
VAR_w, VAR_t, VAR_R, VAR_E,
VAR_X, VAR_Fs, VAR_P1, VAR_P2,
VAR_P3, VAR_B, VAR_D, VAR_H,
VAR_F0, VAR_d, VAR_MForm }
enumeration of possible variable types (to index to names)
• enum driver_t {
NO_DRIVER = 0, CANTILEVER_BEAM, MOD_CANTILEVER_BEAM, CYLINDER_HEAD,
EXTENDED_ROSENBROCK, GENERALIZED_ROSENBROCK, LF_ROSENBROCK, MF_ROSENBROCK,
ROSENBROCK, GERSTNER, SCALABLE_GERSTNER, LOGNORMAL_RATIO,
MULTIMODAL, PLUGIN_ROSENBROCK, PLUGIN_TEXT_BOOK, SHORT_COLUMN,
LF_SHORT_COLUMN, MF_SHORT_COLUMN, SIDE_IMPACT_COST, SIDE_IMPACT_PERFORMANCE,
SOBOL_RATIONAL, SOBOL_G_FUNCTION, SOBOL_ISHIGAMI, STEEL_COLUMN_COST,
STEEL_COLUMN_PERFORMANCE, TEXT_BOOK, TEXT_BOOK1, TEXT_BOOK2,
TEXT_BOOK3, TEXT_BOOK_OUU, SCALABLE_TEXT_BOOK, SCALABLE_MONOMIALS,
HERBIE, SMOOTH_HERBIE, SHUBERT, SALINAS,
MODELCENTER }
enumeration of possible direct driver types (to index to names)
• enum {
LIST = 1, VECTOR_SV, VECTOR_FP, CENTERED,
MULTIDIM }
• enum { ESTIMATE_ORDER = 1, CONVERGE_ORDER, CONVERGE_QOI }
• enum EvalType { NLFEvaluator, CONEvaluator }
enumeration for the type of evaluator function
• enum {
TH_SILENT_OUTPUT, TH_QUIET_OUTPUT, TH_NORMAL_OUTPUT, TH_VERBOSE_OUTPUT,
TH_DEBUG_OUTPUT }
Functions
• CommandShell & flush (CommandShell &shell)
convenient shell manipulator function to "flush" the shell
• template<typename T >
std::ostream & operator<< (std::ostream &s, const std::set< T > &data)
global std::ostream insertion operator for std::set
• template<typename T >
MPIUnpackBuffer & operator>> (MPIUnpackBuffer &s, std::set< T > &data)
global MPIUnpackBuffer extraction operator for std::set
• template<typename T >
MPIPackBuffer & operator<< (MPIPackBuffer &s, const std::set< T > &data)
global MPIPackBuffer insertion operator for std::set
• template<typename T >
std::istream & operator>> (std::istream &s, std::vector< T > &data)
global std::istream extraction operator for std::vector
• template<typename T >
std::ostream & operator<< (std::ostream &s, const std::vector< T > &data)
global std::ostream insertion operator for std::vector
• template<typename T >
std::ostream & operator<< (std::ostream &s, const std::list< T > &data)
global std::ostream insertion operator for std::list
• Real rel_change_L2 (const RealVector &curr_rv1, const RealVector &prev_rv1, const IntVector &curr_iv,
const IntVector &prev_iv, const RealVector &curr_rv2, const RealVector &prev_rv2)
Computes relative change between Real/int/Real vector triples using Euclidean L2 norm.
• void build_labels_partial (StringArray &label_array, const String &root_label, size_t start_index, size_t
num_items)
create a partial array of labels by tagging root_label for a subset of entries in label_array. Uses build_label().
• void copy_row_vector (const RealMatrix &m, RealMatrix::ordinalType i, std::vector< Real > &row)
• template<typename T >
void copy_data (const std::vector< T > &vec, T ∗ptr, const size_t ptr_len)
copy Array<T> to T∗
• template<typename T >
void copy_data (const T ∗ptr, const size_t ptr_len, std::vector< T > &vec)
copy T∗ to Array<T>
• template<typename T >
void copy_data (const std::list< T > &dl, std::vector< T > &da)
copy std::list<T> to std::vector<T>
• template<typename T >
void copy_data (const std::list< T > &dl, std::vector< std::vector< T > > &d2a, size_t num_a, size_t
a_len)
copy std::list<T> to std::vector<std::vector<T> >
• template<typename T >
void copy_data (const std::vector< std::vector< T > > &d2a, std::vector< T > &da)
copy std::vector<std::vector<T> > to std::vector<T> (unroll vector-of-vectors into a single vector)
• template<typename T >
void copy_data (const std::map< int, T > &im, std::vector< T > &da)
copy map<int, T> to std::vector<T> (discard integer keys)
• template<typename T >
void copy_data_partial (const std::vector< T > &da1, size_t start_index1, size_t num_items, std::vector< T > &da2)
copy portion of first Array<T> to all of second Array<T>
• template<typename T >
void copy_data_partial (const std::vector< T > &da1, std::vector< T > &da2, size_t start_index2)
copy all of first Array<T> to portion of second Array<T>
• template<typename T >
void copy_data_partial (const std::vector< T > &da, boost::multi_array< T, 1 > &bma, size_t start_index_bma)
copy all of first Array<T> to portion of boost::multi_array<T, 1>
• template<typename T >
void copy_data_partial (const std::vector< T > &da1, size_t start_index1, size_t num_items, std::vector<
T > &da2, size_t start_index2)
copy portion of first Array<T> to portion of second Array<T>
• template<typename T >
size_t find_index (const boost::multi_array< T, 1 > &bma, const T &search_data)
compute the index of an entry within a boost::multi_array
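A sketch of the copy_data/find_index style of utility, assuming the straightforward semantics the brief descriptions state; find_index is shown over std::vector rather than boost::multi_array to stay self-contained:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of the copy_data utilities: plain element-wise copies between a
// std::vector<T> and a raw T* buffer of known length.
template <typename T>
void copy_data(const std::vector<T>& vec, T* ptr, const size_t ptr_len) {
  // Dakota aborts on a length mismatch; this sketch copies the smaller extent.
  const size_t len = std::min(ptr_len, vec.size());
  for (size_t i = 0; i < len; ++i)
    ptr[i] = vec[i];
}

template <typename T>
void copy_data(const T* ptr, const size_t ptr_len, std::vector<T>& vec) {
  vec.assign(ptr, ptr + ptr_len);
}

// find_index analogue over std::vector (the listed version operates on a
// boost::multi_array): returns the position of search_data, or the
// container size when absent.
template <typename T>
size_t find_index(const std::vector<T>& c, const T& search_data) {
  return std::distance(c.begin(), std::find(c.begin(), c.end(), search_data));
}
```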
• template<typename T >
T abort_handler_t (int code)
• ResultsKeyType make_key (const StrStrSizet &iterator_id, const std::string &data_name)
Make a full ResultsKeyType from the passed iterator_id and data_name.
• MetaDataValueType make_metadatavalue (const std::string &, const std::string &, const std::string &)
create MetaDataValueType from the passed strings
• MetaDataValueType make_metadatavalue (const std::string &, const std::string &, const std::string &, const
std::string &)
create MetaDataValueType from the passed strings
ScalarType1 will be Real, ScalarType2 will be int, and ScalarType3 may be int or Real, but written for arbitrary
types.
• void run_dakota_data ()
Function to encapsulate the DAKOTA object instantiations for mode 2: direct Data class instantiation.
• static void Vchk_DRset (size_t num_v, const char ∗kind, IntArray ∗input_ndsr, RealVector ∗input_dsr,
RealVector ∗input_dsrp, RealRealMapArray &dsr_vals_probs)
• static bool check_LUV_size (size_t num_v, IntVector &L, IntVector &U, IntVector &V, bool aggregate_LUV, size_t offset)
• static bool check_LUV_size (size_t num_v, RealVector &L, RealVector &U, RealVector &V, bool
aggregate_LUV, size_t offset)
• static void Vgen_DIset (size_t num_v, IntSetArray &sets, IntVector &L, IntVector &U, IntVector &V, bool
aggregate_LUV=false, size_t offset=0)
• static void Vgen_DIset (size_t num_v, IntRealMapArray &vals_probs, IntVector &L, IntVector &U,
IntVector &V, bool aggregate_LUV=false, size_t offset=0)
• static void Vgen_DRset (size_t num_v, RealSetArray &sets, RealVector &L, RealVector &U, RealVector
&V, bool aggregate_LUV=false, size_t offset=0)
• static void Vgen_DRset (size_t num_v, RealRealMapArray &vals_probs, RealVector &L, RealVector &U,
RealVector &V, bool aggregate_LUV=false, size_t offset=0)
• static void Vchk_DiscreteDesSetInt (DataVariablesRep ∗dv, size_t offset, Var_Info ∗vi)
• static void Vgen_DiscreteDesSetInt (DataVariablesRep ∗dv, size_t offset)
• static void Vchk_DiscreteDesSetReal (DataVariablesRep ∗dv, size_t offset, Var_Info ∗vi)
• static void Vgen_DiscreteDesSetReal (DataVariablesRep ∗dv, size_t offset)
• static void Vchk_DiscreteUncSetInt (DataVariablesRep ∗dv, size_t offset, Var_Info ∗vi)
• static void Vgen_DiscreteUncSetInt (DataVariablesRep ∗dv, size_t offset)
• static void Vchk_DiscreteUncSetReal (DataVariablesRep ∗dv, size_t offset, Var_Info ∗vi)
• static void Vgen_DiscreteUncSetReal (DataVariablesRep ∗dv, size_t offset)
• static void Vchk_DiscreteStateSetInt (DataVariablesRep ∗dv, size_t offset, Var_Info ∗vi)
• static void Vgen_DiscreteStateSetInt (DataVariablesRep ∗dv, size_t offset)
• static void Vchk_DiscreteStateSetReal (DataVariablesRep ∗dv, size_t offset, Var_Info ∗vi)
• static void Vgen_DiscreteStateSetReal (DataVariablesRep ∗dv, size_t offset)
• static const char ∗ Var_Name (StringArray ∗sa, char ∗buf, size_t i)
• static void Var_boundchk (DataVariablesRep ∗dv, Var_rcheck ∗b)
• static void Var_iboundchk (DataVariablesRep ∗dv, Var_icheck ∗ib)
• static void flatten_num_rva (RealVectorArray ∗rva, IntArray ∗∗pia)
• static void flatten_num_rsa (RealSetArray ∗rsa, IntArray ∗∗pia)
• static void flatten_num_isa (IntSetArray ∗isa, IntArray ∗∗pia)
• static void flatten_num_rrma (RealRealMapArray ∗rrma, IntArray ∗∗pia)
• static void flatten_num_irma (IntRealMapArray ∗irma, IntArray ∗∗pia)
• static void flatten_rva (RealVectorArray ∗rva, RealVector ∗∗prv)
• static void flatten_iva (IntVectorArray ∗iva, IntVector ∗∗piv)
• static void flatten_rsm (RealSymMatrix ∗rsm, RealVector ∗∗prv)
• static void flatten_rsa (RealSetArray ∗rsa, RealVector ∗∗prv)
• static void flatten_isa (IntSetArray ∗isa, IntVector ∗∗piv)
• static void flatten_rrma_keys (RealRealMapArray ∗rrma, RealVector ∗∗prv)
• static void flatten_rrma_values (RealRealMapArray ∗rrma, RealVector ∗∗prv)
• static void flatten_irma_keys (IntRealMapArray ∗irma, IntVector ∗∗piv)
• static void flatten_irma_values (IntRealMapArray ∗irma, RealVector ∗∗prv)
• static void var_iulbl (const char ∗keyname, Values ∗val, VarLabel ∗vl)
• static Iface_mp_Rlit MP3 (failAction, recoveryFnVals, recover)
• static Iface_mp_ilit MP3 (failAction, retryLimit, retry)
• static Iface_mp_lit MP2 (analysisScheduling, master)
• static void ∗ binsearch (void ∗kw, size_t kwsize, size_t n, const char ∗key)
• static const char ∗ Begins (const String &entry_name, const char ∗s)
• static void Bad_name (String entry_name, const char ∗where)
• static void Locked_db ()
• static void Null_rep (const char ∗who)
• static void Null_rep1 (const char ∗who)
• bool set_compare (const ParamResponsePair &database_pr, const ActiveSet &search_set)
search function for a particular ParamResponsePair within a PRPList based on ActiveSet content (request vector
and derivative variables vector)
find the response of a ParamResponsePair within a PRPMultiIndexCache based on interface id, variables, and
ActiveSet search data
• static HANDLE ∗ wait_setup (std::map< pid_t, int > ∗M, size_t ∗pn)
• static int wait_for_one (size_t n, HANDLE ∗h, int req1, size_t ∗pi)
• int salinas_main (int argc, char ∗argv[ ], MPI_Comm ∗comm)
subroutine interface to SALINAS simulation code
• void find_env_token (const char ∗s0, const char ∗∗s1, const char ∗∗s2, const char ∗∗s3)
• const char ∗∗ arg_list_adjust (const char ∗∗, void ∗∗)
Utility function from legacy, "not_executable" module -- DO NOT TOUCH!
• bool contains (const bfs::path &dir_path, const std::string &file_name, boost::filesystem::path &complete_filepath)
Helper for "which" - sets complete_filepath from dir_path/file_name combo.
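The contains() helper above takes boost::filesystem paths; a sketch of the same idea using C++17 std::filesystem as a stand-in:

```cpp
#include <filesystem>
#include <string>

namespace fs = std::filesystem;

// "which"-style helper: if dir_path/file_name exists, set complete_filepath
// and report success. Mirrors the listed boost::filesystem signature, with
// std::filesystem substituted for illustration.
bool contains(const fs::path& dir_path, const std::string& file_name,
              fs::path& complete_filepath) {
  complete_filepath = dir_path / file_name;
  return fs::exists(complete_filepath);
}
```

A "which" implementation would call this once per directory in $PATH and stop at the first hit.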
Variables
• BoStream write_restart
the restart binary output stream (doesn't really need to be global anymore except for abort_handler()).
• PRPCache data_pairs
contains all parameter/response pairs.
• ParallelLibrary ∗ Dak_pl
set by ParallelLibrary, for use in CLH
• ResultsManager iterator_results_db
Global results database for iterator results.
• Graphics dakota_graphics
the global Dakota::Graphics object used by strategies, models, and approximations
• int write_precision = 10
used in ostream data output functions (restart_util.cpp overrides this default value)
• ProblemDescDB dummy_db
dummy ProblemDescDB object used for mandatory reference initialization when a real ProblemDescDB instance is unavailable
• int mc_ptr_int = 0
global pointer for ModelCenter API
• int dc_ptr_int = 0
global pointer for ModelCenter eval DB
• ProblemDescDB ∗ Dak_pddb
set by ProblemDescDB, for use in parsing
• Interface dummy_interface
dummy Interface object used for mandatory reference initialization or default virtual function return by reference when a real Interface instance is unavailable
• Model dummy_model
dummy Model object used for mandatory reference initialization or default virtual function return by reference when a real Model instance is unavailable
• Iterator dummy_iterator
dummy Iterator object used for mandatory reference initialization or default virtual function return by reference when a real Iterator instance is unavailable
• Dakota_funcs ∗ DF
• Dakota_funcs DakFuncs0
• const char ∗ FIELD_NAMES [ ]
• const int NUMBER_OF_FIELDS = 23
• static GuiKeyWord kw_1 [3]
• static GuiKeyWord kw_2 [1]
• static GuiKeyWord kw_3 [4]
• static GuiKeyWord kw_4 [1]
• static GuiKeyWord kw_5 [2]
• static GuiKeyWord kw_6 [7]
• static GuiKeyWord kw_7 [8]
• static GuiKeyWord kw_8 [12]
• static GuiKeyWord kw_9 [2]
• static GuiKeyWord kw_10 [2]
• static GuiKeyWord kw_11 [3]
• static GuiKeyWord kw_12 [2]
• static GuiKeyWord kw_13 [2]
• static GuiKeyWord kw_14 [8]
• static GuiKeyWord kw_15 [2]
• static GuiKeyWord kw_16 [1]
• static GuiKeyWord kw_17 [1]
• static GuiKeyWord kw_18 [2]
• static GuiKeyWord kw_19 [4]
• static GuiKeyWord kw_20 [2]
• static GuiKeyWord kw_21 [3]
• static GuiKeyWord kw_22 [2]
• static GuiKeyWord kw_23 [2]
• static GuiKeyWord kw_24 [3]
• static GuiKeyWord kw_25 [2]
• static GuiKeyWord kw_26 [14]
• static GuiKeyWord kw_27 [7]
• static GuiKeyWord kw_28 [2]
• static GuiKeyWord kw_29 [18]
• static GuiKeyWord kw_30 [2]
• static GuiKeyWord kw_31 [2]
• static GuiKeyWord kw_32 [5]
• static GuiKeyWord kw_33 [1]
• static GuiKeyWord kw_34 [1]
The primary namespace for DAKOTA. The Dakota namespace encapsulates the core classes of the DAKOTA
framework and prevents name clashes with third-party libraries from methods and packages. The C++ source
files defining these core classes reside in Dakota/src as ∗.[CH].
Boost Multi-Index Container for globally caching ParamResponsePairs. For a global cache, both evaluation and
interface id’s are used for tagging ParamResponsePair records.
Boost Multi-Index Container for locally queueing ParamResponsePairs. For a local queue, interface id’s are
expected to be consistent, such that evaluation id’s are sufficient for tracking particular evaluations.
convenient shell manipulator function to "flush" the shell: a global convenience function for manipulating the shell that invokes the class member flush function.
Referenced by SysCallApplicInterface::spawn_analysis_to_shell(), SysCallApplicInterface::spawn_evaluation_to_shell(), SysCallApplicInterface::spawn_input_filter_to_shell(), and SysCallApplicInterface::spawn_output_filter_to_shell().
Heartbeat function provided by not_executable.C; pass output interval in seconds, or -1 to use $DAKOTA_HEARTBEAT
Referenced by ParallelLibrary::init_mpi_comm().
12.1.3.3 int Dakota::my_cp (const char ∗ file, const struct stat ∗ sb, int ftype, int depth, void ∗ v)
my_cp is a wrapper around 'cp -r'. The extra layer allows a symlink to be used instead of a file copy.
get_npath "shuffles" the string representing the current $PATH variable definition so that ’.’ is first in the $PATH.
It then returns the new string as the result (last arg in the call).
References get_cwd().
Portability adapter for getcwd: return the string in OS-native format. TODO: change paths throughout the code to use bfs::path where possible, since Windows (and Cygwin) use wchar_t instead of char_t.
Referenced by get_npath().
Templatized abort_handler_t method that allows for convenient return from methods that otherwise have no sensible return from error clauses. Usage: MyType& method() { return abort_handler_t<MyType&>(-1); }
References abort_handler().
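The idiom can be sketched as follows, substituting an exception for the real abort_handler(), which terminates the process (the Widget type and broken_lookup() are illustrative, not Dakota code):

```cpp
#include <stdexcept>

// Sketch of abort_handler_t: declared to return T so that error clauses in
// functions returning references can "return" through the handler. The real
// version calls abort_handler(code) and never returns; here we throw so the
// behavior is observable.
template <typename T>
T abort_handler_t(int code) {
  (void)code;  // the real handler passes this to abort_handler()
  throw std::runtime_error("abort_handler_t invoked");
}

struct Widget { int id = 7; };

// Usage mirroring the documented idiom:
// MyType& method() { return abort_handler_t<MyType&>(-1); }
Widget& broken_lookup() {
  return abort_handler_t<Widget&>(-1);
}
```

Because the function is declared to return T, the compiler accepts the `return` statement even though control never actually reaches a caller.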
12.1.3.7 Real Dakota::getdist (const RealVector & x1, const RealVector & x2)
12.1.3.8 Real Dakota::mindist (const RealVector & x, const RealMatrix & xset, int except)
Returns the minimum distance between the point x and the points in the set xset (compares against all points in
xset except point "except"): if except is not needed, pass 0.
References getdist().
Referenced by getRmax().
12.1.3.9 Real Dakota::mindistindx (const RealVector & x, const RealMatrix & xset, const IntArray &
indx)
Gets the min distance between x and points in the set xset defined by the nindx values in indx.
References getdist().
Referenced by GaussProcApproximation::pointsel_add_sel().
Gets the maximum of the min distance between each point and the rest of the set.
References mindist().
Referenced by GaussProcApproximation::pointsel_add_sel().
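These distance utilities compose as described: getdist is the Euclidean distance, mindist minimizes it over a point set, and getRmax maximizes the per-point minima. A sketch using plain std::vector in place of the Teuchos RealVector/RealMatrix types:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

using Point = std::vector<double>;

// Euclidean distance between two points of equal dimension.
double getdist(const Point& x1, const Point& x2) {
  double d2 = 0.0;
  for (size_t i = 0; i < x1.size(); ++i)
    d2 += (x1[i] - x2[i]) * (x1[i] - x2[i]);
  return std::sqrt(d2);
}

// Minimum distance from x to the points in xset, skipping index `except`;
// pass a negative index when no exclusion is needed (the listed Dakota
// version uses 0 for "not needed").
double mindist(const Point& x, const std::vector<Point>& xset, int except) {
  double dmin = std::numeric_limits<double>::max();
  for (size_t i = 0; i < xset.size(); ++i)
    if (static_cast<int>(i) != except)
      dmin = std::min(dmin, getdist(x, xset[i]));
  return dmin;
}

// Maximum over each point of its min distance to the rest of the set.
double getRmax(const std::vector<Point>& xset) {
  double rmax = 0.0;
  for (size_t i = 0; i < xset.size(); ++i)
    rmax = std::max(rmax, mindist(xset[i], xset, static_cast<int>(i)));
  return rmax;
}
```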
Creates a string from the argument val using an ostringstream. This only gets used in this file and is only ever
called with ints so no error checking is in place.
Parameters:
val The value of type T to convert to a string.
Returns:
The string representation of val created using an ostringstream.
Referenced by JEGAOptimizer::LoadTheConstraints().
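A sketch of the described helper (the function name here is illustrative):

```cpp
#include <sstream>
#include <string>

// Create a string from val via an ostringstream; as noted above, the Dakota
// version is only ever called with ints and performs no error checking.
template <typename T>
std::string as_string(const T& val) {
  std::ostringstream ostr;
  ostr << val;
  return ostr.str();
}
```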
Function to encapsulate the DAKOTA object instantiations for mode 2: direct Data class instantiation. Rather
than parsing from an input file, this function populates Data class objects directly using a minimal specification
and relies on constructor defaults and post-processing in post_process() to fill in the rest.
References ProblemDescDB::broadcast(), ProblemDescDB::check_input(), DataInterface::dataIfaceRep, DataMethod::dataMethodRep, DataResponses::dataRespRep, DataVariables::dataVarsRep, DataResponsesRep::gradientType, DataResponsesRep::hessianType, ProblemDescDB::insert_node(), ProblemDescDB::lock(), DataMethodRep::methodName, model_interface_plugins(), ParallelLibrary::mpirun_flag(), DataVariablesRep::numContinuousDesVars, DataResponsesRep::numNonlinearIneqConstraints, DataResponsesRep::numObjectiveFunctions, ProblemDescDB::post_process(), Strategy::run_strategy(), and ParallelLibrary::world_rank().
Referenced by main().
12.1.3.16 bool Dakota::set_compare (const ParamResponsePair & database_pr, const ActiveSet &
search_set) [inline]
search function for a particular ParamResponsePair within a PRPList based on ActiveSet content (request vector and derivative variables vector): a global function to compare the ActiveSet of a particular database_pr (presumed to be in the global history list) with a passed-in ActiveSet (search_set).
References ParamResponsePair::active_set(), ActiveSet::derivative_vector(), and ActiveSet::request_vector().
Referenced by lookup_by_val().
search function for a particular ParamResponsePair within a PRPMultiIndex: a global function to compare the interface id and variables of a particular database_pr (presumed to be in the global history list) with a passed-in key of interface id and variables provided by search_pr.
References binary_equal_to(), ParamResponsePair::interface_id(), and ParamResponsePair::prp_parameters().
Referenced by partial_prp_equality::operator()().
find a ParamResponsePair based on the interface id, variables, and ActiveSet search data within search_pr. Lookup
occurs in two steps: (1) PRPMultiIndexCache lookup based on strict equality in interface id and variables, and
(2) set_compare() post-processing based on ActiveSet subset logic.
References ParamResponsePair::active_set(), and set_compare().
Referenced by ApplicationInterface::duplication_detect(), Model::estimate_derivatives(), SurrBasedLocalMinimizer::find_center_approx(), Optimizer::local_objective_recast_retrieve(), lookup_by_val(), SNLLLeastSq::post_run(), SurrBasedMinimizer::print_results(), Optimizer::print_results(), LeastSq::print_results(), DiscrepancyCorrection::search_db(), and NonDLocalReliability::update_mpp_search_data().
find a ParamResponsePair based on the interface id, variables, and ActiveSet search data within search_pr. Lookup
occurs in two steps: (1) PRPMultiIndexQueue lookup based on strict equality in interface id and variables, and
(2) set_compare() post-processing based on ActiveSet subset logic.
References ParamResponsePair::active_set(), and set_compare().
print a restart file in tabular format. Usage: "dakota_restart_util to_pdb dakota.rst dakota.pdb" or "dakota_restart_util to_tabular dakota.rst dakota.txt"
Unrolls all data associated with a particular tag for all evaluations and then writes this data in a tabular format
(e.g., to a PDB database or MATLAB/TECPLOT data file).
References Variables::continuous_variables(), data_pairs, Variables::discrete_int_variables(),
Variables::discrete_real_variables(), ParamResponsePair::eval_id(), and Response::function_values().
Referenced by main().
read a restart file (neutral file format). Usage: "dakota_restart_util from_neutral dakota.neu dakota.rst". Reads evaluations from a neutral file. This is used for translating binary files between platforms.
References ParamResponsePair::read_annotated(), and write_restart.
Referenced by main().
repair a restart file by removing corrupted evaluations. Usage: "dakota_restart_util remove 0.0 dakota_old.rst dakota_new.rst" or "dakota_restart_util remove_ids 2 7 13 dakota_old.rst dakota_new.rst"
Repairs a restart file by removing corrupted evaluations. The identifier for evaluation removal can be either a
double precision number (all evaluations having a matching response function value are removed) or a list of
integers (all evaluations with matching evaluation ids are removed).
References Response::active_set_request_vector(), contains(), ParamResponsePair::eval_id(),
Response::function_values(), ParamResponsePair::prp_response(), and write_restart.
Referenced by main().
concatenate multiple restart files. Usage: "dakota_restart_util cat dakota_1.rst ... dakota_n.rst dakota_new.rst"
Combines multiple restart files into a single restart database.
References write_restart.
Referenced by main().
Initial value:
{ 0x0, 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8, 0x9, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf,
0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f,
0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f,
0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f,
0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f,
0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, 0x58, 0x59, 0x5a, 0x5b, '/', 0x5d, 0x5e, 0x5f,
0x60, 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O',
'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', 0x7b, 0x7c, 0x7d, 0x7e, 0x7f,
0x80, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87, 0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f,
0x90, 0x91, 0x92, 0x93, 0x94, 0x95, 0x96, 0x97, 0x98, 0x99, 0x9a, 0x9b, 0x9c, 0x9d, 0x9e, 0x9f,
0xa0, 0xa1, 0xa2, 0xa3, 0xa4, 0xa5, 0xa6, 0xa7, 0xa8, 0xa9, 0xaa, 0xab, 0xac, 0xad, 0xae, 0xaf,
0xb0, 0xb1, 0xb2, 0xb3, 0xb4, 0xb5, 0xb6, 0xb7, 0xb8, 0xb9, 0xba, 0xbb, 0xbc, 0xbd, 0xbe, 0xbf,
0xc0, 0xc1, 0xc2, 0xc3, 0xc4, 0xc5, 0xc6, 0xc7, 0xc8, 0xc9, 0xca, 0xcb, 0xcc, 0xcd, 0xce, 0xcf,
0xd0, 0xd1, 0xd2, 0xd3, 0xd4, 0xd5, 0xd6, 0xd7, 0xd8, 0xd9, 0xda, 0xdb, 0xdc, 0xdd, 0xde, 0xdf,
0xe0, 0xe1, 0xe2, 0xe3, 0xe4, 0xe5, 0xe6, 0xe7, 0xe8, 0xe9, 0xea, 0xeb, 0xec, 0xed, 0xee, 0xef,
0xf0, 0xf1, 0xf2, 0xf3, 0xf4, 0xf5, 0xf6, 0xf7, 0xf8, 0xf9, 0xfa, 0xfb, 0xfc, 0xfd, 0xfe, 0xff }
Initial value:
{
fprintf,
abort_handler,
dlsolver_option,
continuous_lower_bounds1,
continuous_upper_bounds1,
nonlinear_ineq_constraint_lower_bounds1,
nonlinear_ineq_constraint_upper_bounds1,
nonlinear_eq_constraint_targets1,
linear_ineq_constraint_lower_bounds1,
linear_ineq_constraint_upper_bounds1,
linear_eq_constraint_targets1,
linear_ineq_constraint_coeffs1,
linear_eq_constraint_coeffs1,
ComputeResponses1,
GetFuncs1,
GetGrads1,
GetContVars1,
SetBestContVars1,
SetBestDiscVars1,
SetBestRespFns1,
Get_Real1,
Get_Int1,
Get_Bool1
}
Initial value:
{
{"active_set_vector",8,0,1,0,2151},
{"evaluation_cache",8,0,2,0,2153},
{"restart_file",8,0,3,0,2155}
}
Initial value:
{
{"processors_per_analysis",0x19,0,1,0,2127,0,0.,0.,0.,0,"{Number of processors per analysis} InterfCommands.html#InterfApplicDF"}
}
Initial value:
{
{"abort",8,0,1,1,2141,0,0.,0.,0.,0,"@[CHOOSE failure mitigation]"},
{"continuation",8,0,1,1,2147},
{"recover",14,0,1,1,2145},
{"retry",9,0,1,1,2143}
}
Initial value:
{
{"numpy",8,0,1,0,2133,0,0.,0.,0.,0,"{Python NumPy dataflow} InterfCommands.html#InterfApplicMSP"}
}
Initial value:
{
{"copy",8,0,1,0,2121,0,0.,0.,0.,0,"{Copy template files} InterfCommands.html#InterfApplicF"},
{"replace",8,0,2,0,2123,0,0.,0.,0.,0,"{Replace existing files} InterfCommands.html#InterfApplicF"}
}
Initial value:
{
{"dir_save",0,0,3,0,2114},
{"dir_tag",0,0,2,0,2112},
{"directory_save",8,0,3,0,2115,0,0.,0.,0.,0,"{Save work directory} InterfCommands.html#InterfApplicF"},
{"directory_tag",8,0,2,0,2113,0,0.,0.,0.,0,"{Tag work directory} InterfCommands.html#InterfApplicF"},
{"named",11,0,1,0,2111,0,0.,0.,0.,0,"{Name of work directory} InterfCommands.html#InterfApplicF"},
{"template_directory",11,2,4,0,2117,kw_5,0.,0.,0.,0,"{Template directory} InterfCommands.html#InterfApplicF"},
{"template_files",15,2,4,0,2119,kw_5,0.,0.,0.,0,"{Template files} InterfCommands.html#InterfApplicF"}
}
Initial value:
{
{"allow_existing_results",8,0,3,0,2099,0,0.,0.,0.,0,"{Allow existing results files} InterfCommands.html#InterfApplicF"},
{"aprepro",8,0,5,0,2103,0,0.,0.,0.,0,"{Aprepro parameters file format} InterfCommands.html#InterfApplicF"},
{"file_save",8,0,7,0,2107,0,0.,0.,0.,0,"{Parameters and results file saving} InterfCommands.html#InterfApplicF"},
{"file_tag",8,0,6,0,2105,0,0.,0.,0.,0,"{Parameters and results file tagging} InterfCommands.html#InterfApplicF"},
{"parameters_file",11,0,1,0,2095,0,0.,0.,0.,0,"{Parameters file name} InterfCommands.html#InterfApplicF"},
{"results_file",11,0,2,0,2097,0,0.,0.,0.,0,"{Results file name} InterfCommands.html#InterfApplicF"},
{"verbatim",8,0,4,0,2101,0,0.,0.,0.,0,"{Verbatim driver/filter invocation syntax} InterfCommands.html#InterfApplicF"},
{"work_directory",8,7,8,0,2109,kw_6,0.,0.,0.,0,"{Create work directory} InterfCommands.html#InterfApplicF"}
}
Initial value:
{
{"analysis_components",15,0,1,0,2085,0,0.,0.,0.,0,"{Additional identifiers for use by the analysis_drivers} InterfCommands.html#InterfApplic"},
{"deactivate",8,3,6,0,2149,kw_1,0.,0.,0.,0,"{Feature deactivation} InterfCommands.html#InterfApplic"},
{"direct",8,1,4,1,2125,kw_2,0.,0.,0.,0,"[CHOOSE interface type]{Direct function interface } InterfCommands.html#InterfApplicDF"},
{"failure_capture",8,4,5,0,2139,kw_3,0.,0.,0.,0,"{Failure capturing} InterfCommands.html#InterfApplic"},
{"fork",8,8,4,1,2093,kw_7,0.,0.,0.,0,"@{Fork interface } InterfCommands.html#InterfApplicF"},
{"grid",8,0,4,1,2137,0,0.,0.,0.,0,"{Grid interface } InterfCommands.html#InterfApplicG"},
{"input_filter",11,0,2,0,2087,0,0.,0.,0.,0,"{Input filter} InterfCommands.html#InterfApplic"},
{"matlab",8,0,4,1,2129,0,0.,0.,0.,0,"{Matlab interface } InterfCommands.html#InterfApplicMSP"},
{"output_filter",11,0,3,0,2089,0,0.,0.,0.,0,"{Output filter} InterfCommands.html#InterfApplic"},
{"python",8,1,4,1,2131,kw_4,0.,0.,0.,0,"{Python interface } InterfCommands.html#InterfApplicMSP"},
{"scilab",8,0,4,1,2135,0,0.,0.,0.,0,"{Scilab interface } InterfCommands.html#InterfApplicMSP"},
{"system",8,8,4,1,2091,kw_7}
}
Initial value:
{
{"master",8,0,1,1,2185},
{"peer",8,0,1,1,2187}
}
Initial value:
{
{"dynamic",8,0,1,1,2163},
{"static",8,0,1,1,2165}
}
Initial value:
{
{"analysis_concurrency",0x19,0,3,0,2167,0,0.,0.,0.,0,"{Asynchronous analysis concurrency} InterfCommands.html#InterfIndControl"},
{"evaluation_concurrency",0x19,0,1,0,2159,0,0.,0.,0.,0,"{Asynchronous evaluation concurrency} InterfCommands.html#InterfIndControl"},
{"local_evaluation_scheduling",8,2,2,0,2161,kw_10,0.,0.,0.,0,"{Local evaluation scheduling} InterfCommands.html#InterfIndControl"}
}
Initial value:
{
{"dynamic",8,0,1,1,2177},
{"static",8,0,1,1,2179}
}
Initial value:
{
{"master",8,0,1,1,2173},
{"peer",8,2,1,1,2175,kw_12,0.,0.,0.,0,"{Peer scheduling of evaluations} InterfCommands.html#InterfIndControl"}
}
Initial value:
{
{"algebraic_mappings",11,0,2,0,2081,0,0.,0.,0.,0,"{Algebraic mappings file} InterfCommands.html#InterfAlgebraic"},
{"analysis_drivers",15,12,3,0,2083,kw_8,0.,0.,0.,0,"{Analysis drivers} InterfCommands.html#InterfApplic"},
{"analysis_scheduling",8,2,8,0,2183,kw_9,0.,0.,0.,0,"{Message passing configuration for scheduling of analyses} InterfCommands.html#InterfIndControl"},
{"analysis_servers",0x19,0,7,0,2181,0,0.,0.,0.,0,"{Number of analysis servers} InterfCommands.html#InterfIndControl"},
{"asynchronous",8,3,4,0,2157,kw_11,0.,0.,0.,0,"{Asynchronous interface usage} InterfCommands.html#InterfIndControl"},
{"evaluation_scheduling",8,2,6,0,2171,kw_13,0.,0.,0.,0,"{Message passing configuration for scheduling of evaluations} InterfCommands.html#InterfIndControl"},
{"evaluation_servers",0x19,0,5,0,2169,0,0.,0.,0.,0,"{Number of evaluation servers} InterfCommands.html#InterfIndControl"},
{"id_interface",11,0,1,0,2079,0,0.,0.,0.,0,"{Interface set identifier} InterfCommands.html#InterfIndControl"}
}
Initial value:
{
{"complementary",8,0,1,1,1079},
{"cumulative",8,0,1,1,1077}
}
Initial value:
{
{"num_gen_reliability_levels",13,0,1,0,1087,0,0.,0.,0.,0,"{Number of generalized reliability levels} MethodCommands.html#MethodNonD"}
}
Initial value:
{
{"num_probability_levels",13,0,1,0,1083,0,0.,0.,0.,0,"{Number of probability levels} MethodCommands.html#MethodNonD"}
}
Initial value:
{
{"mt19937",8,0,1,1,1091},
{"rnum2",8,0,1,1,1093}
}
Initial value:
{
{"constant_liar",8,0,1,1,971},
{"distance_penalty",8,0,1,1,967},
{"naive",8,0,1,1,965},
{"topology",8,0,1,1,969}
}
Initial value:
{
{"annotated",8,0,1,0,983},
{"freeform",8,0,1,0,985}
}
Initial value:
{
{"distance",8,0,1,1,959},
{"gradient",8,0,1,1,961},
{"predicted_variance",8,0,1,1,957}
}
Initial value:
{
{"annotated",8,0,1,0,977},
{"freeform",8,0,1,0,979}
}
Initial value:
{
{"parallel",8,0,1,1,1001},
{"series",8,0,1,1,999}
}
Initial value:
{
{"gen_reliabilities",8,0,1,1,995},
{"probabilities",8,0,1,1,993},
{"system",8,2,2,0,997,kw_23}
}
Initial value:
{
{"compute",8,3,2,0,991,kw_24},
{"num_response_levels",13,0,1,0,989}
}
Initial value:
{
{"batch_selection",8,4,3,0,963,kw_19,0.,0.,0.,0,"{Batch selection strategy} MethodCommands.html#MethodNonDAdaptive"},
{"batch_size",9,0,4,0,973,0,0.,0.,0.,0,"{Batch size (number of points added each iteration)} MethodCommands.html#MethodNonDAdaptive"},
{"distribution",8,2,11,0,1075,kw_15,0.,0.,0.,0,"{Distribution type} MethodCommands.html#MethodNonD"},
{"emulator_samples",9,0,1,0,953,0,0.,0.,0.,0,"{Number of samples on the emulator to generate a new true sample each iteration} MethodCommands.html#MethodNonDAdaptive"},
{"export_points_file",11,2,6,0,981,kw_20,0.,0.,0.,0,"{File name for exporting approximation-based samples from evaluating the GP} MethodCommands.html#MethodNonDAdaptive"},
{"fitness_metric",8,3,2,0,955,kw_21,0.,0.,0.,0,"{Fitness metric} MethodCommands.html#MethodNonDAdaptive"},
{"gen_reliability_levels",14,1,13,0,1085,kw_16,0.,0.,0.,0,"{Generalized reliability levels} MethodCommands.html#MethodNonD"},
{"import_points_file",11,2,5,0,975,kw_22,0.,0.,0.,0,"{File name for points to be imported as the basis for the initial GP} MethodCommands.html#MethodNonDAdaptive"},
{"misc_options",15,0,8,0,1003},
{"probability_levels",14,1,12,0,1081,kw_17,0.,0.,0.,0,"{Probability levels} MethodCommands.html#MethodNonD"},
{"response_levels",14,2,7,0,987,kw_25},
{"rng",8,2,14,0,1089,kw_18,0.,0.,0.,0,"{Random number generator} MethodCommands.html#MethodNonDMC"},
{"samples",9,0,10,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,9,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"}
}
Initial value:
{
{"merit1",8,0,1,1,271,0,0.,0.,0.,0,"[CHOOSE merit function]"},
{"merit1_smooth",8,0,1,1,273},
{"merit2",8,0,1,1,275},
{"merit2_smooth",8,0,1,1,277,0,0.,0.,0.,0,"@"},
{"merit2_squared",8,0,1,1,279},
{"merit_max",8,0,1,1,267},
{"merit_max_smooth",8,0,1,1,269}
}
Initial value:
{
{"blocking",8,0,1,1,261,0,0.,0.,0.,0,"[CHOOSE synchronization]"},
{"nonblocking",8,0,1,1,263,0,0.,0.,0.,0,"@"}
}
Initial value:
{
{"constraint_penalty",10,0,7,0,281,0,0.,0.,0.,0,"{Constraint penalty} MethodCommands.html#MethodAPPSDC"},
{"contraction_factor",10,0,2,0,253,0,0.,0.,0.,0,"{Pattern contraction factor} MethodCommands.html#MethodAPPSDC"},
{"initial_delta",10,0,1,0,251,0,0.,0.,0.,0,"{Initial offset value} MethodCommands.html#MethodAPPSDC"},
{"linear_equality_constraint_matrix",14,0,14,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,16,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,17,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,15,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,9,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,10,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,12,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,13,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,11,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"},
{"merit_function",8,7,6,0,265,kw_27,0.,0.,0.,0,"{Merit function} MethodCommands.html#MethodAPPSDC"},
{"smoothing_factor",10,0,8,0,283,0,0.,0.,0.,0,"{Smoothing factor} MethodCommands.html#MethodAPPSDC"},
{"solution_accuracy",2,0,4,0,256},
{"solution_target",10,0,4,0,257,0,0.,0.,0.,0,"{Solution target} MethodCommands.html#MethodAPPSDC"},
{"synchronization",8,2,5,0,259,kw_28,0.,0.,0.,0,"{Evaluation synchronization} MethodCommands.html#MethodAPPSDC"},
{"threshold_delta",10,0,3,0,255,0,0.,0.,0.,0,"{Threshold for offset values} MethodCommands.html#MethodAPPSDC"}
}
Initial value:
{
{"annotated",8,0,1,0,1231},
{"freeform",8,0,1,0,1233}
}
Initial value:
{
{"annotated",8,0,1,0,1225},
{"freeform",8,0,1,0,1227}
}
Initial value:
{
{"dakota",8,0,1,1,1219},
{"emulator_samples",9,0,2,0,1221},
{"export_points_file",11,2,4,0,1229,kw_30},
{"import_points_file",11,2,3,0,1223,kw_31},
{"surfpack",8,0,1,1,1217}
}
Initial value:
{
{"sparse_grid_level",13,0,1,0,1237}
}
Initial value:
{
{"sparse_grid_level",13,0,1,0,1241}
}
Initial value:
{
{"gaussian_process",8,5,1,1,1215,kw_32},
{"kriging",0,5,1,1,1214,kw_32},
{"pce",8,1,1,1,1235,kw_33},
{"sc",8,1,1,1,1239,kw_34}
}
Initial value:
{
{"chains",0x29,0,1,0,1203,0,3.,0.,0.,0,"{Number of chains} MethodCommands.html#MethodNonDBayesCalib"},
{"crossover_chain_pairs",0x29,0,3,0,1207,0,0.,0.,0.,0,"{Number of chain pairs used in crossover } MethodCommands.html#MethodNonDBayesCalib"},
{"emulator",8,4,6,0,1213,kw_35},
{"gr_threshold",0x1a,0,4,0,1209,0,0.,0.,0.,0,"{Gelman-Rubin Threshold for convergence} MethodCommands.html#MethodNonDBayesCalib"},
{"jump_step",0x29,0,5,0,1211,0,0.,0.,0.,0,"{Jump-Step } MethodCommands.html#MethodNonDBayesCalib"},
{"num_cr",0x29,0,2,0,1205,0,1.,0.,0.,0,"{Number of candidate points used in burn-in adaptation} MethodCommands.html#MethodNonDBayesCalib"}
}
Initial value:
{
{"adaptive",8,0,1,1,1191},
{"hastings",8,0,1,1,1189}
}
Initial value:
{
{"delayed",8,0,1,1,1185},
{"standard",8,0,1,1,1183}
}
Initial value:
{
{"mt19937",8,0,1,1,1195},
{"rnum2",8,0,1,1,1197}
}
Initial value:
{
{"annotated",8,0,1,0,1177},
{"freeform",8,0,1,0,1179}
}
Initial value:
{
{"annotated",8,0,1,0,1171},
{"freeform",8,0,1,0,1173}
}
Initial value:
{
{"emulator_samples",9,0,1,1,1167},
{"export_points_file",11,2,3,0,1175,kw_40},
{"import_points_file",11,2,2,0,1169,kw_41},
{"metropolis",8,2,5,0,1187,kw_37,0.,0.,0.,0,"{Metropolis type for the MCMC algorithm } MethodCommands.html#MethodNonDBayesCalib"},
{"proposal_covariance_scale",14,0,7,0,1199,0,0.,0.,0.,0,"{Proposal covariance scaling} MethodCommands.html#MethodNonDBayesCalib"},
{"rejection",8,2,4,0,1181,kw_38},
{"rng",8,2,6,0,1193,kw_39,0.,0.,0.,0,"{Random seed generator} MethodCommands.html#MethodNonDBayesCalib"}
}
Initial value:
{
{"annotated",8,0,1,0,1153},
{"freeform",8,0,1,0,1155}
}
Initial value:
{
{"annotated",8,0,1,0,1147},
{"freeform",8,0,1,0,1149}
}
Initial value:
{
{"dakota",8,0,1,1,1141},
{"emulator_samples",9,0,2,0,1143},
{"export_points_file",11,2,4,0,1151,kw_43},
{"import_points_file",11,2,3,0,1145,kw_44},
{"surfpack",8,0,1,1,1139}
}
Initial value:
{
{"sparse_grid_level",13,0,1,0,1159}
}
Initial value:
{
{"sparse_grid_level",13,0,1,0,1163}
}
Initial value:
{
{"gaussian_process",8,5,1,1,1137,kw_45},
{"kriging",0,5,1,1,1136,kw_45},
{"pce",8,1,1,1,1157,kw_46},
{"sc",8,1,1,1,1161,kw_47}
}
Initial value:
{
{"emulator",8,4,1,0,1135,kw_48},
{"metropolis",8,2,3,0,1187,kw_37,0.,0.,0.,0,"{Metropolis type for the MCMC algorithm } MethodCommands.html#MethodNonDBayesCalib"},
{"proposal_covariance_scale",14,0,5,0,1199,0,0.,0.,0.,0,"{Proposal covariance scaling} MethodCommands.html#MethodNonDBayesCalib"},
{"rejection",8,2,2,0,1181,kw_38},
{"rng",8,2,4,0,1193,kw_39,0.,0.,0.,0,"{Random seed generator} MethodCommands.html#MethodNonDBayesCalib"}
}
Initial value:
{
{"calibrate_sigma",8,0,4,0,1247,0,0.,0.,0.,0,"{Calibrate sigma flag} MethodCommands.html#MethodNonDBayesCalib"},
{"dream",8,6,1,1,1201,kw_36},
{"gpmsa",8,7,1,1,1165,kw_42},
{"likelihood_scale",10,0,3,0,1245,0,0.,0.,0.,0,"{Likelihood scale factor} MethodCommands.html#MethodNonDBayesCalib"},
{"queso",8,5,1,1,1133,kw_49},
{"samples",9,0,6,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,5,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"},
{"use_derivatives",8,0,2,0,1243}
}
Initial value:
{
{"deltas_per_variable",5,0,2,2,1526},
{"step_vector",14,0,1,1,1525,0,0.,0.,0.,0,"{Step vector} MethodCommands.html#MethodPSCPS"},
{"steps_per_variable",13,0,2,2,1527,0,0.,0.,0.,0,"{Number of steps per variable} MethodCommands.html#MethodPSCPS"}
}
Initial value:
{
{"beta_solver_name",11,0,1,1,569},
{"misc_options",15,0,5,0,577,0,0.,0.,0.,0,"{Specify miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"seed",0x19,0,3,0,573,0,0.,0.,0.,0,"{Random seed for stochastic pattern search} MethodCommands.html#MethodSCOLIBPS"},
{"show_misc_options",8,0,4,0,575,0,0.,0.,0.,0,"{Show miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"solution_accuracy",2,0,2,0,570},
{"solution_target",10,0,2,0,571,0,0.,0.,0.,0,"{Desired solution target} MethodCommands.html#MethodSCOLIBDC"}
}
Initial value:
{
{"initial_delta",10,0,5,0,487,0,0.,0.,0.,0,"{Initial offset value} MethodCommands.html#MethodSCOLIBPS"},
{"misc_options",15,0,4,0,577,0,0.,0.,0.,0,"{Specify miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"seed",0x19,0,2,0,573,0,0.,0.,0.,0,"{Random seed for stochastic pattern search} MethodCommands.html#MethodSCOLIBPS"},
{"show_misc_options",8,0,3,0,575,0,0.,0.,0.,0,"{Show miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"solution_accuracy",2,0,1,0,570},
{"solution_target",10,0,1,0,571,0,0.,0.,0.,0,"{Desired solution target} MethodCommands.html#MethodSCOLIBDC"},
{"threshold_delta",10,0,6,0,489,0,0.,0.,0.,0,"{Threshold for offset values} MethodCommands.html#MethodSCOLIBPS"}
}
Initial value:
{
{"all_dimensions",8,0,1,1,497},
{"major_dimension",8,0,1,1,495}
}
Initial value:
{
{"constraint_penalty",10,0,6,0,507,0,0.,0.,0.,0,"{Constraint penalty} MethodCommands.html#MethodSCOLIBDIR"},
{"division",8,2,1,0,493,kw_54,0.,0.,0.,0,"{Box subdivision approach} MethodCommands.html#MethodSCOLIBDIR"},
{"global_balance_parameter",10,0,2,0,499,0,0.,0.,0.,0,"{Global search balancing parameter} MethodCommands.html#MethodSCOLIBDIR"},
{"local_balance_parameter",10,0,3,0,501,0,0.,0.,0.,0,"{Local search balancing parameter} MethodCommands.html#MethodSCOLIBDIR"},
{"max_boxsize_limit",10,0,4,0,503,0,0.,0.,0.,0,"{Maximum boxsize limit} MethodCommands.html#MethodSCOLIBDIR"},
{"min_boxsize_limit",10,0,5,0,505,0,0.,0.,0.,0,"{Minimum boxsize limit} MethodCommands.html#MethodSCOLIBDIR"},
{"misc_options",15,0,10,0,577,0,0.,0.,0.,0,"{Specify miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"seed",0x19,0,8,0,573,0,0.,0.,0.,0,"{Random seed for stochastic pattern search} MethodCommands.html#MethodSCOLIBPS"},
{"show_misc_options",8,0,9,0,575,0,0.,0.,0.,0,"{Show miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"solution_accuracy",2,0,7,0,570},
{"solution_target",10,0,7,0,571,0,0.,0.,0.,0,"{Desired solution target} MethodCommands.html#MethodSCOLIBDC"}
}
Initial value:
{
{"blend",8,0,1,1,543},
{"two_point",8,0,1,1,541},
{"uniform",8,0,1,1,545}
}
Initial value:
{
{"linear_rank",8,0,1,1,523},
{"merit_function",8,0,1,1,525}
}
Initial value:
{
{"flat_file",11,0,1,1,519},
{"simple_random",8,0,1,1,515},
{"unique_random",8,0,1,1,517}
}
Initial value:
{
{"mutation_range",9,0,2,0,561,0,0.,0.,0.,0,"{Mutation range} MethodCommands.html#MethodSCOLIBEA"},
{"mutation_scale",10,0,1,0,559,0,0.,0.,0.,0,"{Mutation scale} MethodCommands.html#MethodSCOLIBEA"}
}
Initial value:
{
{"non_adaptive",8,0,2,0,563,0,0.,0.,0.,0,"{Non-adaptive mutation flag} MethodCommands.html#MethodSCOLIBEA"},
{"offset_cauchy",8,2,1,1,555,kw_59},
{"offset_normal",8,2,1,1,553,kw_59},
{"offset_uniform",8,2,1,1,557,kw_59},
{"replace_uniform",8,0,1,1,551}
}
Initial value:
{
{"chc",9,0,1,1,531,0,0.,0.,0.,0,"{CHC replacement type} MethodCommands.html#MethodSCOLIBEA"},
{"elitist",9,0,1,1,533,0,0.,0.,0.,0,"{Elitist replacement type} MethodCommands.html#MethodSCOLIBEA"},
{"new_solutions_generated",9,0,2,0,535,0,0.,0.,0.,0,"{New solutions generated} MethodCommands.html#MethodSCOLIBEA"},
{"random",9,0,1,1,529,0,0.,0.,0.,0,"{Random replacement type} MethodCommands.html#MethodSCOLIBEA"}
}
Initial value:
{
{"constraint_penalty",10,0,9,0,565},
{"crossover_rate",10,0,5,0,537,0,0.,0.,0.,0,"{Crossover rate} MethodCommands.html#MethodSCOLIBEA"},
{"crossover_type",8,3,6,0,539,kw_56,0.,0.,0.,0,"{Crossover type} MethodCommands.html#MethodSCOLIBEA"},
{"fitness_type",8,2,3,0,521,kw_57,0.,0.,0.,0,"{Fitness type} MethodCommands.html#MethodSCOLIBEA"},
{"initialization_type",8,3,2,0,513,kw_58,0.,0.,0.,0,"{Initialization type} MethodCommands.html#MethodSCOLIBEA"},
{"misc_options",15,0,13,0,577,0,0.,0.,0.,0,"{Specify miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"mutation_rate",10,0,7,0,547,0,0.,0.,0.,0,"{Mutation rate} MethodCommands.html#MethodSCOLIBEA"},
{"mutation_type",8,5,8,0,549,kw_60,0.,0.,0.,0,"{Mutation type} MethodCommands.html#MethodSCOLIBEA"},
{"population_size",0x19,0,1,0,511,0,0.,0.,0.,0,"{Number of population members} MethodCommands.html#MethodSCOLIBEA"},
{"replacement_type",8,4,4,0,527,kw_61,0.,0.,0.,0,"{Replacement type} MethodCommands.html#MethodSCOLIBEA"},
{"seed",0x19,0,11,0,573,0,0.,0.,0.,0,"{Random seed for stochastic pattern search} MethodCommands.html#MethodSCOLIBPS"},
{"show_misc_options",8,0,12,0,575,0,0.,0.,0.,0,"{Show miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"solution_accuracy",2,0,10,0,570},
{"solution_target",10,0,10,0,571,0,0.,0.,0.,0,"{Desired solution target} MethodCommands.html#MethodSCOLIBDC"}
}
Initial value:
{
{"adaptive_pattern",8,0,1,1,461},
{"basic_pattern",8,0,1,1,463},
{"multi_step",8,0,1,1,459}
}
Initial value:
{
{"coordinate",8,0,1,1,449},
{"simplex",8,0,1,1,451}
}
Initial value:
{
{"blocking",8,0,1,1,467},
{"nonblocking",8,0,1,1,469}
}
Initial value:
{
{"constant_penalty",8,0,1,0,441,0,0.,0.,0.,0,"{Control of dynamic penalty} MethodCommands.html#MethodSCOLIBPS"},
{"constraint_penalty",10,0,16,0,483,0,0.,0.,0.,0,"{Constraint penalty} MethodCommands.html#MethodSCOLIBPS"},
{"contraction_factor",10,0,15,0,481,0,0.,0.,0.,0,"{Pattern contraction factor} MethodCommands.html#MethodSCOLIBPS"},
{"expand_after_success",9,0,3,0,445,0,0.,0.,0.,0,"{Number of consecutive improvements before expansion} MethodCommands.html#MethodSCOLIBPS"},
{"exploratory_moves",8,3,7,0,457,kw_63,0.,0.,0.,0,"{Exploratory moves selection} MethodCommands.html#MethodSCOLIBPS"},
{"initial_delta",10,0,13,0,487,0,0.,0.,0.,0,"{Initial offset value} MethodCommands.html#MethodSCOLIBPS"},
{"misc_options",15,0,12,0,577,0,0.,0.,0.,0,"{Specify miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"no_expansion",8,0,2,0,443,0,0.,0.,0.,0,"{No expansion flag} MethodCommands.html#MethodSCOLIBPS"},
{"pattern_basis",8,2,4,0,447,kw_64,0.,0.,0.,0,"{Pattern basis selection} MethodCommands.html#MethodSCOLIBPS"},
{"seed",0x19,0,10,0,573,0,0.,0.,0.,0,"{Random seed for stochastic pattern search} MethodCommands.html#MethodSCOLIBPS"},
{"show_misc_options",8,0,11,0,575,0,0.,0.,0.,0,"{Show miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"solution_accuracy",2,0,9,0,570},
{"solution_target",10,0,9,0,571,0,0.,0.,0.,0,"{Desired solution target} MethodCommands.html#MethodSCOLIBDC"},
{"stochastic",8,0,5,0,453,0,0.,0.,0.,0,"{Stochastic pattern search} MethodCommands.html#MethodSCOLIBPS"},
{"synchronization",8,2,8,0,465,kw_65,0.,0.,0.,0,"{Evaluation synchronization} MethodCommands.html#MethodSCOLIBPS"},
{"threshold_delta",10,0,14,0,489,0,0.,0.,0.,0,"{Threshold for offset values} MethodCommands.html#MethodSCOLIBPS"},
{"total_pattern_size",9,0,6,0,455,0,0.,0.,0.,0,"{Total number of points in pattern} MethodCommands.html#MethodSCOLIBPS"}
}
Initial value:
{
{"constant_penalty",8,0,4,0,479,0,0.,0.,0.,0,"{Control of dynamic penalty} MethodCommands.html#MethodSCOLIBSW"},
{"constraint_penalty",10,0,12,0,483,0,0.,0.,0.,0,"{Constraint penalty} MethodCommands.html#MethodSCOLIBPS"},
{"contract_after_failure",9,0,1,0,473,0,0.,0.,0.,0,"{Number of consecutive failures before contraction} MethodCommands.html#MethodSCOLIBSW"},
{"contraction_factor",10,0,11,0,481,0,0.,0.,0.,0,"{Pattern contraction factor} MethodCommands.html#MethodSCOLIBPS"},
{"expand_after_success",9,0,3,0,477,0,0.,0.,0.,0,"{Number of consecutive improvements before expansion} MethodCommands.html#MethodSCOLIBSW"},
{"initial_delta",10,0,9,0,487,0,0.,0.,0.,0,"{Initial offset value} MethodCommands.html#MethodSCOLIBPS"},
{"misc_options",15,0,8,0,577,0,0.,0.,0.,0,"{Specify miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"no_expansion",8,0,2,0,475,0,0.,0.,0.,0,"{No expansion flag} MethodCommands.html#MethodSCOLIBSW"},
{"seed",0x19,0,6,0,573,0,0.,0.,0.,0,"{Random seed for stochastic pattern search} MethodCommands.html#MethodSCOLIBPS"},
{"show_misc_options",8,0,7,0,575,0,0.,0.,0.,0,"{Show miscellaneous options} MethodCommands.html#MethodSCOLIBDC"},
{"solution_accuracy",2,0,5,0,570},
{"solution_target",10,0,5,0,571,0,0.,0.,0.,0,"{Desired solution target} MethodCommands.html#MethodSCOLIBDC"},
{"threshold_delta",10,0,10,0,489,0,0.,0.,0.,0,"{Threshold for offset values} MethodCommands.html#MethodSCOLIBPS"}
}
Initial value:
{
{"frcg",8,0,1,1,185},
{"linear_equality_constraint_matrix",14,0,7,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,9,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,10,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,8,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,2,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,3,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,5,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,6,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,4,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"},
{"mfd",8,0,1,1,187}
}
Initial value:
{
{"linear_equality_constraint_matrix",14,0,7,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,9,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,10,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,8,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,2,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,3,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,5,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,6,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,4,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"}
}
Initial value:
{
{"drop_tolerance",10,0,1,0,1271}
}
Initial value:
{
{"box_behnken",8,0,1,1,1261,0,0.,0.,0.,0,"[CHOOSE DACE type]"},
{"central_composite",8,0,1,1,1263},
{"fixed_seed",8,0,5,0,1273,0,0.,0.,0.,0,"{Fixed seed flag} MethodCommands.html#MethodDDACE"},
{"grid",8,0,1,1,1251},
{"lhs",8,0,1,1,1257},
{"main_effects",8,0,2,0,1265,0,0.,0.,0.,0,"{Main effects} MethodCommands.html#MethodDDACE"},
{"oa_lhs",8,0,1,1,1259},
{"oas",8,0,1,1,1255},
{"quality_metrics",8,0,3,0,1267,0,0.,0.,0.,0,"{Quality metrics} MethodCommands.html#MethodDDACE"},
{"random",8,0,1,1,1253},
{"samples",9,0,8,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,7,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"},
{"symbols",9,0,6,0,1275,0,0.,0.,0.,0,"{Number of symbols} MethodCommands.html#MethodDDACE"},
{"variance_based_decomp",8,1,4,0,1269,kw_70,0.,0.,0.,0,"{Variance based decomposition} MethodCommands.html#MethodDDACE"}
}
Initial value:
{
{"bfgs",8,0,1,1,173},
{"frcg",8,0,1,1,169},
{"linear_equality_constraint_matrix",14,0,7,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,9,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,10,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,8,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,2,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,3,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,5,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,6,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,4,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"},
{"mmfd",8,0,1,1,171},
{"slp",8,0,1,1,175},
{"sqp",8,0,1,1,177}
}
Initial value:
{
{"annotated",8,0,1,0,629},
{"freeform",8,0,1,0,631}
}
Initial value:
{
{"dakota",8,0,1,1,617},
{"surfpack",8,0,1,1,615}
}
Initial value:
{
{"annotated",8,0,1,0,623},
{"freeform",8,0,1,0,625}
}
Initial value:
{
{"export_points_file",11,2,4,0,627,kw_73,0.,0.,0.,0,"{File name for exporting approximation-based samples from evaluating the GP} MethodCommands.html#MethodEG"},
{"gaussian_process",8,2,1,0,613,kw_74,0.,0.,0.,0,"{GP selection} MethodCommands.html#MethodEG"},
{"import_points_file",11,2,3,0,621,kw_75,0.,0.,0.,0,"{File name for points to be imported as the basis for the initial GP} MethodCommands.html#MethodEG"},
{"kriging",0,2,1,0,612,kw_74},
{"seed",0x19,0,5,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"},
{"use_derivatives",8,0,2,0,619,0,0.,0.,0.,0,"{Derivative usage} MethodCommands.html#MethodEG"}
}
Initial value:
{
{"batch_size",9,0,2,0,1027},
{"distribution",8,2,5,0,1075,kw_15,0.,0.,0.,0,"{Distribution type} MethodCommands.html#MethodNonD"},
{"emulator_samples",9,0,1,0,1025},
{"gen_reliability_levels",14,1,7,0,1085,kw_16,0.,0.,0.,0,"{Generalized reliability levels} MethodCommands.html#MethodNonD"},
{"probability_levels",14,1,6,0,1081,kw_17,0.,0.,0.,0,"{Probability levels} MethodCommands.html#MethodNonD"},
{"rng",8,2,8,0,1089,kw_18,0.,0.,0.,0,"{Random number generator} MethodCommands.html#MethodNonDMC"},
{"samples",9,0,4,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,3,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"}
}
Initial value:
{
{"grid",8,0,1,1,1291,0,0.,0.,0.,0,"[CHOOSE trial type]"},
{"halton",8,0,1,1,1293},
{"random",8,0,1,1,1295,0,0.,0.,0.,0,"@"}
}
Initial value:
{
{"drop_tolerance",10,0,1,0,1285}
}
Initial value:
{
{"fixed_seed",8,0,4,0,1287,0,0.,0.,0.,0,"{Fixed seed flag} MethodCommands.html#MethodFSUDACE"},
{"latinize",8,0,1,0,1279,0,0.,0.,0.,0,"{Latinization of samples} MethodCommands.html#MethodFSUDACE"},
{"num_trials",9,0,6,0,1297,0,0.,0.,0.,0,"{Number of trials } MethodCommands.html#MethodFSUDACE"},
{"quality_metrics",8,0,2,0,1281,0,0.,0.,0.,0,"{Quality metrics} MethodCommands.html#MethodFSUDACE"},
{"samples",9,0,8,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,7,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"},
{"trial_type",8,3,5,0,1289,kw_78,0.,0.,0.,0,"{Trial type} MethodCommands.html#MethodFSUDACE"},
{"variance_based_decomp",8,1,3,0,1283,kw_79,0.,0.,0.,0,"{Variance based decomposition} MethodCommands.html#MethodFSUDACE"}
}
Initial value:
{
{"drop_tolerance",10,0,1,0,1493}
}
Initial value:
{
{"fixed_sequence",8,0,6,0,1497,0,0.,0.,0.,0,"{Fixed sequence flag} MethodCommands.html#MethodFSUDACE"},
{"halton",8,0,1,1,1483,0,0.,0.,0.,0,"[CHOOSE sequence type]"},
{"hammersley",8,0,1,1,1485},
{"latinize",8,0,2,0,1487,0,0.,0.,0.,0,"{Latinization of samples} MethodCommands.html#MethodFSUDACE"},
{"prime_base",13,0,9,0,1503,0,0.,0.,0.,0,"{Prime bases for sequences} MethodCommands.html#MethodFSUDACE"},
{"quality_metrics",8,0,3,0,1489,0,0.,0.,0.,0,"{Quality metrics} MethodCommands.html#MethodFSUDACE"},
{"samples",9,0,5,0,1495,0,0.,0.,0.,0,"{Number of samples taken in the MCMC sampling} MethodCommands.html#MethodNonDBayesCalib"},
{"sequence_leap",13,0,8,0,1501,0,0.,0.,0.,0,"{Sequence leaping indices} MethodCommands.html#MethodFSUDACE"},
{"sequence_start",13,0,7,0,1499,0,0.,0.,0.,0,"{Sequence starting indices} MethodCommands.html#MethodFSUDACE"},
{"variance_based_decomp",8,1,4,0,1491,kw_81,0.,0.,0.,0,"{Variance based decomposition} MethodCommands.html#MethodFSUDACE"}
}
Initial value:
{
{"annotated",8,0,1,0,931},
{"freeform",8,0,1,0,933}
}
Initial value:
{
{"annotated",8,0,1,0,925},
{"freeform",8,0,1,0,927}
}
Initial value:
{
{"parallel",8,0,1,1,949},
{"series",8,0,1,1,947}
}
Initial value:
{
{"gen_reliabilities",8,0,1,1,943},
{"probabilities",8,0,1,1,941},
{"system",8,2,2,0,945,kw_85}
}
Initial value:
{
{"compute",8,3,2,0,939,kw_86},
{"num_response_levels",13,0,1,0,937}
}
Initial value:
{
{"distribution",8,2,7,0,1075,kw_15,0.,0.,0.,0,"{Distribution type} MethodCommands.html#MethodNonD"},
{"emulator_samples",9,0,1,0,921},
{"export_points_file",11,2,3,0,929,kw_83,0.,0.,0.,0,"{File name for exporting approximation-based samples from evaluating the emulator} MethodCommands.html#MethodNonDBayesCalib"},
{"gen_reliability_levels",14,1,9,0,1085,kw_16,0.,0.,0.,0,"{Generalized reliability levels} MethodCommands.html#MethodNonD"},
{"import_points_file",11,2,2,0,923,kw_84,0.,0.,0.,0,"{File containing points to evaluate} MethodCommands.html#MethodPSLPS"},
{"probability_levels",14,1,8,0,1081,kw_17,0.,0.,0.,0,"{Probability levels} MethodCommands.html#MethodNonD"},
{"response_levels",14,2,4,0,935,kw_87},
{"rng",8,2,10,0,1089,kw_18,0.,0.,0.,0,"{Random number generator} MethodCommands.html#MethodNonDMC"},
{"samples",9,0,6,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,5,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"}
}
Initial value:
{
{"parallel",8,0,1,1,1073},
{"series",8,0,1,1,1071}
}
Initial value:
{
{"gen_reliabilities",8,0,1,1,1067},
{"probabilities",8,0,1,1,1065},
{"system",8,2,2,0,1069,kw_89}
}
Initial value:
{
{"compute",8,3,2,0,1063,kw_90},
{"num_response_levels",13,0,1,0,1061}
}
Initial value:
{
{"annotated",8,0,1,0,1051},
{"freeform",8,0,1,0,1053}
}
Initial value:
{
{"dakota",8,0,1,1,1039},
{"surfpack",8,0,1,1,1037}
}
Initial value:
{
{"annotated",8,0,1,0,1045},
{"freeform",8,0,1,0,1047}
}
Initial value:
{
{"export_points_file",11,2,4,0,1049,kw_92},
{"gaussian_process",8,2,1,0,1035,kw_93},
{"import_points_file",11,2,3,0,1043,kw_94},
{"kriging",0,2,1,0,1034,kw_93},
{"use_derivatives",8,0,2,0,1041}
}
Initial value:
{
{"distribution",8,2,5,0,1075,kw_15,0.,0.,0.,0,"{Distribution type} MethodCommands.html#MethodNonD"},
{"ea",8,0,1,0,1055},
{"ego",8,5,1,0,1033,kw_95},
{"gen_reliability_levels",14,1,7,0,1085,kw_16,0.,0.,0.,0,"{Generalized reliability levels} MethodCommands.html#MethodNonD"},
{"lhs",8,0,1,0,1057},
{"probability_levels",14,1,6,0,1081,kw_17,0.,0.,0.,0,"{Probability levels} MethodCommands.html#MethodNonD"},
{"response_levels",14,2,2,0,1059,kw_91},
{"rng",8,2,8,0,1089,kw_18,0.,0.,0.,0,"{Random number generator} MethodCommands.html#MethodNonDMC"},
{"samples",9,0,4,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"sbo",8,5,1,0,1031,kw_95},
{"seed",0x19,0,3,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"}
}
Initial value:
{
{"mt19937",8,0,1,1,1127},
{"rnum2",8,0,1,1,1129}
}
Initial value:
{
{"annotated",8,0,1,0,1117},
{"freeform",8,0,1,0,1119}
}
Initial value:
{
{"dakota",8,0,1,1,1105},
{"surfpack",8,0,1,1,1103}
}
Initial value:
{
{"annotated",8,0,1,0,1111},
{"freeform",8,0,1,0,1113}
}
Initial value:
{
{"export_points_file",11,2,4,0,1115,kw_98,0.,0.,0.,0,"{File name for exporting approximation-based samples from evaluating the GP} MethodCommands.html#MethodNonDGlobalIntervalEst"},
{"gaussian_process",8,2,1,0,1101,kw_99,0.,0.,0.,0,"{EGO GP selection} MethodCommands.html#MethodNonDGlobalIntervalEst"},
{"import_points_file",11,2,3,0,1109,kw_100,0.,0.,0.,0,"{File name for points to be imported as the basis for the initial GP} MethodCommands.html#MethodNonDGlobalIntervalEst"},
{"kriging",0,2,1,0,1100,kw_99},
{"use_derivatives",8,0,2,0,1107,0,0.,0.,0.,0,"{Derivative usage} MethodCommands.html#MethodNonDGlobalIntervalEst"}
}
Initial value:
{
{"ea",8,0,1,0,1121},
{"ego",8,5,1,0,1099,kw_101},
{"lhs",8,0,1,0,1123},
{"rng",8,2,2,0,1125,kw_97,0.,0.,0.,0,"{Random seed generator} MethodCommands.html#MethodNonDGlobalIntervalEst"},
{"samples",9,0,4,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"sbo",8,5,1,0,1097,kw_101},
{"seed",0x19,0,3,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"}
}
Initial value:
{
{"complementary",8,0,1,1,1471},
{"cumulative",8,0,1,1,1469}
}
Initial value:
{
{"num_gen_reliability_levels",13,0,1,0,1479}
}
Initial value:
{
{"num_probability_levels",13,0,1,0,1475}
}
Initial value:
{
{"annotated",8,0,1,0,1437},
{"freeform",8,0,1,0,1439}
}
Initial value:
{
{"annotated",8,0,1,0,1431},
{"freeform",8,0,1,0,1433}
}
Initial value:
{
{"parallel",8,0,1,1,1465},
{"series",8,0,1,1,1463}
}
Initial value:
{
{"gen_reliabilities",8,0,1,1,1459},
{"probabilities",8,0,1,1,1457},
{"system",8,2,2,0,1461,kw_108}
}
Initial value:
{
{"compute",8,3,2,0,1455,kw_109},
{"num_response_levels",13,0,1,0,1453}
}
Initial value:
{
{"mt19937",8,0,1,1,1447},
{"rnum2",8,0,1,1,1449}
}
Initial value:
{
{"dakota",8,0,1,0,1427},
{"surfpack",8,0,1,0,1425}
}
Initial value:
{
{"distribution",8,2,8,0,1467,kw_103},
{"export_points_file",11,2,3,0,1435,kw_106,0.,0.,0.,0,"{File name for exporting approximation-based samples from evaluating the GP} MethodCommands.html#MethodNonDGlobalRel"},
{"gen_reliability_levels",14,1,10,0,1477,kw_104},
{"import_points_file",11,2,2,0,1429,kw_107,0.,0.,0.,0,"{File name for points to be imported as the basis for the initial GP} MethodCommands.html#MethodNonDGlobalRel"},
{"probability_levels",14,1,9,0,1473,kw_105},
{"response_levels",14,2,7,0,1451,kw_110},
{"rng",8,2,6,0,1445,kw_111},
{"seed",0x19,0,5,0,1443,0,0.,0.,0.,0,"{Random seed for initial GP construction} MethodCommands.html#MethodNonDGlobalRel"},
{"u_gaussian_process",8,2,1,1,1423,kw_112},
{"u_kriging",0,0,1,1,1422},
{"use_derivatives",8,0,4,0,1441,0,0.,0.,0.,0,"{Derivative usage} MethodCommands.html#MethodNonDGlobalRel"},
{"x_gaussian_process",8,2,1,1,1421,kw_112},
{"x_kriging",0,2,1,1,1420,kw_112}
}
Initial value:
{
{"parallel",8,0,1,1,917},
{"series",8,0,1,1,915}
}
Initial value:
{
{"gen_reliabilities",8,0,1,1,911},
{"probabilities",8,0,1,1,909},
{"system",8,2,2,0,913,kw_114}
}
Initial value:
{
{"compute",8,3,2,0,907,kw_115},
{"num_response_levels",13,0,1,0,905}
}
Initial value:
{
{"adapt_import",8,0,1,0,899},
{"distribution",8,2,5,0,1075,kw_15,0.,0.,0.,0,"{Distribution type} MethodCommands.html#MethodNonD"},
{"gen_reliability_levels",14,1,7,0,1085,kw_16,0.,0.,0.,0,"{Generalized reliability levels} MethodCommands.html#MethodNonD"},
{"import",8,0,1,0,897},
{"mm_adapt_import",8,0,1,0,901},
{"probability_levels",14,1,6,0,1081,kw_17,0.,0.,0.,0,"{Probability levels} MethodCommands.html#MethodNonD"},
{"response_levels",14,2,2,0,903,kw_116},
{"rng",8,2,8,0,1089,kw_18,0.,0.,0.,0,"{Random number generator} MethodCommands.html#MethodNonDMC"},
{"samples",9,0,4,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,3,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"}
}
Initial value:
{
{"annotated",8,0,1,0,1519},
{"freeform",8,0,1,0,1521}
}
Initial value:
{
{"import_points_file",11,2,1,1,1517,kw_118},
{"list_of_points",14,0,1,1,1515,0,0.,0.,0.,0,"{List of points to evaluate} MethodCommands.html#MethodPSLPS"}
}
Initial value:
{
{"complementary",8,0,1,1,1349},
{"cumulative",8,0,1,1,1347}
}
Initial value:
{
{"num_gen_reliability_levels",13,0,1,0,1343}
}
Initial value:
{
{"num_probability_levels",13,0,1,0,1339}
}
Initial value:
{
{"parallel",8,0,1,1,1335},
{"series",8,0,1,1,1333}
}
Initial value:
{
{"gen_reliabilities",8,0,1,1,1329},
{"probabilities",8,0,1,1,1327},
{"system",8,2,2,0,1331,kw_123}
}
Initial value:
{
{"compute",8,3,2,0,1325,kw_124},
{"num_response_levels",13,0,1,0,1323}
}
Initial value:
{
{"distribution",8,2,5,0,1345,kw_120},
{"gen_reliability_levels",14,1,4,0,1341,kw_121},
{"nip",8,0,1,0,1319},
{"probability_levels",14,1,3,0,1337,kw_122},
{"response_levels",14,2,2,0,1321,kw_125},
{"sqp",8,0,1,0,1317}
}
Initial value:
{
{"nip",8,0,1,0,1355},
{"sqp",8,0,1,0,1353}
}
Initial value:
{
{"adapt_import",8,0,1,1,1389},
{"import",8,0,1,1,1387},
{"mm_adapt_import",8,0,1,1,1391},
{"samples",9,0,2,0,1393,0,0.,0.,0.,0,"{Refinement samples} MethodCommands.html#MethodNonDLocalRel"},
{"seed",0x19,0,3,0,1395,0,0.,0.,0.,0,"{Refinement seed} MethodCommands.html#MethodNonDLocalRel"}
}
Initial value:
{
{"first_order",8,0,1,1,1381},
{"sample_refinement",8,5,2,0,1385,kw_128},
{"second_order",8,0,1,1,1383}
}
Initial value:
{
{"integration",8,3,3,0,1379,kw_129,0.,0.,0.,0,"{Integration method} MethodCommands.html#MethodNonDLocalRel"},
{"nip",8,0,2,0,1377},
{"no_approx",8,0,1,1,1373},
{"sqp",8,0,2,0,1375},
{"u_taylor_mean",8,0,1,1,1363},
{"u_taylor_mpp",8,0,1,1,1367},
{"u_two_point",8,0,1,1,1371},
{"x_taylor_mean",8,0,1,1,1361},
{"x_taylor_mpp",8,0,1,1,1365},
{"x_two_point",8,0,1,1,1369}
}
Initial value:
{
{"num_reliability_levels",13,0,1,0,1417}
}
Initial value:
{
{"parallel",8,0,1,1,1413},
{"series",8,0,1,1,1411}
}
Initial value:
{
{"gen_reliabilities",8,0,1,1,1407},
{"probabilities",8,0,1,1,1403},
{"reliabilities",8,0,1,1,1405},
{"system",8,2,2,0,1409,kw_132}
}
Initial value:
{
{"compute",8,4,2,0,1401,kw_133},
{"num_response_levels",13,0,1,0,1399}
}
Initial value:
{
{"distribution",8,2,4,0,1467,kw_103},
{"gen_reliability_levels",14,1,6,0,1477,kw_104},
{"mpp_search",8,10,1,0,1359,kw_130,0.,0.,0.,0,"{MPP search type} MethodCommands.html#MethodNonDLocalRel"},
{"probability_levels",14,1,5,0,1473,kw_105},
{"reliability_levels",14,1,3,0,1415,kw_131},
{"response_levels",14,2,2,0,1397,kw_134}
}
Initial value:
{
{"display_all_evaluations",8,0,6,0,297},
{"display_format",11,0,4,0,293},
{"function_precision",10,0,1,0,287,0,0.,0.,0.,0,"{Relative precision in least squares terms} MethodCommands.html#MethodLSNL2SOL"},
{"history_file",11,0,3,0,291},
{"linear_equality_constraint_matrix",14,0,12,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,14,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,15,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,13,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,7,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,8,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,10,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,11,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,9,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"},
{"seed",0x19,0,2,0,289,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodJEGADC"},
{"variable_neighborhood_search",10,0,5,0,295}
}
Initial value:
{
{"num_offspring",0x19,0,2,0,399,0,0.,0.,0.,0,"{Number of offspring in random shuffle crossover} MethodCommands.html#MethodJEGADC"},
{"num_parents",0x19,0,1,0,397,0,0.,0.,0.,0,"{Number of parents in random shuffle crossover} MethodCommands.html#MethodJEGADC"}
}
Initial value:
{
{"crossover_rate",10,0,2,0,401,0,0.,0.,0.,0,"{Crossover rate} MethodCommands.html#MethodJEGADC"},
{"multi_point_binary",9,0,1,1,389,0,0.,0.,0.,0,"{Multi point binary crossover} MethodCommands.html#MethodJEGADC"},
{"multi_point_parameterized_binary",9,0,1,1,391,0,0.,0.,0.,0,"{Multi point parameterized binary crossover} MethodCommands.html#MethodJEGADC"},
{"multi_point_real",9,0,1,1,393,0,0.,0.,0.,0,"{Multi point real crossover} MethodCommands.html#MethodJEGADC"},
{"shuffle_random",8,2,1,1,395,kw_137,0.,0.,0.,0,"{Random shuffle crossover} MethodCommands.html#MethodJEGADC"}
}
Initial value:
{
{"flat_file",11,0,1,1,385},
{"simple_random",8,0,1,1,381},
{"unique_random",8,0,1,1,383}
}
Initial value:
{
{"mutation_scale",10,0,1,0,415,0,0.,0.,0.,0,"{Mutation scale} MethodCommands.html#MethodJEGADC"}
}
Initial value:
{
{"bit_random",8,0,1,1,405},
{"mutation_rate",10,0,2,0,417,0,0.,0.,0.,0,"{Mutation rate} MethodCommands.html#MethodJEGADC"},
{"offset_cauchy",8,1,1,1,411,kw_140},
{"offset_normal",8,1,1,1,409,kw_140},
{"offset_uniform",8,1,1,1,413,kw_140},
{"replace_uniform",8,0,1,1,407}
}
Initial value:
{
{"metric_tracker",8,0,1,1,331,0,0.,0.,0.,0,"{Convergence type} MethodCommands.html#MethodJEGAMOGA"},
{"num_generations",0x29,0,3,0,335,0,0.,0.,0.,0,"{Number generations for metric_tracker converger} MethodCommands.html#MethodJEGAMOGA"},
{"percent_change",10,0,2,0,333,0,0.,0.,0.,0,"{Percent change limit for metric_tracker converger} MethodCommands.html#MethodJEGAMOGA"}
}
Initial value:
{
{"domination_count",8,0,1,1,305},
{"layer_rank",8,0,1,1,303}
}
Initial value:
{
{"num_designs",0x29,0,1,0,327,0,2.,0.,0.,0,"{Number designs to keep for max_designs nicher} MethodCommands.html#MethodJEGAMOGA"}
}
Initial value:
{
{"distance",14,0,1,1,323},
{"max_designs",14,1,1,1,325,kw_144},
{"radial",14,0,1,1,321}
}
Initial value:
{
{"orthogonal_distance",14,0,1,1,339,0,0.,0.,0.,0,"{Post_processor distance} MethodCommands.html#MethodJEGAMOGA"}
}
Initial value:
{
{"shrinkage_fraction",10,0,1,0,317},
{"shrinkage_percentage",2,0,1,0,316}
}
Initial value:
{
{"below_limit",10,2,1,1,315,kw_147,0.,0.,0.,0,"{Below limit selection} MethodCommands.html#MethodJEGADC"},
{"elitist",8,0,1,1,309},
{"roulette_wheel",8,0,1,1,311},
{"unique_roulette_wheel",8,0,1,1,313}
}
Initial value:
{
{"convergence_type",8,3,4,0,329,kw_142},
{"crossover_type",8,5,19,0,387,kw_138,0.,0.,0.,0,"{Crossover type} MethodCommands.html#MethodJEGADC"},
{"fitness_type",8,2,1,0,301,kw_143,0.,0.,0.,0,"{Fitness type} MethodCommands.html#MethodJEGAMOGA"},
{"initialization_type",8,3,18,0,379,kw_139,0.,0.,0.,0,"{Initialization type} MethodCommands.html#MethodJEGADC"},
{"linear_equality_constraint_matrix",14,0,11,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,13,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,14,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,12,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,6,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,7,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,9,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,10,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,8,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"},
{"log_file",11,0,16,0,375,0,0.,0.,0.,0,"{Log file} MethodCommands.html#MethodJEGADC"},
{"mutation_type",8,6,20,0,403,kw_141,0.,0.,0.,0,"{Mutation type} MethodCommands.html#MethodJEGADC"},
{"niching_type",8,3,3,0,319,kw_145,0.,0.,0.,0,"{Niche pressure type} MethodCommands.html#MethodJEGAMOGA"},
{"population_size",0x29,0,15,0,373,0,0.,0.,0.,0,"{Number of population members} MethodCommands.html#MethodJEGADC"},
{"postprocessor_type",8,1,5,0,337,kw_146,0.,0.,0.,0,"{Post_processor type} MethodCommands.html#MethodJEGAMOGA"},
{"print_each_pop",8,0,17,0,377,0,0.,0.,0.,0,"{Population output} MethodCommands.html#MethodJEGADC"},
{"replacement_type",8,4,2,0,307,kw_148,0.,0.,0.,0,"{Replacement type} MethodCommands.html#MethodJEGAMOGA"},
{"seed",0x19,0,21,0,419,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodNonDMC"}
}
Initial value:
{
{"partitions",13,0,1,1,1531,0,0.,0.,0.,0,"{Partitions per variable} MethodCommands.html#MethodPSMPS"}
}
Initial value:
{
{"min_boxsize_limit",10,0,2,0,1311,0,0.,0.,0.,0,"{Min boxsize limit} MethodCommands.html#MethodNCSUDC"},
{"solution_accuracy",2,0,1,0,1308},
{"solution_target",10,0,1,0,1309,0,0.,0.,0.,0,"{Solution Target} MethodCommands.html#MethodNCSUDC"},
{"volume_boxsize_limit",10,0,3,0,1313,0,0.,0.,0.,0,"{Volume boxsize limit} MethodCommands.html#MethodNCSUDC"}
}
Initial value:
{
{"absolute_conv_tol",10,0,2,0,583,0,0.,0.,0.,0,"{Absolute function convergence tolerance} MethodCommands.html#MethodLSNL2SOL"},
{"covariance",9,0,8,0,595,0,0.,0.,0.,0,"{Covariance post-processing} MethodCommands.html#MethodLSNL2SOL"},
{"false_conv_tol",10,0,6,0,591,0,0.,0.,0.,0,"{False convergence tolerance} MethodCommands.html#MethodLSNL2SOL"},
{"function_precision",10,0,1,0,581},
{"initial_trust_radius",10,0,7,0,593,0,0.,0.,0.,0,"{Initial trust region radius} MethodCommands.html#MethodLSNL2SOL"},
{"regression_diagnostics",8,0,9,0,597,0,0.,0.,0.,0,"{Regression diagnostics post-processing} MethodCommands.html#MethodLSNL2SOL"},
{"singular_conv_tol",10,0,4,0,587,0,0.,0.,0.,0,"{Singular convergence tolerance} MethodCommands.html#MethodLSNL2SOL"},
{"singular_radius",10,0,5,0,589,0,0.,0.,0.,0,"{Step limit for sctol} MethodCommands.html#MethodLSNL2SOL"},
{"x_conv_tol",10,0,3,0,585,0,0.,0.,0.,0,"{Convergence tolerance for change in parameter vector} MethodCommands.html#MethodLSNL2SOL"}
}
Initial value:
{
{"parallel",8,0,1,1,1021},
{"series",8,0,1,1,1019}
}
Initial value:
{
{"gen_reliabilities",8,0,1,1,1015},
{"probabilities",8,0,1,1,1013},
{"system",8,2,2,0,1017,kw_153}
}
Initial value:
{
{"compute",8,3,2,0,1011,kw_154},
{"num_response_levels",13,0,1,0,1009}
}
Initial value:
{
{"distribution",8,2,4,0,1075,kw_15,0.,0.,0.,0,"{Distribution type} MethodCommands.html#MethodNonD"},
{"gen_reliability_levels",14,1,6,0,1085,kw_16,0.,0.,0.,0,"{Generalized reliability levels} MethodCommands.html#MethodNonD"},
{"probability_levels",14,1,5,0,1081,kw_17,0.,0.,0.,0,"{Probability levels} MethodCommands.html#MethodNonD"},
{"response_levels",14,2,1,0,1007,kw_155},
{"rng",8,2,7,0,1089,kw_18,0.,0.,0.,0,"{Random number generator} MethodCommands.html#MethodNonDMC"},
{"samples",9,0,3,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,2,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"}
}
Initial value:
{
{"num_reliability_levels",13,0,1,0,875,0,0.,0.,0.,0,"{Number of reliability levels} MethodCommands.html#MethodNonD"}
}
Initial value:
{
{"parallel",8,0,1,1,893},
{"series",8,0,1,1,891}
}
Initial value:
{
{"gen_reliabilities",8,0,1,1,887},
{"probabilities",8,0,1,1,883},
{"reliabilities",8,0,1,1,885},
{"system",8,2,2,0,889,kw_158}
}
Initial value:
{
{"compute",8,4,2,0,881,kw_159,0.,0.,0.,0,"{Target statistics for response levels} MethodCommands.html#MethodNonD"},
{"num_response_levels",13,0,1,0,879,0,0.,0.,0.,0,"{Number of response levels} MethodCommands.html#MethodNonD"}
}
Initial value:
{
{"annotated",8,0,1,0,719},
{"freeform",8,0,1,0,721}
}
Initial value:
{
{"noise_tolerance",14,0,1,0,691}
}
Initial value:
{
{"noise_tolerance",14,0,1,0,695}
}
Initial value:
{
{"l2_penalty",10,0,2,0,701,0,0.,0.,0.,0,"{l2_penalty used for elastic net modification of LASSO} MethodCommands.html#MethodNonDPCE"},
{"noise_tolerance",14,0,1,0,699}
}
Initial value:
{
{"equality_constrained",8,0,1,0,681},
{"svd",8,0,1,0,679}
}
Initial value:
{
{"noise_tolerance",14,0,1,0,685}
}
Initial value:
{
{"basis_pursuit",8,0,2,0,687,0,0.,0.,0.,0,"{L1 minimization via Basis Pursuit (BP)} MethodCommands.html#MethodNonDPCE"},
{"basis_pursuit_denoising",8,1,2,0,689,kw_162,0.,0.,0.,0,"{L1 minimization via Basis Pursuit DeNoising (BPDN)} MethodCommands.html#MethodNonDPCE"},
{"bp",0,0,2,0,686},
{"bpdn",0,1,2,0,688,kw_162},
{"cross_validation",8,0,3,0,703,0,0.,0.,0.,0,"{Specify whether to use cross validation} MethodCommands.html#MethodNonDPCE"},
{"import_points_file",11,2,7,0,717,kw_161,0.,0.,0.,0,"{File name for points to be imported for forming a PCE (unstructured grid assumed)} MethodCommands.html#MethodNonDPCE"},
{"lars",0,1,2,0,692,kw_163},
{"lasso",0,2,2,0,696,kw_164},
{"least_absolute_shrinkage",8,2,2,0,697,kw_164,0.,0.,0.,0,"{L1 minimization via Least Absolute Shrinkage Operator (LASSO)} MethodCommands.html#MethodNonDPCE"},
{"least_angle_regression",8,1,2,0,693,kw_163,0.,0.,0.,0,"{L1 minimization via Least Angle Regression (LARS)} MethodCommands.html#MethodNonDPCE"},
{"least_squares",8,2,2,0,677,kw_165,0.,0.,0.,0,"{Least squares regression} MethodCommands.html#MethodNonDPCE"},
{"omp",0,1,2,0,682,kw_166},
{"orthogonal_matching_pursuit",8,1,2,0,683,kw_166,0.,0.,0.,0,"{L1 minimization via Orthogonal Matching Pursuit (OMP)} MethodCommands.html#MethodNonDPCE"},
{"ratio_order",10,0,1,0,675,0,0.,0.,0.,0,"{Order of collocation oversampling relationship} MethodCommands.html#MethodNonDPCE"},
{"reuse_points",8,0,6,0,709},
{"reuse_samples",0,0,6,0,708},
{"tensor_grid",8,0,5,0,707},
{"use_derivatives",8,0,4,0,705}
}
Initial value:
{
{"import_points_file",11,2,3,0,717,kw_161,0.,0.,0.,0,"{File name for points to be imported for forming a PCE (unstructured grid assumed)} MethodCommands.html#MethodNonDPCE"},
{"incremental_lhs",8,0,2,0,715,0,0.,0.,0.,0,"{Use incremental LHS for expansion_samples} MethodCommands.html#MethodNonDPCE"},
{"reuse_points",8,0,1,0,713},
{"reuse_samples",0,0,1,0,712}
}
Initial value:
{
{"collocation_points",13,18,2,1,671,kw_167,0.,0.,0.,0,"{Number collocation points to estimate coeffs} MethodCommands.html#MethodNonDPCE"},
{"collocation_ratio",10,18,2,1,673,kw_167,0.,0.,0.,0,"{Collocation point oversampling ratio to estimate coeffs} MethodCommands.html#MethodNonDPCE"},
{"dimension_preference",14,0,1,0,669},
{"expansion_import_file",11,0,2,1,723,0,0.,0.,0.,0,"{Import file for PCE coefficients} MethodCommands.html#MethodNonDPCE"},
{"expansion_samples",13,4,2,1,711,kw_168,0.,0.,0.,0,"{Number simulation samples to estimate coeffs} MethodCommands.html#MethodNonDPCE"}
}
Initial value:
{
{"annotated",8,0,1,0,769},
{"freeform",8,0,1,0,771}
}
Initial value:
{
{"annotated",8,0,1,0,737},
{"freeform",8,0,1,0,739}
}
Initial value:
{
{"collocation_points",13,0,1,1,727},
{"cross_validation",8,0,2,0,729},
{"import_points_file",11,2,5,0,735,kw_171,0.,0.,0.,0,"{File name for points to be imported as the basis for the initial emulator} MethodCommands.html#MethodNonDBayesCalib"},
{"reuse_points",8,0,4,0,733},
{"reuse_samples",0,0,4,0,732},
{"tensor_grid",13,0,3,0,731}
}
Initial value:
{
{"decay",8,0,1,1,643},
{"generalized",8,0,1,1,645},
{"sobol",8,0,1,1,641}
}
Initial value:
{
{"dimension_adaptive",8,3,1,1,639,kw_173},
{"uniform",8,0,1,1,637}
}
Initial value:
{
{"dimension_preference",14,0,1,0,659,0,0.,0.,0.,0,"{Dimension preference for anisotropic tensor and sparse grids} MethodCommands.html#MethodNonDPCE"},
{"nested",8,0,2,0,661},
{"non_nested",8,0,2,0,663}
}
Initial value:
{
{"adapt_import",8,0,1,1,763},
{"import",8,0,1,1,761},
{"mm_adapt_import",8,0,1,1,765}
}
Initial value:
{
{"lhs",8,0,1,1,755},
{"random",8,0,1,1,757}
}
Initial value:
{
{"dimension_preference",14,0,2,0,659,0,0.,0.,0.,0,"{Dimension preference for anisotropic tensor and sparse grids} MethodCommands.html#MethodNonDPCE"},
{"nested",8,0,3,0,661},
{"non_nested",8,0,3,0,663},
{"restricted",8,0,1,0,655},
{"unrestricted",8,0,1,0,657}
}
Initial value:
{
{"drop_tolerance",10,0,2,0,745,0,0.,0.,0.,0,"{VBD tolerance for omitting small indices} MethodCommands.html#MethodNonDMC"},
{"interaction_order",0x19,0,1,0,743,0,0.,0.,0.,0,"{Restriction of order of VBD interactions} MethodCommands.html#MethodNonDPCE"}
}
Initial value:
{
{"askey",8,0,2,0,647},
{"cubature_integrand",9,0,3,1,665,0,0.,0.,0.,0,"{Cubature integrand order for PCE coefficient estimation} MethodCommands.html#MethodNonDPCE"},
{"diagonal_covariance",8,0,5,0,747},
{"distribution",8,2,12,0,1075,kw_15,0.,0.,0.,0,"{Distribution type} MethodCommands.html#MethodNonD"},
{"expansion_order",13,5,3,1,667,kw_169,0.,0.,0.,0,"{Expansion order} MethodCommands.html#MethodNonDPCE"},
{"export_points_file",11,2,9,0,767,kw_170,0.,0.,0.,0,"{File name for exporting approximation-based samples from evaluating the PCE} MethodCommands.html#MethodNonDPCE"},
{"fixed_seed",8,0,18,0,871,0,0.,0.,0.,0,"{Fixed seed flag} MethodCommands.html#MethodNonDMC"},
{"full_covariance",8,0,5,0,749},
{"gen_reliability_levels",14,1,14,0,1085,kw_16,0.,0.,0.,0,"{Generalized reliability levels} MethodCommands.html#MethodNonD"},
{"least_interpolation",0,6,3,1,724,kw_172},
{"normalized",8,0,6,0,751,0,0.,0.,0.,0,"{Output PCE coefficients corresponding to normalized basis} MethodCommands.html#MethodNonDPCE"},
{"oli",0,6,3,1,724,kw_172},
{"orthogonal_least_interpolation",8,6,3,1,725,kw_172,0.,0.,0.,0,"{Orthogonal Least Interpolation (OLI)} MethodCommands.html#MethodNonDPCE"},
{"p_refinement",8,2,1,0,635,kw_174,0.,0.,0.,0,"{Automated polynomial order refinement} MethodCommands.html#MethodNonDPCE"},
{"probability_levels",14,1,13,0,1081,kw_17,0.,0.,0.,0,"{Probability levels} MethodCommands.html#MethodNonD"},
{"quadrature_order",13,3,3,1,651,kw_175,0.,0.,0.,0,"{Quadrature order for PCE coefficient estimation} MethodCommands.html#MethodNonDPCE"},
{"reliability_levels",14,1,16,0,873,kw_157,0.,0.,0.,0,"{Reliability levels} MethodCommands.html#MethodNonD"},
{"response_levels",14,2,17,0,877,kw_160,0.,0.,0.,0,"{Response levels} MethodCommands.html#MethodNonD"},
{"rng",8,2,15,0,1089,kw_18,0.,0.,0.,0,"{Random number generator} MethodCommands.html#MethodNonDMC"},
{"sample_refinement",8,3,8,0,759,kw_176,0.,0.,0.,0,"{Importance sampling refinement} MethodCommands.html#MethodNonDPCE"},
{"sample_type",8,2,7,0,753,kw_177,0.,0.,0.,0,"{Sampling type} MethodCommands.html#MethodNonDMC"},
{"samples",9,0,11,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,10,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"},
{"sparse_grid_level",13,5,3,1,653,kw_178,0.,0.,0.,0,"{Sparse grid level for PCE coefficient estimation} MethodCommands.html#MethodNonDPCE"},
{"variance_based_decomp",8,2,4,0,741,kw_179,0.,0.,0.,0,"{Variance based decomposition (VBD)} MethodCommands.html#MethodNonDMC"},
{"wiener",8,0,2,0,649}
}
Initial value:
{
{"previous_samples",9,0,1,1,865,0,0.,0.,0.,0,"{Previous samples for incremental approaches} MethodCommands.html#MethodNonDMC"}
}
Initial value:
{
{"incremental_lhs",8,1,1,1,861,kw_181},
{"incremental_random",8,1,1,1,863,kw_181},
{"lhs",8,0,1,1,859},
{"random",8,0,1,1,857}
}
Initial value:
{
{"drop_tolerance",10,0,1,0,869}
}
Initial value:
{
{"distribution",8,2,5,0,1075,kw_15,0.,0.,0.,0,"{Distribution type} MethodCommands.html#MethodNonD"},
{"fixed_seed",8,0,11,0,871,0,0.,0.,0.,0,"{Fixed seed flag} MethodCommands.html#MethodNonDMC"},
{"gen_reliability_levels",14,1,7,0,1085,kw_16,0.,0.,0.,0,"{Generalized reliability levels} MethodCommands.html#MethodNonD"},
{"probability_levels",14,1,6,0,1081,kw_17,0.,0.,0.,0,"{Probability levels} MethodCommands.html#MethodNonD"},
{"reliability_levels",14,1,9,0,873,kw_157,0.,0.,0.,0,"{Reliability levels} MethodCommands.html#MethodNonD"},
{"response_levels",14,2,10,0,877,kw_160,0.,0.,0.,0,"{Response levels} MethodCommands.html#MethodNonD"},
{"rng",8,2,8,0,1089,kw_18,0.,0.,0.,0,"{Random number generator} MethodCommands.html#MethodNonDMC"},
{"sample_type",8,4,1,0,855,kw_182},
{"samples",9,0,4,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,3,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"},
{"variance_based_decomp",8,1,2,0,867,kw_183}
}
Initial value:
{
{"annotated",8,0,1,0,849},
{"freeform",8,0,1,0,851}
}
Initial value:
{
{"generalized",8,0,1,1,793},
{"sobol",8,0,1,1,791}
}
Initial value:
{
{"dimension_adaptive",8,2,1,1,789,kw_186},
{"local_adaptive",8,0,1,1,795},
{"uniform",8,0,1,1,787}
}
Initial value:
{
{"generalized",8,0,1,1,783},
{"sobol",8,0,1,1,781}
}
Initial value:
{
{"dimension_adaptive",8,2,1,1,779,kw_188},
{"uniform",8,0,1,1,777}
}
Initial value:
{
{"adapt_import",8,0,1,1,843},
{"import",8,0,1,1,841},
{"mm_adapt_import",8,0,1,1,845}
}
Initial value:
{
{"lhs",8,0,1,1,835},
{"random",8,0,1,1,837}
}
Initial value:
{
{"hierarchical",8,0,2,0,813},
{"nodal",8,0,2,0,811},
{"restricted",8,0,1,0,807},
{"unrestricted",8,0,1,0,809}
}
Initial value:
{
{"drop_tolerance",10,0,2,0,827,0,0.,0.,0.,0,"{VBD tolerance for omitting small indices} MethodCommands.html#MethodNonDSC"},
{"interaction_order",0x19,0,1,0,825,0,0.,0.,0.,0,"{Restriction of order of VBD interactions} MethodCommands.html#MethodNonDSC"}
}
Initial value:
{
{"askey",8,0,2,0,799},
{"diagonal_covariance",8,0,8,0,829},
{"dimension_preference",14,0,4,0,815,0,0.,0.,0.,0,"{Dimension preference for anisotropic tensor and sparse grids} MethodCommands.html#MethodNonDSC"},
{"distribution",8,2,14,0,1075,kw_15,0.,0.,0.,0,"{Distribution type} MethodCommands.html#MethodNonD"},
{"export_points_file",11,2,11,0,847,kw_185,0.,0.,0.,0,"{File name for exporting approximation-based samples from evaluating the interpolant} MethodCommands.html#MethodNonDSC"},
{"fixed_seed",8,0,20,0,871,0,0.,0.,0.,0,"{Fixed seed flag} MethodCommands.html#MethodNonDMC"},
{"full_covariance",8,0,8,0,831},
{"gen_reliability_levels",14,1,16,0,1085,kw_16,0.,0.,0.,0,"{Generalized reliability levels} MethodCommands.html#MethodNonD"},
{"h_refinement",8,3,1,0,785,kw_187},
{"nested",8,0,6,0,819},
{"non_nested",8,0,6,0,821},
{"p_refinement",8,2,1,0,775,kw_189},
{"piecewise",8,0,2,0,797},
{"probability_levels",14,1,15,0,1081,kw_17,0.,0.,0.,0,"{Probability levels} MethodCommands.html#MethodNonD"},
{"quadrature_order",13,0,3,1,803,0,0.,0.,0.,0,"{Quadrature order for collocation points} MethodCommands.html#MethodNonDSC"},
{"reliability_levels",14,1,18,0,873,kw_157,0.,0.,0.,0,"{Reliability levels} MethodCommands.html#MethodNonD"},
{"response_levels",14,2,19,0,877,kw_160,0.,0.,0.,0,"{Response levels} MethodCommands.html#MethodNonD"},
{"rng",8,2,17,0,1089,kw_18,0.,0.,0.,0,"{Random number generator} MethodCommands.html#MethodNonDMC"},
{"sample_refinement",8,3,10,0,839,kw_190},
{"sample_type",8,2,9,0,833,kw_191},
{"samples",9,0,13,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,12,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"},
{"sparse_grid_level",13,4,3,1,805,kw_192,0.,0.,0.,0,"{Sparse grid level for collocation points} MethodCommands.html#MethodNonDSC"},
{"use_derivatives",8,0,5,0,817,0,0.,0.,0.,0,"{Derivative enhancement flag} MethodCommands.html#MethodNonDSC"},
{"variance_based_decomp",8,2,7,0,823,kw_193,0.,0.,0.,0,"{Variance-based decomposition (VBD)} MethodCommands.html#MethodNonDSC"},
{"wiener",8,0,2,0,801}
}
Initial value:
{
{"misc_options",15,0,1,0,601}
}
Initial value:
{
{"function_precision",10,0,11,0,203,0,0.,0.,0.,0,"{Function precision} MethodCommands.html#MethodNPSOLDC"},
{"linear_equality_constraint_matrix",14,0,6,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
}
Initial value:
{
{"gradient_tolerance",10,0,11,0,243},
{"linear_equality_constraint_matrix",14,0,6,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,8,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,9,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,7,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,1,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,2,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,4,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,5,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,3,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"},
{"max_step",10,0,10,0,241}
}
Initial value:
{
{"linear_equality_constraint_matrix",14,0,7,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,9,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,10,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,8,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,2,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,3,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,5,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,6,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,4,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"},
{"search_scheme_size",9,0,1,0,247}
}
Initial value:
{
{"argaez_tapia",8,0,1,1,233},
{"el_bakry",8,0,1,1,231},
{"van_shanno",8,0,1,1,235}
}
Initial value:
{
{"gradient_based_line_search",8,0,1,1,223,0,0.,0.,0.,0,"[CHOOSE line search type]"},
{"tr_pds",8,0,1,1,227},
{"trust_region",8,0,1,1,225},
{"value_based_line_search",8,0,1,1,221}
}
Initial value:
{
{"centering_parameter",10,0,4,0,239},
{"gradient_tolerance",10,0,15,0,243},
{"linear_equality_constraint_matrix",14,0,10,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,12,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,13,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,11,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,5,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,6,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,8,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,9,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,7,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"},
{"max_step",10,0,14,0,241},
{"merit_function",8,3,2,0,229,kw_199},
{"search_method",8,4,1,0,219,kw_200},
{"steplength_to_boundary",10,0,3,0,237}
}
Initial value:
{
{"debug",8,0,1,1,73,0,0.,0.,0.,0,"[CHOOSE output level]"},
{"normal",8,0,1,1,77},
{"quiet",8,0,1,1,79},
{"silent",8,0,1,1,81},
{"verbose",8,0,1,1,75}
}
Initial value:
{
{"partitions",13,0,1,0,1301,0,0.,0.,0.,0,"{Number of partitions} MethodCommands.html#MethodPSUADE"},
{"samples",9,0,3,0,1303,0,0.,0.,0.,0,"{Number of samples} MethodCommands.html#MethodNonDMC"},
{"seed",0x19,0,2,0,1305,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodEG"}
}
Initial value:
{
{"converge_order",8,0,1,1,1537},
{"converge_qoi",8,0,1,1,1539},
{"estimate_order",8,0,1,1,1535},
{"refinement_rate",10,0,2,0,1541,0,0.,0.,0.,0,"{Refinement rate} MethodCommands.html#MethodSolnRichardson"}
}
Initial value:
{
{"num_generations",0x29,0,2,0,371},
{"percent_change",10,0,1,0,369}
}
Initial value:
{
{"num_generations",0x29,0,2,0,365,0,0.,0.,0.,0,"{Number of generations (for convergence test)} MethodCommands.html#MethodJEGASOGA"},
{"percent_change",10,0,1,0,363,0,0.,0.,0.,0,"{Percent change in fitness} MethodCommands.html#MethodJEGASOGA"}
}
Initial value:
{
{"average_fitness_tracker",8,2,1,1,367,kw_205},
{"best_fitness_tracker",8,2,1,1,361,kw_206}
}
Initial value:
{
{"constraint_penalty",10,0,2,0,347,0,0.,0.,0.,0,"{Constraint penalty in merit function} MethodCommands.html#MethodJEGASOGA"},
{"merit_function",8,0,1,1,345}
}
Initial value:
{
{"elitist",8,0,1,1,351},
{"favor_feasible",8,0,1,1,353},
{"roulette_wheel",8,0,1,1,355},
{"unique_roulette_wheel",8,0,1,1,357}
}
Initial value:
{
{"convergence_type",8,2,3,0,359,kw_207,0.,0.,0.,0,"{Convergence type} MethodCommands.html#MethodJEGASOGA"},
{"crossover_type",8,5,17,0,387,kw_138,0.,0.,0.,0,"{Crossover type} MethodCommands.html#MethodJEGADC"},
{"fitness_type",8,2,1,0,343,kw_208,0.,0.,0.,0,"{Fitness type} MethodCommands.html#MethodJEGASOGA"},
{"initialization_type",8,3,16,0,379,kw_139,0.,0.,0.,0,"{Initialization type} MethodCommands.html#MethodJEGADC"},
{"linear_equality_constraint_matrix",14,0,9,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,11,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,12,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,10,0,433,0,0.,0.,0.,0,"{Linear equality targets} MethodCommands.html#MethodIndControl"},
{"linear_inequality_constraint_matrix",14,0,4,0,421,0,0.,0.,0.,0,"{Linear inequality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_inequality_lower_bounds",14,0,5,0,423,0,0.,0.,0.,0,"{Linear inequality lower bounds} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scale_types",15,0,7,0,427,0,0.,0.,0.,0,"{Linear inequality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_inequality_scales",14,0,8,0,429,0,0.,0.,0.,0,"{Linear inequality scales} MethodCommands.html#MethodIndControl"},
{"linear_inequality_upper_bounds",14,0,6,0,425,0,0.,0.,0.,0,"{Linear inequality upper bounds} MethodCommands.html#MethodIndControl"},
{"log_file",11,0,14,0,375,0,0.,0.,0.,0,"{Log file} MethodCommands.html#MethodJEGADC"},
{"mutation_type",8,6,18,0,403,kw_141,0.,0.,0.,0,"{Mutation type} MethodCommands.html#MethodJEGADC"},
{"population_size",0x29,0,13,0,373,0,0.,0.,0.,0,"{Number of population members} MethodCommands.html#MethodJEGADC"},
{"print_each_pop",8,0,15,0,377,0,0.,0.,0.,0,"{Population output} MethodCommands.html#MethodJEGADC"},
{"replacement_type",8,4,2,0,349,kw_209,0.,0.,0.,0,"{Replacement type} MethodCommands.html#MethodJEGASOGA"},
{"seed",0x19,0,19,0,419,0,0.,0.,0.,0,"{Random seed} MethodCommands.html#MethodNonDMC"}
}
Initial value:
{
{"function_precision",10,0,12,0,203,0,0.,0.,0.,0,"{Function precision} MethodCommands.html#MethodNPSOLDC"},
{"linear_equality_constraint_matrix",14,0,7,0,431,0,0.,0.,0.,0,"{Linear equality coefficient matrix} MethodCommands.html#MethodIndControl"},
{"linear_equality_scale_types",15,0,9,0,435,0,0.,0.,0.,0,"{Linear equality scaling types} MethodCommands.html#MethodIndControl"},
{"linear_equality_scales",14,0,10,0,437,0,0.,0.,0.,0,"{Linear equality scales} MethodCommands.html#MethodIndControl"},
{"linear_equality_targets",14,0,8,0,433,0,0.,0.,0.,0,"{Linear equ
Initial value:
{
{"approx_method_name",11,0,1,1,605,0,0.,0.,0.,0,"[CHOOSE sub-method ref.]{Approximate sub-problem minimization method name} MethodCommands.html#MethodSBG"},
{"approx_method_pointer",11,0,1,1,607,0,0.,0.,0.,0,"{Approximate sub-problem minimization method pointer} MethodCommands.html#MethodSBG"},
{"replace_points",8,0,2,0,609,0,0.,0.,0.,0,"{Replace points used in surrogate construction with best points from previous iteration} MethodCommands.html#MethodSBG"}
}
Initial value:
{
{"filter",8,0,1,1,151,0,0.,0.,0.,0,"@[CHOOSE acceptance logic]"},
{"tr_ratio",8,0,1,1,149}
}
Initial value:
{
{"augmented_lagrangian_objective",8,0,1,1,127,0,0.,0.,0.,0,"[CHOOSE objective formulation]"},
{"lagrangian_objective",8,0,1,1,129},
{"linearized_constraints",8,0,2,2,133,0,0.,0.,0.,0,"[CHOOSE constraint formulation]"},
{"no_constraints",8,0,2,2,135},
{"original_constraints",8,0,2,2,131,0,0.,0.,0.,0,"@"},
{"original_primary",8,0,1,1,123,0,0.,0.,0.,0,"@"},
{"single_objective",8,0,1,1,125}
}
Initial value:
{
{"homotopy",8,0,1,1,155}
}
Initial value:
{
{"adaptive_penalty_merit",8,0,1,1,141,0,0.,0.,0.,0,"[CHOOSE merit function]"},
{"augmented_lagrangian_merit",8,0,1,1,145,0,0.,0.,0.,0,"@"},
{"lagrangian_merit",8,0,1,1,143},
{"penalty_merit",8,0,1,1,139}
}
Initial value:
{
{"contract_threshold",10,0,3,0,113,0,0.,0.,0.,0,"{Shrink trust region if trust region ratio is below this value} MethodCommands.html#MethodSBL"},
{"contraction_factor",10,0,5,0,117,0,0.,0.,0.,0,"{Trust region contraction factor} MethodCommands.html#MethodSBL"},
{"expand_threshold",10,0,4,0,115,0,0.,0.,0.,0,"{Expand trust region if trust region ratio is above this value} MethodCommands.html#MethodSBL"},
{"expansion_factor",10,0,6,0,119,0,0.,0.,0.,0,"{Trust region expansion factor} MethodCommands.html#MethodSBL"},
{"initial_size",10,0,1,0,109,0,0.,0.,0.,0,"{Trust region initial size (relative to bounds)} MethodCommands.html#MethodSBL"},
{"minimum_size",10,0,2,0,111,0,0.,0.,0.,0,"{Trust region minimum size} MethodCommands.html#MethodSBL"}
}
Initial value:
{
{"acceptance_logic",8,2,7,0,147,kw_213,0.,0.,0.,0,"{SBL iterate a
Initial value:
{
{"final_point",14,0,1,1,1507,0,0.,0.,0.,0,"[CHOOSE final pt or increment]{Termination point of vector} MethodCommands.html#MethodPSVPS"},
{"num_steps",9,0,2,2,1511,0,0.,0.,0.,0,"{Number of steps along vector} MethodCommands.html#MethodPSVPS"},
{"step_vector",14,0,1,1,1509,0,0.,0.,0.,0,"{Step vector} MethodCommands.html#MethodPSVPS"}
}
Initial value:
{
{"optional_interface_responses_pointer",11,0,1,0,1747,0,0.,0.,0.,0,"{Responses pointer for nested model optional interfaces} ModelCommands.html#ModelNested"}
}
Initial value:
{
{"primary_response_mapping",14,0,3,0,1755,0,0.,0.,0.,0,"{Primary response mappings for nested models} ModelCommands.html#ModelNested"},
{"primary_variable_mapping",15,0,1,0,1751,0,0.,0.,0.,0,"{Primary variable mappings for nested models} ModelCommands.html#ModelNested"},
{"secondary_response_mapping",14,0,4,0,1757,0,0.,0.,0.,0,"{Secondary response mappings for nested models} ModelCommands.html#ModelNested"},
{"secondary_variable_mapping",15,0,2,0,1753,0,0.,0.,0.,0,"{Secondary variable mappings for nested models} ModelCommands.html#ModelNested"}
}
Initial value:
{
{"optional_interface_pointer",11,1,1,0,1745,kw_221,0.,0.,0.,0,"{Optional interface set pointer} ModelCommands.html#ModelNested"},
{"sub_method_pointer",11,4,2,1,1749,kw_222,0.,0.,0.,0,"{Sub-method pointer for nested models} ModelCommands.html#ModelNested"}
}
Initial value:
{
{"interface_pointer",11,0,1,0,1555,0,0.,0.,0.,0,"{Interface set pointer} ModelCommands.html#ModelSingle"}
}
Initial value:
{
{"annotated",8,0,1,0,1709},
{"freeform",8,0,1,0,1711}
}
Initial value:
{
{"additive",8,0,2,2,1691,0,0.,0.,0.,0,"[CHOOSE correction type]"},
{"combined",8,0,2,2,1695},
{"first_order",8,0,1,1,1687,0,0.,0.,0.,0,"[CHOOSE correction order]"},
{"multiplicative",8,0,2,2,1693},
{"second_order",8,0,1,1,1689},
{"zeroth_order",8,0,1,1,1685}
}
Initial value:
{
{"folds",9,0,1,0,1701,0,0.,0.,0.,0,"{Number cross validation folds} ModelCommands.html#ModelSurrG"},
{"percent",10,0,1,0,1703,0,0.,0.,0.,0,"{Percent points per CV fold} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"cross_validate",8,2,1,0,1699,kw_227},
{"press",8,0,2,0,1705,0,0.,0.,0.,0,"{Perform PRESS cross validation} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"annotated",8,0,1,0,1677},
{"freeform",8,0,1,0,1679}
}
Initial value:
{
{"constant",8,0,1,1,1571},
{"linear",8,0,1,1,1573},
{"reduced_quadratic",8,0,1,1,1575}
}
Initial value:
{
{"point_selection",8,0,1,0,1567,0,0.,0.,0.,0,"{GP point selection} ModelCommands.html#ModelSurrG"},
{"trend",8,3,2,0,1569,kw_230,0.,0.,0.,0,"{GP trend function} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"constant",8,0,1,1,1581},
{"linear",8,0,1,1,1583},
{"quadratic",8,0,1,1,1587},
{"reduced_quadratic",8,0,1,1,1585}
}
Initial value:
{
{"correlation_lengths",14,0,5,0,1597,0,0.,0.,0.,0,"{Surfpack GP correlation lengths} ModelCommands.html#ModelSurrG"},
{"export_model_file",11,0,6,0,1599},
{"find_nugget",9,0,4,0,1595,0,0.,0.,0.,0,"{Surfpack finds the optimal nugget} ModelCommands.html#ModelSurrG"},
{"max_trials",0x19,0,3,0,1591,0,0.,0.,0.,0,"{Surfpack GP maximum trials} ModelCommands.html#ModelSurrG"},
{"nugget",0x1a,0,4,0,1593,0,0.,0.,0.,0,"{Surfpack user-specified nugget} ModelCommands.html#ModelSurrG"},
{"optimization_method",11,0,2,0,1589,0,0.,0.,0.,0,"{Surfpack GP optimization method} ModelCommands.html#ModelSurrG"},
{"trend",8,4,1,0,1579,kw_232,0.,0.,0.,0,"{Surfpack GP trend function} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"dakota",8,2,1,1,1565,kw_231},
{"surfpack",8,7,1,1,1577,kw_233}
}
Initial value:
{
{"annotated",8,0,1,0,1671,0,0.,0.,0.,0,"{Challenge file in annotated format} ModelCommands.html#ModelSurrG"},
{"freeform",8,0,1,0,1673,0,0.,0.,0.,0,"{Challenge file in freeform format} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"cubic",8,0,1,1,1609},
{"linear",8,0,1,1,1607}
}
Initial value:
{
{"export_model_file",11,0,3,0,1611},
{"interpolation",8,2,2,0,1605,kw_236,0.,0.,0.,0,"{MARS interpolation} ModelCommands.html#ModelSurrG"},
{"max_bases",9,0,1,0,1603,0,0.,0.,0.,0,"{MARS maximum bases} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"export_model_file",11,0,3,0,1619},
{"poly_order",9,0,1,0,1615,0,0.,0.,0.,0,"{MLS polynomial order} ModelCommands.html#ModelSurrG"},
{"weight_function",9,0,2,0,1617,0,0.,0.,0.,0,"{MLS weight function} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"export_model_file",11,0,4,0,1629},
{"nodes",9,0,1,0,1623,0,0.,0.,0.,0,"{ANN number nodes} ModelCommands.html#ModelSurrG"},
{"random_weight",9,0,3,0,1627,0,0.,0.,0.,0,"{ANN random weight} ModelCommands.html#ModelSurrG"},
{"range",10,0,2,0,1625,0,0.,0.,0.,0,"{ANN range} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"cubic",8,0,1,1,1649,0,0.,0.,0.,0,"[CHOOSE polynomial order]"},
{"export_model_file",11,0,2,0,1651},
{"linear",8,0,1,1,1645},
{"quadratic",8,0,1,1,1647}
}
Initial value:
{
{"bases",9,0,1,0,1633,0,0.,0.,0.,0,"{RBF number of bases} ModelCommands.html#ModelSurrG"},
{"export_model_file",11,0,5,0,1641},
{"max_pts",9,0,2,0,1635,0,0.,0.,0.,0,"{RBF maximum points} ModelCommands.html#ModelSurrG"},
{"max_subsets",9,0,4,0,1639},
{"min_partition",9,0,3,0,1637,0,0.,0.,0.,0,"{RBF minimum partitions} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"all",8,0,1,1,1663},
{"none",8,0,1,1,1667},
{"region",8,0,1,1,1665}
}
Initial value:
{
{"challenge_points_file",11,2,10,0,1707,kw_225,0.,0.,0.,0,"{Challenge file for surrogate metrics} ModelCommands.html#ModelSurrG"},
{"correction",8,6,8,0,1683,kw_226,0.,0.,0.,0,"{Surrogate correction approach} ModelCommands.html#ModelSurrG"},
{"dace_method_pointer",11,0,3,0,1659,0,0.,0.,0.,0,"{Design of experiments method pointer} ModelCommands.html#ModelSurrG"},
{"diagnostics",7,2,9,0,1696,kw_228},
{"export_points_file",11,2,6,0,1675,kw_229,0.,0.,0.,0,"{File export of global approximation-based sample results} ModelCommands.html#ModelSurrG"},
{"gaussian_process",8,2,1,1,1563,kw_234,0.,0.,0.,0,"[CHOOSE surrogate type]{Dakota Gaussian process} ModelCommands.html#ModelSurrG"},
{"import_points_file",11,2,5,0,1669,kw_235,0.,0.,0.,0,"{File import of samples for global approximation builds} ModelCommands.html#ModelSurrG"},
{"kriging",0,2,1,1,1562,kw_234},
{"mars",8,3,1,1,1601,kw_237,0.,0.,0.,0,"{Multivariate adaptive regression splines} ModelCommands.html#ModelSurrG"},
{"metrics",15,2,9,0,1697,kw_228,0.,0.,0.,0,"{Compute surrogate diagnostics} ModelCommands.html#ModelSurrG"},
{"minimum_points",8,0,2,0,1655},
{"moving_least_squares",8,3,1,1,1613,kw_238,0.,0.,0.,0,"{Moving least squares} ModelCommands.html#ModelSurrG"},
{"neural_network",8,4,1,1,1621,kw_239,0.,0.,0.,0,"{Artificial neural network} ModelCommands.html#ModelSurrG"},
{"polynomial",8,4,1,1,1643,kw_240,0.,0.,0.,0,"{Polynomial} ModelCommands.html#ModelSurrG"},
{"radial_basis",8,5,1,1,1631,kw_241},
{"recommended_points",8,0,2,0,1657},
{"reuse_points",8,3,4,0,1661,kw_242},
{"reuse_samples",0,3,4,0,1660,kw_242},
{"samples_file",3,2,5,0,1668,kw_235},
{"total_points",9,0,2,0,1653},
{"use_derivatives",8,0,7,0,1681,0,0.,0.,0.,0,"{Surfpack GP gradient enhancement} ModelCommands.html#ModelSurrG"}
}
Initial value:
{
{"additive",8,0,2,2,1737,0,0.,0.,0.,0,"[CHOOSE correction type]"},
{"combined",8,0,2,2,1741},
{"first_order",8,0,1,1,1733,0,0.,0.,0.,0,"[CHOOSE correction order]"},
{"multiplicative",8,0,2,2,1739},
{"second_order",8,0,1,1,1735},
{"zeroth_order",8,0,1,1,1731}
}
Initial value:
{
{"correction",8,6,3,3,1729,kw_244,0.,0.,0.,0,"{Surrogate correction approach} ModelCommands.html#ModelSurrH"},
{"high_fidelity_model_pointer",11,0,2,2,1727,0,0.,0.,0.,0,"{Pointer to the high fidelity model specification} ModelCommands.html#ModelSurrH"},
{"low_fidelity_model_pointer",11,0,1,1,1725,0,0.,0.,0.,0,"{Pointer to the low fidelity model specification} ModelCommands.html#ModelSurrH"}
}
Initial value:
{
{"actual_model_pointer",11,0,2,2,1721,0,0.,0.,0.,0,"{Pointer to the truth model specification} ModelCommands.html#ModelSurrMP"},
{"taylor_series",8,0,1,1,1719,0,0.,0.,0.,0,"{Taylor series local approximation} ModelCommands.html#ModelSurrL"}
}
Initial value:
{
{"actual_model_pointer",11,0,2,2,1721,0,0.,0.,0.,0,"{Pointer to the truth model specification} ModelCommands.html#ModelSurrMP"},
{"tana",8,0,1,1,1715,0,0.,0.,0.,0,"{Two-point adaptive nonlinear approximation} ModelCommands.html#ModelSurrMP"}
}
Initial value:
{
{"global",8,21,2,1,1561,kw_243,0.,0.,0.,0,"[CHOOSE surrogate category]{Global approximations} ModelCommands.html#ModelSurrG"},
{"hierarchical",8,3,2,1,1723,kw_245,0.,0.,0.,0,"{Hierarchical approximation} ModelCommands.html#ModelSurrH"},
{"id_surrogates",13,0,1,0,1559,0,0.,0.,0.,0,"{Surrogate response ids} ModelCommands.html#ModelSurrogate"},
{"local",8,2,2,1,1717,kw_246,0.,0.,0.,0,"{Local approximation} ModelCommands.html#ModelSurrL"},
{"multipoint",8,2,2,1,1713,kw_247,0.,0.,0.,0,"{Multipoint approximation} ModelCommands.html#ModelSurrMP"}
}
Initial value:
{
{"hierarchical_tagging",8,0,4,0,1551,0,0.,0.,0.,0,"{Hierarchical evaluation tags} ModelCommands.html#ModelIndControl"},
{"id_model",11,0,1,0,1545,0,0.,0.,0.,0,"{Model set identifier} ModelCommands.html#ModelIndControl"},
{"nested",8,2,5,1,1743,kw_223,0.,0.,0.,0,"[CHOOSE model type]"},
{"responses_pointer",11,0,3,0,1549,0,0.,0.,0.,0,"{Responses set pointer} ModelCommands.html#ModelIndControl"},
{"single",8,1,5,1,1553,kw_224,0.,0.,0.,0,"@"},
{"surrogate",8,5,5,1,1557,kw_248},
{"variables_pointer",11,0,2,0,1547,0,0.,0.,0.,0,"{Variables set pointer} ModelCommands.html#ModelIndControl"}
}
Initial value:
{
{"annotated",8,0,3,0,2237,0,0.,0.,0.,0,"{Data file in annotated format} RespCommands.html#RespFnLS"},
{"freeform",8,0,3,0,2239,0,0.,0.,0.,0,"{Data file in freeform format} RespCommands.html#RespFnLS"},
{"num_config_variables",0x29,0,4,0,2241,0,0.,0.,0.,0,"{Configuration variable columns in file} RespCommands.html#RespFnLS"},
{"num_experiments",0x29,0,1,0,2233,0,0.,0.,0.,0,"{Experiments in file} RespCommands.html#RespFnLS"},
{"num_replicates",13,0,2,0,2235,0,0.,0.,0.,0,"{Replicates per each experiment in file} RespCommands.html#RespFnLS"},
{"num_std_deviations",0x29,0,5,0,2243,0,0.,0.,0.,0,"{Standard deviation columns in file} RespCommands.html#RespFnLS"}
}
Initial value:
{
{"nonlinear_equality_scale_types",0x807,0,2,0,2258,0,0.,0.,0.,0,0,0,"nonlinear_equality_constraints"},
{"nonlinear_equality_scales",0x806,0,3,0,2260,0,0.,0.,0.,0,0,0,"nonlinear_equality_constraints"},
{"nonlinear_equality_targets",6,0,1,0,2256,0,0.,0.,0.,0,0,0,"nonlinear_equality_constraints"},
{"scale_types",0x80f,0,2,0,2259,0,0.,0.,0.,0,0,0,"nonlinear_equality_constraints"},
{"scales",0x80e,0,3,0,2261,0,0.,0.,0.,0,0,0,"nonlinear_equality_constraints"},
{"targets",14,0,1,0,2257,0,0.,0.,0.,0,"{Nonlinear equality targets} RespCommands.html#RespFnLS",0,"nonlinear_equality_constraints"}
}
Initial value:
{
{"lower_bounds",14,0,1,0,2247,0,0.,0.,0.,0,"{Nonlinear inequality lower bounds} RespCommands.html#RespFnLS",0,"nonlinear_inequality_constraints"},
{"nonlinear_inequality_lower_bounds",6,0,1,0,2246,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"nonlinear_inequality_scale_types",0x807,0,3,0,2250,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"nonlinear_inequality_scales",0x806,0,4,0,2252,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"nonlinear_inequality_upper_bounds",6,0,2,0,2248,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"scale_types",0x80f,0,3,0,2251,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"scales",0x80e,0,4,0,2253,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"upper_bounds",14,0,2,0,2249,0,0.,0.,0.,0,"{Nonlinear inequality upper bounds} RespCommands.html#RespFnLS",0,"nonlinear_inequality_constraints"}
}
Initial value:
{
{"calibration_data_file",11,6,4,0,2231,kw_250,0.,0.,0.,0,"{Calibration data file name} RespCommands.html#RespFnLS"},
{"calibration_term_scale_types",0x807,0,1,0,2224,0,0.,0.,0.,0,0,0,"calibration_terms"},
{"calibration_term_scales",0x806,0,2,0,2226,0,0.,0.,0.,0,0,0,"calibration_terms"},
{"calibration_weights",6,0,3,0,2228,0,0.,0.,0.,0,0,0,"calibration_terms"},
{"least_squares_data_file",3,6,4,0,2230,kw_250},
{"least_squares_term_scale_types",0x807,0,1,0,2224,0,0.,0.,0.,0,0,0,"calibration_terms"},
{"least_squares_term_scales",0x806,0,2,0,2226,0,0.,0.,0.,0,0,0,"calibration_terms"},
{"least_squares_weights",6,0,3,0,2228,0,0.,0.,0.,0,0,0,"calibration_terms"},
{"nonlinear_equality_constraints",0x29,6,6,0,2255,kw_251,0.,0.,0.,0,"{Number of nonlinear equality constraints} RespCommands.html#RespFnLS"},
{"nonlinear_inequality_constraints",0x29,8,5,0,2245,kw_252,0.,0.,0.,0,"{Number of nonlinear inequality constraints} RespCommands.html#RespFnLS"},
{"num_nonlinear_equality_constraints",0x21,6,6,0,2254,kw_251},
{"num_nonlinear_inequality_constraints",0x21,8,5,0,2244,kw_252},
{"primary_scale_types",0x80f,0,1,0,2225,0,0.,0.,0.,0,"{Calibration scaling types} RespCommands.html#RespFnLS",0,"calibration_terms"},
{"primary_scales",0x80e,0,2,0,2227,0,0.,0.,0.,0,"{Calibration scales} RespCommands.html#RespFnLS",0,"calibration_terms"},
{"weights",14,0,3,0,2229,0,0.,0.,0.,0,"{Calibration term weights} RespCommands.html#RespFnLS",0,"calibration_terms"}
}
Initial value:
{
{"absolute",8,0,2,0,2285},
{"bounds",8,0,2,0,2287},
{"ignore_bounds",8,0,1,0,2281,0,0.,0.,0.,0,"{Ignore variable bounds} RespCommands.html#RespGradMixed"},
{"relative",8,0,2,0,2283}
}
Initial value:
{
{"central",8,0,6,0,2295,0,0.,0.,0.,0,"[CHOOSE difference interval]"},
{"dakota",8,4,4,0,2279,kw_254,0.,0.,0.,0,"@[CHOOSE gradient source]{Interval scaling type} RespCommands.html#RespGradNum"},
{"fd_gradient_step_size",0x406,0,7,0,2296,0,0.,0.,0.001},
{"fd_step_size",0x40e,0,7,0,2297,0,0.,0.,0.001,0,"{Finite difference step size} RespCommands.html#RespGradMixed"},
{"forward",8,0,6,0,2293,0,0.,0.,0.,0,"@"},
{"id_analytic_gradients",13,0,2,2,2273,0,0.,0.,0.,0,"{Analytic derivatives function list} RespCommands.html#RespGradMixed"},
{"id_numerical_gradients",13,0,1,1,2271,0,0.,0.,0.,0,"{Numerical derivatives function list} RespCommands.html#RespGradMixed"},
{"interval_type",8,0,5,0,2291,0,0.,0.,0.,0,"{Interval type} RespCommands.html#RespGradNum"},
{"method_source",8,0,3,0,2277,0,0.,0.,0.,0,"{Method source} RespCommands.html#RespGradNum"},
{"vendor",8,0,4,0,2289}
}
Initial value:
{
{"fd_hessian_step_size",6,0,1,0,2328},
{"fd_step_size",14,0,1,0,2329,0,0.,0.,0.,0,"{Finite difference step size} RespCommands.html#RespHessMixed"}
}
Initial value:
{
{"damped",8,0,1,0,2345,0,0.,0.,0.,0,"{Numerical safeguarding of BFGS update} RespCommands.html#RespHessMixed"}
}
Initial value:
{
{"bfgs",8,1,1,1,2343,kw_257,0.,0.,0.,0,"[CHOOSE Hessian approx.]"},
{"sr1",8,0,1,1,2347}
}
Initial value:
{
{"absolute",8,0,2,0,2333},
{"bounds",8,0,2,0,2335},
{"central",8,0,3,0,2339,0,0.,0.,0.,0,"[CHOOSE difference interval]"},
{"forward",8,0,3,0,2337,0,0.,0.,0.,0,"@"},
{"id_analytic_hessians",13,0,5,0,2349,0,0.,0.,0.,0,"{Analytic Hessians function list} RespCommands.html#RespHessMixed"},
{"id_numerical_hessians",13,2,1,0,2327,kw_256,0.,0.,0.,0,"{Numerical Hessians function list} RespCommands.html#RespHessMixed"},
{"id_quasi_hessians",13,2,4,0,2341,kw_258,0.,0.,0.,0,"{Quasi Hessians function list} RespCommands.html#RespHessMixed"},
{"relative",8,0,2,0,2331}
}
Initial value:
{
{"nonlinear_equality_scale_types",0x807,0,2,0,2218,0,0.,0.,0.,0,0,0,"nonlinear_equality_constraints"},
{"nonlinear_equality_scales",0x806,0,3,0,2220,0,0.,0.,0.,0,0,0,"nonlinear_equality_constraints"},
{"nonlinear_equality_targets",6,0,1,0,2216,0,0.,0.,0.,0,0,0,"nonlinear_equality_constraints"},
{"scale_types",0x80f,0,2,0,2219,0,0.,0.,0.,0,"{Nonlinear scaling types (for inequalities or equalities)} RespCommands.html#RespFnLS",0,"nonlinear_equality_constraints"},
{"scales",0x80e,0,3,0,2221,0,0.,0.,0.,0,"{Nonlinear scales (for inequalities or equalities)} RespCommands.html#RespFnLS",0,"nonlinear_equality_constraints"},
{"targets",14,0,1,0,2217,0,0.,0.,0.,0,"{Nonlinear equality constraint targets} RespCommands.html#RespFnOpt",0,"nonlinear_equality_constraints"}
}
Initial value:
{
{"lower_bounds",14,0,1,0,2207,0,0.,0.,0.,0,"{Nonlinear inequality constraint lower bounds} RespCommands.html#RespFnOpt",0,"nonlinear_inequality_constraints"},
{"nonlinear_inequality_lower_bounds",6,0,1,0,2206,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"nonlinear_inequality_scale_types",0x807,0,3,0,2210,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"nonlinear_inequality_scales",0x806,0,4,0,2212,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"nonlinear_inequality_upper_bounds",6,0,2,0,2208,0,0.,0.,0.,0,0,0,"nonlinear_inequality_constraints"},
{"scale_types",0x80f,0,3,0,2211,0,0.,0.,0.,0,"{Nonlinear constraint scaling types (for inequalities or equalities)} RespCommands.html#RespFnOpt",0,"nonlinear_inequality_constraints"},
{"scales",0x80e,0,4,0,2213,0,0.,0.,0.,0,"{Nonlinear constraint scales (for inequalities or equalities)} RespCommands.html#RespFnOpt",0,"nonlinear_inequality_constraints"},
{"upper_bounds",14,0,2,0,2209,0,0.,0.,0.,0,"{Nonlinear inequality constraint upper bounds} RespCommands.html#RespFnOpt",0,"nonlinear_inequality_constraints"}
}
Initial value:
{
{"multi_objective_weights",6,0,4,0,2202,0,0.,0.,0.,0,0,0,"objective_functions"},
{"nonlinear_equality_constraints",0x29,6,6,0,2215,kw_260,0.,0.,0.,0,"{Number of nonlinear equality constraints} RespCommands.html#RespFnOpt"},
{"nonlinear_inequality_constraints",0x29,8,5,0,2205,kw_261,0.,0.,0.,0,"{Number of nonlinear inequality constraints} RespCommands.html#RespFnOpt"},
{"num_nonlinear_equality_constraints",0x21,6,6,0,2214,kw_260},
{"num_nonlinear_inequality_constraints",0x21,8,5,0,2204,kw_261},
{"objective_function_scale_types",0x807,0,2,0,2198,0,0.,0.,0.,0,0,0,"objective_functions"},
{"objective_function_scales",0x806,0,3,0,2200,0,0.,0.,0.,0,0,0,"objective_functions"},
{"primary_scale_types",0x80f,0,2,0,2199,0,0.,0.,0.,0,"{Objective function scaling types} RespCommands.html#RespFnOpt",0,"objective_functions"},
{"primary_scales",0x80e,0,3,0,2201,0,0.,0.,0.,0,"{Objective function scales} RespCommands.html#RespFnOpt",0,"objective_functions"},
{"sense",0x80f,0,1,0,2197,0,0.,0.,0.,0,"{Optimization sense} RespCommands.html#RespFnOpt",0,"objective_functions"},
{"weights",14,0,4,0,2203,0,0.,0.,0.,0,"{Multi-objective weightings} RespCommands.html#RespFnOpt",0,"objective_functions"}
}
Initial value:
{
{"central",8,0,6,0,2295,0,0.,0.,0.,0,"[CHOOSE difference interval]"},
{"dakota",8,4,4,0,2279,kw_254,0.,0.,0.,0,"@[CHOOSE gradient source]{Interval scaling type} RespCommands.html#RespGradNum"},
{"fd_gradient_step_size",0x406,0,7,0,2296,0,0.,0.,0.001},
{"fd_step_size",0x40e,0,7,0,2297,0,0.,0.,0.001,0,"{Finite difference step size} RespCommands.html#RespGradMixed"},
{"forward",8,0,6,0,2293,0,0.,0.,0.,0,"@"},
{"interval_type",8,0,5,0,2291,0,0.,0.,0.,0,"{Interval type} RespCommands.html#RespGradNum"},
{"method_source",8,0,3,0,2277,0,0.,0.,0.,0,"{Method source} RespCommands.html#RespGradNum"},
{"vendor",8,0,4,0,2289}
}
Initial value:
{
{"absolute",8,0,2,0,2307},
{"bounds",8,0,2,0,2309},
{"central",8,0,3,0,2313,0,0.,0.,0.,0,"[CHOOSE difference interval]"},
{"fd_hessian_step_size",6,0,1,0,2302},
{"fd_step_size",14,0,1,0,2303,0,0.,0.,0.,0,"{Finite difference step size} RespCommands.html#RespHessNum"},
{"forward",8,0,3,0,2311,0,0.,0.,0.,0,"@"},
{"relative",8,0,2,0,2305}
}
Initial value:
{
{"damped",8,0,1,0,2319,0,0.,0.,0.,0,"{Numerical safeguarding of BFGS update} RespCommands.html#RespHessQuasi"}
}
Initial value:
{
{"bfgs",8,1,1,1,2317,kw_265,0.,0.,0.,0,"[CHOOSE Hessian approx.]"},
{"sr1",8,0,1,1,2321}
}
Initial value:
{
{"analytic_gradients",8,0,4,2,2267,0,0.,0.,0.,0,"[CHOOSE gradient type]"},
{"analytic_hessians",8,0,5,3,2323,0,0.,0.,0.,0,"[CHOOSE Hessian type]"},
{"calibration_terms",0x29,15,3,1,2223,kw_253,0.,0.,0.,0,"{{Calibration (Least squares)} Number of calibration terms} RespCommands.html#RespFnLS"},
Initial value:
{
{"method_list",15,0,1,1,39,0,0.,0.,0.,0,"{List of methods} StratCommands.html#StratHybrid"}
}
Initial value:
{
{"global_method_pointer",11,0,1,1,31,0,0.,0.,0.,0,"{Pointer to the global method specification} StratCommands.html#StratHybrid"},
{"local_method_pointer",11,0,2,2,33,0,0.,0.,0.,0,"{Pointer to the local method specification} StratCommands.html#StratHybrid"},
{"local_search_probability",10,0,3,0,35,0,0.,0.,0.,0,"{Probability of executing local searches} StratCommands.html#StratHybrid"}
}
Initial value:
{
{"method_list",15,0,1,1,27,0,0.,0.,0.,0,"{List of methods} StratCommands.html#StratHybrid"}
}
Initial value:
{
{"collaborative",8,1,1,1,37,kw_268,0.,0.,0.,0,"[CHOOSE hybrid type]{Collaborative hybrid} StratCommands.html#StratHybrid"},
{"coupled",0,3,1,1,28,kw_269},
{"embedded",8,3,1,1,29,kw_269,0.,0.,0.,0,"{Embedded hybrid} StratCommands.html#StratHybrid"},
{"sequential",8,1,1,1,25,kw_270,0.,0.,0.,0,"{Sequential hybrid} StratCommands.html#StratHybrid"},
{"uncoupled",0,1,1,1,24,kw_270}
}
Initial value:
{
{"master",8,0,1,1,19},
{"peer",8,0,1,1,21}
}
Initial value:
{
{"seed",9,0,1,0,47,0,0.,0.,0.,0,"{Seed for random starting points} StratCommands.html#StratMultiStart"}
}
Initial value:
{
{"method_pointer",11,0,1,1,43,0,0.,0.,0.,0,"{Method pointer} StratCommands.html#StratMultiStart"},
{"random_starts",9,1,2,0,45,kw_273,0.,0.,0.,0,"{Number of random
Initial value:
{
{"seed",9,0,1,0,57,0,0.,0.,0.,0,"{Seed for random weighting sets} StratCommands.html#StratParetoSet"}
}
Initial value:
{
{"method_pointer",11,0,1,1,53,0,0.,0.,0.,0,"{Optimization method pointer} StratCommands.html#StratParetoSet"},
{"multi_objective_weight_sets",6,0,3,0,58},
{"opt_method_pointer",3,0,1,1,52},
{"random_weight_sets",9,1,2,0,55,kw_275,0.,0.,0.,0,"{Number of random weighting sets} StratCommands.html#StratParetoSet"},
{"weight_sets",14,0,3,0,59,0,0.,0.,0.,0,"{List of user-specified weighting sets} StratCommands.html#StratParetoSet"}
}
Initial value:
{
{"results_output_file",11,0,1,0,13,0,0.,0.,0.,0,"{File name for results output} StratCommands.html#StratIndControl"}
}
Initial value:
{
{"method_pointer",11,0,1,0,63,0,0.,0.,0.,0,"{Method pointer} StratCommands.html#StratSingle"}
}
Initial value:
{
{"tabular_graphics_file",11,0,1,0,7,0,0.,0.,0.,0,"{File name for tabular graphics data} StratCommands.html#StratIndControl"}
}
Initial value:
{
{"graphics",8,0,1,0,3,0,0.,0.,0.,0,"{Graphics flag} StratCommands.html#StratIndControl"},
{"hybrid",8,5,7,1,23,kw_271,0.,0.,0.,0,"[CHOOSE strategy type]{Hybrid strategy} StratCommands.html#StratHybrid"},
{"iterator_scheduling",8,2,6,0,17,kw_272,0.,0.,0.,0,"{Message passing configuration for scheduling of iterator jobs} StratCommands.html#StratIndControl"},
{"iterator_servers",9,0,5,0,15,0,0.,0.,0.,0,"{Number of iterator servers} StratCommands.html#StratIndControl"},
{"multi_start",8,3,7,1,41,kw_274,0.,0.,0.,0,"{Multi-start iteration strategy} StratCommands.html#StratMultiStart"},
{"output_precision",0x29,0,3,0,9,0,0.,0.,0.,0,"{Numeric output precision} StratCommands.html#StratIndControl"},
{"pareto_set",8,5,7,1,51,kw_276,0.,0.,0.,0,"{Pareto set optimization strategy} StratCommands.html#StratParetoSet"},
{"results_output",8,1,4,0,11,kw_277,0.,0.,0.,0,"{Enable results output} StratCommands.html#StratIndControl"},
{"single_method",8,1,7,1,61,kw_278,0.,0.,0.,0,"@{Single method strategy} StratCommands.html#StratSingle"},
{"tabular_graphics_data",8,1,2,0,5,kw_279,0.,0.,0.,0,"{Tabulation of graphics data} StratCommands.html#StratIndControl"}
}
Initial value:
{
{"aleatory",8,0,1,1,1771},
{"all",8,0,1,1,1765},
{"design",8,0,1,1,1767},
{"epistemic",8,0,1,1,1773},
{"state",8,0,1,1,1775},
{"uncertain",8,0,1,1,1769}
}
Initial value:
{
{"alphas",14,0,1,1,1889,0,0.,0.,0.,0,"{beta uncertain alphas} VarCommands.html#VarCAUV_Beta",0,"beta_uncertain"},
{"betas",14,0,2,2,1891,0,0.,0.,0.,0,"{beta uncertain betas} VarCommands.html#VarCAUV_Beta",0,"beta_uncertain"},
{"buv_alphas",6,0,1,1,1888,0,0.,0.,0.,0,0,0,"beta_uncertain"},
{"buv_betas",6,0,2,2,1890,0,0.,0.,0.,0,0,0,"beta_uncertain"},
{"buv_descriptors",7,0,5,0,1896,0,0.,0.,0.,0,0,0,"beta_uncertain"},
{"buv_lower_bounds",6,0,3,3,1892,0,0.,0.,0.,0,0,0,"beta_uncertain"},
{"buv_upper_bounds",6,0,4,4,1894,0,0.,0.,0.,0,0,0,"beta_uncertain"},
{"descriptors",15,0,5,0,1897,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Beta",0,"beta_uncertain"},
{"lower_bounds",14,0,3,3,1893,0,0.,0.,0.,0,"{Distribution lower bounds} VarCommands.html#VarCAUV_Beta",0,"beta_uncertain"},
{"upper_bounds",14,0,4,4,1895,0,0.,0.,0.,0,"{Distribution upper bounds} VarCommands.html#VarCAUV_Beta",0,"beta_uncertain"}
}
Initial value:
{
{"descriptors",15,0,3,0,1955,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDAUV_Binomial",0,"binomial_uncertain"},
{"num_trials",13,0,2,2,1953,0,0.,0.,0.,0,"{binomial uncertain num_trials} VarCommands.html#VarDAUV_Binomial",0,"binomial_uncertain"},
{"prob_per_trial",6,0,1,1,1950,0,0.,0.,0.,0,0,0,"binomial_uncertain"},
{"probability_per_trial",14,0,1,1,1951,0,0.,0.,0.,0,0,0,"binomial_uncertain"}
}
Initial value:
{
{"cdv_descriptors",7,0,6,0,1792,0,0.,0.,0.,0,0,0,"continuous_design"},
{"cdv_initial_point",6,0,1,0,1782,0,0.,0.,0.,0,0,0,"continuous_design"},
{"cdv_lower_bounds",6,0,2,0,1784,0,0.,0.,0.,0,0,0,"continuous_design"},
{"cdv_scale_types",0x807,0,4,0,1788,0,0.,0.,0.,0,0,0,"continuous_design"},
{"cdv_scales",0x806,0,5,0,1790,0,0.,0.,0.,0,0,0,"continuous_design"},
{"cdv_upper_bounds",6,0,3,0,1786,0,0.,0.,0.,0,0,0,"continuous_design"},
{"descriptors",15,0,6,0,1793,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCDV",0,"continuous_design"},
{"initial_point",14,0,1,0,1783,0,0.,0.,0.,0,"{Initial point} VarCommands.html#VarCDV",0,"continuous_design"},
Initial value:
{
{"descriptors",15,0,5,0,2003,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCEUV_Interval",0,"continuous_interval_uncertain"},
{"interval_probabilities",14,0,2,0,1997,0,0.,0.,0.,0,"{basic probability assignments per continuous interval} VarCommands.html#VarCEUV_Interval"},
{"interval_probs",6,0,2,0,1996},
{"iuv_descriptors",7,0,5,0,2002,0,0.,0.,0.,0,0,0,"continuous_interval_uncertain"},
{"iuv_interval_probs",6,0,2,0,1996},
{"iuv_num_intervals",5,0,1,0,1994,0,0.,0.,0.,0,0,0,"continuous_interval_uncertain"},
{"lower_bounds",14,0,3,1,1999,0,0.,0.,0.,0,"{lower bounds of continuous intervals} VarCommands.html#VarCEUV_Interval"},
{"num_intervals",13,0,1,0,1995,0,0.,0.,0.,0,"{number of intervals defined for each continuous interval variable} VarCommands.html#VarCEUV_Interval",0,"continuous_interval_uncertain"},
{"upper_bounds",14,0,4,2,2001,0,0.,0.,0.,0,"{upper bounds of continuous intervals} VarCommands.html#VarCEUV_Interval"}
}
Initial value:
{
{"csv_descriptors",7,0,4,0,2044,0,0.,0.,0.,0,0,0,"continuous_state"},
{"csv_initial_state",6,0,1,0,2038,0,0.,0.,0.,0,0,0,"continuous_state"},
{"csv_lower_bounds",6,0,2,0,2040,0,0.,0.,0.,0,0,0,"continuous_state"},
{"csv_upper_bounds",6,0,3,0,2042,0,0.,0.,0.,0,0,0,"continuous_state"},
{"descriptors",15,0,4,0,2045,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCSV",0,"continuous_state"},
{"initial_state",14,0,1,0,2039,0,0.,0.,0.,0,"{Initial states} VarCommands.html#VarCSV",0,"continuous_state"},
{"lower_bounds",14,0,2,0,2041,0,0.,0.,0.,0,"{Lower bounds} VarCommands.html#VarCSV",0,"continuous_state"},
{"upper_bounds",14,0,3,0,2043,0,0.,0.,0.,0,"{Upper bounds} VarCommands.html#VarCSV",0,"continuous_state"}
}
Initial value:
{
{"ddv_descriptors",7,0,4,0,1802,0,0.,0.,0.,0,0,0,"discrete_design_range"},
{"ddv_initial_point",5,0,1,0,1796,0,0.,0.,0.,0,0,0,"discrete_design_range"},
{"ddv_lower_bounds",5,0,2,0,1798,0,0.,0.,0.,0,0,0,"discrete_design_range"},
{"ddv_upper_bounds",5,0,3,0,1800,0,0.,0.,0.,0,0,0,"discrete_design_range"},
{"descriptors",15,0,4,0,1803,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDDRIV",0,"discrete_design_range"},
{"initial_point",13,0,1,0,1797,0,0.,0.,0.,0,"{Initial point} VarCommands.html#VarDDRIV",0,"discrete_design_range"},
{"lower_bounds",13,0,2,0,1799,0,0.,0.,0.,0,"{Lower bounds} VarCommands.html#VarDDRIV",0,"discrete_design_range"},
{"upper_bounds",13,0,3,0,1801,0,0.,0.,0.,0,"{Upper bounds} VarCommands.html#VarDDRIV",0,"discrete_design_range"}
}
Initial value:
{
{"descriptors",15,0,4,0,1813,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDDSIV",0,"discrete_design_set_integer"},
{"initial_point",13,0,1,0,1807,0,0.,0.,0.,0,"{Initial point} VarCommands.html#VarDDSIV",0,"discrete_design_set_integer"},
{"num_set_values",13,0,2,0,1809,0,0.,0.,0.,0,"{Number of values for each variable} VarCommands.html#VarDDSIV",0,"discrete_design_set_integer"},
{"set_values",13,0,3,1,1811,0,0.,0.,0.,0,"{Set values} VarCommands.html#VarDDSIV"}
}
Initial value:
{
{"descriptors",15,0,4,0,1823,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDDSRV",0,"discrete_design_set_real"},
{"initial_point",14,0,1,0,1817,0,0.,0.,0.,0,"{Initial point} VarCommands.html#VarDDSRV",0,"discrete_design_set_real"},
{"num_set_values",13,0,2,0,1819,0,0.,0.,0.,0,"{Number of values for each variable} VarCommands.html#VarDDSRV",0,"discrete_design_set_real"},
{"set_values",14,0,3,1,1821,0,0.,0.,0.,0,"{Set values} VarCommands.html#VarDDSRV"}
}
Initial value:
{
{"descriptors",15,0,5,0,2015,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDIUV",0,"discrete_interval_uncertain"},
{"interval_probabilities",14,0,2,0,2009,0,0.,0.,0.,0,"{Basic probability assignments per interval} VarCommands.html#VarDIUV"},
{"interval_probs",6,0,2,0,2008},
{"lower_bounds",13,0,3,1,2011,0,0.,0.,0.,0,"{Lower bounds} VarCommands.html#VarDIUV"},
{"num_intervals",13,0,1,0,2007,0,0.,0.,0.,0,"{Number of intervals defined for each interval variable} VarCommands.html#VarDIUV",0,"discrete_interval_uncertain"},
{"range_probabilities",6,0,2,0,2008},
{"range_probs",6,0,2,0,2008},
{"upper_bounds",13,0,4,2,2013,0,0.,0.,0.,0,"{Upper bounds} VarCommands.html#VarDIUV"}
}
Initial value:
{
{"descriptors",15,0,4,0,2055,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDSRIV",0,"discrete_state_range"},
{"dsv_descriptors",7,0,4,0,2054,0,0.,0.,0.,0,0,0,"discrete_state_range"},
{"dsv_initial_state",5,0,1,0,2048,0,0.,0.,0.,0,0,0,"discrete_state_range"},
{"dsv_lower_bounds",5,0,2,0,2050,0,0.,0.,0.,0,0,0,"discrete_state_range"},
{"dsv_upper_bounds",5,0,3,0,2052,0,0.,0.,0.,0,0,0,"discrete_state_range"},
{"initial_state",13,0,1,0,2049,0,0.,0.,0.,0,"{Initial states} VarCommands.html#VarDSRIV",0,"discrete_state_range"},
{"lower_bounds",13,0,2,0,2051,0,0.,0.,0.,0,"{Lower bounds} VarCommands.html#VarDSRIV",0,"discrete_state_range"},
{"upper_bounds",13,0,3,0,2053,0,0.,0.,0.,0,"{Upper bounds} VarCommands.html#VarDSRIV",0,"discrete_state_range"}
}
Initial value:
{
{"descriptors",15,0,4,0,2065,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDSSIV",0,"discrete_state_set_integer"},
{"initial_state",13,0,1,0,2059,0,0.,0.,0.,0,"{Initial state} VarCommands.html#VarDSSIV",0,"discrete_state_set_integer"},
{"num_set_values",13,0,2,0,2061,0,0.,0.,0.,0,"{Number of values for each variable} VarCommands.html#VarDSSIV",0,"discrete_state_set_integer"},
{"set_values",13,0,3,1,2063,0,0.,0.,0.,0,"{Set values} VarCommands.html#VarDSSIV"}
}
Initial value:
{
{"descriptors",15,0,4,0,2075,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDSSRV",0,"discrete_state_set_real"},
{"initial_state",14,0,1,0,2069,0,0.,0.,0.,0,"{Initial state} VarCommands.html#VarDSSRV",0,"discrete_state_set_real"},
{"num_set_values",13,0,2,0,2071,0,0.,0.,0.,0,"{Number of values for each variable} VarCommands.html#VarDSSRV",0,"discrete_state_set_real"},
{"set_values",14,0,3,1,2073,0,0.,0.,0.,0,"{Set values} VarCommands.html#VarDSSRV"}
}
Initial value:
{
{"descriptors",15,0,4,0,2025,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDUSIV",0,"discrete_uncertain_set_integer"},
{"num_set_values",13,0,1,0,2019,0,0.,0.,0.,0,"{Number of values for each variable} VarCommands.html#VarDUSIV",0,"discrete_uncertain_set_integer"},
{"set_probabilities",14,0,3,0,2023,0,0.,0.,0.,0,"{Probabilities for each set member} VarCommands.html#VarDUSIV"},
{"set_probs",6,0,3,0,2022},
{"set_values",13,0,2,1,2021,0,0.,0.,0.,0,"{Set values} VarCommands.html#VarDUSIV"}
}
Initial value:
{
{"descriptors",15,0,4,0,2035,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDUSRV",0,"discrete_uncertain_set_real"},
{"num_set_values",13,0,1,0,2029,0,0.,0.,0.,0,"{Number of values for each variable} VarCommands.html#VarDUSRV",0,"discrete_uncertain_set_real"},
{"set_probabilities",14,0,3,0,2033,0,0.,0.,0.,0,"{Probabilities for each set member} VarCommands.html#VarDUSRV"},
{"set_probs",6,0,3,0,2032},
{"set_values",14,0,2,1,2031,0,0.,0.,0.,0,"{Set values} VarCommands.html#VarDUSRV"}
}
Initial value:
{
{"betas",14,0,1,1,1883,0,0.,0.,0.,0,"{exponential uncertain betas} VarCommands.html#VarCAUV_Exponential",0,"exponential_uncertain"},
{"descriptors",15,0,2,0,1885,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Exponential",0,"exponential_uncertain"},
{"euv_betas",6,0,1,1,1882,0,0.,0.,0.,0,0,0,"exponential_uncertain"},
{"euv_descriptors",7,0,2,0,1884,0,0.,0.,0.,0,0,0,"exponential_uncertain"}
}
Initial value:
{
{"alphas",14,0,1,1,1917,0,0.,0.,0.,0,"{frechet uncertain alphas} VarCommands.html#VarCAUV_Frechet",0,"frechet_uncertain"},
{"betas",14,0,2,2,1919,0,0.,0.,0.,0,"{frechet uncertain betas} VarCommands.html#VarCAUV_Frechet",0,"frechet_uncertain"},
{"descriptors",15,0,3,0,1921,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Frechet",0,"frechet_uncertain"},
{"fuv_alphas",6,0,1,1,1916,0,0.,0.,0.,0,0,0,"frechet_uncertain"},
{"fuv_betas",6,0,2,2,1918,0,0.,0.,0.,0,0,0,"frechet_uncertain"},
{"fuv_descriptors",7,0,3,0,1920,0,0.,0.,0.,0,0,0,"frechet_uncertain"}
}
Initial value:
{
{"alphas",14,0,1,1,1901,0,0.,0.,0.,0,"{gamma uncertain alphas} VarCommands.html#VarCAUV_Gamma",0,"gamma_uncertain"},
{"betas",14,0,2,2,1903,0,0.,0.,0.,0,"{gamma uncertain betas} VarCommands.html#VarCAUV_Gamma",0,"gamma_uncertain"},
{"descriptors",15,0,3,0,1905,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Gamma",0,"gamma_uncertain"},
{"gauv_alphas",6,0,1,1,1900,0,0.,0.,0.,0,0,0,"gamma_uncertain"},
{"gauv_betas",6,0,2,2,1902,0,0.,0.,0.,0,0,0,"gamma_uncertain"},
{"gauv_descriptors",7,0,3,0,1904,0,0.,0.,0.,0,0,0,"gamma_uncertain"}
}
Initial value:
{
{"descriptors",15,0,2,0,1969,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDAUV_Geometric",0,"geometric_uncertain"},
{"prob_per_trial",6,0,1,1,1966,0,0.,0.,0.,0,0,0,"geometric_uncertain"},
{"probability_per_trial",14,0,1,1,1967,0,0.,0.,0.,0,0,0,"geometric_uncertain"}
}
Initial value:
{
{"alphas",14,0,1,1,1909,0,0.,0.,0.,0,"{gumbel uncertain alphas} VarCommands.html#VarCAUV_Gumbel",0,"gumbel_uncertain"},
{"betas",14,0,2,2,1911,0,0.,0.,0.,0,"{gumbel uncertain betas} VarCommands.html#VarCAUV_Gumbel",0,"gumbel_uncertain"},
{"descriptors",15,0,3,0,1913,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Gumbel",0,"gumbel_uncertain"},
{"guuv_alphas",6,0,1,1,1908,0,0.,0.,0.,0,0,0,"gumbel_uncertain"},
{"guuv_betas",6,0,2,2,1910,0,0.,0.,0.,0,0,0,"gumbel_uncertain"},
{"guuv_descriptors",7,0,3,0,1912,0,0.,0.,0.,0,0,0,"gumbel_uncertain"}
}
Initial value:
{
{"abscissas",14,0,2,1,1935,0,0.,0.,0.,0,"{sets of abscissas for bin-based histogram variables} VarCommands.html#VarCAUV_Bin_Histogram"},
{"counts",14,0,3,2,1939,0,0.,0.,0.,0,"{sets of counts for bin-based histogram variables} VarCommands.html#VarCAUV_Bin_Histogram"},
{"descriptors",15,0,4,0,1941,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Bin_Histogram",0,"histogram_bin_uncertain"},
{"huv_bin_abscissas",6,0,2,1,1934},
{"huv_bin_counts",6,0,3,2,1938},
{"huv_bin_descriptors",7,0,4,0,1940,0,0.,0.,0.,0,0,0,"histogram_bin_uncertain"},
{"huv_bin_ordinates",6,0,3,2,1936},
{"huv_num_bin_pairs",5,0,1,0,1932,0,0.,0.,0.,0,0,0,"histogram_bin_uncertain"},
{"num_pairs",13,0,1,0,1933,0,0.,0.,0.,0,"{key to apportionment among bin-based histogram variables} VarCommands.html#VarCAUV_Bin_Histogram",0,"histogram_bin_uncertain"},
{"ordinates",14,0,3,2,1937,0,0.,0.,0.,0,"{sets of ordinates for bin-based histogram variables} VarCommands.html#VarCAUV_Bin_Histogram"}
}
Initial value:
{
{"abscissas",14,0,2,1,1985,0,0.,0.,0.,0,"{sets of abscissas for point-based histogram variables} VarCommands.html#VarDAUV_Point_Histogram"},
{"counts",14,0,3,2,1987,0,0.,0.,0.,0,"{sets of counts for point-based histogram variables} VarCommands.html#VarDAUV_Point_Histogram"},
{"descriptors",15,0,4,0,1989,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDAUV_Point_Histogram",0,"histogram_point_uncertain"},
{"huv_num_point_pairs",5,0,1,0,1982,0,0.,0.,0.,0,0,0,"histogram_point_uncertain"},
{"huv_point_abscissas",6,0,2,1,1984},
{"huv_point_counts",6,0,3,2,1986},
{"huv_point_descriptors",7,0,4,0,1988,0,0.,0.,0.,0,0,0,"histogram_point_uncertain"},
{"num_pairs",13,0,1,0,1983,0,0.,0.,0.,0,"{key to apportionment among point-based histogram variables} VarCommands.html#VarDAUV_Point_Histogram",0,"histogram_point_uncertain"}
}
Initial value:
{
{"descriptors",15,0,4,0,1979,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDAUV_Hypergeometric",0,"hypergeometric_uncertain"},
{"num_drawn",13,0,3,3,1977,0,0.,0.,0.,0,"{hypergeometric uncertain num_drawn } VarCommands.html#VarDAUV_Hypergeometric",0,"hypergeometric_uncertain"},
{"selected_population",13,0,2,2,1975,0,0.,0.,0.,0,"{hypergeometric uncertain selected_population} VarCommands.html#VarDAUV_Hypergeometric",0,"hypergeometric_uncertain"},
{"total_population",13,0,1,1,1973,0,0.,0.,0.,0,"{hypergeometric uncertain total_population} VarCommands.html#VarDAUV_Hypergeometric",0,"hypergeometric_uncertain"}
}
Initial value:
{
{"lnuv_zetas",6,0,1,1,1840,0,0.,0.,0.,0,0,0,"lognormal_uncertain"},
{"zetas",14,0,1,1,1841,0,0.,0.,0.,0,"{lognormal uncertain zetas} VarCommands.html#VarCAUV_Lognormal",0,"lognormal_uncertain"}
}
Initial value:
{
{"error_factors",14,0,1,1,1847,0,0.,0.,0.,0,"[CHOOSE variance spec.]{lognormal uncertain error factors} VarCommands.html#VarCAUV_Lognormal",0,"lognormal_uncertain"},
{"lnuv_error_factors",6,0,1,1,1846,0,0.,0.,0.,0,0,0,"lognormal_uncertain"},
{"lnuv_std_deviations",6,0,1,1,1844,0,0.,0.,0.,0,0,0,"lognormal_uncertain"},
{"std_deviations",14,0,1,1,1845,0,0.,0.,0.,0,"@{lognormal uncertain standard deviations} VarCommands.html#VarCAUV_Lognormal",0,"lognormal_uncertain"}
}
Initial value:
{
{"descriptors",15,0,4,0,1853,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Lognormal",0,"lognormal_uncertain"},
{"lambdas",14,2,1,1,1839,kw_304,0.,0.,0.,0,"[CHOOSE characterization]{lognormal uncertain lambdas} VarCommands.html#VarCAUV_Lognormal",0,"lognormal_uncertain"},
{"lnuv_descriptors",7,0,4,0,1852,0,0.,0.,0.,0,0,0,"lognormal_uncertain"},
{"lnuv_lambdas",6,2,1,1,1838,kw_304,0.,0.,0.,0,0,0,"lognormal_uncertain"},
{"lnuv_lower_bounds",6,0,2,0,1848,0,0.,0.,0.,0,0,0,"lognormal_uncertain"},
{"lnuv_means",6,4,1,1,1842,kw_305,0.,0.,0.,0,0,0,"lognormal_uncertain"},
{"lnuv_upper_bounds",6,0,3,0,1850,0,0.,0.,0.,0,0,0,"lognormal_uncertain"},
{"lower_bounds",14,0,2,0,1849,0,0.,0.,0.,0,"{Distribution lower bounds} VarCommands.html#VarCAUV_Lognormal",0,"lognormal_uncertain"},
{"means",14,4,1,1,1843,kw_305,0.,0.,0.,0,"@{lognormal uncertain means} VarCommands.html#VarCAUV_Lognormal",0,"lognormal_uncertain"},
{"upper_bounds",14,0,3,0,1851,0,0.,0.,0.,0,"{Distribution upper bounds} VarCommands.html#VarCAUV_Lognormal",0,"lognormal_uncertain"}
}
Initial value:
{
{"descriptors",15,0,3,0,1869,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Loguniform",0,"loguniform_uncertain"},
{"lower_bounds",14,0,1,1,1865,0,0.,0.,0.,0,"{Distribution lower bounds} VarCommands.html#VarCAUV_Loguniform",0,"loguniform_uncertain"},
{"luuv_descriptors",7,0,3,0,1868,0,0.,0.,0.,0,0,0,"loguniform_uncertain"},
{"luuv_lower_bounds",6,0,1,1,1864,0,0.,0.,0.,0,0,0,"loguniform_uncertain"},
{"luuv_upper_bounds",6,0,2,2,1866,0,0.,0.,0.,0,0,0,"loguniform_uncertain"},
{"upper_bounds",14,0,2,2,1867,0,0.,0.,0.,0,"{Distribution upper bounds} VarCommands.html#VarCAUV_Loguniform",0,"loguniform_uncertain"}
}
Initial value:
{
{"descriptors",15,0,3,0,1963,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDAUV_Negative_Binomial",0,"negative_binomial_uncertain"},
{"num_trials",13,0,2,2,1961,0,0.,0.,0.,0,"{negative binomial uncertain success num_trials} VarCommands.html#VarDAUV_Negative_Binomial",0,"negative_binomial_uncertain"},
{"prob_per_trial",6,0,1,1,1958,0,0.,0.,0.,0,0,0,"negative_binomial_uncertain"},
{"probability_per_trial",14,0,1,1,1959,0,0.,0.,0.,0,0,0,"negative_binomial_uncertain"}
}
Initial value:
{
{"descriptors",15,0,5,0,1835,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Normal",0,"normal_uncertain"},
{"lower_bounds",14,0,3,0,1831,0,0.,0.,0.,0,"{Distribution lower bounds} VarCommands.html#VarCAUV_Normal",0,"normal_uncertain"},
{"means",14,0,1,1,1827,0,0.,0.,0.,0,"{normal uncertain means} VarCommands.html#VarCAUV_Normal",0,"normal_uncertain"},
{"nuv_descriptors",7,0,5,0,1834,0,0.,0.,0.,0,0,0,"normal_uncertain"},
{"nuv_lower_bounds",6,0,3,0,1830,0,0.,0.,0.,0,0,0,"normal_uncertain"},
{"nuv_means",6,0,1,1,1826,0,0.,0.,0.,0,0,0,"normal_uncertain"},
{"nuv_std_deviations",6,0,2,2,1828,0,0.,0.,0.,0,0,0,"normal_uncertain"},
{"nuv_upper_bounds",6,0,4,0,1832,0,0.,0.,0.,0,0,0,"normal_uncertain"},
{"std_deviations",14,0,2,2,1829,0,0.,0.,0.,0,"{normal uncertain standard deviations} VarCommands.html#VarCAUV_Normal",0,"normal_uncertain"},
{"upper_bounds",14,0,4,0,1833,0,0.,0.,0.,0,"{Distribution upper bounds} VarCommands.html#VarCAUV_Normal",0,"normal_uncertain"}
}
Initial value:
{
{"descriptors",15,0,2,0,1947,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarDAUV_Poisson",0,"poisson_uncertain"},
{"lambdas",14,0,1,1,1945,0,0.,0.,0.,0,"{poisson uncertain lambdas} VarCommands.html#VarDAUV_Poisson",0,"poisson_uncertain"}
}
Initial value:
{
{"descriptors",15,0,4,0,1879,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Triangular",0,"triangular_uncertain"},
{"lower_bounds",14,0,2,2,1875,0,0.,0.,0.,0,"{Distribution lower bounds} VarCommands.html#VarCAUV_Triangular",0,"triangular_uncertain"},
{"modes",14,0,1,1,1873,0,0.,0.,0.,0,"{triangular uncertain modes} VarCommands.html#VarCAUV_Triangular",0,"triangular_uncertain"},
{"tuv_descriptors",7,0,4,0,1878,0,0.,0.,0.,0,0,0,"triangular_uncertain"},
{"tuv_lower_bounds",6,0,2,2,1874,0,0.,0.,0.,0,0,0,"triangular_uncertain"},
{"tuv_modes",6,0,1,1,1872,0,0.,0.,0.,0,0,0,"triangular_uncertain"},
{"tuv_upper_bounds",6,0,3,3,1876,0,0.,0.,0.,0,0,0,"triangular_uncertain"},
{"upper_bounds",14,0,3,3,1877,0,0.,0.,0.,0,"{Distribution upper bounds} VarCommands.html#VarCAUV_Triangular",0,"triangular_uncertain"}
}
Initial value:
{
{"descriptors",15,0,3,0,1861,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Uniform",0,"uniform_uncertain"},
{"lower_bounds",14,0,1,1,1857,0,0.,0.,0.,0,"{Distribution lower bounds} VarCommands.html#VarCAUV_Uniform",0,"uniform_uncertain"},
{"upper_bounds",14,0,2,2,1859,0,0.,0.,0.,0,"{Distribution upper bounds} VarCommands.html#VarCAUV_Uniform",0,"uniform_uncertain"},
{"uuv_descriptors",7,0,3,0,1860,0,0.,0.,0.,0,0,0,"uniform_uncertain"},
{"uuv_lower_bounds",6,0,1,1,1856,0,0.,0.,0.,0,0,0,"uniform_uncertain"},
{"uuv_upper_bounds",6,0,2,2,1858,0,0.,0.,0.,0,0,0,"uniform_uncertain"}
}
Initial value:
{
{"alphas",14,0,1,1,1925,0,0.,0.,0.,0,"{weibull uncertain alphas} VarCommands.html#VarCAUV_Weibull",0,"weibull_uncertain"},
{"betas",14,0,2,2,1927,0,0.,0.,0.,0,"{weibull uncertain betas} VarCommands.html#VarCAUV_Weibull",0,"weibull_uncertain"},
{"descriptors",15,0,3,0,1929,0,0.,0.,0.,0,"{Descriptors} VarCommands.html#VarCAUV_Weibull",0,"weibull_uncertain"},
{"wuv_alphas",6,0,1,1,1924,0,0.,0.,0.,0,0,0,"weibull_uncertain"},
{"wuv_betas",6,0,2,2,1926,0,0.,0.,0.,0,0,0,"weibull_uncertain"},
{"wuv_descriptors",7,0,3,0,1928,0,0.,0.,0.,0,0,0,"weibull_uncertain"}
}
Initial value:
{
{"interface",0x308,8,5,5,2077,kw_14,0.,0.,0.,0,"{Interface} An interface specifies how function evaluations will be performed in order to map a set of parameters into a set of responses. InterfCommands.html"},
{"method",0x308,86,2,2,65,kw_220,0.,0.,0.,0,"{Method} A method specifies the name and controls of an iterative procedure, e.g., a sensitivity analysis, uncertainty quantification, or optimization method. MethodCommands.html"},
{"model",8,7,3,3,1543,kw_249,0.,0.,0.,0,"{Model} A model consists of a model type and maps specified variables through an interface to generate responses. ModelCommands.html"},
{"responses",0x308,19,6,6,2189,kw_267,0.,0.,0.,0,"{Responses} A responses object specifies the data that can be returned to DAKOTA through the interface after the completion of a function evaluation. RespCommands.html"},
{"strategy",0x108,10,1,1,1,kw_280,0.,0.,0.,0,"{Strategy} The strategy specifies the top level technique which will govern the management of iterators and models in the solution of the problem of interest. StratCommands.html"},
{"variables",0x308,37,4,4,1759,kw_314,0.,0.,0.,0,"{Variables} A variables object specifies the parameter set to be iterated by a particular method. VarCommands.html"}
}
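The six top-level keywords in this table (interface, method, model, responses, strategy, variables) correspond to the blocks of a Dakota input file. A minimal hypothetical input sketch follows; the block contents are illustrative only, not a tested study specification:

```
# Hypothetical Dakota 5.x input sketch showing the six top-level blocks
strategy
  single_method

method
  optpp_q_newton

model
  single

variables
  continuous_design = 2
    initial_point  1.0  1.0
    descriptors    'x1' 'x2'

interface
  direct
    analysis_driver = 'rosenbrock'

responses
  objective_functions = 1
  numerical_gradients
  no_hessians
```

Each block is parsed against the corresponding kw_ keyword table above by NIDRProblemDescDB.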
Initial value:
{
{"abscissas",14,0,2,1,0,0.,0.,0,N_vam(newrvec,Var_Info_hba)},
{"counts",14,0,3,2,0,0.,0.,0,N_vam(newrvec,Var_Info_hbc)},
{"descriptors",15,0,4,0,0,0.,0.,0,N_vae(caulbl,CAUVar_histogram_bin)},
{"huv_bin_abscissas",6,0,2,1,0,0.,0.,-3,N_vam(newrvec,Var_Info_hba)},
{"huv_bin_counts",6,0,3,2,0,0.,0.,-3,N_vam(newrvec,Var_Info_hbc)},
{"huv_bin_descriptors",7,0,4,0,0,0.,0.,-3,N_vae(caulbl,CAUVar_histogram_bin)},
{"huv_bin_ordinates",6,0,3,2,0,0.,0.,3,N_vam(newrvec,Var_Info_hbo)},
{"huv_num_bin_pairs",5,0,1,0,0,0.,0.,1,N_vam(newiarray,Var_Info_nhbp)},
{"num_pairs",13,0,1,0,0,0.,0.,0,N_vam(newiarray,Var_Info_nhbp)},
{"ordinates",14,0,3,2,0,0.,0.,0,N_vam(newrvec,Var_Info_hbo)}
}
Initial value:
{
{"abscissas",14,0,2,1,0,0.,0.,0,N_vam(newrvec,Var_Info_hpa)},
{"counts",14,0,3,2,0,0.,0.,0,N_vam(newrvec,Var_Info_hpc)},
{"descriptors",15,0,4,0,0,0.,0.,0,N_vae(daurlbl,DAURVar_histogram_point)},
{"huv_num_point_pairs",5,0,1,0,0,0.,0.,4,N_vam(newiarray,Var_Info_nhpp)},
{"huv_point_abscissas",6,0,2,1,0,0.,0.,-4,N_vam(newrvec,Var_Info_hpa)},
{"huv_point_counts",6,0,3,2,0,0.,0.,-4,N_vam(newrvec,Var_Info_hpc)},
{"huv_point_descriptors",7,0,4,0,0,0.,0.,-4,N_vae(daurlbl,DAURVar_histogram_point)},
{"num_pairs",13,0,1,0,0,0.,0.,0,N_vam(newiarray,Var_Info_nhpp)}
}
Initial value:
{
{"descriptors",15,0,4,0,0,0.,0.,0,N_vae(dauilbl,DAUIVar_hypergeometric)},
{"num_drawn",13,0,3,3,0,0.,0.,0,N_vam(ivec,hyperGeomUncNumDrawn)},
{"selected_population",13,0,2,2,0,0.,0.,0,N_vam(ivec,hyperGeomUncSelectedPop)},
{"total_population",13,0,1,1,0,0.,0.,0,N_vam(ivec,hyperGeomUncTotalPop)}
}
Initial value:
{
{"lnuv_zetas",6,0,1,1,0,0.,0.,1,N_vam(RealLb,lognormalUncZetas)},
{"zetas",14,0,1,1,0,0.,0.,0,N_vam(RealLb,lognormalUncZetas)}
}
Initial value:
{
{"error_factors",14,0,1,1,0,0.,0.,0,N_vam(RealLb,lognormalUncErrFacts)},
{"lnuv_error_factors",6,0,1,1,0,0.,0.,-1,N_vam(RealLb,lognormalUncErrFacts)},
{"lnuv_std_deviations",6,0,1,1,0,0.,0.,1,N_vam(RealLb,lognormalUncStdDevs)},
{"std_deviations",14,0,1,1,0,0.,0.,0,N_vam(RealLb,lognormalUncStdDevs)}
}
Initial value:
{
{"descriptors",15,0,4,0,0,0.,0.,0,N_vae(caulbl,CAUVar_lognormal)},
{"lambdas",14,2,1,1,kw_319,0.,0.,0,N_vam(rvec,lognormalUncLambdas)},
{"lnuv_descriptors",7,0,4,0,0,0.,0.,-2,N_vae(caulbl,CAUVar_lognormal)},
{"lnuv_lambdas",6,2,1,1,kw_319,0.,0.,-2,N_vam(rvec,lognormalUncLambdas)},
{"lnuv_lower_bounds",6,0,2,0,0,0.,0.,3,N_vam(RealLb,lognormalUncLowerBnds)},
{"lnuv_means",6,4,1,1,kw_320,0.,0.,3,N_vam(RealLb,lognormalUncMeans)},
{"lnuv_upper_bounds",6,0,3,0,0,0.,0.,3,N_vam(RealUb,lognormalUncUpperBnds)},
{"lower_bounds",14,0,2,0,0,0.,0.,0,N_vam(RealLb,lognormalUncLowerBnds)},
{"means",14,4,1,1,kw_320,0.,0.,0,N_vam(RealLb,lognormalUncMeans)},
{"upper_bounds",14,0,3,0,0,0.,0.,0,N_vam(RealUb,lognormalUncUpperBnds)}
}
Initial value:
{
{"descriptors",15,0,3,0,0,0.,0.,0,N_vae(caulbl,CAUVar_loguniform)},
{"lower_bounds",14,0,1,1,0,0.,0.,0,N_vam(RealLb,loguniformUncLowerBnds)},
{"luuv_descriptors",7,0,3,0,0,0.,0.,-2,N_vae(caulbl,CAUVar_loguniform)},
{"luuv_lower_bounds",6,0,1,1,0,0.,0.,-2,N_vam(RealLb,loguniformUncLowerBnds)},
{"luuv_upper_bounds",6,0,2,2,0,0.,0.,1,N_vam(RealUb,loguniformUncUpperBnds)},
{"upper_bounds",14,0,2,2,0,0.,0.,0,N_vam(RealUb,loguniformUncUpperBnds)}
}
Initial value:
{
{"descriptors",15,0,3,0,0,0.,0.,0,N_vae(dauilbl,DAUIVar_negative_binomial)},
{"num_trials",13,0,2,2,0,0.,0.,0,N_vam(ivec,negBinomialUncNumTrials)},
{"prob_per_trial",6,0,1,1,0,0.,0.,1,N_vam(rvec,negBinomialUncProbPerTrial)},
{"probability_per_trial",14,0,1,1,0,0.,0.,0,N_vam(rvec,negBinomialUncProbPerTrial)}
}
Initial value:
{
{"descriptors",15,0,5,0,0,0.,0.,0,N_vae(caulbl,CAUVar_normal)},
{"lower_bounds",14,0,3,0,0,0.,0.,0,N_vam(rvec,normalUncLowerBnds)},
{"means",14,0,1,1,0,0.,0.,0,N_vam(rvec,normalUncMeans)},
{"nuv_descriptors",7,0,5,0,0,0.,0.,-3,N_vae(caulbl,CAUVar_normal)},
{"nuv_lower_bounds",6,0,3,0,0,0.,0.,-3,N_vam(rvec,normalUncLowerBnds)},
{"nuv_means",6,0,1,1,0,0.,0.,-3,N_vam(rvec,normalUncMeans)},
{"nuv_std_deviations",6,0,2,2,0,0.,0.,2,N_vam(RealLb,normalUncStdDevs)},
{"nuv_upper_bounds",6,0,4,0,0,0.,0.,2,N_vam(rvec,normalUncUpperBnds)},
{"std_deviations",14,0,2,2,0,0.,0.,0,N_vam(RealLb,normalUncStdDevs)},
{"upper_bounds",14,0,4,0,0,0.,0.,0,N_vam(rvec,normalUncUpperBnds)}
}
Initial value:
{
{"descriptors",15,0,2,0,0,0.,0.,0,N_vae(dauilbl,DAUIVar_poisson)},
{"lambdas",14,0,1,1,0,0.,0.,0,N_vam(rvec,poissonUncLambdas)}
}
Initial value:
{
{"descriptors",15,0,4,0,0,0.,0.,0,N_vae(caulbl,CAUVar_triangular)},
{"lower_bounds",14,0,2,2,0,0.,0.,0,N_vam(RealLb,triangularUncLowerBnds)},
{"modes",14,0,1,1,0,0.,0.,0,N_vam(rvec,triangularUncModes)},
{"tuv_descriptors",7,0,4,0,0,0.,0.,-3,N_vae(caulbl,CAUVar_triangular)},
{"tuv_lower_bounds",6,0,2,2,0,0.,0.,-3,N_vam(RealLb,triangularUncLowerBnds)},
{"tuv_modes",6,0,1,1,0,0.,0.,-3,N_vam(rvec,triangularUncModes)},
{"tuv_upper_bounds",6,0,3,3,0,0.,0.,1,N_vam(RealUb,triangularUncUpperBnds)},
{"upper_bounds",14,0,3,3,0,0.,0.,0,N_vam(RealUb,triangularUncUpperBnds)}
}
Initial value:
{
{"descriptors",15,0,3,0,0,0.,0.,0,N_vae(caulbl,CAUVar_uniform)},
{"lower_bounds",14,0,1,1,0,0.,0.,0,N_vam(RealLb,uniformUncLowerBnds)},
{"upper_bounds",14,0,2,2,0,0.,0.,0,N_vam(RealUb,uniformUncUpperBnds)},
{"uuv_descriptors",7,0,3,0,0,0.,0.,-3,N_vae(caulbl,CAUVar_uniform)},
{"uuv_lower_bounds",6,0,1,1,0,0.,0.,-3,N_vam(RealLb,uniformUncLowerBnds)},
{"uuv_upper_bounds",6,0,2,2,0,0.,0.,-3,N_vam(RealUb,uniformUncUpperBnds)}
}
Initial value:
{
{"alphas",14,0,1,1,0,0.,0.,0,N_vam(RealLb,weibullUncAlphas)},
{"betas",14,0,2,2,0,0.,0.,0,N_vam(RealLb,weibullUncBetas)},
{"descriptors",15,0,3,0,0,0.,0.,0,N_vae(caulbl,CAUVar_weibull)},
{"wuv_alphas",6,0,1,1,0,0.,0.,-3,N_vam(RealLb,weibullUncAlphas)},
{"wuv_betas",6,0,2,2,0,0.,0.,-3,N_vam(RealLb,weibullUncBetas)},
{"wuv_descriptors",7,0,3,0,0,0.,0.,-3,N_vae(caulbl,CAUVar_weibull)}
}
Initial value:
{
{"interface",0x308,8,5,5,kw_14,0.,0.,0,N_ifm3(start,0,stop)},
{"method",0x308,86,2,2,kw_234,0.,0.,0,N_mdm3(start,0,stop)},
{"model",8,7,3,3,kw_264,0.,0.,0,N_mom3(start,0,stop)},
{"responses",0x308,19,6,6,kw_282,0.,0.,0,N_rem3(start,0,stop)},
{"strategy",0x108,10,1,1,kw_295,0.,0.,0,NIDRProblemDescDB::strategy_start},
{"variables",0x308,37,4,4,kw_329,0.,0.,0,N_vam3(start,0,stop)}
}
Initial value:
{
VarLabelInfo(nuv_, NormalUnc),
VarLabelInfo(lnuv_, LognormalUnc),
VarLabelInfo(uuv_, UniformUnc),
VarLabelInfo(luuv_, LoguniformUnc),
VarLabelInfo(tuv_, TriangularUnc),
VarLabelInfo(euv_, ExponentialUnc),
VarLabelInfo(beuv_, BetaUnc),
VarLabelInfo(gauv_, GammaUnc),
VarLabelInfo(guuv_, GumbelUnc),
VarLabelInfo(fuv_, FrechetUnc),
VarLabelInfo(wuv_, WeibullUnc),
VarLabelInfo(hbuv_, HistogramBinUnc)
}
Initial value:
{
VarLabelInfo(puv_, PoissonUnc),
VarLabelInfo(biuv_, BinomialUnc),
VarLabelInfo(nbuv_, NegBinomialUnc),
VarLabelInfo(geuv_, GeometricUnc),
VarLabelInfo(hguv_, HyperGeomUnc)
}
Initial value:
{
VarLabelInfo(hpuv_, HistogramPtUnc)
}
Initial value:
{
VarLabelInfo(ciuv_, ContinuousIntervalUnc)
}
Initial value:
{
VarLabelInfo(diuv_, DiscreteIntervalUnc),
VarLabelInfo(dusiv_, DiscreteUncSetInt)
}
Initial value:
{
VarLabelInfo(dusrv_, DiscreteUncSetReal)
}
Initial value:
{
VarLabelInfo(ddsiv_, DiscreteDesSetInt),
VarLabelInfo(ddsrv_, DiscreteDesSetReal),
VarLabelInfo(dssiv_, DiscreteStateSetInt),
VarLabelInfo(dssrv_, DiscreteStateSetReal)
}
Initial value:
{
{ AVI numContinuousDesVars, AVI continuousDesignLabels, "cdv_", "cdv_descriptors" },
{ AVI numDiscreteDesRangeVars, AVI discreteDesignRangeLabels, "ddriv_", "ddriv_descriptors" },
{ AVI numDiscreteDesSetIntVars, AVI discreteDesignSetIntLabels, "ddsiv_", "ddsiv_descriptors" },
{ AVI numDiscreteDesSetRealVars, AVI discreteDesignSetRealLabels, "ddsrv_", "ddsrv_descriptors" },
{ AVI numContinuousStateVars, AVI continuousStateLabels, "csv_", "csv_descriptors" },
{ AVI numDiscreteStateRangeVars, AVI discreteStateRangeLabels, "dsriv_", "dsriv_descriptors" },
{ AVI numDiscreteStateSetIntVars, AVI discreteStateSetIntLabels, "dssiv_", "dssiv_descriptors" },
{ AVI numDiscreteStateSetRealVars, AVI discreteStateSetRealLabels, "dssrv_", "dssrv_descriptors" },
{ AVI numContinuousDesVars, AVI continuousDesignScaleTypes, 0, "cdv_scale_types" }
}
Initial value:
{
{CAUVar_Nkinds, AVI CAUv, CAUVLbl,
DVR continuousAleatoryUncLabels,
DVR continuousAleatoryUncLowerBnds,
DVR continuousAleatoryUncUpperBnds,
DVR continuousAleatoryUncVars},
{CEUVar_Nkinds, AVI CEUv, CEUVLbl,
DVR continuousEpistemicUncLabels,
DVR continuousEpistemicUncLowerBnds,
DVR continuousEpistemicUncUpperBnds,
DVR continuousEpistemicUncVars},
{DAURVar_Nkinds, AVI DAURv, DAURVLbl,
DVR discreteRealAleatoryUncLabels,
DVR discreteRealAleatoryUncLowerBnds,
DVR discreteRealAleatoryUncUpperBnds,
DVR discreteRealAleatoryUncVars},
{DEURVar_Nkinds, AVI DEURv, DEURVLbl,
DVR discreteRealEpistemicUncLabels,
DVR discreteRealEpistemicUncLowerBnds,
DVR discreteRealEpistemicUncUpperBnds,
DVR discreteRealEpistemicUncVars}}
Initial value:
{
{DAUIVar_Nkinds, AVI DAUIv, DAUIVLbl,
DVR discreteIntAleatoryUncLabels,
DVR discreteIntAleatoryUncLowerBnds,
DVR discreteIntAleatoryUncUpperBnds,
DVR discreteIntAleatoryUncVars},
{DEUIVar_Nkinds, AVI DEUIv, DEUIVLbl,
DVR discreteIntEpistemicUncLabels,
DVR discreteIntEpistemicUncLowerBnds,
DVR discreteIntEpistemicUncUpperBnds,
DVR discreteIntEpistemicUncVars}}
Initial value:
{
Vchk_3(continuous_design,ContinuousDes),
Vchk_3(continuous_state,ContinuousState) }
Initial value:
{
Vchk_3(discrete_design_set_integer,DiscreteDesSetInt),
Vchk_3(discrete_design_set_real,DiscreteDesSetReal),
Vchk_3(discrete_state_set_integer,DiscreteStateSetInt),
Vchk_3(discrete_state_set_real,DiscreteStateSetReal) }
Initial value:
{
Vchk_3(normal_uncertain,NormalUnc),
Vchk_3(lognormal_uncertain,LognormalUnc),
Vchk_3(uniform_uncertain,UniformUnc),
Vchk_3(loguniform_uncertain,LoguniformUnc),
Vchk_3(triangular_uncertain,TriangularUnc),
Vchk_3(exponential_uncertain,ExponentialUnc),
Vchk_3(beta_uncertain,BetaUnc),
Vchk_3(gamma_uncertain,GammaUnc),
Vchk_3(gumbel_uncertain,GumbelUnc),
Vchk_3(frechet_uncertain,FrechetUnc),
Vchk_3(weibull_uncertain,WeibullUnc),
Vchk_3(histogram_bin_uncertain,HistogramBinUnc) }
Initial value:
{
Vchk_3(poisson_uncertain,PoissonUnc),
Vchk_3(binomial_uncertain,BinomialUnc),
Vchk_3(negative_binomial_uncertain,NegBinomialUnc),
Vchk_3(geometric_uncertain,GeometricUnc),
Vchk_3(hypergeometric_uncertain,HyperGeomUnc) }
Initial value:
{
Vchk_3(histogram_point_uncertain,HistogramPtUnc) }
Initial value:
{
Vchk_3(continuous_interval_uncertain,ContinuousIntervalUnc) }
Initial value:
{
Vchk_3(discrete_interval_uncertain,DiscreteIntervalUnc),
Vchk_3(discrete_uncertain_set_integer,DiscreteUncSetInt) }
Initial value:
{
Vchk_3(discrete_uncertain_set_real,DiscreteUncSetReal) }
Initial value:
{
Vchk_7(continuous_design,ContinuousDes,continuousDesign),
Vchk_7(continuous_state,ContinuousState,continuousState),
Vchk_5(normal_uncertain,NormalUnc,normalUnc),
Vchk_5(lognormal_uncertain,LognormalUnc,lognormalUnc),
Vchk_5(uniform_uncertain,UniformUnc,uniformUnc),
Vchk_5(loguniform_uncertain,LoguniformUnc,loguniformUnc),
Vchk_5(triangular_uncertain,TriangularUnc,triangularUnc),
Vchk_5(beta_uncertain,BetaUnc,betaUnc) }
Initial value:
{
Vchk_7(discrete_design_range,DiscreteDesRange,discreteDesignRange),
Vchk_7(discrete_state_range,DiscreteStateRange,discreteStateRange) }
Classes
• class ParallelDirectApplicInterface
Sample derived interface class for testing parallel simulator plug-ins using assign_rep().
• class SerialDirectApplicInterface
Sample derived interface class for testing serial simulator plug-ins using assign_rep().
A sample namespace for derived classes that use assign_rep() to plug facilities into DAKOTA. A typical use of plug-ins with assign_rep() is to publish a simulation interface for use in library mode. See Interfacing with Dakota as a Library for more information.
Class Documentation
• ∼ActiveSet ()
destructor
Private Attributes
• ShortArray requestVector
the vector of response requests
• SizetArray derivVarsVector
the vector of variable ids used for computing derivatives
Friends
• bool operator== (const ActiveSet &set1, const ActiveSet &set2)
equality operator
Container class for active set tracking information. Contains the active set request vector and the derivative variables vector. The ActiveSet class is a small class whose initial design function is to avoid having to pass the ASV and DVV separately. It is not part of a class hierarchy and does not employ reference-counting/representation-sharing idioms (e.g., handle-body).
The vector of response requests. It uses a 0 value for inactive functions and sums 1 (value), 2 (gradient), and 4 (Hessian) for active functions.
Referenced by ActiveSet::ActiveSet(), ActiveSet::operator=(), Dakota::operator==(), ActiveSet::read(),
ActiveSet::request_value(), ActiveSet::request_values(), ActiveSet::request_vector(), ActiveSet::reshape(),
ActiveSet::write(), and ActiveSet::write_annotated().
The vector of variable ids used for computing derivatives. These ids will generally identify either the active continuous variables or the inactive continuous variables.
• DakotaActiveSet.hpp
• DakotaActiveSet.cpp
Inheritance diagram for Analyzer: derived from Iterator; subclassed by NonDCalibration, NonDExpansion, NonDIntegration, NonDInterval, NonDPOFDarts, NonDReliability, NonDSampling, FSUDesignCompExp, ParamStudy, and PSUADEDesignCompExp.
• Analyzer (NoDBBaseConstructor)
alternate constructor for instantiations "on the fly" without a Model
• ∼Analyzer ()
destructor
• void pre_output ()
• void print_results (std::ostream &s)
print the final iterator results
• void variance_based_decomp (int ncont, int ndiscint, int ndiscreal, int num_samples)
• void read_variables_responses (int num_evals, size_t num_vars)
convenience function for reading variables/responses (used in derived classes post_input)
Protected Attributes
• bool compactMode
switch for allSamples (compact mode) instead of allVariables (normal mode)
• VariablesArray allVariables
array of all variables to be evaluated in evaluate_parameter_sets()
• RealMatrix allSamples
compact alternative to allVariables
• IntResponseMap allResponses
array of all responses to be computed in evaluate_parameter_sets()
• StringArray allHeaders
array of headers to insert into output while evaluating allVariables
• size_t numObjFns
number of objective functions
• size_t numLSqTerms
number of least squares terms
• RealPairPRPMultiMap bestVarsRespMap
map which stores best set of solutions
• void compute_best_metrics (const Response &response, std::pair< Real, Real > &metrics)
compares current evaluation to best evaluation and updates best
• void update_best (const Variables &vars, int eval_id, const Response &response)
compares current evaluation to best evaluation and updates best
• void update_best (const Real ∗sample_c_vars, int eval_id, const Response &response)
compares current evaluation to best evaluation and updates best
Private Attributes
• Real vbdDropTol
tolerance for omitting output of small VBD indices
• RealVectorArray S4
VBD main effect indices.
• RealVectorArray T4
VBD total effect indices.
Base class for NonD, DACE, and ParamStudy branches of the iterator hierarchy. The Analyzer class provides
common data and functionality for various types of systems analysis, including nondeterministic analysis, design
of experiments, and parameter studies.
Generate tabular output with active variables (compactMode) or all variables with their labels and response labels,
with no data. Variables are sequenced as {cv, div, drv}.
Reimplemented from Iterator.
References Analyzer::allSamples, Analyzer::allVariables, ParallelLibrary::command_line_pre_run_output(),
ParallelLibrary::command_line_user_modes(), Analyzer::compactMode, Model::current_response(),
Model::current_variables(), Iterator::iteratedModel, Iterator::outputLevel, Model::parallel_library(),
Dakota::write_data_tabular(), Dakota::write_precision, and Iterator::writePrecision.
print the final iterator results. This virtual function provides additional iterator-specific final results output beyond
the function evaluation summary printed in finalize_run().
Reimplemented from Iterator.
Reimplemented in PStudyDACE, Verification, NonDAdaptImpSampling, NonDAdaptiveSampling, Non-
DExpansion, NonDGlobalReliability, NonDGPImpSampling, NonDIncremLHSSampling, NonDInterval,
NonDLHSSampling, NonDLocalReliability, NonDPOFDarts, and RichExtrapVerification.
References Analyzer::bestVarsRespMap, ParamResponsePair::eval_id(), Response::function_values(), Ana-
lyzer::numLSqTerms, Analyzer::numObjFns, ParamResponsePair::prp_parameters(), ParamResponsePair::prp_-
response(), and Dakota::write_data_partial().
13.2.2.3 void evaluate_parameter_sets (Model & model, bool log_resp_flag, bool log_best_flag)
[protected]
perform function evaluations to map parameter sets (allVariables) into response sets (allResponses). Convenience
function for derived classes with sets of function evaluations to perform (e.g., NonDSampling, DDACEDesign-
CompExp, FSUDesignCompExp, ParamStudy).
References Iterator::activeSet, Analyzer::allHeaders, Analyzer::allResponses, Analyzer::allSamples, An-
alyzer::allVariables, Model::asynch_compute_response(), Iterator::asynchFlag, Analyzer::compactMode,
Model::compute_response(), Response::copy(), Model::current_response(), Model::current_variables(),
Model::evaluation_id(), Model::synchronize(), Analyzer::update_best(), Analyzer::update_model_from_-
sample(), and Analyzer::update_model_from_variables().
Referenced by NonDSparseGrid::evaluate_grid_increment(), NonDSparseGrid::evaluate_set(),
PSUADEDesignCompExp::extract_trends(), ParamStudy::extract_trends(), FSUDesignCompExp::extract_-
trends(), DDACEDesignCompExp::extract_trends(), NonDLHSSampling::quantify_uncertainty(),
NonDIntegration::quantify_uncertainty(), NonDIncremLHSSampling::quantify_uncertainty(),
NonDAdaptImpSampling::quantify_uncertainty(), and Analyzer::variance_based_decomp().
13.2.2.4 void variance_based_decomp (int ncont, int ndiscint, int ndiscreal, int num_samples)
[protected]
Calculation of sensitivity indices obtained by variance-based decomposition. These indices are obtained by the
Saltelli version of the Sobol VBD, which uses (K+2)∗N function evaluations, where K is the number of dimensions
(uncertain variables) and N is the number of samples.
References Dakota::abort_handler(), Analyzer::allResponses, Analyzer::allSamples, Analyzer::allVariables,
Analyzer::compactMode, Variables::continuous_variables(), Dakota::copy_data(), Variables::discrete_-
int_variables(), Variables::discrete_real_variables(), Analyzer::evaluate_parameter_sets(), Analyzer::get_-
parameter_sets(), Iterator::iteratedModel, Iterator::numFunctions, Analyzer::S4, Analyzer::T4, and
Analyzer::vary_pattern().
Referenced by FSUDesignCompExp::extract_trends(), DDACEDesignCompExp::extract_trends(), and
NonDLHSSampling::quantify_uncertainty().
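A self-contained sketch of the Saltelli sampling scheme with the (K+2)∗N evaluation count noted above, for a scalar response over independent uniform inputs. The names and free-function form are illustrative, not Dakota's get_parameter_sets()/S4/T4 machinery:

```cpp
#include <cstddef>
#include <functional>
#include <random>
#include <vector>

// Saltelli estimator sketch for Sobol main (S_i) and total (T_i) indices.
struct SobolIndices { std::vector<double> main_effect, total_effect; };

SobolIndices saltelli_vbd(const std::function<double(const std::vector<double>&)>& f,
                          std::size_t K, std::size_t N, unsigned seed = 42)
{
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    // Two independent N x K sample matrices A and B (2*N evaluations).
    std::vector<std::vector<double>> A(N, std::vector<double>(K)),
                                     B(N, std::vector<double>(K));
    for (std::size_t j = 0; j < N; ++j)
        for (std::size_t i = 0; i < K; ++i) { A[j][i] = u(gen); B[j][i] = u(gen); }

    std::vector<double> fA(N), fB(N);
    for (std::size_t j = 0; j < N; ++j) { fA[j] = f(A[j]); fB[j] = f(B[j]); }

    // Total variance estimate pooled over the A and B evaluations.
    double mean = 0.0, var = 0.0;
    for (std::size_t j = 0; j < N; ++j) mean += fA[j] + fB[j];
    mean /= 2.0 * N;
    for (std::size_t j = 0; j < N; ++j)
        var += (fA[j] - mean) * (fA[j] - mean) + (fB[j] - mean) * (fB[j] - mean);
    var /= 2.0 * N - 1.0;

    SobolIndices s{std::vector<double>(K), std::vector<double>(K)};
    for (std::size_t i = 0; i < K; ++i) {        // K more N-sample blocks
        double s_i = 0.0, t_i = 0.0;
        for (std::size_t j = 0; j < N; ++j) {
            std::vector<double> ABi = A[j];      // A with column i taken from B
            ABi[i] = B[j][i];
            double fABi = f(ABi);
            s_i += fB[j] * (fABi - fA[j]);       // main-effect accumulator
            t_i += (fA[j] - fABi) * (fA[j] - fABi); // total-effect (Jansen) form
        }
        s.main_effect[i]  = s_i / (N * var);
        s.total_effect[i] = t_i / (2.0 * N * var);
    }
    return s;
}
```

For f(x) = x[0], the second variable's indices are identically zero and the first variable's indices approach 1 as N grows.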
convenience function for reading variables/responses (used in derived classes' post_input()). Reads num_evals
variables/responses from a file.
References Dakota::abort_handler(), Analyzer::allResponses, Analyzer::allSamples, Analyzer::allVariables,
ParallelLibrary::command_line_post_run_input(), ParallelLibrary::command_line_user_modes(), Ana-
lyzer::compactMode, Response::copy(), Variables::copy(), Model::current_response(), Model::current_-
variables(), Iterator::iteratedModel, Iterator::outputLevel, Model::parallel_library(), and Analyzer::update_best().
Referenced by PSUADEDesignCompExp::post_input(), ParamStudy::post_input(), NonDLHSSampling::post_-
input(), FSUDesignCompExp::post_input(), and DDACEDesignCompExp::post_input().
• DakotaAnalyzer.hpp
• DakotaAnalyzer.cpp
Inheritance diagram for ApplicationInterface:
Interface → ApplicationInterface → DirectApplicInterface, ProcessApplicInterface
DirectApplicInterface → MatlabInterface, PythonInterface, ScilabInterface, TestDriverInterface,
ParallelDirectApplicInterface, SerialDirectApplicInterface
ProcessApplicInterface → ProcessHandleApplicInterface, SysCallApplicInterface
• ∼ApplicationInterface ()
destructor
• void free_communicators ()
deallocate communicator partitions for concurrent evaluations within an iterator and concurrent multiprocessor
analyses within an evaluation.
• void init_serial ()
• int asynch_local_evaluation_concurrency () const
return asynchLocalEvalConcurrency
• void map (const Variables &vars, const ActiveSet &set, Response &response, bool asynch_flag=false)
Provides a "mapping" of variables to responses using a simulation. Protected due to Interface letter-envelope
idiom.
• void manage_failure (const Variables &vars, const ActiveSet &set, Response &response, int failed_eval_id)
• void serve_evaluations ()
run on evaluation servers to serve the iterator master
• void stop_evaluation_servers ()
used by the iterator master to terminate evaluation servers
• virtual void derived_map (const Variables &vars, const ActiveSet &set, Response &response, int fn_eval_-
id)
Called by map() and other functions to execute the simulation in synchronous mode. The portion of performing an
evaluation that is specific to a derived class.
• void master_dynamic_schedule_analyses ()
blocking dynamic schedule of all analyses within a function evaluation using message passing
• void serve_analyses_synch ()
serve the master analysis scheduler and manage one synchronous analysis job at a time
Protected Attributes
• ParallelLibrary & parallelLib
reference to the ParallelLibrary object used to manage MPI partitions for the concurrent evaluations and concur-
rent analyses parallelism levels
• bool suppressOutput
flag for suppressing output on slave processors
• int evalCommSize
size of evalComm
• int evalCommRank
processor rank within evalComm
• int evalServerId
evaluation server identifier
• bool eaDedMasterFlag
flag for dedicated master partitioning at ea level
• int analysisCommSize
size of analysisComm
• int analysisCommRank
processor rank within analysisComm
• int analysisServerId
analysis server identifier
• int numAnalysisServers
current number of analysis servers
• int numAnalysisServersSpec
user spec for number of analysis servers
• bool multiProcAnalysisFlag
flag for multiprocessor analysis partitions
• bool asynchLocalAnalysisFlag
flag for asynchronous local parallelism of analyses
• int asynchLocalAnalysisConcSpec
user specification for asynchronous local analysis concurrency
• int asynchLocalAnalysisConcurrency
limits the number of concurrent analyses in asynchronous local scheduling and specifies hybrid concurrency when
message passing
• int numAnalysisDrivers
the number of analysis drivers used for each function evaluation (from the analysis_drivers interface specification)
• IntSet completionSet
the set of completed fn_eval_id’s populated by wait_local_evaluations() and test_local_evaluations()
• void master_dynamic_schedule_evaluations ()
blocking dynamic schedule of all evaluations in beforeSynchCorePRPQueue using message passing on a dedicated
master partition; executes on iteratorComm master
• void peer_static_schedule_evaluations ()
blocking static schedule of all evaluations in beforeSynchCorePRPQueue using message passing on a peer parti-
tion; executes on iteratorComm master
• void peer_dynamic_schedule_evaluations ()
blocking dynamic schedule of all evaluations in beforeSynchCorePRPQueue using message passing on a peer
partition; executes on iteratorComm master
• void master_dynamic_schedule_evaluations_nowait ()
execute a nonblocking dynamic schedule in a master-slave partition
• void peer_dynamic_schedule_evaluations_nowait ()
execute a nonblocking static/dynamic schedule in a peer partition
• void broadcast_evaluation (int fn_eval_id, const Variables &vars, const ActiveSet &set)
convenience function for broadcasting an evaluation over an evalComm
• void send_evaluation (PRPQueueIter &prp_it, size_t buff_index, int server_id, bool peer_flag)
helper function for sending sendBuffers[buff_index] to server
• void receive_evaluation (PRPQueueIter &prp_it, size_t buff_index, int server_id, bool peer_flag)
helper function for processing recvBuffers[buff_index] within scheduler
helper function for creating an initial active local queue by launching asynch local jobs from local_prp_queue, as
limited by server capacity
• void serve_evaluations_synch ()
serve the evaluation message passing schedulers and perform one synchronous evaluation at a time
• void serve_evaluations_synch_peer ()
serve the evaluation message passing schedulers and perform one synchronous evaluation at a time as part of the
1st peer
• void serve_evaluations_asynch ()
serve the evaluation message passing schedulers and manage multiple asynchronous evaluations
• void serve_evaluations_asynch_peer ()
serve the evaluation message passing schedulers and perform multiple asynchronous evaluations as part of the 1st
peer
• void set_analysis_communicators ()
convenience function for updating the local analysis partition data following ParallelLibrary::init_analysis_-
communicators().
• void init_serial_evaluations ()
set concurrent evaluation configuration for serial operations
• void init_serial_analyses ()
set concurrent analysis configuration for serial operations (e.g., for local executions on a dedicated master)
• void continuation (const Variables &target_vars, const ActiveSet &set, Response &response, const Param-
ResponsePair &source_pair, int failed_eval_id)
performs a 0th order continuation method to step from a successful "source" evaluation to the failed "target".
Invoked by manage_failure() for failAction == "continuation".
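The continuation idea above can be sketched as follows: when the failed "target" cannot be evaluated, halve the step back toward the last successful "source" point until an evaluation succeeds, advance the source, and retry the target. The free-function form with a boolean evaluator is an assumption for illustration; Dakota's continuation() operates on Variables/Response objects inside manage_failure():

```cpp
#include <cstddef>
#include <vector>

// Illustrative 0th-order continuation; not Dakota's actual signature.
template <class Evaluator>  // Evaluator: bool(const std::vector<double>&)
bool continue_toward(std::vector<double> source,
                     const std::vector<double>& target,
                     Evaluator try_eval,
                     int max_halvings = 10, int max_steps = 100)
{
    for (int step = 0; step < max_steps; ++step) {
        if (try_eval(target)) return true;     // reached the failed target
        std::vector<double> trial = target;    // bisect back toward the source
        int k = 0;
        for (; k < max_halvings; ++k) {
            for (std::size_t i = 0; i < trial.size(); ++i)
                trial[i] = 0.5 * (source[i] + trial[i]);
            if (try_eval(trial)) break;        // found an evaluable point
        }
        if (k == max_halvings) return false;   // no evaluable point; give up
        source = trial;                        // advance the source and retry
    }
    return false;
}
```

A toy evaluator that fails whenever the jump from its last success exceeds some threshold shows the method creeping from source to target in admissible steps.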
Private Attributes
• int worldSize
size of MPI_COMM_WORLD
• int worldRank
processor rank within MPI_COMM_WORLD
• int iteratorCommSize
size of iteratorComm
• int iteratorCommRank
processor rank within iteratorComm
• bool ieMessagePass
flag for message passing at ie scheduling level
• int numEvalServers
current number of evaluation servers
• int numEvalServersSpec
user specification for number of evaluation servers
• bool eaMessagePass
flag for message passing at ea scheduling level
• int procsPerAnalysisSpec
user specification for processors per analysis server
• int lenVarsMessage
length of a MPIPackBuffer containing a Variables object; computed in Model::init_communicators()
• int lenVarsActSetMessage
length of a MPIPackBuffer containing a Variables object and an ActiveSet object; computed in Model::init_-
communicators()
• int lenResponseMessage
length of a MPIPackBuffer containing a Response object; computed in Model::init_communicators()
• int lenPRPairMessage
length of a MPIPackBuffer containing a ParamResponsePair object; computed in Model::init_communicators()
• String evalScheduling
user specification of evaluation scheduling algorithm (self, static, or no spec). Used for manual overrides of the
auto-configure logic in ParallelLibrary::resolve_inputs().
• String analysisScheduling
user specification of analysis scheduling algorithm (self, static, or no spec). Used for manual overrides of the
auto-configure logic in ParallelLibrary::resolve_inputs().
• int asynchLocalEvalConcSpec
user specification for asynchronous local evaluation concurrency
• int asynchLocalEvalConcurrency
limits the number of concurrent evaluations in asynchronous local scheduling and specifies hybrid concurrency
when message passing
• bool asynchLocalEvalStatic
whether the asynchronous local evaluations are to be performed with a static schedule (default false)
• BitArray localServerAssigned
array with one bit per logical "server" indicating whether a job is currently running on the server (used for asynch
local static schedules)
• String interfaceSynchronization
interface synchronization specification: synchronous (default) or asynchronous
• bool asvControlFlag
used to manage a user request to deactivate the active set vector control. true = modify the ASV each evaluation as
appropriate (default); false = ASV values are static so that the user need not check them on each evaluation.
• bool evalCacheFlag
used to manage a user request to deactivate the function evaluation cache (i.e., queries and insertions using the
data_pairs cache).
• bool restartFileFlag
used to manage a user request to deactivate the restart file (i.e., insertions into write_restart).
• ShortArray defaultASV
the static ASV values used when the user has selected asvControl = off
• String failAction
mitigation action for captured simulation failures: abort, retry, recover, or continuation
• int failRetryLimit
limit on the number of retries for the retry failAction
• RealVector failRecoveryFnVals
the dummy function values used for the recover failAction
• IntResponseMap historyDuplicateMap
used to bookkeep asynchronous evaluations which duplicate data_pairs evaluations. Map key is evalIdCntr, map
value is corresponding response.
• PRPQueue beforeSynchCorePRPQueue
used to bookkeep vars/set/response of nonduplicate asynchronous core evaluations. This is the queue of jobs popu-
lated by asynchronous map() that is later scheduled in synch() or synch_nowait().
• PRPQueue beforeSynchAlgPRPQueue
used to bookkeep vars/set/response of asynchronous algebraic evaluations. This is the queue of algebraic jobs
populated by asynchronous map() that is later evaluated in synch() or synch_nowait().
• PRPQueue asynchLocalActivePRPQueue
used by nonblocking asynchronous local schedulers to bookkeep active local jobs
• MPIPackBuffer ∗ sendBuffers
array of pack buffers for evaluation jobs queued to a server
• MPIUnpackBuffer ∗ recvBuffers
array of unpack buffers for evaluation jobs returned by a server
• MPI_Request ∗ recvRequests
array of requests for nonblocking evaluation receives
Derived class within the interface class hierarchy for supporting interfaces to simulation codes. ApplicationIn-
terface provides an interface class for performing parameter to response mappings using simulation code(s). It
provides common functionality for a number of derived classes and contains the majority of all of the scheduling
algorithms in DAKOTA. The derived classes provide the specifics for managing code invocations using system
calls, forks, direct procedure calls, or distributed resource facilities.
DataInterface.cpp defaults of 0 servers are needed to distinguish an explicit user request for 1 server (se-
rialization of a parallelism level) from no user request (use parallel auto-config). This default causes
problems when init_communicators() is not called for an interface object (e.g., static scheduling fails in
DirectApplicInterface::derived_map() for NestedModel::optionalInterface). This is the reason for this function:
to reset certain defaults for interface objects that are used serially.
Reimplemented from Interface.
References ApplicationInterface::init_serial_analyses(), and ApplicationInterface::init_serial_evaluations().
13.3.2.2 void map (const Variables & vars, const ActiveSet & set, Response & response, bool
asynch_flag = false) [protected, virtual]
Provides a "mapping" of variables to responses using a simulation. Protected due to Interface letter-envelope
idiom. The function evaluator for application interfaces. Called from derived_compute_response() and derived_-
asynch_compute_response() in derived Model classes. If asynch_flag is not set, perform a blocking evaluation
(using derived_map()). If asynch_flag is set, add the job to the beforeSynchCorePRPQueue queue for execution
by one of the scheduler routines in synch() or synch_nowait(). Duplicate function evaluations are detected with
duplication_detect().
Reimplemented from Interface.
References Response::active_set(), Interface::algebraic_mappings(), Interface::algebraicMappings,
Interface::asv_mapping(), ApplicationInterface::asvControlFlag, ApplicationInter-
face::beforeSynchAlgPRPQueue, ApplicationInterface::beforeSynchCorePRPQueue,
ApplicationInterface::broadcast_evaluation(), Response::copy(), Interface::coreMappings, Inter-
face::currEvalId, Dakota::data_pairs, ApplicationInterface::defaultASV, ApplicationInterface::derived_map(),
ApplicationInterface::duplication_detect(), ApplicationInterface::evalCacheFlag, Interface::evalIdCntr, Inter-
face::fineGrainEvalCounters, Interface::fnGradCounter, Interface::fnHessCounter, Interface::fnLabels, Inter-
face::fnValCounter, Response::function_labels(), Interface::init_algebraic_mappings(), Interface::interfaceId,
ApplicationInterface::manage_failure(), Interface::multiProcEvalFlag, Interface::newEvalIdCntr, Inter-
face::newFnGradCounter, Interface::newFnHessCounter, Interface::newFnValCounter, Interface::outputLevel,
ActiveSet::request_vector(), Interface::response_mapping(), ApplicationInterface::restartFileFlag, and
Dakota::write_restart.
executes a blocking schedule for asynchronous evaluations in the beforeSynchCorePRPQueue and returns all jobs.
This function provides blocking synchronization for all cases of asynchronous evaluations, including the local
asynchronous case (background system call, nonblocking fork, & multithreads), the message passing case, and
the hybrid case. Called from derived_synchronize() in derived Model classes.
Reimplemented from Interface.
References Interface::algebraic_mappings(), Interface::algebraicMappings, Interface::asv_mapping(), Ap-
plicationInterface::asynchLocalEvalStatic, ApplicationInterface::asynchronous_local_evaluations(), Applica-
tionInterface::beforeSynchAlgPRPQueue, ApplicationInterface::beforeSynchCorePRPQueue, ApplicationIn-
executes a nonblocking schedule for asynchronous evaluations in the beforeSynchCorePRPQueue and returns a
partial set of completed jobs. This function provides nonblocking synchronization for the local asynchronous case
and selected nonblocking message passing schedulers. Called from derived_synchronize_nowait() in derived
Model classes.
Reimplemented from Interface.
References Dakota::abort_handler(), Interface::algebraic_mappings(), Interface::algebraicMappings,
Interface::asv_mapping(), ApplicationInterface::asynchLocalEvalStatic, ApplicationInterface::asynchronous_-
local_evaluations_nowait(), ApplicationInterface::beforeSynchAlgPRPQueue, ApplicationIn-
terface::beforeSynchCorePRPQueue, ApplicationInterface::beforeSynchDuplicateMap, Inter-
face::coreMappings, ParamResponsePair::eval_id(), ApplicationInterface::evalScheduling, Application-
Interface::historyDuplicateMap, Interface::ieDedMasterFlag, ApplicationInterface::ieMessagePass, Inter-
face::interfaceId, Interface::interfaceType, Dakota::lookup_by_eval_id(), ApplicationInterface::master_-
dynamic_schedule_evaluations_nowait(), Interface::multiProcEvalFlag, Interface::outputLevel,
ApplicationInterface::peer_dynamic_schedule_evaluations_nowait(), ParamResponsePair::prp_response(),
Interface::rawResponseMap, Interface::response_mapping(), and Response::update().
run on evaluation servers to serve the iterator master. Invoked by the serve() function in derived Model classes.
Passes control to serve_evaluations_synch(), serve_evaluations_asynch(), serve_evaluations_synch_peer(), or
serve_evaluations_asynch_peer() according to specified concurrency, partition, and scheduler configuration.
Reimplemented from Interface.
References ApplicationInterface::asynchLocalEvalConcurrency, ApplicationInterface::evalServerId, Inter-
face::ieDedMasterFlag, ApplicationInterface::serve_evaluations_asynch(), ApplicationInterface::serve_-
evaluations_asynch_peer(), ApplicationInterface::serve_evaluations_synch(), and ApplicationInterface::serve_-
evaluations_synch_peer().
used by the iterator master to terminate evaluation servers. This code is executed on the iteratorComm rank 0
processor when iteration on a particular model is complete. It sends a termination signal (tag = 0 instead of a
valid fn_eval_id) to each of the slave analysis servers. NOTE: This function is called from the Strategy layer even
when in serial mode. Therefore, use iteratorCommSize to provide appropriate fall through behavior.
Reimplemented from Interface.
References ParallelLibrary::bcast_e(), ParallelLibrary::free(), Interface::ieDedMasterFlag,
perform construct-time error checks on the parallel configuration. Override DirectApplicInterface definition if
plug-in to allow batch processing in Plugin{Serial,Parallel}DirectApplicInterface.cpp
Reimplemented in DirectApplicInterface, ProcessHandleApplicInterface, and SysCallApplicInterface.
Referenced by ApplicationInterface::init_communicators().
perform run-time error checks on the parallel configuration. Override DirectApplicInterface definition if plug-in
to allow batch processing in Plugin{Serial,Parallel}DirectApplicInterface.cpp
Reimplemented in DirectApplicInterface, ParallelDirectApplicInterface, SerialDirectApplicInterface, Pro-
cessHandleApplicInterface, and SysCallApplicInterface.
Referenced by ApplicationInterface::set_communicators().
blocking dynamic schedule of all analyses within a function evaluation using message passing. This code is called
from derived classes to provide the master portion of a master-slave algorithm for the dynamic scheduling of anal-
yses among slave servers. It is patterned after master_dynamic_schedule_evaluations(). It performs no analyses
locally and matches either serve_analyses_synch() or serve_analyses_asynch() on the slave servers, depending
on the value of asynchLocalAnalysisConcurrency. Dynamic scheduling assigns jobs in 2 passes. The 1st pass
gives each server the same number of jobs (equal to asynchLocalAnalysisConcurrency). The 2nd pass assigns
the remaining jobs to slave servers as previous jobs are completed. Single- and multilevel parallel use intra- and
inter-communicators, respectively, for send/receive. Specific syntax is encapsulated within ParallelLibrary.
References ApplicationInterface::asynchLocalAnalysisConcurrency, ParallelLibrary::free(),
ParallelLibrary::irecv_ea(), ParallelLibrary::isend_ea(), ApplicationInterface::numAnalysisDrivers, Appli-
cationInterface::numAnalysisServers, ApplicationInterface::parallelLib, ParallelLibrary::waitall(), and Parallel-
Library::waitsome().
Referenced by SysCallApplicInterface::create_evaluation_process(), ProcessHandleApplicInterface::create_-
evaluation_process(), and DirectApplicInterface::derived_map().
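The 2-pass assignment described above can be illustrated with a serial simulation, in which a FIFO of completion notifications stands in for waitsome(); the real scheduler does this with ParallelLibrary's isend_ea/irecv_ea message passing, and all names here are illustrative:

```cpp
#include <queue>
#include <vector>

// Single-process simulation of the 2-pass dynamic schedule: pass 1 primes
// every server with `capacity` jobs; pass 2 assigns each remaining job to
// whichever server reports a completion first.
std::vector<int> dynamic_schedule(int num_jobs, int num_servers, int capacity)
{
    std::vector<int> owner(num_jobs, -1);   // which server ran each job
    std::queue<int> completions;            // server ids in completion order
    int next = 0;                           // next unassigned job id
    // Pass 1: give each server the same number of jobs up front.
    for (int s = 0; s < num_servers && next < num_jobs; ++s)
        for (int c = 0; c < capacity && next < num_jobs; ++c) {
            owner[next++] = s;
            completions.push(s);            // each launched job will complete
        }
    // Pass 2: backfill remaining jobs as completions arrive ("waitsome").
    while (next < num_jobs) {
        int s = completions.front(); completions.pop();
        owner[next++] = s;
        completions.push(s);
    }
    return owner;
}
```

With 10 jobs, 3 servers, and capacity 2, pass 1 assigns jobs 0-5 in blocks of 2, and pass 2 hands jobs 6-9 to servers in the order they free up.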
serve the master analysis scheduler and manage one synchronous analysis job at a time. This code is called from
derived classes to run synchronous analyses on slave processors. The slaves receive requests (blocking receive),
do local derived_map_ac’s, and return codes. This is done continuously until a termination signal is received from
the master. It is patterned after serve_evaluations_synch().
References ApplicationInterface::analysisCommRank, ParallelLibrary::bcast_a(), ParallelLibrary::isend_ea(),
ApplicationInterface::multiProcAnalysisFlag, ApplicationInterface::parallelLib, ParallelLibrary::recv_ea(),
13.3.2.11 bool duplication_detect (const Variables & vars, Response & response, bool asynch_flag)
[private]
checks data_pairs and beforeSynchCorePRPQueue to see if the current evaluation request has already been
performed or queued. Called from map() to check an incoming evaluation request for duplication with the content
of data_pairs and beforeSynchCorePRPQueue. If duplication is detected, return true; else return false. Manage
bookkeeping with historyDuplicateMap and beforeSynchDuplicateMap. Note that the list searches can get very expensive
if a long list is searched on every new function evaluation (either from a large number of previous jobs, a large
number of pending jobs, or both). For this reason, a user request for deactivation of the evaluation cache results
in a complete bypass of duplication_detect(), even though a beforeSynchCorePRPQueue search would still be
meaningful. Since the intent of this request is to streamline operations, both list searches are bypassed.
References Response::active_set(), ApplicationInterface::beforeSynchCorePRPQueue, Application-
Interface::beforeSynchDuplicateMap, Response::copy(), Dakota::data_pairs, Interface::evalIdCntr,
Dakota::hashedQueueEnd(), ApplicationInterface::historyDuplicateMap, Interface::interfaceId,
Dakota::lookup_by_val(), and Response::update().
Referenced by ApplicationInterface::map().
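The cache lookup can be sketched with an exact-match map keyed on the parameter vector. This stands in for data_pairs; Dakota's real cache is a multi-indexed ParamResponsePair container that also searches the queue of pending asynchronous jobs, and both searches are bypassed when the user deactivates the evaluation cache:

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Illustrative evaluation cache, not Dakota's data_pairs structure.
class EvalCache {
public:
    // Returns true and fills `response` on a cache hit.
    bool find(const std::vector<double>& vars, double& response) const {
        auto it = cache_.find(vars);
        if (it == cache_.end()) return false;
        response = it->second;
        return true;
    }
    void insert(const std::vector<double>& vars, double response)
    { cache_[vars] = response; }
    std::size_t size() const { return cache_.size(); }
private:
    // Ordered map with lexicographic vector comparison: exact-match keys only.
    std::map<std::vector<double>, double> cache_;
};
```

A duplication check then reduces to one find() before launching each new evaluation, with insert() after each completion.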
blocking dynamic schedule of all evaluations in beforeSynchCorePRPQueue using message passing on a dedicated
master partition; executes on iteratorComm master. This code is called from synch() to provide the master
portion of a master-slave algorithm for the dynamic scheduling of evaluations among slave servers. It performs
no evaluations locally and matches either serve_evaluations_synch() or serve_evaluations_asynch() on the slave
servers, depending on the value of asynchLocalEvalConcurrency. Dynamic scheduling assigns jobs in 2 passes.
The 1st pass gives each server the same number of jobs (equal to asynchLocalEvalConcurrency). The 2nd pass
assigns the remaining jobs to slave servers as previous jobs are completed and returned. Single- and multilevel
parallel use intra- and inter-communicators, respectively, for send/receive. Specific syntax is encapsulated within
ParallelLibrary.
References ApplicationInterface::asynchLocalEvalConcurrency, ApplicationInter-
face::beforeSynchCorePRPQueue, Dakota::lookup_by_eval_id(), ApplicationInterface::numEvalServers,
Interface::outputLevel, ApplicationInterface::parallelLib, ApplicationInterface::receive_evaluation(), Appli-
cationInterface::recvBuffers, ApplicationInterface::recvRequests, ApplicationInterface::send_evaluation(),
ApplicationInterface::sendBuffers, ParallelLibrary::waitall(), and ParallelLibrary::waitsome().
Referenced by ApplicationInterface::synch().
blocking static schedule of all evaluations in beforeSynchCorePRPQueue using message passing on a peer
partition; executes on iteratorComm master. This code runs on the iteratorCommRank 0 processor (the iterator) and is
called from synch() in order to manage a static schedule for cases where peer 1 must block when evaluating its lo-
cal job allocation (e.g., single or multiprocessor direct interface evaluations). It matches serve_evaluations_peer()
for any other processors within the first evaluation partition and serve_evaluations_{synch,asynch}() for all other
evaluation partitions (depending on asynchLocalEvalConcurrency). It performs function evaluations locally for its
portion of the job allocation using either asynchronous_local_evaluations() or synchronous_local_evaluations().
Single-level and multilevel parallel use intra- and inter-communicators, respectively, for send/receive. Specific
syntax is encapsulated within ParallelLibrary. The iteratorCommRank 0 processor assigns the static schedule
since it is the only processor with access to beforeSynchCorePRPQueue (it runs the iterator and calls synchro-
nize). The alternate design of each peer selecting its own jobs using the modulus operator would be applicable if
execution of this function (and therefore the job list) were distributed.
References ApplicationInterface::asynchLocalEvalConcurrency, ApplicationInterface::asynchronous_local_-
evaluations(), ApplicationInterface::beforeSynchCorePRPQueue, ApplicationInterface::numEvalServers,
Interface::outputLevel, ApplicationInterface::parallelLib, ApplicationInterface::receive_evaluation(), Appli-
cationInterface::recvBuffers, ApplicationInterface::recvRequests, ApplicationInterface::send_evaluation(),
ApplicationInterface::sendBuffers, ApplicationInterface::synchronous_local_evaluations(), and ParallelLi-
brary::waitall().
Referenced by ApplicationInterface::synch().
blocking dynamic schedule of all evaluations in beforeSynchCorePRPQueue using message passing on a peer
partition; executes on iteratorComm master. This code runs on the iteratorCommRank 0 processor (the iterator)
and is called from synch() in order to manage a dynamic schedule, as enabled by nonblocking management of local
asynchronous jobs. It matches serve_evaluations_{synch,asynch}() for other evaluation partitions, depending on
asynchLocalEvalConcurrency; it does not match serve_evaluations_peer() since, for local asynchronous jobs, the
first evaluation partition cannot be multiprocessor. It performs function evaluations locally for its portion of the
job allocation using asynchronous_local_evaluations_nowait(). Single-level and multilevel parallel use intra- and
inter-communicators, respectively, for send/receive. Specific syntax is encapsulated within ParallelLibrary.
References ApplicationInterface::assign_asynch_local_queue(), ApplicationInter-
face::asynchLocalEvalConcurrency, ApplicationInterface::beforeSynchCorePRPQueue, ApplicationInter-
face::msgPassRunningMap, ApplicationInterface::numEvalServers, Interface::outputLevel, ApplicationInter-
face::recvBuffers, ApplicationInterface::recvRequests, ApplicationInterface::send_evaluation(), Application-
Interface::sendBuffers, ApplicationInterface::test_local_backfill(), and ApplicationInterface::test_receives_-
backfill().
Referenced by ApplicationInterface::synch().
perform all jobs in prp_queue using asynchronous approaches on the local processor. This function provides
blocking synchronization for the local asynch case (background system call, nonblocking fork, or threads). It can be
called from synch() for a complete local scheduling of all asynchronous jobs or from peer_{static,dynamic}_-
schedule_evaluations() to perform a local portion of the total job set. It uses derived_map_asynch() to initi-
ate asynchronous evaluations and wait_local_evaluations() to capture completed jobs, and mirrors the master_-
dynamic_schedule_evaluations() message passing scheduler as much as possible (wait_local_evaluations() is
modeled after MPI_Waitsome()).
References ApplicationInterface::assign_asynch_local_queue(), ApplicationInter-
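The launch/harvest/backfill pattern above can be sketched with std::async standing in for the background system calls or nonblocking forks that derived_map_asynch() launches; blocking on the oldest in-flight job is a deliberate simplification of the waitsome-style harvest in wait_local_evaluations(), and all names here are illustrative:

```cpp
#include <cstddef>
#include <future>
#include <utility>
#include <vector>

static double square(double x) { return x * x; }  // example "simulation"

// Keep at most `capacity` evaluations in flight; harvest and backfill.
std::vector<double> asynch_local_evaluations(const std::vector<double>& jobs,
                                             double (*f)(double),
                                             std::size_t capacity)
{
    std::vector<double> results(jobs.size());
    std::vector<std::pair<std::size_t, std::future<double>>> active;
    std::size_t next = 0;
    while (next < jobs.size() || !active.empty()) {
        // Launch new jobs while server capacity allows.
        while (next < jobs.size() && active.size() < capacity) {
            active.emplace_back(next,
                std::async(std::launch::async, f, jobs[next]));
            ++next;
        }
        // Harvest the oldest in-flight job, freeing a slot for backfill.
        results[active.front().first] = active.front().second.get();
        active.erase(active.begin());
    }
    return results;
}
```

With capacity 2 and four queued jobs, at most two evaluations ever run concurrently, yet all four results land in queue order.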
perform all jobs in prp_queue using synchronous approaches on the local processor. This function provides
blocking synchronization for the local synchronous case (foreground system call, blocking fork, or procedure call from
derived_map()). It is called from peer_static_schedule_evaluations() to perform a local portion of the total job set.
References ApplicationInterface::broadcast_evaluation(), Interface::currEvalId, ApplicationInterface::derived_-
map(), ApplicationInterface::manage_failure(), Interface::multiProcEvalFlag, and
ApplicationInterface::process_synch_local().
Referenced by ApplicationInterface::peer_static_schedule_evaluations().
execute a nonblocking dynamic schedule in a master-slave partition. This code is called from synch_nowait() to
provide the master portion of a nonblocking master-slave algorithm for the dynamic scheduling of evaluations
among slave servers. It performs no evaluations locally and matches either serve_evaluations_synch() or serve_-
evaluations_asynch() on the slave servers, depending on the value of asynchLocalEvalConcurrency. Dynamic
scheduling assigns jobs in 2 passes. The 1st pass gives each server the same number of jobs (equal to asynchLo-
calEvalConcurrency). The 2nd pass assigns the remaining jobs to slave servers as previous jobs are completed.
Single- and multilevel parallel use intra- and inter-communicators, respectively, for send/receive. Specific syntax
is encapsulated within ParallelLibrary.
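A minimal sketch of the capacity-filling first pass (names are invented; the second pass is completion-driven and so is not modeled here):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// First pass of the dynamic schedule: each of `num_servers` slave
// servers receives up to `cap` jobs (cap plays the role of
// asynchLocalEvalConcurrency); the remainder is held back and assigned
// in the second pass as completions come in.
struct FirstPass {
    std::vector<int> per_server;  // jobs initially assigned per server
    int backlog;                  // jobs left for the second pass
};

FirstPass first_pass(int num_jobs, int num_servers, int cap)
{
    FirstPass fp{std::vector<int>(num_servers, 0), 0};
    int assigned = 0;
    for (int s = 0; s < num_servers && assigned < num_jobs; ++s) {
        fp.per_server[s] = std::min(cap, num_jobs - assigned);
        assigned += fp.per_server[s];
    }
    fp.backlog = num_jobs - assigned;
    return fp;
}
```

For example, 10 jobs across 3 servers with a concurrency cap of 2 assigns 2 jobs to each server and leaves a backlog of 4 for the completion-driven second pass.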
References Dakota::abort_handler(), ApplicationInterface::asynchLocalEvalConcurrency, ApplicationInterface::beforeSynchCorePRPQueue, ApplicationInterface::msgPassRunningMap, ApplicationInterface::numEvalServers, ApplicationInterface::recvBuffers, ApplicationInterface::recvRequests, ApplicationInterface::send_evaluation(), ApplicationInterface::sendBuffers, and ApplicationInterface::test_receives_backfill().
Referenced by ApplicationInterface::synch_nowait().
execute a nonblocking static/dynamic schedule in a peer partition. This code runs on the iteratorCommRank 0 processor (the iterator) and is called from synch_nowait() in order to manage a nonblocking static schedule. It matches serve_evaluations_{synch,asynch}() for other evaluation partitions (depending on asynchLocalEvalConcurrency). It performs nonblocking local function evaluations for its portion of the static schedule using asynchronous_local_evaluations(). Single-level and multilevel parallel use intra- and inter-communicators, respectively, for send/receive. Specific syntax is encapsulated within ParallelLibrary. The iteratorCommRank 0 processor assigns the static schedule since it is the only processor with access to beforeSynchCorePRPQueue (it runs the iterator and calls synchronize). The alternate design, in which each peer selects its own jobs using the modulus operator, would be applicable only if execution of this function (and therefore the job list) were distributed.
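The alternate modulus-based design mentioned above is simple to state in code. This hypothetical helper (not Dakota code) shows why it requires every peer to see the whole job list:

```cpp
#include <cassert>
#include <vector>

// Hypothetical peer self-selection: peer `rank` of `num_peers` takes
// every job whose index satisfies index % num_peers == rank.  This only
// works if every peer can enumerate the full job list, which is why
// Dakota instead has the iteratorCommRank 0 processor (the owner of
// beforeSynchCorePRPQueue) assign the static schedule centrally.
std::vector<int> peer_jobs(int num_jobs, int num_peers, int rank)
{
    std::vector<int> mine;
    for (int i = 0; i < num_jobs; ++i)
        if (i % num_peers == rank)
            mine.push_back(i);
    return mine;
}
```

Seven jobs split across three peers gives peer 0 the jobs {0, 3, 6}, peer 1 the jobs {1, 4}, and peer 2 the jobs {2, 5}.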
launch new jobs in prp_queue asynchronously (if capacity is available), perform nonblocking query of all running jobs, and process any completed jobs (handles both local master- and local peer-scheduling cases). This function provides nonblocking synchronization for the local asynch case (background system call, nonblocking fork, or threads). It is called from synch_nowait() and passed the complete set of all asynchronous jobs (beforeSynchCorePRPQueue). It uses derived_map_asynch() to initiate asynchronous evaluations and test_local_evaluations() to capture completed jobs in nonblocking mode. It mirrors a nonblocking message passing scheduler as much as possible (test_local_evaluations() is modeled after MPI_Testsome()). The result of this function is rawResponseMap, which uses eval_id as a key. It is assumed that the incoming local_prp_queue contains only active and new jobs, i.e., all completed jobs are cleared by synch_nowait().
Also supports asynchronous local evaluations with static scheduling. This scheduling policy specifically ensures that a completed asynchronous evaluation eval_id is replaced with an equivalent one, modulo asynchLocalEvalConcurrency. In the nowait case, this could render some servers idle if evaluations don't come in eval_id order or some evaluations are cancelled by the caller in between calls. If this function is called with unlimited local eval concurrency, the static scheduling request is ignored.
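The static replacement policy amounts to matching evaluation ids modulo the local concurrency. A minimal sketch, assuming 1-based eval_ids (the helper name is invented):

```cpp
#include <cassert>

// Static local scheduling maps 1-based eval_ids onto server slots by
// taking eval_id modulo the local concurrency; a completed evaluation
// may only be backfilled by a new one that lands on the same slot.
// Hypothetical helper for illustration, not the actual Dakota routine.
int server_slot(int eval_id, int concurrency)
{
    return (eval_id - 1) % concurrency;
}
```

With a concurrency of 4, eval_id 5 maps to the same slot as eval_id 1, so it may replace it; eval_id 6 may not, which is how out-of-order completions can leave slots idle in the nowait case.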
References ApplicationInterface::assign_asynch_local_queue_nowait(), ApplicationInterface::asynchLocalActivePRPQueue, ApplicationInterface::asynchLocalEvalConcurrency, ApplicationInterface::asynchLocalEvalStatic, and ApplicationInterface::test_local_backfill().
Referenced by ApplicationInterface::synch_nowait().
serve the evaluation message passing schedulers and perform one synchronous evaluation at a time. This code is invoked by serve_evaluations() to perform one synchronous job at a time on each slave/peer server. The servers receive requests (blocking receive), do local synchronous maps, and return results. This is done continuously until a termination signal is received from the master (sent via stop_evaluation_servers()).
References Dakota::array_write_annotated(), ParallelLibrary::bcast_e(), Interface::currEvalId, ApplicationInterface::derived_map(), ApplicationInterface::evalCommRank, ParallelLibrary::isend_ie(), ApplicationInterface::lenResponseMessage, ApplicationInterface::lenVarsActSetMessage, ApplicationInterface::manage_failure(), Interface::multiProcEvalFlag, ApplicationInterface::parallelLib, ParallelLibrary::recv_ie(), MPIPackBuffer::reset(), and ParallelLibrary::wait().
Referenced by ApplicationInterface::serve_evaluations().
serve the evaluation message passing schedulers and perform one synchronous evaluation at a time as part of the 1st peer. This code is invoked by serve_evaluations() to perform a synchronous evaluation in coordination with the iteratorCommRank 0 processor (the iterator) for static schedules. The bcast() matches either the bcast() in synchronous_local_evaluations(), which is invoked by peer_static_schedule_evaluations(), or the bcast() in map().
References Dakota::array_write_annotated(), ParallelLibrary::bcast_e(), Interface::currEvalId,
ApplicationInterface::derived_map(), ApplicationInterface::lenVarsActSetMessage,
ApplicationInterface::manage_failure(), and ApplicationInterface::parallelLib.
Referenced by ApplicationInterface::serve_evaluations().
serve the evaluation message passing schedulers and manage multiple asynchronous evaluations. This code is invoked by serve_evaluations() to perform multiple asynchronous jobs on each slave/peer server. The servers test for any incoming jobs, launch any new jobs, process any completed jobs, and return any results. Each of these components is nonblocking, although the server loop continues until a termination signal is received from the master (sent via stop_evaluation_servers()). In the master-slave case, the master maintains the correct number of jobs on each slave. In the static scheduling case, each server is responsible for limiting concurrency (since the entire static schedule is sent to the peers at startup).
References Dakota::abort_handler(), ApplicationInterface::asynchLocalActivePRPQueue, ApplicationInterface::asynchLocalEvalConcurrency, ParallelLibrary::bcast_e(), ApplicationInterface::completionSet, ApplicationInterface::derived_map_asynch(), ApplicationInterface::evalCommRank, Interface::interfaceId, ParallelLibrary::irecv_ie(), ApplicationInterface::lenResponseMessage, ApplicationInterface::lenVarsActSetMessage, Dakota::lookup_by_eval_id(), Interface::multiProcEvalFlag, ApplicationInterface::parallelLib, ParallelLibrary::recv_ie(), MPIUnpackBuffer::reset(), ParallelLibrary::send_ie(), ParallelLibrary::test(), and ApplicationInterface::test_local_evaluations().
Referenced by ApplicationInterface::serve_evaluations().
serve the evaluation message passing schedulers and perform multiple asynchronous evaluations as part of the 1st peer. This code is invoked by serve_evaluations() to perform multiple asynchronous jobs on multiprocessor slave/peer servers. It matches the multiProcEvalFlag bcasts in ApplicationInterface::asynchronous_local_evaluations().
References Dakota::abort_handler(), ApplicationInterface::asynchLocalActivePRPQueue, ApplicationInterface::asynchLocalEvalConcurrency, ParallelLibrary::bcast_e(), ApplicationInterface::completionSet, ApplicationInterface::derived_map_asynch(), Interface::interfaceId, ApplicationInterface::lenVarsActSetMessage, Dakota::lookup_by_eval_id(), ApplicationInterface::parallelLib, MPIUnpackBuffer::reset(), and ApplicationInterface::test_local_evaluations().
Referenced by ApplicationInterface::serve_evaluations().
The documentation for this class was generated from the following files:
• ApplicationInterface.hpp
• ApplicationInterface.cpp
Approximation
• Approximation (const String &approx_type, const UShortArray &approx_order, size_t num_vars, short
data_order, short output_level)
alternate constructor
• virtual ∼Approximation ()
destructor
• void add (const RealVector &c_vars, const IntVector &di_vars, const RealVector &dr_vars, bool anchor_flag, bool deep_copy)
shared code among add(Variables&) and add(Real∗); adds a new data point by either appending to SurrogateData::varsData or assigning to SurrogateData::anchorVars, as dictated by anchor_flag. Uses add_point() and add_anchor().
• void add (const Response &response, int fn_index, bool anchor_flag, bool deep_copy)
adds a new data point by either appending to SurrogateData::respData or assigning to SurrogateData::anchorResp, as dictated by anchor_flag. Uses add_point() and add_anchor().
• void clear_all ()
clear all build data (current and history) to restore original state
• void clear_anchor ()
clear SurrogateData::anchor{Vars,Resp}
• void clear_data ()
clear SurrogateData::{vars,resp}Data
• void clear_saved ()
clear popCountStack and SurrogateData::saved{Vars,Resp}Trials
• void set_bounds (const RealVector &c_l_bnds, const RealVector &c_u_bnds, const IntVector &di_l_bnds,
const IntVector &di_u_bnds, const RealVector &dr_l_bnds, const RealVector &dr_u_bnds)
set approximation lower and upper bounds (currently only used by graphics)
Protected Attributes
• short outputLevel
output verbosity level: {SILENT,QUIET,NORMAL,VERBOSE,DEBUG}_OUTPUT
• int numVars
number of variables in the approximation
• String approxType
approximation type identifier
• short buildDataOrder
order of the data used for surrogate construction, in ActiveSet request vector 3-bit format.
• RealVector approxGradient
gradient of the approximation returned by gradient()
• RealSymMatrix approxHessian
Hessian of the approximation returned by hessian().
• Pecos::SurrogateData approxData
contains the variables/response data for constructing a single approximation model (one response function)
• RealVector approxCLowerBnds
approximation continuous lower bounds (used by 3D graphics and Surfpack KrigingModel)
• RealVector approxCUpperBnds
approximation continuous upper bounds (used by 3D graphics and Surfpack KrigingModel)
• IntVector approxDILowerBnds
approximation discrete integer lower bounds
• IntVector approxDIUpperBnds
approximation discrete integer upper bounds
• RealVector approxDRLowerBnds
approximation discrete real lower bounds
• RealVector approxDRUpperBnds
approximation discrete real upper bounds
• Approximation ∗ get_approx (const String &approx_type, const UShortArray &approx_order, size_t num_vars, short data_order, short output_level)
Used only by the alternate envelope constructor to initialize approxRep to the appropriate derived type.
Private Attributes
• SizetArray popCountStack
a stack managing the number of points previously added by calls to append() that can be removed by calls to pop()
• Approximation ∗ approxRep
pointer to the letter (initialized only for the envelope)
• int referenceCount
number of objects sharing approxRep
Base class for the approximation class hierarchy. The Approximation class is the base class for the response
data fit approximation class hierarchy in DAKOTA. One instance of an Approximation must be created for each
function to be approximated (a vector of Approximations is contained in ApproximationInterface). For memory
efficiency and enhanced polymorphism, the approximation hierarchy employs the "letter/envelope idiom" (see
Coplien "Advanced C++", p. 133), for which the base class (Approximation) serves as the envelope and one of
the derived classes (selected in Approximation::get_approx()) serves as the letter.
13.4.2.1 Approximation ()
default constructor. The default constructor is used in Array<Approximation> instantiations and by the alternate envelope constructor. approxRep is NULL in this case (problem_db is needed to build a meaningful Approximation object). This makes it necessary to check for NULL in the copy constructor, assignment operator, and destructor.
standard constructor for envelope. The envelope constructor only needs to extract enough data to properly execute get_approx(), since Approximation(BaseConstructor, problem_db) builds the actual base class data for the derived approximations.
References Dakota::abort_handler(), Approximation::approxRep, and Approximation::get_approx().
13.4.2.3 Approximation (const String & approx_type, const UShortArray & approx_order, size_t
num_vars, short data_order, short output_level)
alternate constructor. This is the alternate envelope constructor for instantiations on the fly. Since it does not have access to problem_db, it utilizes the NoDBBaseConstructor constructor chain.
References Dakota::abort_handler(), Approximation::approxRep, and Approximation::get_approx().
copy constructor. The copy constructor manages sharing of approxRep and incrementing of referenceCount.
References Approximation::approxRep, and Approximation::referenceCount.
destructor. The destructor decrements referenceCount and only deletes approxRep when referenceCount reaches zero.
References Approximation::approxRep, and Approximation::referenceCount.
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion in the derived class constructors - Coplien, p. 139). This constructor is the one which must build the base class data for all derived classes. get_approx() instantiates a derived class letter and the derived constructor selects this base class constructor in its initialization list (to avoid recursion in the base class constructor calling get_approx() again). Since the letter IS the representation, its rep pointer is set to NULL (an uninitialized pointer causes problems in ∼Approximation).
References Approximation::approxType, ProblemDescDB::get_bool(), and Dakota::strends().
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion in the derived class constructors - Coplien, p. 139). This constructor is the one which must build the base class data for all derived classes. get_approx() instantiates a derived class letter and the derived constructor selects this base class constructor in its initialization list (to avoid recursion in the base class constructor calling get_approx() again). Since the letter IS the representation, its rep pointer is set to NULL (an uninitialized pointer causes problems in ∼Approximation).
assignment operator. The assignment operator decrements referenceCount for the old approxRep, assigns the new approxRep, and increments referenceCount for the new approxRep.
References Approximation::approxRep, and Approximation::referenceCount.
builds the approximation from scratch. This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
rebuilds the approximation incrementally. This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
Reimplemented in PecosApproximation.
References Approximation::approxRep, Approximation::build(), and Approximation::rebuild().
Referenced by Approximation::rebuild().
removes entries from the end of SurrogateData::{vars,resp}Data (the last points appended, or as specified in args). This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
Reimplemented in PecosApproximation.
References Dakota::abort_handler(), Approximation::approxData, Approximation::approxRep, Approxima-
tion::pop(), and Approximation::popCountStack.
Referenced by Approximation::pop().
restores state prior to previous append(). This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
Reimplemented in PecosApproximation.
References Approximation::approxData, Approximation::approxRep, Approximation::popCountStack,
Approximation::restoration_index(), and Approximation::restore().
Referenced by Approximation::restore().
finalize approximation by applying all remaining trial sets. This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
Reimplemented in PecosApproximation.
References Approximation::approxData, Approximation::approxRep, Approximation::clear_saved(),
Approximation::finalization_index(), and Approximation::finalize().
Referenced by Approximation::finalize().
clear current build data in preparation for the next build. Redefined by TANA3Approximation to clear current data but preserve history.
Reimplemented in TANA3Approximation.
References Approximation::approxRep, Approximation::clear_all(), and Approximation::clear_current().
Referenced by Approximation::clear_current().
clear all build data (current and history) to restore original state. Clears out any history (e.g., TANA3Approximation use for a different response function in NonDReliability).
References Approximation::approxData, Approximation::approxRep, and Approximation::clear_all().
Referenced by Approximation::clear_all(), and Approximation::clear_current().
Used only by the standard envelope constructor to initialize approxRep to the appropriate derived type.
References ProblemDescDB::get_string(), and Dakota::strends().
Referenced by Approximation::Approximation().
13.4.3.10 Approximation ∗ get_approx (const String & approx_type, const UShortArray & approx_order,
size_t num_vars, short data_order, short output_level) [private]
Used only by the alternate envelope constructor to initialize approxRep to the appropriate derived type.
References Dakota::strends().
order of the data used for surrogate construction, in ActiveSet request vector 3-bit format. This setting distinguishes derivative data intended for use in construction (includes derivatives w.r.t. the build variables) from derivative data that may be approximated separately (excludes derivatives w.r.t. auxiliary variables). This setting should also not be inferred directly from the responses specification, since we may need gradient support for evaluating gradients at a single point (e.g., the center of a trust region), but not require gradient evaluations at every point.
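In this 3-bit format (bit 0 = function values, bit 1 = gradients, bit 2 = Hessians, following Dakota's ActiveSet request vector convention), a buildDataOrder of 3 means values plus gradients. The bits are tested with ordinary masks; the helper names below are illustrative only:

```cpp
#include <cassert>

// ActiveSet-style 3-bit data order: 1 = function values, 2 = gradients,
// 4 = Hessians, OR-ed together.  buildDataOrder == 3 therefore means
// "build the surrogate from values and gradients".
const short VALUE_BIT    = 1;
const short GRADIENT_BIT = 2;
const short HESSIAN_BIT  = 4;

bool wants_values(short data_order)    { return (data_order & VALUE_BIT)    != 0; }
bool wants_gradients(short data_order) { return (data_order & GRADIENT_BIT) != 0; }
bool wants_hessians(short data_order)  { return (data_order & HESSIAN_BIT)  != 0; }
```

So data_order 3 requests values and gradients but not Hessians, while 7 requests all three.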
• DakotaApproximation.hpp
• DakotaApproximation.cpp
Interface
ApproximationInterface
• ∼ApproximationInterface ()
destructor
• void store_approximation ()
move the current approximation into storage for later combination
• void clear_current ()
clears current data from an approximation interface
• void clear_all ()
clears all data from an approximation interface
• void clear_saved ()
clears saved data (from pop invocations) from an approximation interface
• void mixed_add (const Real ∗c_vars, const Response &response, bool anchor)
add variables/response data to functionSurfaces using a mixture of shallow and deep copies
• void shallow_add (const Variables &vars, const Response &response, bool anchor)
add variables/response data to functionSurfaces using a shallow copy
• void read_challenge_points ()
Load approximation test points from user challenge points file.
Private Attributes
• IntSet approxFnIndices
for incomplete approximation sets, this array specifies the response function subset that is approximated
• RealVectorArray functionSurfaceCoeffs
array of approximation coefficient vectors, one vector per response function
• RealVector functionSurfaceVariances
vector of approximation variances, one value per response function
• String challengeFile
data file for user-supplied challenge data (per interface, since may contain multiple responses)
• bool challengeAnnotated
whether the points file is annotated
• RealMatrix challengePoints
container for the challenge points data
• Variables actualModelVars
copy of the actualModel variables object used to simplify conversion among differing variable views
• bool actualModelCache
flag indicating use of the evaluation cache (PRPCache) by the actualModel
• String actualModelInterfaceId
the interface id from the actualModel used for ordered PRPCache lookups
• IntResponseMap beforeSynchResponseMap
bookkeeping map to catalogue responses generated in map() for use in synch() and synch_nowait(). This supports
pseudo-asynchronous operations (approximate responses are always computed synchronously, but asynchronous
virtual functions are supported through bookkeeping).
Derived class within the interface class hierarchy for supporting approximations to simulation-based results. ApproximationInterface provides an interface class for building a set of global/local/multipoint approximations and performing approximate function evaluations using them. It contains a list of Approximation objects, one for each response function.
13.5.2.1 void update_approximation (const Variables & vars, const IntResponsePair & response_pr)
[protected, virtual]
This function populates/replaces each Approximation::anchorPoint with the incoming variables/response data
point.
Reimplemented from Interface.
References ApproximationInterface::actualModelCache, ApproximationInterface::actualModelInterfaceId,
Dakota::data_pairs, Dakota::lookup_by_ids(), ApproximationInterface::mixed_add(), and
ApproximationInterface::shallow_add().
13.5.2.2 void update_approximation (const RealMatrix & samples, const IntResponseMap & resp_map)
[protected, virtual]
This function populates/replaces each Approximation::currentPoints with the incoming variables/response arrays.
Reimplemented from Interface.
References Dakota::abort_handler(), ApproximationInterface::actualModelCache, ApproximationInterface::actualModelInterfaceId, ApproximationInterface::approxFnIndices, Dakota::data_pairs, ApproximationInterface::functionSurfaces, Dakota::lookup_by_ids(), ApproximationInterface::mixed_add(), and ApproximationInterface::shallow_add().
13.5.2.3 void update_approximation (const VariablesArray & vars_array, const IntResponseMap &
resp_map) [protected, virtual]
This function populates/replaces each Approximation::currentPoints with the incoming variables/response arrays.
13.5.2.4 void append_approximation (const Variables & vars, const IntResponsePair & response_pr)
[protected, virtual]
This function appends to each Approximation::currentPoints with one incoming variables/response data point.
Reimplemented from Interface.
References ApproximationInterface::actualModelCache, ApproximationInterface::actualModelInterfaceId,
ApproximationInterface::approxFnIndices, Dakota::data_pairs, ApproximationInterface::functionSurfaces,
Dakota::lookup_by_ids(), ApproximationInterface::mixed_add(), and ApproximationInterface::shallow_add().
13.5.2.5 void append_approximation (const RealMatrix & samples, const IntResponseMap & resp_map)
[protected, virtual]
This function appends to each Approximation::currentPoints with multiple incoming variables/response data
points.
Reimplemented from Interface.
References Dakota::abort_handler(), ApproximationInterface::actualModelCache, ApproximationInterface::actualModelInterfaceId, Dakota::data_pairs, Dakota::lookup_by_ids(), ApproximationInterface::mixed_add(), ApproximationInterface::shallow_add(), and ApproximationInterface::update_pop_counts().
13.5.2.6 void append_approximation (const VariablesArray & vars_array, const IntResponseMap &
resp_map) [protected, virtual]
This function appends to each Approximation::currentPoints with multiple incoming variables/response data
points.
Reimplemented from Interface.
References Dakota::abort_handler(), ApproximationInterface::actualModelCache, ApproximationInterface::actualModelInterfaceId, Dakota::data_pairs, Dakota::lookup_by_ids(), ApproximationInterface::mixed_add(), ApproximationInterface::shallow_add(), and ApproximationInterface::update_pop_counts().
13.5.2.7 void build_approximation (const RealVector & c_l_bnds, const RealVector & c_u_bnds, const
IntVector & di_l_bnds, const IntVector & di_u_bnds, const RealVector & dr_l_bnds, const
RealVector & dr_u_bnds) [protected, virtual]
This function finds the coefficients for each Approximation based on the data passed through update_approximation() calls. The bounds are used only for graphics visualization.
Reimplemented from Interface.
This function updates the coefficients for each Approximation based on data increments provided by
{update,append}_approximation().
Reimplemented from Interface.
References ApproximationInterface::approxFnIndices, and ApproximationInterface::functionSurfaces.
This function removes data provided by a previous call to append_approximation(), either a possibly different number of points for each function, or the number specified in pop_count, which is assumed to be the same for all functions.
Reimplemented from Interface.
References ApproximationInterface::approxFnIndices, and ApproximationInterface::functionSurfaces.
This function updates the coefficients for each Approximation based on data increments provided by
{update,append}_approximation().
Reimplemented from Interface.
References ApproximationInterface::approxFnIndices, and ApproximationInterface::functionSurfaces.
This function updates the coefficients for each Approximation based on data increments provided by
{update,append}_approximation().
Reimplemented from Interface.
References ApproximationInterface::approxFnIndices, and ApproximationInterface::functionSurfaces.
list of approximations, one per response function. This formulation allows the use of mixed approximations (i.e., different approximations used for different response functions), although the input specification is not currently general enough to support it.
Referenced by ApproximationInterface::append_approximation(), ApproximationInterface::approximation_coefficients(), ApproximationInterface::approximation_data(), ApproximationInterface::approximation_-
• ApproximationInterface.hpp
• ApproximationInterface.cpp
• ∼APPSEvalMgr ()
destructor
• void set_constraint_map (std::vector< int > constraintMapIndices, std::vector< double > constraintMapMultipliers, std::vector< double > constraintMapOffsets)
publishes constraint transformation
Private Attributes
• Model & iteratedModel
reference to the APPSOptimizer’s model passed in the constructor
• bool modelAsynchFlag
flag for asynchronous function evaluations
• bool blockingSynch
flag for APPS synchronous behavior
• int numWorkersUsed
number of processors actively performing function evaluations
• int numWorkersTotal
total number of processors available for performing function evaluations
• RealVector xTrial
trial iterate
• IntResponseMap dakotaResponseMap
map of DAKOTA responses returned by synchronize_nowait()
Evaluation manager class for APPSPACK. The APPSEvalMgr class is derived from APPSPACK's Executor class. It implements the methods of that class in such a way as to allow DAKOTA to manage the computation of responses instead of APPS. Iterate and response values are passed between Dakota and APPSPACK via this interface.
tells APPS whether or not there is a processor available to perform a function evaluation. Check to see whether all processors available for function evaluations are in use; if not, tell APPS that one is available.
References APPSEvalMgr::numWorkersTotal, and APPSEvalMgr::numWorkersUsed.
13.6.3.2 bool submit (const int apps_tag, const HOPSPACK::Vector & apps_xtrial, const
HOPSPACK::EvalRequestType apps_request)
performs a function evaluation at the APPS-provided x_in. Convert the APPSPACK vector of variables to a DAKOTA vector of variables and perform the function evaluation, asynchronously or not, as specified in the DAKOTA input deck. If the evaluation is asynchronous, map the Dakota id to the APPS tag. If the evaluation is synchronous, map the responses to the APPS tag.
References Model::asynch_compute_response(), Model::compute_response(), Model::continuous_variables(), Model::current_response(), Model::evaluation_id(), Response::function_values(), APPSEvalMgr::functionList, APPSEvalMgr::iteratedModel, APPSEvalMgr::modelAsynchFlag, APPSEvalMgr::numWorkersTotal, APPSEvalMgr::numWorkersUsed, APPSEvalMgr::tagList, and APPSEvalMgr::xTrial.
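The tag bookkeeping in submit() and recv() — recording the APPS tag when a Dakota evaluation is launched so the result can be matched up on completion — reduces to a pair of map operations. A schematic model (the container and function names here are invented for illustration, playing the role of tagList):

```cpp
#include <cassert>
#include <map>

// Record the APPS tag when a Dakota evaluation is launched; recover and
// retire it when the evaluation completes.
std::map<int, int> tag_for_eval;  // dakota eval_id -> apps_tag

void on_submit(int dakota_id, int apps_tag)
{
    tag_for_eval[dakota_id] = apps_tag;
}

int on_complete(int dakota_id)
{
    int apps_tag = tag_for_eval.at(dakota_id);
    tag_for_eval.erase(dakota_id);  // each tag is returned exactly once
    return apps_tag;
}
```

Launching two evaluations and completing them in order returns each APPS tag exactly once and leaves the map empty, the invariant the asynchronous path relies on.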
13.6.3.3 int recv (int & apps_tag, HOPSPACK::Vector & apps_f, HOPSPACK::Vector & apps_cEqs,
HOPSPACK::Vector & apps_cIneqs, string & apps_msg)
returns a function value to APPS. Retrieve a set of response values, convert to APPS data structures, and return them to APPS. APPS tags are tied to corresponding responses using the appropriate (i.e., asynchronous or synchronous) map.
References APPSEvalMgr::blockingSynch, APPSEvalMgr::constrMapIndices, APPSEvalMgr::constrMapMultipliers, APPSEvalMgr::constrMapOffsets, APPSEvalMgr::dakotaResponseMap, APPSEvalMgr::functionList, APPSEvalMgr::iteratedModel, APPSEvalMgr::modelAsynchFlag, Model::num_nonlinear_eq_constraints(), APPSEvalMgr::numWorkersUsed, Model::primary_response_fn_sense(), Model::synchronize(), Model::synchronize_nowait(), and APPSEvalMgr::tagList.
The documentation for this class was generated from the following files:
• APPSEvalMgr.hpp
• APPSEvalMgr.cpp
Iterator
Minimizer
Optimizer
APPSOptimizer
• ∼APPSOptimizer ()
destructor
• void find_optimum ()
Performs the iterations to determine the optimal solution.
• void initialize_variables_and_constraints ()
initializes problem variables and constraints
Protected Attributes
• HOPSPACK::ParameterList params
Main APPS parameter list.
• HOPSPACK::ParameterList ∗ problemParams
Pointer to APPS problem parameter sublist.
• HOPSPACK::ParameterList ∗ linearParams
Pointer to APPS linear constraint parameter sublist.
• HOPSPACK::ParameterList ∗ mediatorParams
Pointer to APPS mediator parameter sublist.
• HOPSPACK::ParameterList ∗ citizenParams
Pointer to APPS citizen/algorithm parameter sublist.
• APPSEvalMgr ∗ evalMgr
Pointer to the APPS evaluation manager object.
Wrapper class for APPSPACK. The APPSOptimizer class provides a wrapper for APPSPACK, a Sandia-
developed C++ library for generalized pattern search. APPSPACK defaults to a coordinate pattern search but
also allows for augmented search patterns. It can solve problems with bounds, linear constraints, and general
nonlinear constraints. APPSOptimizer uses an APPSEvalMgr object to manage the function evaluations.
The user input mappings are as follows: output, max_function_evaluations, constraint_tol, initial_delta, contraction_factor, threshold_delta, solution_target, synchronization, merit_function, constraint_penalty, and smoothing_factor are mapped into APPS's "Debug", "Maximum Evaluations", "Bounds Tolerance"/"Machine Epsilon"/"Constraint Tolerance", "Initial Step", "Contraction Factor", "Step Tolerance", "Function Tolerance", "Synchronous", "Method", "Initial Penalty Value", and "Initial Smoothing Value" data attributes. Refer to the APPS web site (http://software.sandia.gov/appspack) for additional information on APPS objects and controls.
In the HOPSPACK-based implementation, the same user inputs are instead mapped into HOPS's "Display", "Maximum Evaluations", "Active Tolerance"/"Nonlinear Active Tolerance", "Initial Step", "Contraction Factor", "Step Tolerance", "Objective Target", "Synchronous Evaluations", "Penalty Function", "Penalty Parameter", and "Penalty Smoothing Value" data attributes. Refer to the HOPS web site (https://software.sandia.gov/trac/hopspack) for additional information on HOPS objects and controls.
References APPSOptimizer::evalMgr, Model::init_communicators(), Iterator::iteratedModel, Iterator::maxConcurrency, Minimizer::minimizerRecasts, and APPSOptimizer::set_apps_parameters().
Performs the iterations to determine the optimal solution. find_optimum redefines the Optimizer virtual function
to perform the optimization using HOPS. It first sets up the problem data, then executes minimize() on the HOPS
optimizer, and finally catalogues the results.
Implements Optimizer.
References Model::asynch_flag(), Iterator::bestResponseArray, Iterator::bestVariablesArray,
APPSOptimizer::constraintMapIndices, APPSOptimizer::constraintMapMultipliers, APP-
SOptimizer::constraintMapOffsets, APPSOptimizer::evalMgr, Model::evaluation_capacity(),
APPSOptimizer::initialize_variables_and_constraints(), Iterator::iteratedModel, Opti-
mizer::localObjectiveRecast, Iterator::numContinuousVars, Iterator::numFunctions, Mini-
mizer::numNonlinearEqConstraints, Minimizer::numNonlinearIneqConstraints, APPSOptimizer::params,
Model::primary_response_fn_sense(), APPSEvalMgr::set_asynch_flag(), and APPSEvalMgr::set_total_-
workers().
sets options for specific methods based on user specifications. Sets all of the HOPS algorithmic parameters as specified in the DAKOTA input deck. This is called at construction time.
References APPSOptimizer::citizenParams, Minimizer::constraintTol, APPSOptimizer::evalMgr,
ProblemDescDB::get_real(), ProblemDescDB::get_string(), ProblemDescDB::is_null(), APPSOpti-
mizer::linearParams, Iterator::maxConcurrency, Iterator::maxFunctionEvals, APPSOptimizer::mediatorParams,
Iterator::numContinuousVars, Minimizer::numNonlinearConstraints, Iterator::outputLevel, APPSOpti-
mizer::params, Iterator::probDescDB, APPSOptimizer::problemParams, and APPSEvalMgr::set_blocking_-
synch().
Referenced by APPSOptimizer::APPSOptimizer().
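The lookup pattern behind these parameter settings can be sketched in isolation: pull a user-specified value from a key/value problem database if present, otherwise fall back to the method default (the role ProblemDescDB::is_null() plays above). The helper below is illustrative only and does not use Dakota's actual ProblemDescDB API.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-in for the ProblemDescDB::get_real()/is_null() pattern:
// return the user's value when the keyword was specified, else the default.
double get_real_or(const std::map<std::string, double>& db,
                   const std::string& key, double dflt) {
  auto it = db.find(key);
  return it == db.end() ? dflt : it->second;  // "is_null" -> method default
}
```

A solver wrapper would call this once per keyword at construction time, e.g. `get_real_or(db, "initial_delta", 1.0)`.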
initializes problem variables and constraints. Sets the variables and constraints as specified in the DAKOTA input deck. This is done at run time.
References Minimizer::bigRealBoundSize, APPSOptimizer::constraintMapIndices, APPSOpti-
mizer::constraintMapMultipliers, APPSOptimizer::constraintMapOffsets, Model::continuous_lower_-
bounds(), Model::continuous_upper_bounds(), Model::continuous_variables(), APPSOptimizer::evalMgr,
Iterator::iteratedModel, Model::linear_eq_constraint_coeffs(), Model::linear_eq_constraint_targets(),
Model::linear_ineq_constraint_coeffs(), Model::linear_ineq_constraint_lower_bounds(), Model::linear_-
ineq_constraint_upper_bounds(), APPSOptimizer::linearParams, Model::nonlinear_eq_constraint_targets(),
Model::nonlinear_ineq_constraint_lower_bounds(), Model::nonlinear_ineq_constraint_upper_bounds(), It-
erator::numContinuousVars, Minimizer::numLinearEqConstraints, Minimizer::numLinearIneqConstraints,
Minimizer::numNonlinearEqConstraints, Minimizer::numNonlinearIneqConstraints, APPSOpti-
mizer::problemParams, and APPSEvalMgr::set_constraint_map().
Referenced by APPSOptimizer::find_optimum().
The documentation for this class was generated from the following files:
• APPSOptimizer.hpp
• APPSOptimizer.cpp
Dummy struct for overloading letter-envelope constructors. BaseConstructor is used to overload the constructor
for the base class portion of letter objects. It avoids infinite recursion (Coplien p.139) in the letter-envelope idiom
by preventing the letter from instantiating another envelope. Putting this struct here avoids circular dependencies.
The documentation for this struct was generated from the following file:
• dakota_global_defs.hpp
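The idiom can be illustrated with a self-contained sketch; the Shape/Circle names are hypothetical and stand in for any Dakota envelope/letter pair. The BaseConstructor-tagged constructor builds only the base-class portion, so constructing a letter never allocates another envelope and the recursion noted above cannot occur.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Dummy tag type, as in dakota_global_defs.hpp.
struct BaseConstructor {};

// Hypothetical envelope class illustrating the letter-envelope idiom.
class Shape {
public:
  Shape(const std::string& type);            // envelope ctor: allocates a letter
  virtual ~Shape() = default;
  virtual std::string name() const { return rep ? rep->name() : "base"; }
protected:
  explicit Shape(BaseConstructor) {}         // base-portion ctor: builds no letter
private:
  std::shared_ptr<Shape> rep;                // the letter (null in letters)
};

// A letter: derives from the envelope class but constructs only its base portion.
class Circle : public Shape {
public:
  Circle() : Shape(BaseConstructor()) {}     // no recursive envelope allocation
  std::string name() const override { return "circle"; }
};

inline Shape::Shape(const std::string& type) {
  if (type == "circle") rep = std::make_shared<Circle>();  // forward to letter
}
```

Clients construct only the envelope (`Shape s("circle")`); virtual calls forward to the letter.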
• ∼BiStream ()
Destructor, calls xdr_destroy to delete xdr stream.
Private Attributes
• XDR xdrInBuf
XDR input stream buffer.
The binary input stream class. Overloads the >> operator for all data types. The Dakota::BiStream class is a
binary input class which overloads the >> operator for all standard data types (int, char, float, etc). The class
relies on the methods within the ifstream base class. The Dakota::BiStream class inherits from the ifstream class.
If available, the class utilizes rpc/xdr to construct machine-independent binary files. These Dakota restart files can
be moved from host to host. The motivation for developing these classes was to replace the Rogue Wave classes
which Dakota historically used for binary I/O.
13.9.2.1 BiStream ()
Default constructor, need to open. The default constructor allocates the xdr stream but does not call the open
method. The open method must be called before the stream can be read.
References BiStream::inBuf, and BiStream::xdrInBuf.
Constructor takes name of input file. Constructor which takes a char∗ filename. Calls the base class open method
with the filename and no other arguments. Also allocates the xdr stream.
References BiStream::inBuf, and BiStream::xdrInBuf.
Constructor takes name of input file, mode. Constructor which takes a char∗ filename and int flags. Calls the base
class open method with the filename and flags as arguments. Also allocates xdr stream.
References BiStream::inBuf, and BiStream::xdrInBuf.
13.9.2.4 ∼BiStream ()
Destructor, calls xdr_destroy to delete xdr stream. Destroys the xdr stream allocated in the constructor.
References BiStream::xdrInBuf.
Binary Input stream operator>>. The std::string input operator must first read both the xdr buffer size and the
size of the string written. Once these are read it can then read and convert the std::string correctly.
References BiStream::inBuf, and BiStream::xdrInBuf.
The documentation for this class was generated from the following files:
• DakotaBinStream.hpp
• DakotaBinStream.cpp
• ∼BoStream () throw ()
Destructor, calls xdr_destroy to delete xdr stream.
Private Attributes
• XDR xdrOutBuf
XDR output stream buffer.
The binary output stream class. Overloads the << operator for all data types. The Dakota::BoStream class is a
binary output class which overloads the << operator for all standard data types (int, char, float, etc.). The class
relies on the built-in write methods of the ofstream base class, from which Dakota::BoStream inherits. The
motivation for developing this class was to replace the Rogue Wave class which Dakota historically used
for binary I/O. If available, the class utilizes rpc/xdr to construct machine-independent binary files. These Dakota
restart files can be moved between hosts.
13.10.2.1 BoStream ()
Default constructor, need to open. The default constructor allocates the xdr stream but does not call the open()
method. The open() method must be called before the stream can be written to.
References BoStream::outBuf, and BoStream::xdrOutBuf.
Constructor takes name of output file. Constructor which takes a char ∗ filename as argument. Calls the base class
open method with the filename and no other arguments. Also allocates the xdr stream.
References BoStream::outBuf, and BoStream::xdrOutBuf.
Constructor takes name of output file, mode. Constructor which takes a char ∗ filename and int flags as arguments.
Calls the base class open method with the filename and flags as arguments. Also allocates the xdr stream. Note: if
there is no rpc/xdr support, the xdr calls are #ifdef’d out.
References BoStream::outBuf, and BoStream::xdrOutBuf.
Binary Output stream operator<<. The std::string operator<< must first write the xdr buffer size and the original
string size to the stream. The input operator needs this information to be able to correctly read and convert the
std::string.
References BoStream::outBuf, and BoStream::xdrOutBuf.
The documentation for this class was generated from the following files:
• DakotaBinStream.hpp
• DakotaBinStream.cpp
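The size-prefix protocol described above can be sketched without XDR: the writer records the string length ahead of the bytes, and the reader consumes the length first so it knows how much to extract. These helpers are illustrative stand-ins, not the BiStream/BoStream implementations (which additionally record the XDR-encoded buffer size).

```cpp
#include <cassert>
#include <cstdint>
#include <sstream>
#include <string>

// Write a length-prefixed string: fixed-width size field, then raw characters.
void write_string(std::ostream& os, const std::string& s) {
  std::uint32_t n = static_cast<std::uint32_t>(s.size());
  os.write(reinterpret_cast<const char*>(&n), sizeof n);  // length prefix
  os.write(s.data(), n);                                  // payload bytes
}

// Read the length first, then exactly that many characters.
std::string read_string(std::istream& is) {
  std::uint32_t n = 0;
  is.read(reinterpret_cast<char*>(&n), sizeof n);
  std::string s(n, '\0');
  is.read(&s[0], n);
  return s;
}
```

Unlike XDR, this raw-byte form is not machine independent (the size field keeps host byte order); XDR's canonical encoding is what makes the real restart files portable between hosts.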
• ∼COLINApplication ()
Destructor.
• virtual bool map_domain (const utilib::Any &src, utilib::Any &native, bool forward=true) const
Map the domain point into data type desired by this application context.
Protected Attributes
• Model iteratedModel
Shallow copy of the model on which COLIN will iterate.
• bool blockingSynch
Flag for COLIN synchronous behavior (Pattern Search only).
• ActiveSet activeSet
Local copy of model’s active set for convenience.
• IntResponseMap dakota_responses
eval_id to response mapping to cache completed jobs.
COLINApplication is a DAKOTA class that is derived from COLIN’s Application hierarchy. It redefines a variety
of virtual COLIN functions to use the corresponding DAKOTA functions. This is a more flexible algorithm library
interfacing approach than can be obtained with the function pointer approaches used by NPSOLOptimizer and
SNLLOptimizer.
Helper function called after default construction to extract problem information from the Model and set it for
COLIN. Set variable bounds and linear and nonlinear constraints. This avoids using probDescDB, so it is called
by both the standard and the on-the-fly COLINOptimizer constructors.
References Response::active_set(), COLINApplication::activeSet, Model::continuous_lower_bounds(),
Model::continuous_upper_bounds(), Model::current_response(), Model::cv(), Model::discrete_int_lower_-
bounds(), Model::discrete_int_sets(), Model::discrete_int_upper_bounds(), Model::discrete_set_int_-
values(), Model::discrete_set_real_values(), Model::div(), Model::drv(), COLINApplication::iteratedModel,
Model::linear_eq_constraint_coeffs(), Model::linear_eq_constraint_targets(), Model::linear_ineq_-
constraint_coeffs(), Model::linear_ineq_constraint_lower_bounds(), Model::linear_ineq_constraint_upper_-
bounds(), Model::nonlinear_eq_constraint_targets(), Model::nonlinear_ineq_constraint_lower_bounds(),
Model::nonlinear_ineq_constraint_upper_bounds(), Model::num_functions(), Model::num_linear_eq_-
constraints(), Model::num_linear_ineq_constraints(), Model::num_nonlinear_eq_constraints(), Model::num_-
nonlinear_ineq_constraints(), and Model::primary_response_fn_sense().
Referenced by COLINApplication::COLINApplication().
Schedule one or more requests at the specified domain point, returning a DAKOTA-specific evaluation tracking
ID. This is only called by COLIN’s concurrent evaluator, which is only instantiated when the Model supports
asynch evals. The domain point is guaranteed to be compatible with the data type specified by map_domain(...).
References Model::asynch_compute_response(), COLINApplication::colin_request_to_dakota_request(),
Model::evaluation_id(), and COLINApplication::iteratedModel.
Check to see if there are any function values ready to be collected. Check to see if any asynchronous evaluations
have finished. This is only called by COLIN’s concurrent evaluator, which is only instantiated when the Model
supports asynch evals.
References COLINApplication::blockingSynch, COLINApplication::dakota_responses, COLINApplica-
tion::iteratedModel, Model::synchronize(), and Model::synchronize_nowait().
Perform a function evaluation at a given point. Perform an evaluation at a specified domain point. Wait for and
return the response. This is only called by COLIN’s serial evaluator, which is only instantiated when the Model
does not support asynch evals. The domain point is guaranteed to be compatible with data type specified by
map_domain(...)
References COLINApplication::colin_request_to_dakota_request(), Model::compute_response(),
Model::current_response(), COLINApplication::dakota_response_to_colin_response(), and COLINApplica-
tion::iteratedModel.
Collect a completed evaluation from DAKOTA. Collect the next completed evaluation from DAKOTA. Always
returns the eval_id of the response returned.
References COLINApplication::dakota_response_to_colin_response(), and COLINApplication::dakota_-
responses.
Helper function to convert evaluation request data from COLIN structures to DAKOTA structures. Map COLIN
info requests to DAKOTA objectives and constraints.
Helper function to convert evaluation response data from DAKOTA structures to COLIN structures. Map
DAKOTA objective and constraint values to COLIN response.
References Response::active_set_request_vector(), and Response::function_value().
Referenced by COLINApplication::collect_evaluation_impl(), and COLINApplication::perform_evaluation_-
impl().
13.11.2.8 bool map_domain (const utilib::Any & src, utilib::Any & native, bool forward = true) const
[virtual]
Map the domain point into data type desired by this application context. Map the domain point into data type
desired by this application context (utilib::MixedIntVars). This data type can be exposed from the Any &domain
presented to spawn and collect.
The documentation for this class was generated from the following files:
• COLINApplication.hpp
• COLINApplication.cpp
Iterator
Minimizer
Optimizer
COLINOptimizer
• COLINOptimizer (const String &method_name, Model &model, int seed, int max_iter, int max_eval)
alternate constructor for on-the-fly instantiations
• ∼COLINOptimizer ()
destructor
• void reset ()
clears internal optimizer state
• void find_optimum ()
iterates the COLIN solver to determine the optimal solution
• void set_solver_parameters ()
sets construct-time options for specific methods based on user specifications, including calling method-specific set
functions
• std::pair< bool, bool > colin_cache_lookup (const colin::AppResponse &colinResponse, Response &tmpResponseHolder)
Retrieve response from Colin AppResponse, return pair indicating success for <objective, constraints>.
Protected Attributes
• short solverType
COLIN solver sub-type as enumerated in COLINOptimizer.cpp.
• colin::SolverHandle colinSolver
handle to the COLIN solver
• colin::EvaluationManager_Base ∗ colinEvalMgr
pointer to the COLIN evaluation manager object
• utilib::RNG ∗ rng
random number generator pointer
• bool blockingSynch
the synchronization setting: true if blocking, false if nonblocking
• Real constraint_penalty
Buffer to hold problem constraint_penalty parameter.
• bool constant_penalty
Buffer to hold problem constant_penalty parameter.
Wrapper class for optimizers defined using COLIN. The COLINOptimizer class wraps COLIN, a Sandia-
developed C++ optimization interface library. A variety of COLIN optimizers are defined in COLIN and its
associated libraries, including SCOLIB which contains the optimization components from the old COLINY
(formerly SGOPT) library. COLIN contains optimizers such as genetic algorithms, pattern search methods, and
other nongradient-based techniques. COLINOptimizer uses a COLINApplication object to perform the function
evaluations.
The user input mappings are as follows: max_iterations, max_function_evaluations,
convergence_tolerance, and solution_accuracy are mapped into COLIN’s max_iterations,
max_function_evaluations_this_trial, function_value_tolerance, and sufficient_objective_value
properties. An outputLevel is mapped to COLIN’s output_level property and
a setting of debug activates output of method initialization and sets the COLIN debug attribute to 10000 for the
DEBUG output level. Refer to [Hart, W.E., 2006] for additional information on COLIN objects and controls.
13.12.2.2 COLINOptimizer (const String & method_name, Model & model, int seed, int max_iter, int
max_eval)
alternate constructor for on-the-fly instantiations. Alternate constructor for on-the-fly instantiations.
References Iterator::maxFunctionEvals, Iterator::maxIterations, COLINOptimizer::set_rng(),
COLINOptimizer::set_solver_parameters(), and COLINOptimizer::solver_setup().
alternate constructor for Iterator instantiations by name. Alternate constructor for Iterator instantiations by name.
References COLINOptimizer::set_solver_parameters(), and COLINOptimizer::solver_setup().
iterates the COLIN solver to determine the optimal solution. find_optimum redefines the Optimizer virtual function
to perform the optimization using COLIN. It first sets up the problem data, then executes optimize() on the COLIN
solver, and finally catalogues the results.
Implements Optimizer.
some COLIN methods can return multiple points. Designate which solvers can return multiple final points.
Reimplemented from Iterator.
References COLINOptimizer::solverType.
13.12.3.3 void solver_setup (const String & method_name, Model & model) [protected]
convenience function for setting up the particular COLIN solver and appropriate Application. This convenience
function is called by the constructors in order to instantiate the solver.
References COLINOptimizer::colinProblem, COLINOptimizer::colinSolver, COLINOptimizer::constant_-
penalty, COLINOptimizer::constraint_penalty, ProblemDescDB::get_string(), Iterator::probDescDB, and
COLINOptimizer::solverType.
Referenced by COLINOptimizer::COLINOptimizer().
sets up the random number generator for stochastic methods. Instantiates the random number generator (RNG).
References COLINOptimizer::colinSolver, and COLINOptimizer::rng.
Referenced by COLINOptimizer::COLINOptimizer().
sets construct-time options for specific methods based on user specifications, including calling method-specific
set functions. Sets solver properties based on user specifications; called at construction time.
References Model::asynch_flag(), COLINOptimizer::blockingSynch, COLINOptimizer::colinSolver,
COLINOptimizer::constant_penalty, COLINOptimizer::constraint_penalty, Iterator::convergenceTol,
ProblemDescDB::get_bool(), ProblemDescDB::get_int(), ProblemDescDB::get_real(), ProblemDescDB::get_-
sa(), ProblemDescDB::get_string(), ProblemDescDB::is_null(), Iterator::iteratedModel, Itera-
tor::maxConcurrency, Iterator::maxFunctionEvals, Iterator::maxIterations, Iterator::numContinuousVars,
Iterator::outputLevel, Iterator::probDescDB, and COLINOptimizer::solverType.
Referenced by COLINOptimizer::COLINOptimizer().
Get the final set of points from the solver. Look up responses and sort, first according to constraint violation,
then according to function value. Supplements Optimizer::post_run to first retrieve points from the Colin cache
(or possibly the Dakota DB) and rank them. When complete, this function will populate bestVariablesArray and
bestResponsesArray with iterator-space data, that is, in the context of the solver, leaving any further
untransformation to Optimizer.
Reimplemented from Optimizer.
References Iterator::bestResponseArray, Iterator::bestVariablesArray, COLINOptimizer::colin_cache_-
lookup(), COLINOptimizer::colinProblem, COLINOptimizer::colinSolver, COLINOptimizer::constraint_-
violation(), Variables::continuous_variables(), Response::copy(), Variables::copy(), Model::current_response(),
Model::current_variables(), Model::discrete_int_sets(), Variables::discrete_int_variable(), Variables::discrete_-
real_variable(), Model::discrete_set_int_values(), Model::discrete_set_real_values(), Response::function_-
values(), Iterator::iteratedModel, Optimizer::localObjectiveRecast, Iterator::numDiscreteIntVars, Itera-
tor::numDiscreteRealVars, Iterator::numFinalSolutions, Optimizer::numObjectiveFns, Minimizer::objective(),
Model::primary_response_fn_sense(), Model::primary_response_fn_weights(), Minimizer::resize_best_resp_-
array(), Minimizer::resize_best_vars_array(), Dakota::set_index_to_value(), Model::subordinate_model(), and
Dakota::write_data().
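The ranking step can be sketched in isolation: order candidate points first by constraint violation, then by objective value, so the best feasible point sorts to the front. The Point representation here is a deliberate simplification of the Variables/Response pairs the real post_run handles.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// {constraint violation, objective value} for one candidate point.
using Point = std::pair<double, double>;

// Sort candidates: less violation wins; ties broken by lower objective.
void rank_points(std::vector<Point>& pts) {
  std::sort(pts.begin(), pts.end(), [](const Point& a, const Point& b) {
    if (a.first != b.first) return a.first < b.first;  // feasibility first
    return a.second < b.second;                        // then objective value
  });
}
```

After ranking, the first numFinalSolutions entries would populate the best-point arrays.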
13.12.3.7 std::pair< bool, bool > colin_cache_lookup (const colin::AppResponse & colinResponse,
Response & tmpResponseHolder) [protected]
Retrieve response from Colin AppResponse, return pair indicating success for <objective, constraints>.
Encapsulated Colin Cache response extraction, which will ultimately become the default lookup. Might want to
return separate vectors of function values and constraints for use in the sort, but not for now (least change).
Returns true if the lookup is not needed or is successful.
References Response::function_value(), Minimizer::numNonlinearConstraints, and Opti-
mizer::numObjectiveFns.
Referenced by COLINOptimizer::post_run().
Compute constraint violation, based on nonlinear constraints in iteratedModel and provided Response data. BMA
TODO: incorporate constraint tolerance, possibly via elevating SurrBasedMinimizer::constraint_violation().
Always use iteratedModel to get the constraints; they are in the right space.
References Response::function_values(), Iterator::iteratedModel, Model::nonlinear_eq_constraint_-
targets(), Model::nonlinear_ineq_constraint_lower_bounds(), Model::nonlinear_ineq_constraint_upper_-
bounds(), Model::num_nonlinear_eq_constraints(), Model::num_nonlinear_ineq_constraints(), and Mini-
mizer::numIterPrimaryFns.
Referenced by COLINOptimizer::post_run().
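A constraint-violation measure of the kind used in this ranking might look as follows; the sum-of-squares form and the bound conventions are assumptions for illustration, not Dakota's exact formula.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sum of squared amounts by which each nonlinear inequality constraint value
// g[i] lies outside its [lo[i], up[i]] band; zero means feasible.
double violation(const std::vector<double>& g,
                 const std::vector<double>& lo,
                 const std::vector<double>& up) {
  double v = 0.0;
  for (std::size_t i = 0; i < g.size(); ++i) {
    if (g[i] < lo[i])      v += (lo[i] - g[i]) * (lo[i] - g[i]);
    else if (g[i] > up[i]) v += (g[i] - up[i]) * (g[i] - up[i]);
  }
  return v;
}
```

Equality constraints would contribute analogously via their deviation from the target value.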
The documentation for this class was generated from the following files:
• COLINOptimizer.hpp
• COLINOptimizer.cpp
Strategy
HybridStrategy
CollaborativeHybridStrategy
• ∼CollaborativeHybridStrategy ()
destructor
Private Attributes
• String hybridCollabType
"abo" or "hops"
• Variables bestVariables
best variables found in minimization
• Response bestResponse
best response found in minimization
Strategy for hybrid minimization using multiple collaborating optimization and nonlinear least squares methods.
This strategy has two approaches to hybrid minimization: (1) agent-based using the ABO framework; (2)
nonagent-based using the HOPSPACK framework.
The documentation for this class was generated from the following files:
• CollaborativeHybridStrategy.hpp
• CollaborativeHybridStrategy.cpp
GetLongOpt
CommandLineHandler
• ∼CommandLineHandler ()
destructor
• void assign_streams ()
conditionally associate Cout/Cerr with file streams, if specified by user
• void reset_streams ()
Private Attributes
• std::ofstream output_ofstream
temporary file redirection of stdout
• std::ofstream error_ofstream
temporary file redirection of stderr
Utility class for managing command line inputs to DAKOTA. CommandLineHandler provides additional
functionality that is specific to DAKOTA’s needs for the definition and parsing of command line options.
Inheritance is used to allow the class to have all the functionality of the base class, GetLongOpt.
Whether command line args dictate instantiation of objects for run. Instantiates objects if not just getting help or
version information.
References GetLongOpt::retrieve().
Referenced by main().
conditionally associate Cout/Cerr with file streams, if specified by user. Redirects output/error to files, including
output from this class. If there is a valid ParallelLibrary, only redirect on rank 0 to avoid file clash.
References Dakota::abort_handler(), Dakota::Dak_pl, Dakota::dakota_cerr, Dakota::dakota_-
cout, CommandLineHandler::error_ofstream, CommandLineHandler::output_helper(),
CommandLineHandler::output_ofstream, GetLongOpt::retrieve(), and ParallelLibrary::world_rank().
Referenced by CommandLineHandler::check_usage().
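The rank-0 redirection can be sketched with plain stream-buffer swapping; world_rank is a stand-in for ParallelLibrary::world_rank(), and a stringstream substitutes for the output file. This is an illustration of the pattern, not CommandLineHandler's implementation.

```cpp
#include <cassert>
#include <iostream>
#include <sstream>

// Swap std::cout's buffer for a sink on rank 0 only; return the previous
// buffer so it can later be restored (the reset_streams() counterpart).
std::streambuf* redirect_cout(std::ostream& sink, int world_rank) {
  if (world_rank != 0) return nullptr;   // other ranks leave cout untouched
  return std::cout.rdbuf(sink.rdbuf());  // rdbuf() returns the old buffer
}

void restore_cout(std::streambuf* saved) {
  if (saved) std::cout.rdbuf(saved);     // put the original buffer back
}
```

In the real class the sink is an ofstream named by the user's command line option.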
13.14.2.4 void output_helper (const std::string message, std::ostream & os) const [private]
perform output of message to ostream os on rank 0 only. When there is a valid ParallelLibrary, output occurs only
on rank 0.
References Dakota::Dak_pl, and ParallelLibrary::output_helper().
Referenced by CommandLineHandler::assign_streams(), CommandLineHandler::check_usage(), and
CommandLineHandler::output_version().
The documentation for this class was generated from the following files:
• CommandLineHandler.hpp
• CommandLineHandler.cpp
• ∼CommandShell ()
destructor
Private Attributes
• const std::string & workDir
Conveys the working directory when useWorkdir is true.
• std::string sysCommand
The command string that is constructed through one or more << insertions and then executed by flush.
• bool asynchFlag
flags nonblocking operation (background system calls)
• bool suppressOutputFlag
flags suppression of shell output (no command echo)
Utility class which defines convenience operators for spawning processes with system calls. The CommandShell
class wraps the C system() utility and defines convenience operators for building a command string and then
passing it to the shell.
appends cmd to sysCommand. Convenience operator: appends a string to the command string to be executed.
References CommandShell::sysCommand.
allows passing of the flush function to the shell using <<. Convenience operator: allows passing of the flush
function to the shell via <<.
"flushes" the shell; i.e., executes the sysCommand. Executes the sysCommand by passing it to system(). Appends
an "&" if asynchFlag is set (background system call) and echoes the sysCommand to Cout if suppressOutputFlag
is not set.
References Dakota::abort_handler(), CommandShell::asynchFlag, CommandShell::suppressOutputFlag, Com-
mandShell::sysCommand, and CommandShell::workDir.
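The operator-insertion pattern can be sketched with a minimal stand-alone class; MiniShell mirrors the documented sysCommand/asynchFlag members but is a hypothetical illustration, not Dakota's CommandShell.

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// Build a command via << insertions, then flush() hands it to system(),
// appending "&" for nonblocking (background) execution.
class MiniShell {
public:
  MiniShell& operator<<(const std::string& cmd) {  // append to sysCommand
    sysCommand += cmd;
    return *this;
  }
  // The command as it would be passed to system().
  std::string pending() const {
    return asynchFlag ? sysCommand + " &" : sysCommand;
  }
  void flush() {                     // execute, then reset for reuse
    std::system(pending().c_str());
    sysCommand.clear();
  }
  bool asynchFlag = false;           // background system call if true
private:
  std::string sysCommand;            // built up via << insertions
};
```

Usage mirrors the documented operators: `shell << "analysis.sh " << paramsFile; shell.flush();`.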
The documentation for this class was generated from the following files:
• CommandShell.hpp
• CommandShell.cpp
Strategy
ConcurrentStrategy
• ∼ConcurrentStrategy ()
destructor
Private Attributes
• Model userDefinedModel
the model used by the iterator
• Iterator selectedIterator
the iterator used by the concurrent strategy
• bool multiStartFlag
distinguishes multi-start from Pareto-set
• RealVector initialPt
the initial continuous variables for restoring the starting point in the Pareto set strategy
• RealVectorArray parameterSets
an array of parameter set vectors (either multistart variable sets or pareto multi-objective/least squares weighting
sets) to be performed.
• PRPArray prpResults
1-d array of ParamResponsePair results corresponding to numIteratorJobs
Strategy for multi-start iteration or Pareto set optimization. This strategy maintains two concurrent iterator
capabilities. First, a general capability for running an iterator multiple times from different starting points is
provided (often used for multi-start optimization, but not restricted to optimization). Second, a simple capability
for mapping the "Pareto frontier" (the set of optimal solutions in multiobjective formulations) is provided. This
Pareto set is mapped by running an optimizer multiple times for different sets of multiobjective weightings.
pack a send_buffer for assigning an iterator job to a server. This virtual function redefinition is executed on the
dedicated master processor for self scheduling. It is not used for peer partitions.
unpack a recv_buffer for accepting an iterator job from the scheduler. This virtual function redefinition is
executed on an iterator server for dedicated master self scheduling. It is not used for peer partitions.
Reimplemented from Strategy.
References ConcurrentStrategy::initialize_iterator().
pack a send_buffer for returning iterator results from a server. This virtual function redefinition is executed either
on an iterator server for dedicated master self scheduling or on peers 2 through n for static scheduling.
Reimplemented from Strategy.
References ConcurrentStrategy::prpResults.
unpack a recv_buffer for accepting iterator results from a server. This virtual function redefinition is executed on
a strategy master (either the dedicated master processor for self scheduling or peer 1 for static scheduling).
Reimplemented from Strategy.
References ConcurrentStrategy::prpResults.
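The pack/unpack exchange can be sketched for a flat payload of doubles; the real code serializes ParamResponsePair results into MPI send/recv buffers, which is omitted here as an assumption-free simplification.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Sender side: serialize values into a raw byte buffer.
std::vector<char> pack(const std::vector<double>& vals) {
  std::vector<char> buf(vals.size() * sizeof(double));
  std::memcpy(buf.data(), vals.data(), buf.size());
  return buf;
}

// Receiver side: reconstruct the values from the byte buffer.
std::vector<double> unpack(const std::vector<char>& buf) {
  std::vector<double> vals(buf.size() / sizeof(double));
  std::memcpy(vals.data(), buf.data(), buf.size());
  return vals;
}
```

The strategy master would unpack such a buffer into its prpResults array after each server job completes.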
The documentation for this class was generated from the following files:
• ConcurrentStrategy.hpp
• ConcurrentStrategy.cpp
Iterator
Minimizer
Optimizer
CONMINOptimizer
• ∼CONMINOptimizer ()
destructor
• void find_optimum ()
Used within the optimizer branch for computing the optimal solution. Redefines the run virtual function for the
optimizer branch.
• void allocate_workspace ()
Allocates workspace for the optimizer.
• void deallocate_workspace ()
Releases workspace memory.
• void allocate_constraints ()
Allocates constraint mappings.
Private Attributes
• int conminInfo
INFO from CONMIN manual.
• int printControl
IPRINT from CONMIN manual (controls output verbosity).
• int optimizationType
MINMAX from DOT manual (minimize or maximize).
• Real objFnValue
value of the objective function passed to CONMIN
• RealVector constraintValues
array of nonlinear constraint values passed to CONMIN
• int numConminNlnConstr
total number of nonlinear constraints seen by CONMIN
• int numConminLinConstr
total number of linear constraints seen by CONMIN
• int numConminConstr
total number of linear and nonlinear constraints seen by CONMIN
• SizetArray constraintMappingIndices
a container of indices for referencing the corresponding Response constraints used in computing the CONMIN
constraints.
• RealArray constraintMappingMultipliers
a container of multipliers for mapping the Response constraints to the CONMIN constraints.
• RealArray constraintMappingOffsets
a container of offsets for mapping the Response constraints to the CONMIN constraints.
• int N1
Size variable for CONMIN arrays. See CONMIN manual.
• int N2
Size variable for CONMIN arrays. See CONMIN manual.
• int N3
Size variable for CONMIN arrays. See CONMIN manual.
• int N4
Size variable for CONMIN arrays. See CONMIN manual.
• int N5
Size variable for CONMIN arrays. See CONMIN manual.
• int NFDG
Finite difference flag.
• int IPRINT
Flag to control amount of output data.
• int ITMAX
Flag to specify the maximum number of iterations.
• double FDCH
Relative finite difference step size.
• double FDCHM
Absolute finite difference step size.
• double CT
Constraint thickness parameter.
• double CTMIN
Minimum absolute value of CT used during optimization.
• double CTL
Constraint thickness parameter for linear and side constraints.
• double CTLMIN
Minimum value of CTL used during optimization.
• double DELFUN
Relative convergence criterion threshold.
• double DABFUN
Absolute convergence criterion threshold.
• double ∗ conminDesVars
• double ∗ conminLowerBnds
Array of lower bounds used by CONMIN (length N1 = numdv+2).
• double ∗ conminUpperBnds
Array of upper bounds used by CONMIN (length N1 = numdv+2).
• double ∗ S
Internal CONMIN array.
• double ∗ G1
Internal CONMIN array.
• double ∗ G2
Internal CONMIN array.
• double ∗ B
Internal CONMIN array.
• double ∗ C
Internal CONMIN array.
• int ∗ MS1
Internal CONMIN array.
• double ∗ SCAL
Internal CONMIN array.
• double ∗ DF
Internal CONMIN array.
• double ∗ A
Internal CONMIN array.
• int ∗ ISC
Internal CONMIN array.
• int ∗ IC
Internal CONMIN array.
Wrapper class for the CONMIN optimization library. The CONMINOptimizer class provides a wrapper for
CONMIN, a public-domain Fortran 77 optimization library written by Gary Vanderplaats under contract to NASA
Ames Research Center. The CONMIN User’s Manual is contained in NASA Technical Memorandum X-62282,
1978. CONMIN uses a reverse communication mode, which avoids the static member function issues that arise
with function pointer designs (see NPSOLOptimizer and SNLLOptimizer).
The user input mappings are as follows: max_iterations is mapped into CONMIN’s ITMAX parameter,
max_function_evaluations is implemented directly in the find_optimum() loop since there is no CON-
MIN parameter equivalent, convergence_tolerance is mapped into CONMIN’s DELFUN and DABFUN
parameters, output verbosity is mapped into CONMIN’s IPRINT parameter (verbose: IPRINT = 4; quiet:
IPRINT = 2), gradient mode is mapped into CONMIN’s NFDG parameter, and finite difference step size is
mapped into CONMIN’s FDCH and FDCHM parameters. Refer to [Vanderplaats, 1978] for additional information
on CONMIN parameters.
INFO from CONMIN manual. Information requested by CONMIN: 1 = evaluate objective and constraints, 2 =
evaluate gradients of objective and constraints.
Referenced by CONMINOptimizer::find_optimum(), and CONMINOptimizer::initialize().
IPRINT from CONMIN manual (controls output verbosity). Values range from 0 (nothing) to 5 (most output):
0 = nothing; 1 = initial and final function information; 2 = all of #1 plus function value and design vars at each
iteration; 3 = all of #2 plus constraint values and direction vectors; 4 = all of #3 plus gradients of the objective
function and constraints; 5 = all of #4 plus the proposed design vector and the objective and constraint functions
from the 1-D search.
Referenced by CONMINOptimizer::initialize().
array of nonlinear constraint values passed to CONMIN This array must be of nonzero length and must contain
only one-sided inequality constraints which are <= 0 (which requires a transformation from 2-sided inequalities
and equalities).
Referenced by CONMINOptimizer::allocate_workspace(), and CONMINOptimizer::find_optimum().
a container of indices for referencing the corresponding Response constraints used in computing the CONMIN
constraints. The length of the container corresponds to the number of CONMIN constraints, and each entry in the
container points to the corresponding DAKOTA constraint.
Referenced by CONMINOptimizer::allocate_constraints(), and CONMINOptimizer::find_optimum().
a container of multipliers for mapping the Response constraints to the CONMIN constraints. The length of the
container corresponds to the number of CONMIN constraints, and each entry in the container stores a multiplier
for the DAKOTA constraint identified with constraintMappingIndices. These multipliers are currently +1 or -1.
Referenced by CONMINOptimizer::allocate_constraints(), and CONMINOptimizer::find_optimum().
a container of offsets for mapping the Response constraints to the CONMIN constraints. The length of the
container corresponds to the number of CONMIN constraints, and each entry in the container stores an offset
for the DAKOTA constraint identified with constraintMappingIndices. These offsets involve inequality bounds or
equality targets, since CONMIN assumes constraint allowables = 0.
Referenced by CONMINOptimizer::allocate_constraints(), and CONMINOptimizer::find_optimum().
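Taken together, the index, multiplier, and offset containers implement a transformation of the kind sketched below, so that every CONMIN constraint has a zero allowable. The ConminMap struct and helper names are illustrative, not Dakota's, and the exact handling of equality targets should be checked against allocate_constraints():

```cpp
#include <cassert>
#include <vector>

// Each DAKOTA constraint with bounds becomes one or more CONMIN constraints
// of the form  multiplier * g(x) + offset <= 0  (hypothetical names).
struct ConminMap {
    std::vector<int>    indices;      // which DAKOTA response each entry maps to
    std::vector<double> multipliers;  // currently +1 or -1
    std::vector<double> offsets;      // move the bound/target to a zero allowable
};

// Two-sided inequality  l <= g <= u  -->  g - u <= 0  and  l - g <= 0.
void add_two_sided(ConminMap& m, int idx, double l, double u) {
    m.indices.push_back(idx); m.multipliers.push_back(+1.0); m.offsets.push_back(-u);
    m.indices.push_back(idx); m.multipliers.push_back(-1.0); m.offsets.push_back(+l);
}

// Evaluate the i-th CONMIN constraint from the mapped DAKOTA response value.
double conmin_constraint(const ConminMap& m, std::size_t i, double g_dakota) {
    return m.multipliers[i] * g_dakota + m.offsets[i];
}
```

With this mapping, a positive conmin_constraint() value signals a violated constraint, which is the sign convention CONMIN expects.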
Size variable for CONMIN arrays. See CONMIN manual. N1 = number of variables + 2
Referenced by CONMINOptimizer::allocate_workspace(), CONMINOptimizer::find_optimum(), and
CONMINOptimizer::initialize_run().
Size variable for CONMIN arrays. See CONMIN manual. N2 = number of constraints + 2∗(number of variables)
Referenced by CONMINOptimizer::allocate_workspace(), and CONMINOptimizer::find_optimum().
Size variable for CONMIN arrays. See CONMIN manual. N3 = Maximum possible number of active constraints.
Referenced by CONMINOptimizer::allocate_workspace(), and CONMINOptimizer::find_optimum().
Size variable for CONMIN arrays. See CONMIN manual. N4 = Maximum(N3,number of variables)
Referenced by CONMINOptimizer::allocate_workspace(), and CONMINOptimizer::find_optimum().
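A minimal sketch of these sizing rules, assuming only the formulas quoted above from the CONMIN manual (the function and struct names are illustrative; verify the formulas against the manual before relying on them):

```cpp
#include <algorithm>
#include <cassert>

// Workspace size variables for CONMIN arrays (sketch; names are not Dakota's).
struct ConminSizes { int N1, N2, N3, N4; };

ConminSizes conmin_sizes(int num_vars, int num_cons, int max_active) {
    ConminSizes s;
    s.N1 = num_vars + 2;              // N1 = number of variables + 2
    s.N2 = num_cons + 2 * num_vars;   // N2 = constraints + 2*(number of variables)
    s.N3 = max_active;                // N3 = max possible number of active constraints
    s.N4 = std::max(s.N3, num_vars);  // N4 = max(N3, number of variables)
    return s;
}
```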
Internal CONMIN array. Temporary storage for use with arrays B and S.
Referenced by CONMINOptimizer::allocate_workspace(), CONMINOptimizer::deallocate_workspace(), and
CONMINOptimizer::find_optimum().
Internal CONMIN array. Temporary storage for use with arrays B and S.
Referenced by CONMINOptimizer::allocate_workspace(), CONMINOptimizer::deallocate_workspace(), and
CONMINOptimizer::find_optimum().
Internal CONMIN array. Vector of scaling parameters for design parameter values.
Referenced by CONMINOptimizer::allocate_workspace(), CONMINOptimizer::deallocate_workspace(), and
CONMINOptimizer::find_optimum().
Internal CONMIN array. Temporary 2-D array for storage of constraint gradients.
Referenced by CONMINOptimizer::allocate_workspace(), CONMINOptimizer::deallocate_workspace(), and
CONMINOptimizer::find_optimum().
Internal CONMIN array. Array of flags to identify linear constraints (not used in this implementation of CONMIN).
Referenced by CONMINOptimizer::allocate_workspace(), CONMINOptimizer::deallocate_workspace(),
CONMINOptimizer::find_optimum(), and CONMINOptimizer::initialize_run().
Internal CONMIN array. Array of flags to identify active and violated constraints.
Referenced by CONMINOptimizer::allocate_workspace(), CONMINOptimizer::deallocate_workspace(),
CONMINOptimizer::find_optimum(), and CONMINOptimizer::initialize_run().
The documentation for this class was generated from the following files:
• CONMINOptimizer.hpp
• CONMINOptimizer.cpp
Constraints
Inheritance diagram: Constraints → MixedVarConstraints, RelaxedVarConstraints
• virtual ∼Constraints ()
destructor
• void build_views ()
construct active/inactive views of all variables arrays
Protected Attributes
• SharedVariablesData sharedVarsData
configuration data shared from a Variables instance
• RealVector allContinuousLowerBnds
a continuous lower bounds array combining continuous design, uncertain, and continuous state variable types (all
view).
• RealVector allContinuousUpperBnds
a continuous upper bounds array combining continuous design, uncertain, and continuous state variable types (all
view).
• IntVector allDiscreteIntLowerBnds
a discrete lower bounds array combining discrete design and discrete state variable types (all view).
• IntVector allDiscreteIntUpperBnds
a discrete upper bounds array combining discrete design and discrete state variable types (all view).
• RealVector allDiscreteRealLowerBnds
a discrete lower bounds array combining discrete design and discrete state variable types (all view).
• RealVector allDiscreteRealUpperBnds
a discrete upper bounds array combining discrete design and discrete state variable types (all view).
• size_t numNonlinearIneqCons
number of nonlinear inequality constraints
• size_t numNonlinearEqCons
number of nonlinear equality constraints
• RealVector nonlinearIneqConLowerBnds
nonlinear inequality constraint lower bounds
• RealVector nonlinearIneqConUpperBnds
nonlinear inequality constraint upper bounds
• RealVector nonlinearEqConTargets
nonlinear equality constraint targets
• size_t numLinearIneqCons
number of linear inequality constraints
• size_t numLinearEqCons
number of linear equality constraints
• RealMatrix linearIneqConCoeffs
linear inequality constraint coefficients
• RealMatrix linearEqConCoeffs
linear equality constraint coefficients
• RealVector linearIneqConLowerBnds
linear inequality constraint lower bounds
• RealVector linearIneqConUpperBnds
linear inequality constraint upper bounds
• RealVector linearEqConTargets
linear equality constraint targets
• RealVector continuousLowerBnds
the active continuous lower bounds array view
• RealVector continuousUpperBnds
the active continuous upper bounds array view
• IntVector discreteIntLowerBnds
the active discrete lower bounds array view
• IntVector discreteIntUpperBnds
the active discrete upper bounds array view
• RealVector discreteRealLowerBnds
the active discrete lower bounds array view
• RealVector discreteRealUpperBnds
the active discrete upper bounds array view
• RealVector inactiveContinuousLowerBnds
the inactive continuous lower bounds array view
• RealVector inactiveContinuousUpperBnds
the inactive continuous upper bounds array view
• IntVector inactiveDiscreteIntLowerBnds
the inactive discrete lower bounds array view
• IntVector inactiveDiscreteIntUpperBnds
the inactive discrete upper bounds array view
• RealVector inactiveDiscreteRealLowerBnds
the inactive discrete lower bounds array view
• RealVector inactiveDiscreteRealUpperBnds
the inactive discrete upper bounds array view
Private Attributes
• Constraints ∗ constraintsRep
pointer to the letter (initialized only for the envelope)
• int referenceCount
number of objects sharing constraintsRep
Base class for the variable constraints class hierarchy. The Constraints class is the base class for the class hierarchy
managing bound, linear, and nonlinear constraints. Using the variable lower and upper bounds arrays from the
input specification, different derived classes define different views of this data. The linear and nonlinear constraint
data is consistent in all views and is managed at the base class level. For memory efficiency and enhanced
polymorphism, the variable constraints hierarchy employs the "letter/envelope idiom" (see Coplien "Advanced
C++", p. 133), for which the base class (Constraints) serves as the envelope and one of the derived classes
(selected in Constraints::get_constraints()) serves as the letter.
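A minimal, generic sketch of the letter/envelope idiom with reference counting, assuming the behavior described below for the copy constructor, assignment operator, and destructor; the class and member names here are placeholders, not Dakota's:

```cpp
#include <cassert>

// Generic envelope sharing a reference-counted "letter" representation.
class Envelope {
    struct Letter {};   // stands in for a derived letter class
    Letter* rep;        // pointer to the letter (the shared representation)
    int*    refCount;   // number of envelopes sharing rep

    void release() {
        if (--*refCount == 0) { delete rep; delete refCount; }
    }
public:
    Envelope() : rep(new Letter), refCount(new int(1)) {}
    Envelope(const Envelope& e) : rep(e.rep), refCount(e.refCount) {
        ++*refCount;                    // copy shares the representation
    }
    Envelope& operator=(const Envelope& e) {
        if (rep != e.rep) {             // decrement old rep, adopt and count new
            ++*e.refCount;
            release();
            rep = e.rep; refCount = e.refCount;
        }
        return *this;
    }
    ~Envelope() { release(); }          // delete only at the last reference
    int use_count() const { return *refCount; }
};
```

In Dakota's version the envelope additionally dispatches virtual calls through the rep pointer, so client code holds Constraints objects by value while the derived letter (MixedVarConstraints or RelaxedVarConstraints) does the work.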
13.18.2.1 Constraints ()
default constructor. The default constructor: constraintsRep is NULL in this case (a populated problem_db is needed to build a meaningful Constraints object). This makes it necessary to check for NULL in the copy constructor, assignment operator, and destructor.
13.18.2.2 Constraints (const ProblemDescDB & problem_db, const SharedVariablesData & svd)
standard constructor. The envelope constructor only needs to extract enough data to properly execute get_constraints(), since the constructor overloaded with BaseConstructor builds the actual base class data inherited by the derived classes.
References Dakota::abort_handler(), Constraints::constraintsRep, and Constraints::get_constraints().
alternate constructor for instantiations on the fly. This envelope constructor executes get_constraints(view), which invokes the default derived/base constructors, followed by a reshape() based on vars_comps.
References Dakota::abort_handler(), Constraints::constraintsRep, and Constraints::get_constraints().
copy constructor Copy constructor manages sharing of constraintsRep and incrementing of referenceCount.
References Constraints::constraintsRep, and Constraints::referenceCount.
destructor Destructor decrements referenceCount and only deletes constraintsRep when referenceCount reaches
zero.
References Constraints::constraintsRep, and Constraints::referenceCount.
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion in the derived class constructors - Coplien, p. 139). This constructor is the one which must build the base class data for all derived classes. get_constraints() instantiates a derived class letter, and the derived constructor selects this base class constructor in its initialization list (to avoid recursion from the base class constructor calling get_constraints() again). Since the letter IS the representation, its rep pointer is set to NULL (an uninitialized pointer causes problems in ∼Constraints).
assignment operator Assignment operator decrements referenceCount for old constraintsRep, assigns new con-
straintsRep, and increments referenceCount for new constraintsRep.
References Constraints::constraintsRep, and Constraints::referenceCount.
reshape the lower/upper bound arrays within the Constraints hierarchy Resizes the derived bounds arrays.
Reimplemented in MixedVarConstraints, and RelaxedVarConstraints.
References Constraints::constraintsRep, Constraints::continuousLowerBnds, Constraints::discreteIntLowerBnds, Constraints::discreteRealLowerBnds, Constraints::linearEqConCoeffs, Constraints::linearIneqConCoeffs, Constraints::numLinearEqCons, Constraints::numLinearIneqCons, and Constraints::reshape().
Referenced by DataFitSurrModel::DataFitSurrModel(), RecastModel::RecastModel(), and Constraints::reshape().
for use when a deep copy is needed (the representation is _not_ shared). Deep copies are used for history mechanisms that catalogue permanent copies (which should not change as the representation within userDefinedConstraints changes).
References Constraints::allContinuousLowerBnds, Constraints::allContinuousUpperBnds, Constraints::allDiscreteIntLowerBnds, Constraints::allDiscreteIntUpperBnds, Constraints::allDiscreteRealLowerBnds, Constraints::allDiscreteRealUpperBnds, Constraints::build_views(), Constraints::constraintsRep, Constraints::get_constraints(), Constraints::linearEqConCoeffs, Constraints::linearEqConTargets, Constraints::linearIneqConCoeffs, Constraints::linearIneqConLowerBnds, Constraints::linearIneqConUpperBnds, Constraints::nonlinearEqConTargets, Constraints::nonlinearIneqConLowerBnds, Constraints::nonlinearIneqConUpperBnds, Constraints::numLinearEqCons, Constraints::numLinearIneqCons, Constraints::numNonlinearEqCons, Constraints::numNonlinearIneqCons, and Constraints::sharedVarsData.
Referenced by SurrogateModel::force_rebuild(), and RecastModel::RecastModel().
reshape the linear/nonlinear constraint arrays within the Constraints hierarchy Resizes the linear and nonlinear
constraint arrays at the base class. Does NOT currently resize the derived bounds arrays.
References Constraints::constraintsRep, Constraints::linearEqConTargets, Constraints::linearIneqConLowerBnds, Constraints::linearIneqConUpperBnds, Constraints::nonlinearEqConTargets, Constraints::nonlinearIneqConLowerBnds, Constraints::nonlinearIneqConUpperBnds, Constraints::numLinearEqCons, Constraints::numLinearIneqCons, Constraints::numNonlinearEqCons, Constraints::numNonlinearIneqCons, and Constraints::reshape().
perform checks on user input, convert linear constraint coefficient input to matrices, and assign defaults. Convenience function called from derived class constructors. The number of variables active for applying linear constraints is currently defined to be the number of active continuous variables plus the number of active discrete variables (the most general case), even though very few optimizers can currently support mixed-variable linear constraints.
References Dakota::abort_handler(), Constraints::continuousLowerBnds, Dakota::copy_data(), Constraints::discreteIntLowerBnds, Constraints::discreteRealLowerBnds, ProblemDescDB::get_rv(), Constraints::linearEqConCoeffs, Constraints::linearEqConTargets, Constraints::linearIneqConCoeffs, Constraints::linearIneqConLowerBnds, Constraints::linearIneqConUpperBnds, Constraints::numLinearEqCons, and Constraints::numLinearIneqCons.
Referenced by MixedVarConstraints::MixedVarConstraints(), and RelaxedVarConstraints::RelaxedVarConstraints().
Used only by the constructor to initialize constraintsRep to the appropriate derived type. Initializes constraintsRep
to the appropriate derived type, as given by the variables view.
References SharedVariablesData::view().
Referenced by Constraints::Constraints(), and Constraints::copy().
Used by copy() to initialize constraintsRep to the appropriate derived type. Initializes constraintsRep to the
appropriate derived type, as given by the variables view. The default derived class constructors are invoked.
References SharedVariablesData::view().
The documentation for this class was generated from the following files:
• DakotaConstraints.hpp
• DakotaConstraints.cpp
DataFitSurrModel
Inheritance diagram: Model → SurrogateModel → DataFitSurrModel
• ∼DataFitSurrModel ()
destructor
• void build_approximation ()
Builds the local/multipoint/global approximation using daceIterator/actualModel to generate new data points.
• void update_approximation (const Variables &vars, const IntResponsePair &response_pr, bool rebuild_flag)
replaces the anchor point, and rebuilds the approximation if requested
• void append_approximation (const Variables &vars, const IntResponsePair &response_pr, bool rebuild_flag)
appends a point to a global approximation and rebuilds it if requested
• void restore_approximation ()
restore a previous approximation data state
• bool restore_available ()
query for whether a trial increment is restorable
• void finalize_approximation ()
finalize data fit by applying all previous trial increments
• void store_approximation ()
store the current data fit approximation for later combination
• void derived_init_serial ()
set up actualModel for serial operations.
• void stop_servers ()
Executed by the master to terminate actualModel server operations when DataFitSurrModel iteration is complete.
• void set_evaluation_reference ()
set the evaluation counter reference points for the DataFitSurrModel (request forwarded to approxInterface and
actualModel)
• void fine_grained_evaluation_counters ()
request fine-grained evaluation reporting within approxInterface and actualModel
• void update_global ()
Updates fit arrays for global approximations.
• void update_local_multipoint ()
Updates fit arrays for local or multipoint approximations.
• void build_global ()
Builds a global approximation using daceIterator.
• void build_local_multipoint ()
Builds a local or multipoint approximation using actualModel.
• void update_actual_model ()
update actualModel with data from current variables/labels/bounds/targets
• void update_from_actual_model ()
update current variables/labels/bounds/targets with data from actualModel
• bool inside (const RealVector &c_vars, const IntVector &di_vars, const RealVector &dr_vars)
test if c_vars and d_vars are within [c_l_bnds,c_u_bnds] and [d_l_bnds,d_u_bnds]
Private Attributes
• int surrModelEvalCntr
number of calls to derived_compute_response()/ derived_asynch_compute_response()
• int pointsTotal
total points the user specified to construct the surrogate
• short pointsManagement
configuration for points management in build_global()
• String pointReuse
type of point reuse for approximation builds: all, region (default if points file), or none (default if no points
file)
• String importPointsFile
file name from import_points_file specification
• String exportPointsFile
file name from export_points_file specification
• bool exportAnnotated
annotation setting for file export of variables and approximate responses
• std::ofstream exportFileStream
output file stream for the export_points_file specification
• VariablesList reuseFileVars
array of variables sets read from the import_points_file
• ResponseList reuseFileResponses
array of response sets read from the import_points_file
• Interface approxInterface
manages the building and subsequent evaluation of the approximations (required for both global and local)
• Model actualModel
the truth model which provides evaluations for building the surrogate (optional for global, required for local)
• Iterator daceIterator
selects parameter sets on which to evaluate actualModel in order to generate the necessary data for building global
approximations (optional for global since restart data may also be used)
• String evalTagPrefix
cached evalTag Prefix from parents to use at compute_response time
Derived model class within the surrogate model branch for managing data fit surrogates (global and local). The
DataFitSurrModel class manages global or local approximations (surrogates that involve data fits) that are used
in place of an expensive model. The class contains an approxInterface (required for both global and local) which
manages the approximate function evaluations, an actualModel (optional for global, required for local) which
provides truth evaluations for building the surrogate, and a daceIterator (optional for global, not used for local)
which selects parameter sets on which to evaluate actualModel in order to generate the necessary data for building
global approximations.
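The three-part composition described above can be illustrated with a toy model. The class name, the linear fit, and the two-point "sampler" below are all illustrative stand-ins for approxInterface, actualModel, and daceIterator, not Dakota's actual interfaces:

```cpp
#include <cassert>
#include <functional>

// Toy data-fit surrogate: a truth model (role of actualModel), a fitted line
// (role of approxInterface), and a fixed two-point sample plan (role of
// daceIterator). Hypothetical names throughout.
struct ToyDataFitModel {
    std::function<double(double)> truth;  // expensive model being approximated
    double a = 0.0, b = 0.0;              // coefficients of the fit y ~ a + b*x

    // "DACE" step: evaluate truth at two sample points, then build the fit.
    void build(double x0, double x1) {
        double y0 = truth(x0), y1 = truth(x1);
        b = (y1 - y0) / (x1 - x0);
        a = y0 - b * x0;
    }
    // Surrogate evaluation used in place of the expensive model.
    double approx(double x) const { return a + b * x; }
};
```

The essential point mirrored here is the division of labor: build() consumes truth evaluations at selected points, after which approx() answers cheaply without touching the truth model.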
portion of compute_response() specific to DataFitSurrModel. Compute the response synchronously using actualModel, approxInterface, or both (mixed case). For the approxInterface portion, build the approximation if needed, evaluate the approximate response, and apply correction (if active) to the results.
Reimplemented from Model.
References Dakota::_NPOS, DiscrepancyCorrection::active(), Response::active_set(), DataFitSurrModel::actualModel, DiscrepancyCorrection::apply(), SurrogateModel::approxBuilds, DataFitSurrModel::approxInterface, SurrogateModel::asv_mapping(), DataFitSurrModel::build_approximation(), DataFitSurrModel::component_parallel_mode(), DiscrepancyCorrection::compute(), Model::compute_response(), Response::copy(), Model::current_response(), Model::currentResponse, Model::currentVariables, SurrogateModel::deltaCorr, Model::eval_tag_prefix(), DataFitSurrModel::evalTagPrefix, DataFitSurrModel::exportAnnotated, DataFitSurrModel::exportFileStream, DataFitSurrModel::exportPointsFile, SurrogateModel::force_rebuild(), Model::hierarchicalTagging, Interface::map(), Model::outputLevel, ActiveSet::request_vector(), SurrogateModel::response_mapping(), SurrogateModel::responseMode, DataFitSurrModel::surrModelEvalCntr, Response::update(), DataFitSurrModel::update_actual_model(), and Dakota::write_data_tabular().
portion of synchronize() specific to DataFitSurrModel. Blocking retrieval of asynchronous evaluations from actualModel, approxInterface, or both (mixed case). For the approxInterface portion, apply correction (if active) to each response in the array. derived_synchronize() is designed for the general case where derived_asynch_compute_response() may be inconsistent in its use of actual evaluations, approximate evaluations, or both.
Reimplemented from Model.
References DataFitSurrModel::actualModel, DataFitSurrModel::approxInterface, DataFitSurrModel::component_parallel_mode(), DiscrepancyCorrection::compute(), SurrogateModel::deltaCorr, DataFitSurrModel::derived_synchronize_approx(), Model::outputLevel, SurrogateModel::response_mapping(), SurrogateModel::responseMode, SurrogateModel::surrIdMap, SurrogateModel::surrResponseMap, Interface::synch(), Model::synchronize(), and SurrogateModel::truthIdMap.
Builds the local/multipoint/global approximation using daceIterator/actualModel to generate new data points.
This function constructs a new approximation, discarding any previous data. It constructs any required data for
SurrogateData::{vars,resp}Data and does not define an anchor point for SurrogateData::anchor{Vars,Resp}.
Reimplemented from Model.
References DataFitSurrModel::actualModel, SurrogateModel::approxBuilds, DataFitSurrModel::approxInterface, Interface::build_approximation(), DataFitSurrModel::build_global(), DataFitSurrModel::build_local_multipoint(), Interface::clear_current(), Model::continuous_lower_bounds(), Constraints::continuous_lower_bounds(), Model::continuous_upper_bounds(), Constraints::continuous_upper_bounds(), Model::discrete_int_lower_bounds(), Constraints::discrete_int_lower_bounds(), Model::discrete_int_upper_bounds(), Constraints::discrete_int_upper_bounds(), Model::discrete_real_lower_bounds(), Constraints::discrete_real_lower_bounds(), Model::discrete_real_upper_bounds(), Constraints::discrete_real_upper_bounds(), Model::is_null(), Dakota::strbegins(), Model::surrogateType, DataFitSurrModel::update_actual_model(), DataFitSurrModel::update_global(), DataFitSurrModel::update_local_multipoint(), and Model::userDefinedConstraints.
Referenced by DataFitSurrModel::derived_asynch_compute_response(), and DataFitSurrModel::derived_compute_response().
13.19.3.6 bool build_approximation (const Variables & vars, const IntResponsePair & response_pr)
[protected, virtual]
Builds the local/multipoint/global approximation using daceIterator/actualModel to generate new data points that
augment the vars/response anchor point. This function constructs a new approximation, discarding any previous
data. It uses the passed data to populate SurrogateData::anchor{Vars,Resp} and constructs any required data
points for SurrogateData::{vars,resp}Data.
Reimplemented from Model.
References DataFitSurrModel::actualModel, SurrogateModel::approxBuilds, DataFitSurrModel::approxInterface, Interface::build_approximation(), DataFitSurrModel::build_global(), Interface::clear_current(), Model::continuous_lower_bounds(), Constraints::continuous_lower_bounds(), Model::continuous_upper_bounds(), Constraints::continuous_upper_bounds(), Model::discrete_int_lower_bounds(), Constraints::discrete_int_lower_bounds(), Model::discrete_int_upper_bounds(), Constraints::discrete_int_upper_bounds(), Model::discrete_real_lower_bounds(), Constraints::discrete_real_lower_bounds(), Model::discrete_real_upper_bounds(), Constraints::discrete_real_upper_bounds(), Model::is_null(), Dakota::strbegins(), Model::surrogateType, DataFitSurrModel::update_actual_model(), Interface::update_approximation(), DataFitSurrModel::update_global(), DataFitSurrModel::update_local_multipoint(), and Model::userDefinedConstraints.
replaces the approximation data with daceIterator results and rebuilds the approximation if requested This function
populates/replaces SurrogateData::anchor{Vars,Resp} and rebuilds the approximation, if requested. It does not
clear other data (i.e., SurrogateData::{vars,resp}Data) and does not update the actualModel with revised bounds,
labels, etc. Thus, it updates data from a previous call to build_approximation(), and is not intended to be used in
isolation.
Reimplemented from Model.
References Iterator::all_responses(), Iterator::all_samples(), Iterator::all_variables(), SurrogateModel::approxBuilds, DataFitSurrModel::approxInterface, Iterator::compact_mode(), DataFitSurrModel::daceIterator, Model::numFns, Interface::rebuild_approximation(), Model::surrogateType, and Interface::update_approximation().
13.19.3.8 void update_approximation (const Variables & vars, const IntResponsePair & response_pr,
bool rebuild_flag) [protected, virtual]
replaces the anchor point, and rebuilds the approximation if requested. This function populates/replaces SurrogateData::anchor{Vars,Resp} and rebuilds the approximation, if requested. It does not clear other data (i.e., SurrogateData::{vars,resp}Data) and does not update the actualModel with revised bounds, labels, etc. Thus, it updates data from a previous call to build_approximation(), and is not intended to be used in isolation.
Reimplemented from Model.
References SurrogateModel::approxBuilds, DataFitSurrModel::approxInterface, Model::numFns,
Interface::rebuild_approximation(), Model::surrogateType, and Interface::update_approximation().
13.19.3.9 void update_approximation (const VariablesArray & vars_array, const IntResponseMap &
resp_map, bool rebuild_flag) [protected, virtual]
replaces the current points array and rebuilds the approximation if requested This function populates/replaces
SurrogateData::{vars,resp}Data and rebuilds the approximation, if requested. It does not clear other data (i.e.,
SurrogateData::anchor{Vars,Resp}) and does not update the actualModel with revised bounds, labels, etc. Thus,
it updates data from a previous call to build_approximation(), and is not intended to be used in isolation.
Reimplemented from Model.
References SurrogateModel::approxBuilds, DataFitSurrModel::approxInterface, Model::numFns,
Interface::rebuild_approximation(), Model::surrogateType, and Interface::update_approximation().
appends daceIterator results to a global approximation and rebuilds it if requested This function appends one
point to SurrogateData::{vars,resp}Data and rebuilds the approximation, if requested. It does not modify other
data (i.e., SurrogateData::anchor{Vars,Resp}) and does not update the actualModel with revised bounds, labels,
etc. Thus, it appends to data from a previous call to build_approximation(), and is not intended to be used in
isolation.
Reimplemented from Model.
References Iterator::all_responses(), Iterator::all_samples(), Iterator::all_variables(), Interface::append_approximation(), SurrogateModel::approxBuilds, DataFitSurrModel::approxInterface, Iterator::compact_mode(), DataFitSurrModel::daceIterator, Model::numFns, Interface::rebuild_approximation(), and Model::surrogateType.
13.19.3.11 void append_approximation (const Variables & vars, const IntResponsePair & response_pr,
bool rebuild_flag) [protected, virtual]
appends a point to a global approximation and rebuilds it if requested. This function appends one point to SurrogateData::{vars,resp}Data and rebuilds the approximation, if requested. It does not modify other data (i.e., SurrogateData::anchor{Vars,Resp}) and does not update the actualModel with revised bounds, labels, etc. Thus, it appends to data from a previous call to build_approximation(), and is not intended to be used in isolation.
Reimplemented from Model.
References Interface::append_approximation(), SurrogateModel::approxBuilds, DataFitSurrModel::approxInterface, Model::numFns, Interface::rebuild_approximation(), and Model::surrogateType.
13.19.3.12 void append_approximation (const VariablesArray & vars_array, const IntResponseMap &
resp_map, bool rebuild_flag) [protected, virtual]
appends an array of points to a global approximation and rebuilds it if requested This function appends multiple
points to SurrogateData::{vars,resp}Data and rebuilds the approximation, if requested. It does not modify other
data (i.e., SurrogateData::anchor{Vars,Resp}) and does not update the actualModel with revised bounds, labels,
etc. Thus, it appends to data from a previous call to build_approximation(), and is not intended to be used in
isolation.
Reimplemented from Model.
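The anchor-versus-data bookkeeping that distinguishes the update_approximation() and append_approximation() variants above can be sketched with a toy container (an illustration only; names and types are not Dakota's SurrogateData API):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Toy stand-in for SurrogateData: update_* replaces the single anchor point,
// append_* grows the data set, and neither operation clears the other.
struct ToySurrogateData {
    std::pair<double, double> anchor{0.0, 0.0};        // anchor{Vars,Resp}
    bool hasAnchor = false;
    std::vector<std::pair<double, double>> data;       // {vars,resp}Data

    void update(double x, double y) { anchor = {x, y}; hasAnchor = true; }
    void append(double x, double y) { data.push_back({x, y}); }
};
```

This mirrors why both families of methods are documented as augmenting a previous build_approximation() call rather than being usable in isolation: each touches only its own slice of the stored data.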
set up actualModel for parallel operations. Asynchronous flags need to be initialized for the sub-models. In addition, max_iterator_concurrency is the outer-level iterator concurrency, not the DACE concurrency that actualModel will see, and recomputing the message_lengths on the sub-model is also advisable. Therefore, recompute everything on actualModel using init_communicators.
Reimplemented from Model.
References DataFitSurrModel::actualModel, DataFitSurrModel::approxInterface, DataFitSurrModel::daceIterator, Model::derivative_concurrency(), Model::init_communicators(), Iterator::is_null(), Model::is_null(), Iterator::maximum_concurrency(), and Interface::minimum_points().
return the current evaluation id for the DataFitSurrModel return the DataFitSurrModel evaluation count. Due to
possibly intermittent use of surrogate bypass, this is not the same as either the approxInterface or actualModel
model evaluation counts. It also does not distinguish duplicate evals.
Reimplemented from Model.
References DataFitSurrModel::surrModelEvalCntr.
optionally read surrogate data points from a provided file. Constructor helper to read the points file once, if provided, and then reuse its data as appropriate within build_global().
References DataFitSurrModel::actualModel, Model::continuous_variable_ids(), Variables::continuous_variable_ids(), Model::current_variables(), Model::currentVariables, Model::cv(), Variables::cv(), ActiveSet::derivative_vector(), DataFitSurrModel::importPointsFile, Model::is_null(), Model::numFns, Model::outputLevel, Dakota::read_data_tabular(), DataFitSurrModel::reuseFileResponses, DataFitSurrModel::reuseFileVars, and Variables::shared_data().
Referenced by DataFitSurrModel::DataFitSurrModel().
Builds a global approximation using daceIterator. Determine points to use in building the approximation and then
evaluate them on actualModel using daceIterator. Any changes to the bounds should be performed by setting them
at a higher level (e.g., SurrBasedOptStrategy).
References Dakota::abort_handler(), Iterator::active_set(), DataFitSurrModel::actualModel, Iterator::all_responses(), Iterator::all_samples(), Iterator::all_variables(), Interface::append_approximation(), Interface::approximation_data(), DataFitSurrModel::approxInterface, SurrogateModel::asv_mapping(),
Builds a local or multipoint approximation using actualModel. Evaluate the value, gradient, and possibly Hessian
needed for a local or multipoint approximation using actualModel.
References Response::active_set(), DataFitSurrModel::actualModel, DataFitSurrModel::approxInterface, SurrogateModel::asv_mapping(), DataFitSurrModel::component_parallel_mode(), Model::compute_response(), Model::continuous_variable_ids(), Model::current_response(), Model::current_variables(), Model::evaluation_id(), Model::hessian_type(), Model::numFns, ActiveSet::request_vector(), Dakota::strbegins(), Model::surrogateType, and Interface::update_approximation().
Referenced by DataFitSurrModel::build_approximation().
update actualModel with data from current variables/labels/bounds/targets. Update variables and constraints data within actualModel using values and labels from currentVariables and bound/linear/nonlinear constraints from userDefinedConstraints.
References Dakota::abort_handler(), DataFitSurrModel::actualModel, Model::aleatDistParams, Model::aleatory_distribution_parameters(), Model::all_continuous_lower_bounds(), Constraints::all_continuous_lower_bounds(), Model::all_continuous_upper_bounds(), Constraints::all_continuous_upper_bounds(), Model::all_continuous_variable_labels(), Variables::all_continuous_variable_labels(), Model::all_continuous_variables(), Variables::all_continuous_variables(), Model::all_discrete_int_lower_bounds(), Constraints::all_discrete_int_lower_bounds(), Model::all_discrete_int_upper_bounds(), Constraints::all_discrete_int_upper_bounds(), Model::all_discrete_int_variable_labels(), Variables::all_discrete_int_variable_labels(), Model::all_discrete_int_variables(), Variables::all_discrete_int_variables(), Model::all_discrete_real_lower_bounds(), Constraints::all_discrete_real_lower_bounds(), Model::all_discrete_real_upper_bounds(), Constraints::all_discrete_real_upper_bounds(), Model::all_discrete_real_variable_labels(), Variables::all_discrete_real_variable_labels(), Model::all_discrete_real_variables(), Variables::all_discrete_real_variables(), SurrogateModel::approxBuilds, Constraints::continuous_lower_bounds(), Model::continuous_lower_bounds(), Constraints::continuous_upper_bounds(), Model::continuous_upper_bounds(), Variables::continuous_variable_labels(), Model::continuous_variable_labels(), Variables::continuous_variables(), Model::continuous_variables(), Model::current_variables(), Model::currentResponse, Model::currentVariables, Model::cv(), Variables::cv(), Model::discrete_design_set_int_values(), Model::discrete_design_set_real_values(), Constraints::discrete_int_lower_bounds(), Model::discrete_int_lower_bounds(), Constraints::discrete_int_upper_bounds(),
update current variables/labels/bounds/targets with data from actualModel. Update values and labels in currentVariables and bound/linear/nonlinear constraints in userDefinedConstraints from variables and constraints data within actualModel.
References Dakota::abort_handler(), DataFitSurrModel::actualModel, Model::aleatDistParams, Model::aleatory_distribution_parameters(), Model::all_continuous_lower_bounds(), Constraints::all_continuous_lower_bounds(), Model::all_continuous_upper_bounds(), Constraints::all_continuous_upper_bounds(), Model::all_continuous_variable_labels(), Variables::all_continuous_variable_labels(), Model::all_continuous_variables(), Variables::all_continuous_variables(), Model::all_discrete_int_lower_bounds(), Constraints::all_discrete_int_lower_bounds(), Model::all_discrete_int_upper_bounds(), Constraints::all_discrete_int_upper_bounds(), Model::all_discrete_int_variable_labels(), Variables::all_discrete_int_variable_labels(), Model::all_discrete_int_variables(), Variables::all_discrete_int_variables(), Model::all_discrete_real_lower_bounds(), Constraints::all_discrete_real_lower_bounds(), Model::all_discrete_real_upper_bounds(), Constraints::all_discrete_real_upper_bounds(), Model::all_discrete_real_variable_labels(), Variables::all_discrete_real_variable_labels(), Model::all_discrete_real_variables(), Variables::all_discrete_real_variables(), SurrogateModel::approxBuilds, Model::currentResponse, Model::currentVariables, Variables::cv(), Model::cv(), Model::discrete_design_set_int_values(), Model::discrete_design_set_real_values(), Model::discrete_state_set_int_values(), Model::discrete_state_set_real_values(), Model::discreteDesignSetIntValues, Model::discreteDesignSetRealValues, Model::discreteStateSetIntValues, Model::discreteStateSetRealValues, Variables::div(), Model::div(), Variables::drv(), Model::drv(), Model::epistDistParams, Model::epistemic_distribution_parameters(), Response::function_labels(), Model::linear_eq_constraint_coeffs(), Constraints::linear_eq_constraint_coeffs(), Model::linear_eq_constraint_targets(), Constraints::linear_-
the truth model which provides evaluations for building the surrogate (optional for global, required for local). actualModel is unrestricted in type; arbitrary nestings are possible.
Referenced by DataFitSurrModel::build_approximation(), DataFitSurrModel::build_global(), DataFitSurrModel::build_local_multipoint(), DataFitSurrModel::component_parallel_mode(), DataFitSurrModel::DataFitSurrModel(), DataFitSurrModel::derived_asynch_compute_response(), DataFitSurrModel::derived_compute_response(), DataFitSurrModel::derived_free_communicators(), DataFitSurrModel::derived_init_communicators(), DataFitSurrModel::derived_init_serial(), DataFitSurrModel::derived_set_communicators(), DataFitSurrModel::derived_subordinate_models(), DataFitSurrModel::derived_synchronize(), DataFitSurrModel::derived_synchronize_nowait(), DataFitSurrModel::fine_grained_evaluation_counters(), DataFitSurrModel::import_points(), DataFitSurrModel::inactive_view(), DataFitSurrModel::inside(), DataFitSurrModel::primary_response_fn_weights(), DataFitSurrModel::print_evaluation_summary(), DataFitSurrModel::serve(), DataFitSurrModel::stop_servers(), DataFitSurrModel::surrogate_response_mode(), DataFitSurrModel::truth_model(), DataFitSurrModel::update_actual_model(), DataFitSurrModel::update_from_actual_model(), DataFitSurrModel::update_from_subordinate_model(), DataFitSurrModel::update_global(), and DataFitSurrModel::update_local_multipoint().
The documentation for this class was generated from the following files:
• DataFitSurrModel.hpp
• DataFitSurrModel.cpp
• ~DataInterface ()
destructor
Private Attributes
• DataInterfaceRep *dataIfaceRep
pointer to the body (handle-body idiom)
Friends
• class ProblemDescDB
• class NIDRProblemDescDB
• void run_dakota_data ()
library_mode default data initializer
Handle class for interface specification data. The DataInterface class is used to provide a memory management
handle for the data in DataInterfaceRep. It is populated by IDRProblemDescDB::interface_kwhandler() and is
queried by the ProblemDescDB::get_<datatype>() functions. A list of DataInterface objects is maintained in
ProblemDescDB::dataInterfaceList, one for each interface specification in an input file.
The documentation for this class was generated from the following files:
• DataInterface.hpp
• DataInterface.cpp
• ~DataMethod ()
destructor
Private Attributes
• DataMethodRep *dataMethodRep
pointer to the body (handle-body idiom)
Friends
• class ProblemDescDB
• class NIDRProblemDescDB
• void run_dakota_data ()
library_mode default data initializer
Handle class for method specification data. The DataMethod class is used to provide a memory management
handle for the data in DataMethodRep. It is populated by IDRProblemDescDB::method_kwhandler() and is
queried by the ProblemDescDB::get_<datatype>() functions. A list of DataMethod objects is maintained in
ProblemDescDB::dataMethodList, one for each method specification in an input file.
The documentation for this class was generated from the following files:
• DataMethod.hpp
• DataMethod.cpp
Public Attributes
• String idMethod
string identifier for the method specification data set (from the id_method specification in MethodIndControl)
• String modelPointer
string pointer to the model specification to be used by this method (from the model_pointer specification in
MethodIndControl)
• short methodOutput
method verbosity control: {SILENT,QUIET,NORMAL,VERBOSE,DEBUG}_OUTPUT (from the output specifi-
cation in MethodIndControl)
• int maxIterations
maximum number of iterations allowed for the method (from the max_iterations specification in MethodInd-
Control)
• int maxFunctionEvaluations
maximum number of function evaluations allowed for the method (from the max_function_evaluations
specification in MethodIndControl)
• bool speculativeFlag
flag for use of speculative gradient approaches for maintaining parallel load balance during the line search portion
of optimization algorithms (from the speculative specification in MethodIndControl)
• bool methodUseDerivsFlag
flag for usage of derivative data to enhance the computation of surrogate models (PCE/SC expansions, GP models
for EGO/EGRA/EGIE) based on the use_derivatives specification
• Real convergenceTolerance
iteration convergence tolerance for the method (from the convergence_tolerance specification in
MethodIndControl)
• Real constraintTolerance
tolerance for controlling the amount of infeasibility that is allowed before an active constraint is considered to be
violated (from the constraint_tolerance specification in MethodIndControl)
• bool methodScaling
flag indicating scaling status (from the scaling specification in MethodIndControl)
• size_t numFinalSolutions
number of final solutions returned from the iterator
• RealVector linearIneqConstraintCoeffs
coefficient matrix for the linear inequality constraints (from the linear_inequality_constraint_matrix specification in MethodIndControl)
• RealVector linearIneqLowerBnds
lower bounds for the linear inequality constraints (from the linear_inequality_lower_bounds specifi-
cation in MethodIndControl)
• RealVector linearIneqUpperBnds
upper bounds for the linear inequality constraints (from the linear_inequality_upper_bounds specifi-
cation in MethodIndControl)
• StringArray linearIneqScaleTypes
scaling types for the linear inequality constraints (from the linear_inequality_scale_types specifica-
tion in MethodIndControl)
• RealVector linearIneqScales
scaling factors for the linear inequality constraints (from the linear_inequality_scales specification in
MethodIndControl)
• RealVector linearEqConstraintCoeffs
coefficient matrix for the linear equality constraints (from the linear_equality_constraint_matrix
specification in MethodIndControl)
• RealVector linearEqTargets
targets for the linear equality constraints (from the linear_equality_targets specification in MethodInd-
Control)
• StringArray linearEqScaleTypes
scaling types for the linear equality constraints (from the linear_equality_scale_types specification in
MethodIndControl)
• RealVector linearEqScales
scaling factors for the linear equality constraints (from the linear_equality_scales specification in
MethodIndControl)
• String methodName
the method selection: one of the optimizer, least squares, nond, dace, or parameter study methods
• String subMethodName
string identifier for a sub-method within a multi-option method specification (e.g., from sub_method_name in
SBL/SBG, dace option, or richardson_extrap option)
• String subMethodPointer
string pointer for a sub-method specification used by a multi-component method (from the sub_method_pointer specification in SBL/SBG)
• int surrBasedLocalSoftConvLimit
number of consecutive iterations with change less than convergenceTolerance required to trigger convergence
within the surrogate-based local method (from the soft_convergence_limit specification in MethodSBL)
• bool surrBasedLocalLayerBypass
flag to indicate user-specification of a bypass of any/all layerings in evaluating truth response values in SBL.
• Real surrBasedLocalTRInitSize
initial trust region size in the surrogate-based local method (from the initial_size specification in Meth-
odSBL) note: this is a relative value, e.g., 0.1 = 10% of global bounds distance (upper bound - lower bound) for
each variable
• Real surrBasedLocalTRMinSize
minimum trust region size in the surrogate-based local method (from the minimum_size specification in MethodSBL); if the trust region size falls below this threshold, the SBL iterations are terminated (note: if kriging is used with SBL, the minimum trust region size is set to 1.0e-3 in an attempt to avoid the ill-conditioned matrices that arise in kriging over small trust regions)
• Real surrBasedLocalTRContractTrigger
trust region minimum improvement level (ratio of actual to predicted decrease in the objective function) in the surrogate-based local method (from the contract_threshold specification in MethodSBL); the trust region shrinks or is rejected if the ratio is below this value ("eta_1" in the Conn-Gould-Toint trust region book)
• Real surrBasedLocalTRExpandTrigger
trust region sufficient improvement level (ratio of actual to predicted decrease in the objective function) in the surrogate-based local method (from the expand_threshold specification in MethodSBL); the trust region expands if the ratio is above this value ("eta_2" in the Conn-Gould-Toint trust region book)
• Real surrBasedLocalTRContract
trust region contraction factor in the surrogate-based local method (from the contraction_factor specifica-
tion in MethodSBL)
• Real surrBasedLocalTRExpand
trust region expansion factor in the surrogate-based local method (from the expansion_factor specification
in MethodSBL)
• short surrBasedLocalSubProbObj
SBL approximate subproblem objective: ORIGINAL_PRIMARY, SINGLE_OBJECTIVE, LAGRANGIAN_OBJECTIVE, or AUGMENTED_LAGRANGIAN_OBJECTIVE.
• short surrBasedLocalSubProbCon
SBL approximate subproblem constraints: NO_CONSTRAINTS, LINEARIZED_CONSTRAINTS, or ORIGINAL_CONSTRAINTS.
• short surrBasedLocalMeritFn
SBL merit function type: BASIC_PENALTY, ADAPTIVE_PENALTY, BASIC_LAGRANGIAN, or AUGMENTED_LAGRANGIAN.
• short surrBasedLocalAcceptLogic
• short surrBasedLocalConstrRelax
SBL constraint relaxation method: NO_RELAX or HOMOTOPY.
• bool surrBasedGlobalReplacePts
user-specified method for adding points to the set upon which the next surrogate is based in the surrogate_based_global strategy.
• String dlDetails
string of options for a dynamically linked solver
• void *dlLib
handle to dynamically loaded library
• int verifyLevel
the verify_level specification in MethodNPSOLDC
• Real functionPrecision
the function_precision specification in MethodNPSOLDC and the EPSILON specification in NOMAD
• Real lineSearchTolerance
the linesearch_tolerance specification in MethodNPSOLDC
• Real absConvTol
absolute function convergence tolerance
• Real xConvTol
x-convergence tolerance
• Real singConvTol
singular convergence tolerance
• Real singRadius
radius for singular convergence test
• Real falseConvTol
false-convergence tolerance
• Real initTRRadius
initial trust radius
• int covarianceType
kind of covariance required
• bool regressDiag
• String searchMethod
the search_method specification for Newton and nonlinear interior-point methods in MethodOPTPPDC
• Real gradientTolerance
the gradient_tolerance specification in MethodOPTPPDC
• Real maxStep
the max_step specification in MethodOPTPPDC
• short meritFn
the merit_function specification for nonlinear interior-point methods in MethodOPTPPDC
• Real stepLenToBoundary
the steplength_to_boundary specification for nonlinear interior-point methods in MethodOPTPPDC
• Real centeringParam
the centering_parameter specification for nonlinear interior-point methods in MethodOPTPPDC
• int searchSchemeSize
the search_scheme_size specification for PDS methods in MethodOPTPPDC
• Real initStepLength
the initStepLength choice for nonlinearly constrained APPS in MethodAPPSDC
• Real contractStepLength
the contractStepLength choice for nonlinearly constrained APPS in MethodAPPSDC
• Real threshStepLength
the threshStepLength choice for nonlinearly constrained APPS in MethodAPPSDC
• String meritFunction
the meritFunction choice for nonlinearly constrained APPS in MethodAPPSDC
• Real constrPenalty
the constrPenalty choice for nonlinearly constrained APPS in MethodAPPSDC
• Real smoothFactor
the initial smoothFactor value for nonlinearly constrained APPS in MethodAPPSDC
• Real constraintPenalty
the initial constraint_penalty for COLINY methods in MethodAPPS, MethodSCOLIBDIR, Method-
SCOLIBPS, MethodSCOLIBSW and MethodSCOLIBEA
• bool constantPenalty
• Real globalBalanceParam
the global_balance_parameter for the DIRECT method in MethodSCOLIBDIR
• Real localBalanceParam
the local_balance_parameter for the DIRECT method in MethodSCOLIBDIR
• Real maxBoxSize
the max_boxsize_limit for the DIRECT method in MethodSCOLIBDIR
• Real minBoxSize
the min_boxsize_limit for the DIRECT method in MethodSCOLIBDIR and MethodNCSUDC
• String boxDivision
the division setting (major_dimension or all_dimensions) for the DIRECT method in MethodSCOL-
IBDIR
• bool mutationAdaptive
the non_adaptive specification for the coliny_ea method in MethodSCOLIBEA
• bool showMiscOptions
the show_misc_options specification in MethodSCOLIBDC
• StringArray miscOptions
the misc_options specification in MethodSCOLIBDC
• Real solnTarget
the solution_target specification in MethodSCOLIBDC
• Real crossoverRate
the crossover_rate specification for EA methods in MethodSCOLIBEA
• Real mutationRate
the mutation_rate specification for EA methods in MethodSCOLIBEA
• Real mutationScale
the mutation_scale specification for EA methods in MethodSCOLIBEA
• Real mutationMinScale
the min_scale specification for mutation in EA methods in MethodSCOLIBEA
• Real initDelta
the initial_delta specification for APPS/COBYLA/PS/SW methods in MethodAPPS, MethodSCOLIBCOB,
MethodSCOLIBPS, and MethodSCOLIBSW
• Real threshDelta
the threshold_delta specification for APPS/COBYLA/PS/SW methods in MethodAPPS, MethodSCOLIB-
COB, MethodSCOLIBPS, and MethodSCOLIBSW
• Real contractFactor
the contraction_factor specification for APPS/PS/SW methods in MethodAPPS, MethodSCOLIBPS, and
MethodSCOLIBSW
• int newSolnsGenerated
the new_solutions_generated specification for GA/EPSA methods in MethodSCOLIBEA
• int numberRetained
the integer assignment to random, chc, or elitist in the replacement_type specification for GA/EPSA methods
in MethodSCOLIBEA
• bool expansionFlag
the no_expansion specification for APPS/PS/SW methods in MethodAPPS, MethodSCOLIBPS, and Meth-
odSCOLIBSW
• int expandAfterSuccess
the expand_after_success specification for PS/SW methods in MethodSCOLIBPS and MethodSCOL-
IBSW
• int contractAfterFail
the contract_after_failure specification for the SW method in MethodSCOLIBSW
• int mutationRange
the mutation_range specification for the pga_int method in MethodSCOLIBEA
• int totalPatternSize
the total_pattern_size specification for PS methods in MethodSCOLIBPS
• bool randomizeOrderFlag
the stochastic specification for the PS method in MethodSCOLIBPS
• String selectionPressure
the fitness_type specification for EA methods in MethodSCOLIBEA
• String replacementType
the replacement_type specification for EA methods in MethodSCOLIBEA
• String crossoverType
the crossover_type specification for EA methods in MethodSCOLIBEA
• String mutationType
the mutation_type specification for EA methods in MethodSCOLIBEA
• String exploratoryMoves
the exploratory_moves specification for the PS method in MethodSCOLIBPS
• String patternBasis
the pattern_basis specification for APPS/PS methods in MethodAPPS and MethodSCOLIBPS
• String betaSolverName
beta solvers don’t need documentation
• String evalSynchronize
the synchronization setting for parallel pattern search methods in MethodSCOLIBPS and MethodAPPS
• size_t numCrossPoints
The number of crossover points or multi-point schemes.
• size_t numParents
The number of parents to use in a crossover operation.
• size_t numOffspring
The number of children to produce in a crossover operation.
• String fitnessType
the fitness assessment operator to use.
• String convergenceType
The means by which this JEGA should converge.
• Real percentChange
The minimum percent change before convergence for a fitness tracker converger.
• size_t numGenerations
The number of generations over which a fitness tracker converger should track.
• Real fitnessLimit
The cutoff value for survival in fitness limiting selectors (e.g., below_limit selector).
• Real shrinkagePercent
The minimum percentage (a fraction in (0, 1)) of the requested number of selections that must take place on each call to the selector.
• String nichingType
The niching type.
• RealVector nicheVector
The discretization percentage along each objective.
• size_t numDesigns
The maximum number of designs to keep when using the max_designs nicher.
• String postProcessorType
The post processor type.
• RealVector distanceVector
The discretization percentage along each objective.
• String initializationType
The means by which the JEGA should initialize the population.
• String flatFile
The filename to use for initialization.
• String logFile
The filename to use for logging.
• int populationSize
the population_size specification for GA methods in MethodSCOLIBEA
• bool printPopFlag
The print_each_pop flag to set the printing of the population at each generation.
• Real volBoxSize
the volume_boxsize_limit for the DIRECT method in MethodNCSUDC
• int numSymbols
the symbols specification for DACE methods
• bool mainEffectsFlag
the main_effects specification for sampling methods in MethodDDACE
• bool latinizeFlag
the latinize specification for FSU QMC and CVT methods in MethodFSUDACE
• bool volQualityFlag
the quality_metrics specification for sampling methods (FSU QMC and CVT methods in MethodFSU-
DACE)
• IntVector sequenceStart
the sequenceStart specification in MethodFSUDACE
• IntVector sequenceLeap
the sequenceLeap specification in MethodFSUDACE
• IntVector primeBase
the primeBase specification in MethodFSUDACE
• int numTrials
the numTrials specification in MethodFSUDACE
• String trialType
the trial_type specification in MethodFSUDACE
• int randomSeed
the seed specification for COLINY, NonD, & DACE methods
• String historyFile
the HISTORY_FILE specification for NOMAD
• String displayFormat
the DISPLAY_STATS specification for NOMAD
• Real vns
the VNS specification for NOMAD
• bool showAllEval
the DISPLAY_ALL_EVAL specification for NOMAD
• int numSamples
the samples specification for NonD & DACE methods
• bool fixedSeedFlag
flag for fixing the value of the seed among different NonD/DACE sample sets. This results in the use of the same
sampling stencil/pattern throughout a strategy with repeated sampling.
• bool fixedSequenceFlag
flag for fixing the sequence for Halton or Hammersley QMC sample sets. This results in the use of the same
sampling stencil/pattern throughout a strategy with repeated sampling.
• int previousSamples
the number of previous samples when augmenting a LHS sample
• bool vbdFlag
the var_based_decomp specification for a variety of sampling methods
• Real vbdDropTolerance
the var_based_decomp tolerance for omitting index output
• short covarianceControl
restrict the calculation of a full response covariance matrix for high dimensional outputs:
{DEFAULT,DIAGONAL,FULL}_COVARIANCE
• String rngName
the basic random-number generator for NonD
• short refinementType
refinement type for stochastic expansions from dimension refinement keyword group
• short refinementControl
refinement control for stochastic expansions from dimension refinement keyword group
• short nestingOverride
override for default point nesting policy: NO_NESTING_OVERRIDE, NESTED, or NON_NESTED
• short growthOverride
override for default point growth restriction policy: NO_GROWTH_OVERRIDE, RESTRICTED, or UNRE-
STRICTED
• short expansionType
enumeration for u-space type that defines u-space variable targets for probability space transformations:
EXTENDED_U (default), ASKEY_U, STD_NORMAL_U, or STD_UNIFORM_U
• bool piecewiseBasis
boolean indicating presence of piecewise keyword
• short sparseGridBasisType
enumeration for type of basis in sparse grid interpolation: DEFAULT_INTERPOLANT, NODAL_INTERPOLANT,
or HIERARCHICAL_INTERPOLANT
• UShortArray expansionOrder
the expansion_order specification in MethodNonDPCE
• SizetArray expansionSamples
the expansion_samples specification in MethodNonDPCE
• String expansionSampleType
allows for incremental PCE construction using the incremental_lhs specification in MethodNonDPCE
• UShortArray quadratureOrder
the quadrature_order specification in MethodNonDPCE and MethodNonDSC
• UShortArray sparseGridLevel
the sparse_grid_level specification in MethodNonDPCE, MethodNonDSC, and other stochastic
expansion-enabled methods
• RealVector anisoDimPref
the dimension_preference specification for tensor and sparse grids and expansion orders in MethodNonD-
PCE and MethodNonDSC
• SizetArray collocationPoints
the collocation_points specification in MethodNonDPCE
• Real collocationRatio
the collocation_ratio specification in MethodNonDPCE
• Real collocRatioTermsOrder
order applied to the number of expansion terms when applying or computing the collocation ratio within regression
PCE; based on the ratio_order specification in MethodNonDPCE
• short regressionType
type of regression: LS, OMP, BP, BPDN, LARS, or LASSO
• short lsRegressionType
type of least squares regression: SVD or EQ_CON_QR
• RealVector regressionNoiseTol
noise tolerance(s) for OMP, BPDN, LARS, and LASSO
• Real regressionL2Penalty
L2 regression penalty for a variant of LASSO known as the elastic net method (default of 0 gives standard LASSO).
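The elastic net variant described above combines the LASSO (L1) and ridge (L2) penalties. In the standard formulation (not copied from Dakota's source), with coefficient vector $\alpha$, measurement matrix $\Psi$, and data $d$, the regression solves

```latex
\min_{\alpha} \; \|\Psi\alpha - d\|_2^2 \;+\; \lambda_1 \|\alpha\|_1 \;+\; \lambda_2 \|\alpha\|_2^2
```

Setting $\lambda_2 = 0$ (the default for regressionL2Penalty) recovers standard LASSO, while $\lambda_1 = 0$ gives ridge regression.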
• bool crossValidation
flag indicating the use of cross-validation across expansion orders (given a prescribed maximum order) and, for
some methods, noise tolerances
• bool normalizedCoeffs
flag indicating the output of PCE coefficients corresponding to normalized basis polynomials
• String pointReuse
allows PCE construction to reuse points from previous sample sets or data import using the reuse_points
specification in MethodNonDPCE
• bool tensorGridFlag
flag for usage of a sub-sampled set of tensor-product grid points within regression PCE; based on the tensor_grid specification in MethodNonDPCE
• UShortArray tensorGridOrder
order of tensor-product grid points that are sub-sampled within orthogonal least interpolation PCE; based on the
tensor_grid specification in MethodNonDPCE
• String expansionImportFile
the expansion_import_file specification in MethodNonDPCE
• String sampleType
the sample_type specification in MethodNonDMC, MethodNonDPCE, and MethodNonDSC
• String reliabilitySearchType
the type of limit state search in MethodNonDLocalRel (x_taylor_mean, x_taylor_mpp, x_two_point, u_taylor_mean, u_taylor_mpp, u_two_point, or no_approx) or MethodNonDGlobalRel (x_gaussian_process or u_gaussian_process)
• String reliabilityIntegration
the first_order or second_order integration selection in MethodNonDLocalRel
• String integrationRefine
the import, adapt_import, or mm_adapt_import integration refinement selection in MethodNonDLocal-
Rel, MethodNonDPCE, and MethodNonDSC
• String nondOptAlgorithm
the algorithm selection sqp or nip used for computing the MPP in MethodNonDLocalRel or the interval in
MethodNonDLocalIntervalEst
• short distributionType
the distribution cumulative or complementary specification in MethodNonD
• short responseLevelTarget
the compute probabilities, reliabilities, or gen_reliabilities specification in MethodNonD
• short responseLevelTargetReduce
the system series or parallel specification in MethodNonD
• RealVectorArray responseLevels
the response_levels specification in MethodNonD
• RealVectorArray probabilityLevels
the probability_levels specification in MethodNonD
• RealVectorArray reliabilityLevels
the reliability_levels specification in MethodNonD
• RealVectorArray genReliabilityLevels
the gen_reliability_levels specification in MethodNonD
• int emulatorSamples
the number of samples to construct a GP emulator for Bayesian calibration methods (MethodNonDBayesCalib)
• short emulatorType
the emulator specification in MethodNonDBayesCalib
• String rejectionType
the rejection type specification in MethodNonDBayesCalib
• String metropolisType
the metropolis type specification in MethodNonDBayesCalib
• RealVector proposalCovScale
the proposal covariance scale factor in MethodNonDBayesCalib
• Real likelihoodScale
the likelihood scale factor in MethodNonDBayesCalib
• String fitnessMetricType
the fitness metric type specification in MethodNonDAdaptive
• String batchSelectionType
the batch selection type specification in MethodNonDAdaptive
• int batchSize
The size of the batch (i.e., the number of supplemental points) added to the build points for an emulator at each iteration.
• bool calibrateSigmaFlag
flag to indicate if the sigma terms should be calibrated in MethodNonDBayesCalib
• int numChains
number of concurrent chains
• int numCR
number of CR-factors
• int crossoverChainPairs
number of crossover chain pairs
• Real grThreshold
threshold for the Gelman-Rubin statistic
• int jumpStep
how often to perform a long jump in generations
• RealVector finalPoint
• RealVector stepVector
the step_vector specification in MethodPSVPS and MethodPSCPS
• int numSteps
the num_steps specification in MethodPSVPS
• IntVector stepsPerVariable
the deltas_per_variable specification in MethodPSCPS
• RealVector listOfPoints
the list_of_points specification in MethodPSLPS
• String pstudyFilename
the import_points_file spec for a file-based parameter study
• bool pstudyFileAnnotated
whether the parameter study points file is annotated
• UShortArray varPartitions
the partitions specification for PStudy method in MethodPSMPS
• Real refinementRate
rate of mesh refinement in Richardson extrapolation
• String approxImportFile
the file name for point import in surrogate-based methods
• bool approxImportAnnotated
whether the point import file is annotated (default true)
• String approxExportFile
the file name for point export in surrogate-based methods
• bool approxExportAnnotated
whether the point export file is annotated (default true)
• ~DataMethodRep ()
destructor
Private Attributes
• int referenceCount
number of handle objects sharing this dataMethodRep
Friends
• class DataMethod
the handle class can access attributes of the body class directly
Body class for method specification data. The DataMethodRep class is used to contain the data from a method key-
word specification. Default values are managed in the DataMethodRep constructor. Data is public to avoid main-
taining set/get functions, but is still encapsulated within ProblemDescDB since ProblemDescDB::dataMethodList
is private.
The documentation for this class was generated from the following files:
• DataMethod.hpp
• DataMethod.cpp
• ~DataModel ()
destructor
Private Attributes
• DataModelRep *dataModelRep
pointer to the body (handle-body idiom)
Friends
• class ProblemDescDB
• class NIDRProblemDescDB
Handle class for model specification data. The DataModel class is used to provide a memory management han-
dle for the data in DataModelRep. It is populated by IDRProblemDescDB::model_kwhandler() and is queried
by the ProblemDescDB::get_<datatype>() functions. A list of DataModel objects is maintained in ProblemDe-
scDB::dataModelList, one for each model specification in an input file.
The documentation for this class was generated from the following files:
• DataModel.hpp
• DataModel.cpp
Public Attributes
• String idModel
string identifier for the model specification data set (from the id_model specification in ModelIndControl)
• String modelType
model type selection: single, surrogate, or nested (from the model type specification in ModelIndControl)
• String variablesPointer
string pointer to the variables specification to be used by this model (from the variables_pointer specifica-
tion in ModelIndControl)
• String interfacePointer
string pointer to the interface specification to be used by this model (from the interface_pointer specification
in ModelSingle and the optional_interface_pointer specification in ModelNested)
• String responsesPointer
string pointer to the responses specification to be used by this model (from the responses_pointer specifica-
tion in ModelIndControl)
• bool hierarchicalTags
whether this model and its children will add hierarchy-based tags to eval ids
• String subMethodPointer
pointer to a sub-iterator used for global approximations (from the dace_method_pointer specification in
ModelSurrG) or by nested models (from the sub_method_pointer specification in ModelNested)
• IntSet surrogateFnIndices
array specifying the response function set that is approximated
• String surrogateType
the selected surrogate type: local_taylor, multipoint_tana, global_(neural_network, mars, orthogonal_polynomial, gaussian, polynomial, kriging), or hierarchical
• String truthModelPointer
pointer to the model specification for constructing the truth model used in building local, multipoint, and hierarchical approximations (from the actual_model_pointer specification in ModelSurrL and ModelSurrMP and the high_fidelity_model_pointer specification in ModelSurrH)
• String lowFidelityModelPointer
pointer to the low fidelity model specification used in hierarchical approximations (from the low_fidelity_model_pointer specification in ModelSurrH)
• int pointsTotal
user-specified lower bound on total points with which to build the model (if reuse_points < pointsTotal, new samples
will make up the difference)
• short pointsManagement
points management configuration for DataFitSurrModel: DEFAULT_POINTS, MINIMUM_POINTS, or
RECOMMENDED_POINTS
• String approxPointReuse
sample reuse selection for building global approximations: none, all, region, or file (from the reuse_samples
specification in ModelSurrG)
• String approxImportFile
the file name from the import_points_file specification in ModelSurrG
• bool approxImportAnnotated
whether the point import file is annotated (default true)
• String approxExportFile
the file name from the export_points_file specification in ModelSurrG
• bool approxExportAnnotated
whether the point export file is annotated (default true)
• String approxExportModelFile
the file name from the export_model_file specification in ModelSurrG
• short approxCorrectionType
correction type for global and hierarchical approximations: NO_CORRECTION, ADDITIVE_CORRECTION,
MULTIPLICATIVE_CORRECTION, or COMBINED_CORRECTION (from the correction specification in
ModelSurrG and ModelSurrH)
• short approxCorrectionOrder
correction order for global and hierarchical approximations: 0, 1, or 2 (from the correction specification in
ModelSurrG and ModelSurrH)
• bool modelUseDerivsFlag
flags the use of derivatives in building global approximations (from the use_derivatives specification in
ModelSurrG)
• short polynomialOrder
scalar integer indicating the order of the polynomial approximation (1=linear, 2=quadratic, 3=cubic; from the
polynomial specification in ModelSurrG)
• RealVector krigingCorrelations
vector of correlations used in building a kriging approximation (from the correlations specification in Mod-
elSurrG)
• String krigingOptMethod
optimization method to use in finding optimal correlation parameters: none, sampling, local, global
• short krigingMaxTrials
maximum number of trials in optimization of kriging correlations
• RealVector krigingMaxCorrelations
upper bound on kriging correlation vector
• RealVector krigingMinCorrelations
lower bound on kriging correlation vector
• Real krigingNugget
nugget value for kriging
• short krigingFindNugget
option to have Kriging find the best nugget value to use
• short mlsPolyOrder
polynomial order for moving least squares approximation
• short mlsWeightFunction
weight function for moving least squares approximation
• short rbfBases
bases for radial basis function approximation
• short rbfMaxPts
maximum number of points for radial basis function approximation
• short rbfMaxSubsets
maximum number of subsets for radial basis function approximation
• short rbfMinPartition
minimum partition for radial basis function approximation
• short marsMaxBases
maximum number of bases for MARS approximation
• String marsInterpolation
interpolation type for MARS approximation
• short annRandomWeight
random weight for artificial neural network approximation
• short annNodes
number of nodes for artificial neural network approximation
• Real annRange
range for artificial neural network approximation
• String trendOrder
string indicating the order of the Gaussian process trend (0=constant, 1=linear, 2=quadratic, 3=cubic; from the
gaussian_process specification in ModelSurrG)
• bool pointSelection
flag indicating the use of point selection in the Gaussian process
• StringArray diagMetrics
List of diagnostic metrics the user requests to assess the goodness of fit for a surrogate model.
• bool crossValidateFlag
flag indicating the use of cross validation on the metrics specified
• int numFolds
number of folds to perform in cross validation
• Real percentFold
percentage of data to withhold for cross validation process
• bool pressFlag
flag indicating the use of PRESS on the metrics specified
• String approxChallengeFile
the file name from the challenge_points_file specification in ModelSurrG
• bool approxChallengeAnnotated
whether the challenge data file is annotated (default true)
• String optionalInterfRespPointer
string pointer to the responses specification used by the optional interface in nested models (from the optional_-
interface_responses_pointer specification in ModelNested)
• StringArray primaryVarMaps
the primary variable mappings used in nested models for identifying the lower level variable targets for inserting
top level variable values (from the primary_variable_mapping specification in ModelNested)
• StringArray secondaryVarMaps
the secondary variable mappings used in nested models for identifying the (distribution) parameter targets within
the lower level variables for inserting top level variable values (from the secondary_variable_mapping
specification in ModelNested)
• RealVector primaryRespCoeffs
the primary response mapping matrix used in nested models for weighting contributions from the sub-iterator
responses in the top level (objective) functions (from the primary_response_mapping specification in Mod-
elNested)
• RealVector secondaryRespCoeffs
the secondary response mapping matrix used in nested models for weighting contributions from the sub-iterator
responses in the top level (constraint) functions (from the secondary_response_mapping specification in
ModelNested)
• DataModelRep ()
constructor
• ∼DataModelRep ()
destructor
Private Attributes
• int referenceCount
number of handle objects sharing this dataModelRep
Friends
• class DataModel
the handle class can access attributes of the body class directly
Body class for model specification data. The DataModelRep class is used to contain the data from a model key-
word specification. Default values are managed in the DataModelRep constructor. Data is public to avoid main-
taining set/get functions, but is still encapsulated within ProblemDescDB since ProblemDescDB::dataModelList
is private.
The documentation for this class was generated from the following files:
• DataModel.hpp
• DataModel.cpp
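The reference-counted handle-body (letter-envelope) idiom used by DataModel/DataModelRep above can be sketched as follows. This is a minimal, simplified illustration of the pattern, not the actual Dakota implementation; the class and member names here are hypothetical stand-ins.

```cpp
#include <cassert>

// Body class: holds the data and a count of sharing handles
// (analogous to DataModelRep::referenceCount).
class DataModelRepSketch {
  friend class DataModelSketch;   // the handle accesses body attributes directly
  int referenceCount;             // number of handle objects sharing this body
  DataModelRepSketch() : referenceCount(1) {}
public:
  int count() const { return referenceCount; }
};

// Handle class: manages the body's lifetime (analogous to DataModel).
class DataModelSketch {
  DataModelRepSketch* rep;        // pointer to the shared body
public:
  DataModelSketch() : rep(new DataModelRepSketch) {}
  DataModelSketch(const DataModelSketch& other) : rep(other.rep) {
    ++rep->referenceCount;        // copies share the body; no deep copy
  }
  DataModelSketch& operator=(const DataModelSketch& other) {
    if (rep != other.rep) {
      if (--rep->referenceCount == 0) delete rep;
      rep = other.rep;
      ++rep->referenceCount;
    }
    return *this;
  }
  ~DataModelSketch() {            // last handle out deletes the body
    if (--rep->referenceCount == 0) delete rep;
  }
  int body_count() const { return rep->count(); }
};
```

This keeps copies of the handle cheap (a pointer copy plus an increment) while the specification data itself lives in a single shared body.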
• ∼DataResponses ()
destructor
Private Attributes
• DataResponsesRep ∗ dataRespRep
pointer to the body (handle-body idiom)
Friends
• class ProblemDescDB
• class NIDRProblemDescDB
• void run_dakota_data ()
library_mode default data initializer
Handle class for responses specification data. The DataResponses class is used to provide a memory management
handle for the data in DataResponsesRep. It is populated by IDRProblemDescDB::responses_kwhandler() and is
queried by the ProblemDescDB::get_<datatype>() functions. A list of DataResponses objects is maintained in
ProblemDescDB::dataResponsesList, one for each responses specification in an input file.
The documentation for this class was generated from the following files:
• DataResponses.hpp
• DataResponses.cpp
Public Attributes
• String idResponses
string identifier for the responses specification data set (from the id_responses specification in RespSetId)
• StringArray responseLabels
the response labels array (from the response_descriptors specification in RespLabels)
• size_t numObjectiveFunctions
number of objective functions (from the num_objective_functions specification in RespFnOpt)
• size_t numNonlinearIneqConstraints
number of nonlinear inequality constraints (from the num_nonlinear_inequality_constraints speci-
fication in RespFnOpt)
• size_t numNonlinearEqConstraints
number of nonlinear equality constraints (from the num_nonlinear_equality_constraints specification
in RespFnOpt)
• size_t numLeastSqTerms
number of least squares terms (from the num_least_squares_terms specification in RespFnLS)
• size_t numResponseFunctions
number of generic response functions (from the num_response_functions specification in RespFnGen)
• StringArray primaryRespFnSense
optimization sense for each objective function: minimize or maximize
• RealVector primaryRespFnWeights
vector of weightings for multiobjective optimization or weighted nonlinear least squares (from the multi_-
objective_weights specification in RespFnOpt and the least_squares_weights specification in Re-
spFnLS)
• RealVector nonlinearIneqLowerBnds
vector of nonlinear inequality constraint lower bounds (from the nonlinear_inequality_lower_bounds
specification in RespFnOpt)
• RealVector nonlinearIneqUpperBnds
vector of nonlinear inequality constraint upper bounds (from the nonlinear_inequality_upper_bounds
specification in RespFnOpt)
• RealVector nonlinearEqTargets
vector of nonlinear equality constraint targets (from the nonlinear_equality_targets specification in
RespFnOpt)
• StringArray primaryRespFnScaleTypes
vector of primary response function scaling types (from the objective_function_scale_types specifica-
tion in RespFnOpt and the least_squares_term_scale_types specification in RespFnLS)
• RealVector primaryRespFnScales
vector of primary response function scaling factors (from the objective_function_scales specification in
RespFnOpt and the least_squares_term_scales specification in RespFnLS)
• StringArray nonlinearIneqScaleTypes
vector of nonlinear inequality constraint scaling types (from the nonlinear_inequality_scale_types
specification in RespFnOpt)
• RealVector nonlinearIneqScales
vector of nonlinear inequality constraint scaling factors (from the nonlinear_inequality_scales specifi-
cation in RespFnOpt)
• StringArray nonlinearEqScaleTypes
vector of nonlinear equality constraint scaling types (from the nonlinear_equality_scale_types speci-
fication in RespFnOpt)
• RealVector nonlinearEqScales
vector of nonlinear equality constraint scaling factors (from the nonlinear_equality_scales specification
in RespFnOpt)
• size_t numExperiments
number of distinct experiments in experimental data
• IntVector numReplicates
number of replicates in experimental data (e.g. one experiment run many times at the same configuration gives
replicates)
• size_t numExpConfigVars
number of experimental configuration vars (state variables) in each row of data
• size_t numExpStdDeviations
number of standard deviations (up to num_responses) to read from each row of the data file
• RealVector expConfigVars
list of num_experiments x num_config_vars configuration variable values
• RealVector expObservations
list of num_calibration_terms observation data
• RealVector expStdDeviations
list of standard deviations associated with the experimental observation data
• String expDataFileName
name of experimental data file containing response data (with optional state variable and sigma data) to read
• bool expDataFileAnnotated
whether the experimental data is in annotated format
• String gradientType
gradient type: none, numerical, analytic, or mixed (from the no_gradients, numerical_gradients,
analytic_gradients, and mixed_gradients specifications in RespGrad)
• String hessianType
Hessian type: none, numerical, quasi, analytic, or mixed (from the no_hessians, numerical_hessians,
quasi_hessians, analytic_hessians, and mixed_hessians specifications in RespHess).
• bool ignoreBounds
option to ignore bounds when doing finite differences (default is to honor bounds)
• bool centralHess
Temporary(?) option to use old 2nd-order diffs when computing finite-difference Hessians; default is forward
differences.
• String quasiHessianType
quasi-Hessian type: bfgs, damped_bfgs, or sr1 (from the bfgs and sr1 specifications in RespHess)
• String methodSource
numerical gradient method source: dakota or vendor (from the method_source specification in RespGradNum
and RespGradMixed)
• String intervalType
numerical gradient interval type: forward or central (from the interval_type specification in RespGradNum
and RespGradMixed)
• RealVector fdGradStepSize
vector of finite difference step sizes for numerical gradients, one step size per active continuous variable, used
in computing 1st-order forward or central differences (from the fd_gradient_step_size specification in
RespGradNum and RespGradMixed)
• String fdGradStepType
type of finite difference step to use for numerical gradients: relative (step length is relative to x), absolute (step
length is exactly as specified), or bounds (step length is relative to the range of x)
• RealVector fdHessStepSize
vector of finite difference step sizes for numerical Hessians, one step size per active continuous variable, used
in computing 1st-order gradient-based differences and 2nd-order function-based differences (from the fd_-
hessian_step_size specification in RespHessNum and RespHessMixed)
• String fdHessStepType
type of finite difference step to use for numerical Hessians: relative (step length is relative to x), absolute (step
length is exactly as specified), or bounds (step length is relative to the range of x)
• IntSet idNumericalGrads
mixed gradient numerical identifiers (from the id_numerical_gradients specification in RespGradMixed)
• IntSet idAnalyticGrads
mixed gradient analytic identifiers (from the id_analytic_gradients specification in RespGradMixed)
• IntSet idNumericalHessians
mixed Hessian numerical identifiers (from the id_numerical_hessians specification in RespHessMixed)
• IntSet idQuasiHessians
mixed Hessian quasi identifiers (from the id_quasi_hessians specification in RespHessMixed)
• IntSet idAnalyticHessians
mixed Hessian analytic identifiers (from the id_analytic_hessians specification in RespHessMixed)
• DataResponsesRep ()
constructor
• ∼DataResponsesRep ()
destructor
Private Attributes
• int referenceCount
number of handle objects sharing this dataResponsesRep
Friends
• class DataResponses
the handle class can access attributes of the body class directly
Body class for responses specification data. The DataResponsesRep class is used to contain the data from a
responses keyword specification. Default values are managed in the DataResponsesRep constructor. Data is
public to avoid maintaining set/get functions, but is still encapsulated within ProblemDescDB since ProblemDe-
scDB::dataResponsesList is private.
The documentation for this class was generated from the following files:
• DataResponses.hpp
• DataResponses.cpp
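The fdGradStepType and fdHessStepType options listed above can be sketched as a step-size computation. This is an illustrative sketch only, not the actual Dakota finite-difference routine; the function name and the guard against |x| near zero are assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <string>

// Sketch: form a per-variable finite-difference step from the
// fd_gradient_step_size value h, given the step type, the current
// variable value x, and its lower/upper bounds.
double fd_step(const std::string& stepType, double h,
               double x, double lower, double upper) {
  if (stepType == "absolute")
    return h;                        // step length is exactly as specified
  if (stepType == "bounds")
    return h * (upper - lower);      // step relative to the variable's range
  // "relative": step scales with |x|; guard keeps the step nonzero at x = 0
  return h * std::max(std::fabs(x), 1.0);
}
```

For example, with h = 1e-4 a "bounds" step over the range [0, 10] gives 1e-3, while a "relative" step at x = 100 gives 1e-2.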
• ∼DataStrategy ()
destructor
Private Attributes
• DataStrategyRep ∗ dataStratRep
pointer to the body (handle-body idiom)
Friends
• class ProblemDescDB
• class NIDRProblemDescDB
Handle class for strategy specification data. The DataStrategy class is used to provide a memory management
handle for the data in DataStrategyRep. It is populated by IDRProblemDescDB::strategy_kwhandler() and is
queried by the ProblemDescDB::get_<datatype>() functions. A single DataStrategy object is maintained in
ProblemDescDB::strategySpec.
The documentation for this class was generated from the following files:
• DataStrategy.hpp
• DataStrategy.cpp
Public Attributes
• String strategyType
the strategy selection: hybrid, multi_start, pareto_set, or single_method
• bool graphicsFlag
flags use of graphics by the strategy (from the graphics specification in StratIndControl)
• bool tabularDataFlag
flags tabular data collection by the strategy (from the tabular_graphics_data specification in StratIndControl)
• String tabularDataFile
the filename used for tabular data collection by the strategy (from the tabular_graphics_file specification in
StratIndControl)
• int outputPrecision
output precision for tabular and screen output
• bool resultsOutputFlag
flags use of results output to default file
• String resultsOutputFile
named file for results output
• int iteratorServers
number of servers for concurrent iterator parallelism (from the iterator_servers specification in StratIndControl)
• String iteratorScheduling
type of scheduling (self or static) used in concurrent iterator parallelism (from the iterator_self_-
scheduling and iterator_static_scheduling specifications in StratIndControl)
• String methodPointer
method identifier for the strategy (from the opt_method_pointer specification in StratParetoSet and the
method_pointer specification in StratSingle and StratMultiStart)
• StringArray hybridMethodList
array of methods for the sequential and collaborative hybrid optimization strategies (from the method_list
specification in StratHybrid)
• String hybridType
the type of hybrid optimization strategy: collaborative, embedded, sequential, or sequential_adaptive (from the
collaborative, embedded, and sequential specifications in StratHybrid)
• String hybridGlobalMethodPointer
global method pointer for embedded hybrids (from the global_method_pointer specification in StratHy-
brid)
• String hybridLocalMethodPointer
local method pointer for embedded hybrids (from the local_method_pointer specification in StratHybrid)
• Real hybridLSProb
local search probability for embedded hybrids (from the local_search_probability specification in
StratHybrid)
• int concurrentRandomJobs
number of random jobs to perform in the concurrent strategy (from the random_starts and random_-
weight_sets specifications in StratMultiStart and StratParetoSet)
• int concurrentSeed
seed for the selected random jobs within the concurrent strategy (from the seed specification in StratMultiStart
and StratParetoSet)
• RealVector concurrentParameterSets
user-specified (i.e., nonrandom) parameter sets to evaluate in the concurrent strategy (from the starting_-
points and multi_objective_weight_sets specifications in StratMultiStart and StratParetoSet)
• ∼DataStrategyRep ()
destructor
Private Attributes
• int referenceCount
number of handle objects sharing this dataStrategyRep
Friends
• class DataStrategy
the handle class can access attributes of the body class directly
Body class for strategy specification data. The DataStrategyRep class is used to contain the data from the
strategy keyword specification. Default values are managed in the DataStrategyRep constructor. Data is pub-
lic to avoid maintaining set/get functions, but is still encapsulated within ProblemDescDB since ProblemDe-
scDB::strategySpec is private.
The documentation for this class was generated from the following files:
• DataStrategy.hpp
• DataStrategy.cpp
• ∼DataVariables ()
destructor
• size_t design ()
return total number of design variables
• size_t aleatory_uncertain ()
return total number of aleatory uncertain variables
• size_t epistemic_uncertain ()
return total number of epistemic uncertain variables
• size_t uncertain ()
return total number of uncertain variables
• size_t state ()
return total number of state variables
• size_t continuous_variables ()
return total number of continuous variables
• size_t discrete_variables ()
return total number of discrete variables
• size_t total_variables ()
return total number of variables
Private Attributes
• DataVariablesRep ∗ dataVarsRep
pointer to the body (handle-body idiom)
Friends
• class ProblemDescDB
• class NIDRProblemDescDB
• void run_dakota_data ()
library_mode default data initializer
Handle class for variables specification data. The DataVariables class is used to provide a memory management
handle for the data in DataVariablesRep. It is populated by IDRProblemDescDB::variables_kwhandler() and is
queried by the ProblemDescDB::get_<datatype>() functions. A list of DataVariables objects is maintained in
ProblemDescDB::dataVariablesList, one for each variables specification in an input file.
The documentation for this class was generated from the following files:
• DataVariables.hpp
• DataVariables.cpp
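The DataVariables count helpers above (design(), uncertain(), total_variables(), etc.) simply aggregate the per-type counts held in the body class. A minimal sketch of that aggregation, using simplified hypothetical field names rather than the full set of per-distribution counters listed below:

```cpp
#include <cstddef>

// Illustrative aggregation of per-type variable counts, mirroring the
// DataVariables count helpers; the fields are simplified stand-ins for
// the per-distribution counters (numNormalUncVars, etc.) in the body.
struct VarCountsSketch {
  std::size_t numContinuousDesign = 0, numDiscreteDesign = 0;
  std::size_t numAleatoryUncertain = 0, numEpistemicUncertain = 0;
  std::size_t numContinuousState = 0, numDiscreteState = 0;

  std::size_t design() const
  { return numContinuousDesign + numDiscreteDesign; }
  std::size_t uncertain() const          // aleatory + epistemic
  { return numAleatoryUncertain + numEpistemicUncertain; }
  std::size_t state() const
  { return numContinuousState + numDiscreteState; }
  std::size_t total_variables() const    // design + uncertain + state
  { return design() + uncertain() + state(); }
};
```

Keeping these as derived sums rather than stored totals means the counts can never fall out of sync with the underlying specification data.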
Public Attributes
• String idVariables
string identifier for the variables specification data set (from the id_variables specification in VarSetId)
• short varsView
user selection/override of variables view: {DEFAULT, ALL, DESIGN, UNCERTAIN, ALEATORY_UNCERTAIN,
EPISTEMIC_UNCERTAIN, STATE}_VIEW
• short varsDomain
user selection/override of variables domain: {DEFAULT,MIXED,RELAXED}_DOMAIN
• size_t numContinuousDesVars
number of continuous design variables (from the continuous_design specification in VarDV)
• size_t numDiscreteDesRangeVars
number of discrete design variables defined by an integer range (from the discrete_design_range specifi-
cation in VarDV)
• size_t numDiscreteDesSetIntVars
number of discrete design variables defined by a set of integers (from the discrete_design_set_integer
specification in VarDV)
• size_t numDiscreteDesSetRealVars
number of discrete design variables defined by a set of reals (from the discrete_design_set_real specifi-
cation in VarDV)
• size_t numNormalUncVars
number of normal uncertain variables (from the normal_uncertain specification in VarAUV)
• size_t numLognormalUncVars
number of lognormal uncertain variables (from the lognormal_uncertain specification in VarAUV)
• size_t numUniformUncVars
number of uniform uncertain variables (from the uniform_uncertain specification in VarAUV)
• size_t numLoguniformUncVars
number of loguniform uncertain variables (from the loguniform_uncertain specification in VarAUV)
• size_t numTriangularUncVars
number of triangular uncertain variables (from the triangular_uncertain specification in VarAUV)
• size_t numExponentialUncVars
number of exponential uncertain variables (from the exponential_uncertain specification in VarAUV)
• size_t numBetaUncVars
number of beta uncertain variables (from the beta_uncertain specification in VarAUV)
• size_t numGammaUncVars
number of gamma uncertain variables (from the gamma_uncertain specification in VarAUV)
• size_t numGumbelUncVars
number of gumbel uncertain variables (from the gumbel_uncertain specification in VarAUV)
• size_t numFrechetUncVars
number of frechet uncertain variables (from the frechet_uncertain specification in VarAUV)
• size_t numWeibullUncVars
number of weibull uncertain variables (from the weibull_uncertain specification in VarAUV)
• size_t numHistogramBinUncVars
number of histogram bin uncertain variables (from the histogram_bin_uncertain specification in Va-
rAUV)
• size_t numPoissonUncVars
number of Poisson uncertain variables (from the poisson_uncertain specification in VarAUV)
• size_t numBinomialUncVars
number of binomial uncertain variables (from the binomial_uncertain specification in VarAUV)
• size_t numNegBinomialUncVars
number of negative binomial uncertain variables (from the negative_binomial_uncertain specification
in VarAUV)
• size_t numGeometricUncVars
number of geometric uncertain variables (from the geometric_uncertain specification in VarAUV)
• size_t numHyperGeomUncVars
number of hypergeometric uncertain variables (from the hypergeometric_uncertain specification in Va-
rAUV)
• size_t numHistogramPtUncVars
number of histogram point uncertain variables (from the histogram_point_uncertain specification in Va-
rAUV)
• size_t numContinuousIntervalUncVars
number of continuous epistemic interval uncertain variables (from the continuous_interval_uncertain
specification in VarEUV)
• size_t numDiscreteIntervalUncVars
number of discrete epistemic interval uncertain variables (from the discrete_interval_uncertain speci-
fication in VarEUV)
• size_t numDiscreteUncSetIntVars
number of discrete epistemic uncertain integer set variables (from the discrete_uncertain_set_integer
specification in VarEUV)
• size_t numDiscreteUncSetRealVars
number of discrete epistemic uncertain real set variables (from the discrete_uncertain_set_real speci-
fication in VarEUV)
• size_t numContinuousStateVars
number of continuous state variables (from the continuous_state specification in VarSV)
• size_t numDiscreteStateRangeVars
number of discrete state variables defined by an integer range (from the discrete_state_range specification
in VarSV)
• size_t numDiscreteStateSetIntVars
number of discrete state variables defined by a set of integers (from the discrete_state_set_integer
specification in VarSV)
• size_t numDiscreteStateSetRealVars
number of discrete state variables defined by a set of reals (from the discrete_state_set_real specification
in VarSV)
• RealVector continuousDesignVars
initial values for the continuous design variables array (from the continuous_design initial_point
specification in VarDV)
• RealVector continuousDesignLowerBnds
lower bounds array for the continuous design variables (from the continuous_design lower_bounds spec-
ification in VarDV)
• RealVector continuousDesignUpperBnds
upper bounds array for the continuous design variables (from the continuous_design upper_bounds spec-
ification in VarDV)
• StringArray continuousDesignScaleTypes
scale types array for the continuous design variables (from the continuous_design scale_types specifi-
cation in VarDV)
• RealVector continuousDesignScales
scales array for the continuous design variables (from the continuous_design scales specification in
VarDV)
• IntVector discreteDesignRangeVars
initial values for the discrete design variables defined by an integer range (from the discrete_design_range
initial_point specification in VarDV)
• IntVector discreteDesignRangeLowerBnds
lower bounds array for the discrete design variables defined by an integer range (from the discrete_design_-
range lower_bounds specification in VarDV)
• IntVector discreteDesignRangeUpperBnds
upper bounds array for the discrete design variables defined by an integer range (from the discrete_design_-
range upper_bounds specification in VarDV)
• IntVector discreteDesignSetIntVars
initial values for the discrete design variables defined by an integer set (from the discrete_design_set_-
integer initial_point specification in VarDV)
• RealVector discreteDesignSetRealVars
initial values for the discrete design variables defined by a real set (from the discrete_design_set_real
initial_point specification in VarDV)
• IntSetArray discreteDesignSetInt
complete set of admissible values for each of the discrete design variables defined by an integer set (from the
discrete_design_set_integer set_values specification in VarDV)
• RealSetArray discreteDesignSetReal
complete set of admissible values for each of the discrete design variables defined by a real set (from the
discrete_design_set_real set_values specification in VarDV)
• StringArray continuousDesignLabels
labels array for the continuous design variables (from the continuous_design descriptors specification
in VarDV)
• StringArray discreteDesignRangeLabels
labels array for the discrete design variables defined by an integer range (from the discrete_design_range
descriptors specification in VarDV)
• StringArray discreteDesignSetIntLabels
labels array for the discrete design variables defined by an integer set (from the discrete_design_set_-
integer descriptors specification in VarDV)
• StringArray discreteDesignSetRealLabels
labels array for the discrete design variables defined by a real set (from the discrete_design_set_real
descriptors specification in VarDV)
• RealVector normalUncMeans
means of the normal uncertain variables (from the nuv_means specification in VarAUV)
• RealVector normalUncStdDevs
standard deviations of the normal uncertain variables (from the nuv_std_deviations specification in Va-
rAUV)
• RealVector normalUncLowerBnds
distribution lower bounds for the normal uncertain variables (from the nuv_lower_bounds specification in
VarAUV)
• RealVector normalUncUpperBnds
distribution upper bounds for the normal uncertain variables (from the nuv_upper_bounds specification in
VarAUV)
• RealVector lognormalUncLambdas
lambdas (means of the corresponding normals) of the lognormal uncertain variables (from the lnuv_lambdas
specification in VarAUV)
• RealVector lognormalUncZetas
zetas (standard deviations of the corresponding normals) of the lognormal uncertain variables (from the lnuv_-
zetas specification in VarAUV)
• RealVector lognormalUncMeans
means of the lognormal uncertain variables (from the lnuv_means specification in VarAUV)
• RealVector lognormalUncStdDevs
standard deviations of the lognormal uncertain variables (from the lnuv_std_deviations specification in
VarAUV)
• RealVector lognormalUncErrFacts
error factors for the lognormal uncertain variables (from the lnuv_error_factors specification in VarAUV)
• RealVector lognormalUncLowerBnds
distribution lower bounds for the lognormal uncertain variables (from the lnuv_lower_bounds specification
in VarAUV)
• RealVector lognormalUncUpperBnds
distribution upper bounds for the lognormal uncertain variables (from the lnuv_upper_bounds specification
in VarAUV)
• RealVector uniformUncLowerBnds
distribution lower bounds for the uniform uncertain variables (from the uuv_lower_bounds specification in
VarAUV)
• RealVector uniformUncUpperBnds
distribution upper bounds for the uniform uncertain variables (from the uuv_upper_bounds specification in
VarAUV)
• RealVector loguniformUncLowerBnds
distribution lower bounds for the loguniform uncertain variables (from the luuv_lower_bounds specification
in VarAUV)
• RealVector loguniformUncUpperBnds
distribution upper bounds for the loguniform uncertain variables (from the luuv_upper_bounds specification
in VarAUV)
• RealVector triangularUncModes
modes of the triangular uncertain variables (from the tuv_modes specification in VarAUV)
• RealVector triangularUncLowerBnds
distribution lower bounds for the triangular uncertain variables (from the tuv_lower_bounds specification in
VarAUV)
• RealVector triangularUncUpperBnds
distribution upper bounds for the triangular uncertain variables (from the tuv_upper_bounds specification in
VarAUV)
• RealVector exponentialUncBetas
beta factors for the exponential uncertain variables (from the euv_betas specification in VarAUV)
• RealVector betaUncAlphas
alpha factors for the beta uncertain variables (from the buv_alphas specification in VarAUV)
• RealVector betaUncBetas
beta factors for the beta uncertain variables (from the buv_betas specification in VarAUV)
• RealVector betaUncLowerBnds
distribution lower bounds for the beta uncertain variables (from the buv_lower_bounds specification in Va-
rAUV)
• RealVector betaUncUpperBnds
distribution upper bounds for the beta uncertain variables (from the buv_upper_bounds specification in Va-
rAUV)
• RealVector gammaUncAlphas
alpha factors for the gamma uncertain variables (from the gauv_alphas specification in VarAUV)
• RealVector gammaUncBetas
beta factors for the gamma uncertain variables (from the gauv_betas specification in VarAUV)
• RealVector gumbelUncAlphas
alpha factors for the gumbel uncertain variables (from the guuv_alphas specification in VarAUV)
• RealVector gumbelUncBetas
beta factors for the gumbel uncertain variables (from the guuv_betas specification in VarAUV)
• RealVector frechetUncAlphas
alpha factors for the frechet uncertain variables (from the fuv_alphas specification in VarAUV)
• RealVector frechetUncBetas
beta factors for the frechet uncertain variables (from the fuv_betas specification in VarAUV)
• RealVector weibullUncAlphas
alpha factors for the weibull uncertain variables (from the wuv_alphas specification in VarAUV)
• RealVector weibullUncBetas
beta factors for the weibull uncertain variables (from the wuv_betas specification in VarAUV)
• RealVectorArray histogramUncBinPairs
an array containing a vector of (x,c) pairs for each bin-based histogram uncertain variable (see continuous linear
histogram in LHS manual; from the histogram_bin_uncertain specification in VarAUV). (x,y) ordinate
specifications are converted to (x,c) counts within NIDR.
• RealVector poissonUncLambdas
lambdas (rate parameter) for the poisson uncertain variables (from the lambdas specification in VarAUV)
• RealVector binomialUncProbPerTrial
probability per trial (p) for the binomial uncertain variables (from the prob_per_trial specification in
VarAUV)
• IntVector binomialUncNumTrials
number of trials (N) for the binomial uncertain variables (from the num_trials specification in VarAUV)
• RealVector negBinomialUncProbPerTrial
probability per trial (p) for the negative binomial uncertain variables (from the prob_per_trial specifi-
cation in VarAUV)
• IntVector negBinomialUncNumTrials
number of trials (N) for the negative binomial uncertain variables (from the num_trials specification in Va-
rAUV)
• RealVector geometricUncProbPerTrial
probability per trial (p) for the geometric uncertain variables (from the prob_per_trial specification in
VarAUV)
• IntVector hyperGeomUncTotalPop
size of the total population (N) for the hypergeometric uncertain variables (from the total_population specifi-
cation in VarAUV)
• IntVector hyperGeomUncSelectedPop
size of the selected population for the hypergeometric uncertain variables (from the selected_population spec-
ification in VarAUV)
• IntVector hyperGeomUncNumDrawn
number failed in the selected population for the hypergeometric uncertain variables (from the num_drawn
specification in VarAUV)
• RealVectorArray histogramUncPointPairs
an array containing a vector of (x,c) pairs for each point-based histogram uncertain variable (see discrete histogram in LHS manual; from the histogram_point_uncertain specification in VarAUV)
• RealSymMatrix uncertainCorrelations
correlation matrix for all uncertain variables (from the uncertain_correlation_matrix specification in
VarAUV). This matrix specifies rank correlations for sampling methods (i.e., LHS) and correlation coefficients
(rho_ij = normalized covariance matrix) for analytic reliability methods.
• RealVectorArray continuousIntervalUncBasicProbs
Probability values per interval cell per epistemic interval uncertain variable (from the continuous_interval_uncertain interval_probs specification in VarEUV).
• RealVectorArray continuousIntervalUncLowerBounds
lower bounds defining cells for each epistemic interval uncertain variable (from the continuous_interval_uncertain lower_bounds specification in VarEUV)
• RealVectorArray continuousIntervalUncUpperBounds
upper bounds defining cells for each epistemic interval uncertain variable (from the continuous_interval_uncertain upper_bounds specification in VarEUV)
• RealVectorArray discreteIntervalUncBasicProbs
Probability values per interval cell per epistemic interval uncertain variable (from the discrete_interval_uncertain interval_probs specification in VarEUV).
• IntVectorArray discreteIntervalUncLowerBounds
lower bounds defining cells for each epistemic interval uncertain variable (from the discrete_interval_uncertain lower_bounds specification in VarEUV)
• IntVectorArray discreteIntervalUncUpperBounds
upper bounds defining cells for each epistemic interval uncertain variable (from the discrete_interval_uncertain upper_bounds specification in VarEUV)
• IntRealMapArray discreteUncSetIntValuesProbs
complete set of admissible values with associated basic probability assignments for each of the discrete epistemic uncertain variables defined by an integer set (from the discrete_uncertain_set_integer set_values specification in VarEUV)
• RealRealMapArray discreteUncSetRealValuesProbs
complete set of admissible values with associated basic probability assignments for each of the discrete epistemic uncertain variables defined by a real set (from the discrete_uncertain_set_real set_values specification in VarEUV)
• RealVector continuousStateVars
initial values for the continuous state variables array (from the continuous_state initial_point specification in VarSV)
• RealVector continuousStateLowerBnds
lower bounds array for the continuous state variables (from the continuous_state lower_bounds specification in VarSV)
• RealVector continuousStateUpperBnds
upper bounds array for the continuous state variables (from the continuous_state upper_bounds specification in VarSV)
• IntVector discreteStateRangeVars
initial values for the discrete state variables defined by an integer range (from the discrete_state_range
initial_point specification in VarSV)
• IntVector discreteStateRangeLowerBnds
lower bounds array for the discrete state variables defined by an integer range (from the discrete_state_range lower_bounds specification in VarSV)
• IntVector discreteStateRangeUpperBnds
upper bounds array for the discrete state variables defined by an integer range (from the discrete_state_range upper_bounds specification in VarSV)
• IntVector discreteStateSetIntVars
initial values for the discrete state variables defined by an integer set (from the discrete_state_set_integer initial_point specification in VarSV)
• RealVector discreteStateSetRealVars
initial values for the discrete state variables defined by a real set (from the discrete_state_set_real
initial_point specification in VarSV)
• IntSetArray discreteStateSetInt
complete set of admissible values for each of the discrete state variables defined by an integer set (from the
discrete_state_set_integer set_values specification in VarSV)
• RealSetArray discreteStateSetReal
complete set of admissible values for each of the discrete state variables defined by a real set (from the
discrete_state_set_real set_values specification in VarSV)
• StringArray continuousStateLabels
labels array for the continuous state variables (from the continuous_state descriptors specification in
VarSV)
• StringArray discreteStateRangeLabels
labels array for the discrete state variables defined by an integer range (from the discrete_state_range
descriptors specification in VarSV)
• StringArray discreteStateSetIntLabels
labels array for the discrete state variables defined by an integer set (from the discrete_state_set_integer descriptors specification in VarSV)
• StringArray discreteStateSetRealLabels
labels array for the discrete state variables defined by a real set (from the discrete_state_set_real descriptors specification in VarSV)
• IntVector discreteDesignSetIntLowerBnds
discrete design integer set lower bounds inferred from set values
• IntVector discreteDesignSetIntUpperBnds
discrete design integer set upper bounds inferred from set values
• RealVector discreteDesignSetRealLowerBnds
discrete design real set lower bounds inferred from set values
• RealVector discreteDesignSetRealUpperBnds
discrete design real set upper bounds inferred from set values
• RealVector continuousAleatoryUncVars
array of values for all continuous aleatory uncertain variables
• RealVector continuousAleatoryUncLowerBnds
distribution lower bounds for all continuous aleatory uncertain variables (collected from nuv_lower_bounds, lnuv_lower_bounds, uuv_lower_bounds, luuv_lower_bounds, tuv_lower_bounds, and buv_lower_bounds specifications in VarAUV, and derived for gamma, gumbel, frechet, weibull and histogram bin specifications)
• RealVector continuousAleatoryUncUpperBnds
distribution upper bounds for all continuous aleatory uncertain variables (collected from nuv_upper_bounds, lnuv_upper_bounds, uuv_upper_bounds, luuv_upper_bounds, tuv_upper_bounds, and buv_upper_bounds specifications in VarAUV, and derived for gamma, gumbel, frechet, weibull and histogram bin specifications)
• StringArray continuousAleatoryUncLabels
labels for all continuous aleatory uncertain variables (collected from nuv_descriptors, lnuv_descriptors, uuv_descriptors, luuv_descriptors, tuv_descriptors, buv_descriptors, gauv_descriptors, guuv_descriptors, fuv_descriptors, wuv_descriptors, and hbuv_descriptors specifications in VarAUV)
• IntVector discreteIntAleatoryUncVars
array of values for all discrete integer aleatory uncertain variables
• IntVector discreteIntAleatoryUncLowerBnds
distribution lower bounds for all discrete integer aleatory uncertain variables
• IntVector discreteIntAleatoryUncUpperBnds
distribution upper bounds for all discrete integer aleatory uncertain variables
• StringArray discreteIntAleatoryUncLabels
labels for all discrete integer aleatory uncertain variables
• RealVector discreteRealAleatoryUncVars
array of values for all discrete real aleatory uncertain variables
• RealVector discreteRealAleatoryUncLowerBnds
distribution lower bounds for all discrete real aleatory uncertain variables
• RealVector discreteRealAleatoryUncUpperBnds
distribution upper bounds for all discrete real aleatory uncertain variables
• StringArray discreteRealAleatoryUncLabels
labels for all discrete real aleatory uncertain variables
• RealVector continuousEpistemicUncVars
array of values for all continuous epistemic uncertain variables
• RealVector continuousEpistemicUncLowerBnds
distribution lower bounds for all continuous epistemic uncertain variables
• RealVector continuousEpistemicUncUpperBnds
distribution upper bounds for all continuous epistemic uncertain variables
• StringArray continuousEpistemicUncLabels
labels for all continuous epistemic uncertain variables
• IntVector discreteIntEpistemicUncVars
array of values for all discrete integer epistemic uncertain variables
• IntVector discreteIntEpistemicUncLowerBnds
distribution lower bounds for all discrete integer epistemic uncertain variables
• IntVector discreteIntEpistemicUncUpperBnds
distribution upper bounds for all discrete integer epistemic uncertain variables
• StringArray discreteIntEpistemicUncLabels
labels for all discrete integer epistemic uncertain variables
• RealVector discreteRealEpistemicUncVars
array of values for all discrete real epistemic uncertain variables
• RealVector discreteRealEpistemicUncLowerBnds
distribution lower bounds for all discrete real epistemic uncertain variables
• RealVector discreteRealEpistemicUncUpperBnds
distribution upper bounds for all discrete real epistemic uncertain variables
• StringArray discreteRealEpistemicUncLabels
labels for all discrete real epistemic uncertain variables
• IntVector discreteStateSetIntLowerBnds
discrete state integer set lower bounds inferred from set values
• IntVector discreteStateSetIntUpperBnds
discrete state integer set upper bounds inferred from set values
• RealVector discreteStateSetRealLowerBnds
discrete state real set lower bounds inferred from set values
• RealVector discreteStateSetRealUpperBnds
discrete state real set upper bounds inferred from set values
• ∼DataVariablesRep ()
destructor
Private Attributes
• int referenceCount
number of handle objects sharing dataVarsRep
Friends
• class DataVariables
the handle class can access attributes of the body class directly
Body class for variables specification data. The DataVariablesRep class is used to contain the data from a variables keyword specification. Default values are managed in the DataVariablesRep constructor. Data is public to avoid maintaining set/get functions, but is still encapsulated within ProblemDescDB since ProblemDescDB::dataVariablesList is private.
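The handle-body (letter-envelope) idiom described above, with its referenceCount and friend handle class, can be sketched as follows; the names Rep and Handle are illustrative stand-ins, not Dakota's actual classes:

```cpp
// Body: holds the shared data plus a count of handles sharing it.
class Rep {
public:
    Rep() : referenceCount(1) {}
    int referenceCount;   // number of handle objects sharing this body
    // ... specification data would live here, visible to the friend handle
};

// Handle: copies share one Rep; the last surviving handle deletes it.
class Handle {
public:
    Handle() : rep(new Rep) {}
    Handle(const Handle& h) : rep(h.rep) { ++rep->referenceCount; }
    Handle& operator=(const Handle& h) {
        if (rep != h.rep) {
            if (--rep->referenceCount == 0) delete rep;
            rep = h.rep;
            ++rep->referenceCount;
        }
        return *this;
    }
    ~Handle() { if (--rep->referenceCount == 0) delete rep; }
    int use_count() const { return rep->referenceCount; }
private:
    Rep* rep;   // shared body
};
```

Copying a Handle only bumps the count, so specification data is never duplicated; this matches the "number of handle objects sharing dataVarsRep" role of referenceCount.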
The documentation for this class was generated from the following files:
• DataVariables.hpp
• DataVariables.cpp
Iterator
Analyzer
PStudyDACE
DDACEDesignCompExp
• DDACEDesignCompExp (Model &model, int samples, int symbols, int seed, const String &sampling_method)
alternate constructor used for building approximations
• ∼DDACEDesignCompExp ()
destructor
• void pre_run ()
pre-run portion of run_iterator (optional); re-implemented by Iterators which can generate all Variables (parameter
sets) a priori
• void extract_trends ()
Redefines the run_iterator virtual function for the PStudy/DACE branch.
• void post_input ()
read tabular data for post-run mode
• void resolve_samples_symbols ()
convenience function for resolving number of samples and number of symbols from input.
Private Attributes
• String daceMethod
oas, lhs, oa_lhs, random, box_behnken, central_composite, or grid
• int samplesSpec
initial specification of number of samples
• int symbolsSpec
initial specification of number of symbols
• int numSamples
current number of samples to be evaluated
• int numSymbols
current number of symbols to be used in generating the sample set (inversely related to number of replications)
• int randomSeed
current seed for the random number generator
• bool allDataFlag
flag which triggers the update of allVars/allResponses for use by Iterator::all_variables() and Iterator::all_responses()
• size_t numDACERuns
counter for number of run() executions for this object
• bool varyPattern
flag for continuing the random number sequence from a previous run() execution (e.g., for surrogate-based optimization) so that multiple executions are repeatable but not correlated.
• bool mainEffectsFlag
flag which specifies main effects
Wrapper class for the DDACE design of experiments library. The DDACEDesignCompExp class provides a
wrapper for DDACE, a C++ design of experiments library from the Computational Sciences and Mathematics
Research (CSMR) department at Sandia’s Livermore CA site. This class uses design and analysis of computer
experiments (DACE) methods to sample the design space spanned by the bounds of a Model. It returns all
generated samples and their corresponding responses as well as the best sample found.
primary constructor for building a standard DACE iterator. This constructor is called for a standard iterator built with data from probDescDB.
References Dakota::abort_handler(), DDACEDesignCompExp::daceMethod, DDACEDesignComp-
Exp::mainEffectsFlag, Iterator::maxConcurrency, Iterator::numContinuousVars, and DDACEDesignComp-
Exp::numSamples.
13.31.2.2 DDACEDesignCompExp (Model & model, int samples, int symbols, int seed, const String &
sampling_method)
alternate constructor used for building approximations. This alternate constructor is used for instantiations on-the-fly, using only the incoming data; no problem description database queries are used.
pre-run portion of run_iterator (optional); re-implemented by Iterators which can generate all Variables (parameter sets) a priori. This is the pre-run phase, which a derived iterator may optionally reimplement; when not present, pre-run is likely integrated into the derived run function. This is a virtual function; when re-implementing, a derived class must call its nearest parent's pre_run(), if implemented, typically _before_ performing its own implementation steps.
Reimplemented from Iterator.
References DDACEDesignCompExp::get_parameter_sets(), Iterator::iteratedModel, and PStudyDACE::varBasedDecompFlag.
post-run portion of run_iterator (optional); verbose to print results; re-implemented by Iterators that can read all Variables/Responses and perform a final analysis phase in a standalone way. This is the post-run phase, which a derived iterator may optionally reimplement; when not present, post-run is likely integrated into run. This is a virtual function; when re-implementing, a derived class must call its nearest parent's post_run(), typically _after_ performing its own implementation steps.
Reimplemented from Iterator.
References Analyzer::allResponses, Analyzer::allSamples, SensAnalysisGlobal::compute_correlations(), DDACEDesignCompExp::compute_main_effects(), DDACEDesignCompExp::mainEffectsFlag, PStudyDACE::pStudyDACESensGlobal, Iterator::subIteratorFlag, and PStudyDACE::varBasedDecompFlag.
get the current number of samples. Returns the current number of evaluation points. Since the calculation of samples, collocation points, etc. might be costly, a default implementation is provided here that backs the count out from the maxConcurrency. May be (and here is) overridden by derived classes.
Reimplemented from Iterator.
References DDACEDesignCompExp::numSamples.
convenience function for resolving number of samples and number of symbols from input. This function must
define a combination of samples and symbols that is acceptable for a particular sampling algorithm. Users provide
requests for these quantities, but this function must enforce any restrictions imposed by the sampling algorithms.
References Dakota::abort_handler(), DDACEDesignCompExp::daceMethod, Iterator::numContinuousVars, DDACEDesignCompExp::numSamples, and DDACEDesignCompExp::numSymbols.
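As an illustration of the kind of restriction this function enforces (not Dakota's exact rules), an orthogonal-array style design might require the sample count to be the square of the symbol count, so a user's request gets rounded up to the nearest admissible pair:

```cpp
#include <cmath>
#include <utility>

// Illustrative resolution rule for a design requiring samples == symbols^2:
// round the requested sample count up to the nearest admissible
// (samples, symbols) pair.
std::pair<int, int> resolve_samples_symbols_sketch(int requested_samples)
{
    int symbols = static_cast<int>(std::ceil(
        std::sqrt(static_cast<double>(requested_samples))));
    return { symbols * symbols, symbols };
}
```

A request for 10 samples would be promoted to 16 samples with 4 symbols under this hypothetical rule; an already-admissible request such as 9 passes through unchanged.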
13.31.3.5 void copy_data (const std::vector< DDaceSamplePoint > & dspa, Real ∗ ptr, const int ptr_len)
[static, private]
descriptions of the copy_data overloads: copy a DDACE point to a RealVector; copy a DDACE point array to a RealVectorArray; copy a DDACE point array to a Real∗.
References Dakota::abort_handler().
Referenced by DDACEDesignCompExp::get_parameter_sets().
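The Real∗ overload flattens an array of sample points into contiguous storage. With a plain std::vector<std::vector<double>> standing in for the DDaceSamplePoint array (an assumption for illustration, not Dakota's types), the copy amounts to:

```cpp
#include <cstddef>
#include <vector>

// Flatten an array of sample points into a contiguous buffer, point by
// point; returns false if the buffer length disagrees with the data,
// mirroring the length check that guards the real copy_data.
bool copy_points(const std::vector<std::vector<double>>& points,
                 double* ptr, std::size_t ptr_len)
{
    std::size_t k = 0;
    for (const auto& pt : points)
        for (double v : pt) {
            if (k >= ptr_len) return false;   // buffer too short
            ptr[k++] = v;
        }
    return k == ptr_len;                      // buffer exactly filled
}
```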
The documentation for this class was generated from the following files:
• DDACEDesignCompExp.hpp
• DDACEDesignCompExp.cpp
Derived application interface class which spawns simulation codes and testers using direct procedure calls. Inheritance diagram for DirectApplicInterface::
Interface
ApplicationInterface
DirectApplicInterface
• ∼DirectApplicInterface ()
destructor
• void derived_map (const Variables &vars, const ActiveSet &set, Response &response, int fn_eval_id)
Called by map() and other functions to execute the simulation in synchronous mode. The portion of performing an
evaluation that is specific to a derived class.
• void set_local_data (const Variables &vars, const ActiveSet &set, const Response &response)
convenience function for local test simulators which sets per-evaluation variable, active set, and response attributes
Protected Attributes
• String iFilterName
name of the direct function input filter
• String oFilterName
name of the direct function output filter
• driver_t iFilterType
enum type of the direct function input filter
• driver_t oFilterType
enum type of the direct function output filter
• bool gradFlag
signals use of fnGrads in direct simulator functions
• bool hessFlag
signals use of fnHessians in direct simulator functions
• size_t numFns
total number of response functions
• size_t numVars
total number of continuous and discrete variables
• size_t numACV
total number of continuous variables
• size_t numADIV
total number of discrete integer variables
• size_t numADRV
total number of discrete real variables
• size_t numDerivVars
number of active derivative variables
• RealVector xC
continuous variables used within direct simulator fns
• IntVector xDI
discrete int variables used within direct simulator fns
• RealVector xDR
discrete real variables used within direct simulator fns
• StringMultiArray xCLabels
continuous variable labels
• StringMultiArray xDILabels
discrete integer variable labels
• StringMultiArray xDRLabels
discrete real variable labels
• ShortArray directFnASV
class scope active set vector
• SizetArray directFnDVV
class scope derivative variables vector
• RealVector fnVals
response fn values within direct simulator fns
• RealMatrix fnGrads
response fn gradients w/i direct simulator fns
• RealSymMatrixArray fnHessians
response fn Hessians within direct fns
• StringArray analysisDrivers
the set of analyses within each function evaluation (from the analysis_drivers interface specification)
• size_t analysisDriverIndex
the index of the active analysis driver within analysisDrivers
• String2DArray analysisComponents
the set of optional analysis components used by the analysis drivers (from the analysis_components interface specification)
Derived application interface class which spawns simulation codes and testers using direct procedure calls. DirectApplicInterface uses a few linkable simulation codes and several internal member functions to perform parameter-to-response mappings.
Process init issues as warnings since some contexts (e.g., HierarchSurrModel) initialize more configurations than
will be used and DirectApplicInterface allows override by derived plug-ins.
Reimplemented from ApplicationInterface.
References ApplicationInterface::check_asynchronous(), and ApplicationInterface::check_multiprocessor_asynchronous().
execute an analysis code portion of a direct evaluation invocation. When a direct analysis/filter is a member function, the (vars,set,response) data does not need to be passed through the API. If, however, non-member analysis/filter functions are added, then pass (vars,set,response) through to the non-member fns:
// API declaration
int sim(const Variables& vars, const ActiveSet& set, Response& response);
// use of API within derived_map_ac()
if (ac_name == "sim")
fail_code = sim(directFnVars, directFnActSet, directFnResponse);
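Expanded into a self-contained sketch, the name-based dispatch looks like the following; the Variables/ActiveSet/Response stand-ins and the sim body are illustrative assumptions, not Dakota's actual classes:

```cpp
#include <string>

// Hypothetical stand-ins for Dakota's data classes.
struct Variables { double x; };
struct ActiveSet { int request; };   // bit 1 == function value requested
struct Response  { double fnVal; };

// Non-member analysis function matching the API shown in the text.
int sim(const Variables& vars, const ActiveSet& set, Response& response)
{
    if (set.request & 1)             // value requested
        response.fnVal = vars.x * vars.x;
    return 0;                        // 0 == success
}

// Dispatch by analysis-driver name, as done in derived_map_ac().
int dispatch(const std::string& ac_name, const Variables& vars,
             const ActiveSet& set, Response& response)
{
    int fail_code = 1;               // unknown driver
    if (ac_name == "sim")
        fail_code = sim(vars, set, response);
    return fail_code;
}
```

The point of the pattern is that each registered driver name maps to one analysis function sharing the same (vars, set, response) signature, so new drivers are added by extending the dispatch chain.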
• DirectApplicInterface.hpp
• DirectApplicInterface.cpp
• DiscrepancyCorrection (Model &surr_model, const IntSet &surr_fn_indices, short corr_type, short corr_order)
standard constructor
• DiscrepancyCorrection (const IntSet &surr_fn_indices, size_t num_fns, size_t num_vars, short corr_type,
short corr_order)
alternate constructor
• ∼DiscrepancyCorrection ()
destructor
• void initialize (Model &surr_model, const IntSet &surr_fn_indices, short corr_type, short corr_order)
initialize the DiscrepancyCorrection data
• void initialize (const IntSet &surr_fn_indices, size_t num_fns, size_t num_vars, short corr_type, short
corr_order)
initialize the DiscrepancyCorrection data
• void compute (const Variables &vars, const Response &truth_response, const Response &approx_response,
bool quiet_flag=false)
compute the correction required to bring approx_response into agreement with truth_response and store in
{add,mult}Corrections
Protected Attributes
• IntSet surrogateFnIndices
for mixed response sets, this array specifies the response function subset that is approximated
• short correctionType
approximation correction approach to be used: NO_CORRECTION, ADDITIVE_CORRECTION,
MULTIPLICATIVE_CORRECTION, or COMBINED_CORRECTION.
• short correctionOrder
approximation correction order to be used: 0, 1, or 2
• short dataOrder
order of correction data in 3-bit format: overlay of 1 (value), 2 (gradient), and 4 (Hessian)
• bool correctionComputed
flag indicating whether or not a correction has been computed and is available for application
• size_t numFns
total number of response functions (of which surrogateFnIndices may define a subset)
• size_t numVars
number of continuous variables active in the correction
• void compute_additive (const Response &truth_response, const Response &approx_response, int index,
Real &discrep_fn, RealVector &discrep_grad, RealSymMatrix &discrep_hess)
internal convenience function for computing additive corrections between truth and approximate responses
• void compute_multiplicative (const Response &truth_response, const Response &approx_response, int index, Real &discrep_fn, RealVector &discrep_grad, RealSymMatrix &discrep_hess)
internal convenience function for computing multiplicative corrections between truth and approximate responses
• const Response & search_db (const Variables &search_vars, const ShortArray &search_asv)
search data_pairs for missing approximation data
Private Attributes
• bool badScalingFlag
flag used to indicate function values near zero for multiplicative corrections; triggers an automatic switch to additive corrections
• bool computeAdditive
flag indicating the need for additive correction calculations
• bool computeMultiplicative
flag indicating the need for multiplicative correction calculations
• Model surrModel
shallow copy of the surrogate model instance as returned by Model::surrogate_model() (the DataFitSurrModel or
HierarchSurrModel::lowFidelityModel instance)
• RealVector combineFactors
factors for combining additive and multiplicative corrections. Each factor is the weighting applied to the additive
correction and 1.-factor is the weighting applied to the multiplicative correction. The factor value is determined by
an additional requirement to match the high fidelity function value at the previous correction point (e.g., previous
trust region center). This results in a multipoint correction instead of a strictly local correction.
• Variables correctionPrevCenterPt
copy of center point from the previous correction cycle
• RealVector truthFnsCenter
truth function values at the current correction point
• RealVector approxFnsCenter
Surrogate function values at the current correction point.
• RealMatrix approxGradsCenter
Surrogate gradient values at the current correction point.
• RealVector truthFnsPrevCenter
copy of truth function values at center of previous correction cycle
• RealVector approxFnsPrevCenter
copy of approximate function values at center of previous correction cycle
Base class for discrepancy corrections. The DiscrepancyCorrection class provides common functions for computing and applying corrections to approximations.
13.33.2.1 void compute (const Variables & vars, const Response & truth_response, const Response &
approx_response, bool quiet_flag = false)
compute the correction required to bring approx_response into agreement with truth_response and store in {add,mult}Corrections. Computes an additive or multiplicative correction that corrects the approx_response to have 0th-order consistency (matches values), 1st-order consistency (matches values and gradients), or 2nd-order consistency (matches values, gradients, and Hessians) with the truth_response at a single point (e.g., the center of a trust region). The 0th-order, 1st-order, and 2nd-order corrections use scalar values, linear scaling functions, and quadratic scaling functions, respectively, for each response function.
References Response::active_set(), DiscrepancyCorrection::addCorrections, DiscrepancyCorrection::apply(), DiscrepancyCorrection::apply_additive(), DiscrepancyCorrection::apply_multiplicative(), DiscrepancyCorrection::approxFnsCenter, DiscrepancyCorrection::approxFnsPrevCenter, DiscrepancyCorrection::approxGradsCenter, DiscrepancyCorrection::badScalingFlag, DiscrepancyCorrection::check_scaling(), DiscrepancyCorrection::combineFactors, DiscrepancyCorrection::compute_additive(),
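At 0th order, the two correction types reduce to a constant shift and a constant scale computed at the correction point x_c; a minimal sketch (illustrative free functions, not Dakota's member signatures):

```cpp
// 0th-order corrections computed at a single point x_c:
//   additive:       corrected(x) = approx(x) + (truth(x_c) - approx(x_c))
//   multiplicative: corrected(x) = approx(x) * (truth(x_c) / approx(x_c))
// Either choice makes corrected(x_c) == truth(x_c) exactly.
double additive_corrected(double approx_x, double truth_c, double approx_c)
{
    return approx_x + (truth_c - approx_c);
}

double multiplicative_corrected(double approx_x, double truth_c,
                                double approx_c)
{
    // Caller must guard against approx_c near zero (cf. badScalingFlag,
    // which switches to the additive form in that case).
    return approx_x * (truth_c / approx_c);
}
```

The 1st- and 2nd-order variants replace the constant shift/scale with linear and quadratic scaling functions built from the gradient and Hessian discrepancies, preserving the same consistency-at-x_c property.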
• DiscrepancyCorrection.hpp
• DiscrepancyCorrection.cpp
Iterator
Minimizer
Optimizer
DOTOptimizer
• ∼DOTOptimizer ()
destructor
• void find_optimum ()
Used within the optimizer branch for computing the optimal solution. Redefines the run virtual function for the
optimizer branch.
• void allocate_workspace ()
Allocates workspace for the optimizer.
• void allocate_constraints ()
Allocates constraint mappings.
Private Attributes
• int dotInfo
INFO from DOT manual.
• int dotFDSinfo
internal DOT parameter NGOTOZ
• int dotMethod
METHOD from DOT manual.
• int printControl
IPRINT from DOT manual (controls output verbosity).
• RealArray realCntlParmArray
RPRM from DOT manual.
• IntArray intCntlParmArray
IPRM from DOT manual.
• RealVector designVars
array of design variable values passed to DOT
• Real objFnValue
value of the objective function passed to DOT
• RealVector constraintValues
array of nonlinear constraint values passed to DOT
• int realWorkSpaceSize
size of realWorkSpace
• int intWorkSpaceSize
size of intWorkSpace
• RealArray realWorkSpace
real work space for DOT
• IntArray intWorkSpace
int work space for DOT
• int numDotNlnConstr
total number of nonlinear constraints seen by DOT
• int numDotLinConstr
total number of linear constraints seen by DOT
• int numDotConstr
total number of linear and nonlinear constraints seen by DOT
• SizetArray constraintMappingIndices
a container of indices for referencing the corresponding Response constraints used in computing the DOT constraints.
• RealArray constraintMappingMultipliers
a container of multipliers for mapping the Response constraints to the DOT constraints.
• RealArray constraintMappingOffsets
a container of offsets for mapping the Response constraints to the DOT constraints.
Wrapper class for the DOT optimization library. The DOTOptimizer class provides a wrapper for DOT, a commercial Fortran 77 optimization library from Vanderplaats Research and Development. It uses a reverse communication mode, which avoids the static member function issues that arise with function pointer designs (see NPSOLOptimizer and SNLLOptimizer).
The user input mappings are as follows: max_iterations is mapped into DOT's ITMAX parameter within its IPRM array, max_function_evaluations is implemented directly in the find_optimum() loop since there is no DOT parameter equivalent, convergence_tolerance is mapped into DOT's DELOBJ parameter (the relative convergence tolerance) within its RPRM array, output verbosity is mapped into DOT's IPRINT parameter within its function call parameter list (verbose: IPRINT = 7; quiet: IPRINT = 3), and optimization_type is mapped into DOT's MINMAX parameter within its function call parameter list. Refer to [Vanderplaats Research and Development, 1995] for information on IPRM, RPRM, and the DOT function call parameter list.
INFO from DOT manual. Information requested by DOT: 0 = optimization complete, 1 = get values, 2 = get gradients.
Referenced by DOTOptimizer::find_optimum(), and DOTOptimizer::initialize_run().
internal DOT parameter NGOTOZ. The DOT parameter list has been modified to pass NGOTOZ, which signals whether DOT is finite-differencing (nonzero value) or performing the line search (zero value).
Referenced by DOTOptimizer::find_optimum().
METHOD from DOT manual. For nonlinear constraints: 0/1 = dot_mmfd, 2 = dot_slp, 3 = dot_sqp. For unconstrained: 0/1 = dot_bfgs, 2 = dot_frcg.
Referenced by DOTOptimizer::allocate_constraints(), DOTOptimizer::allocate_workspace(), DOTOptimizer::DOTOptimizer(), and DOTOptimizer::find_optimum().
IPRINT from DOT manual (controls output verbosity). Values range from 0 (least output) to 7 (most output).
Referenced by DOTOptimizer::DOTOptimizer(), and DOTOptimizer::find_optimum().
array of nonlinear constraint values passed to DOT This array must be of nonzero length and must contain only
one-sided inequality constraints which are <= 0 (which requires a transformation from 2-sided inequalities and
equalities).
Referenced by DOTOptimizer::allocate_constraints(), and DOTOptimizer::find_optimum().
a container of indices for referencing the corresponding Response constraints used in computing the DOT constraints. The length of the container corresponds to the number of DOT constraints, and each entry in the container points to the corresponding DAKOTA constraint.
Referenced by DOTOptimizer::allocate_constraints(), and DOTOptimizer::find_optimum().
a container of multipliers for mapping the Response constraints to the DOT constraints. The length of the container
corresponds to the number of DOT constraints, and each entry in the container stores a multiplier for the DAKOTA
constraint identified with constraintMappingIndices. These multipliers are currently +1 or -1.
Referenced by DOTOptimizer::allocate_constraints(), and DOTOptimizer::find_optimum().
a container of offsets for mapping the Response constraints to the DOT constraints. The length of the container
corresponds to the number of DOT constraints, and each entry in the container stores an offset for the DAKOTA
constraint identified with constraintMappingIndices. These offsets involve inequality bounds or equality targets,
since DOT assumes constraint allowables = 0.
Referenced by DOTOptimizer::allocate_constraints(), and DOTOptimizer::find_optimum().
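The multiplier/offset mapping above can be illustrated on a two-sided inequality l <= g(x) <= u, which must become two one-sided DOT constraints of the form multiplier*g + offset <= 0 since DOT assumes constraint allowables = 0. This is a sketch of the transformation, not Dakota's code:

```cpp
#include <vector>

struct DotConstraint { double multiplier; double offset; };

// Map a two-sided bound l <= g <= u onto one-sided constraints
// multiplier*g + offset <= 0 (multipliers are +1 or -1, offsets carry
// the bounds, matching the containers described above). Both bounds
// are assumed finite here; an infinite bound would simply be skipped.
std::vector<DotConstraint> map_two_sided(double l, double u)
{
    return {
        { -1.0,  l },   //  l - g <= 0   (lower bound)
        { +1.0, -u }    //  g - u <= 0   (upper bound)
    };
}

// Evaluate one mapped constraint at a response value g; feasible iff <= 0.
double dot_value(const DotConstraint& c, double g)
{
    return c.multiplier * g + c.offset;
}
```

An equality target t would analogously be split into t - g <= 0 and g - t <= 0, which is why the offsets "involve inequality bounds or equality targets."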
The documentation for this class was generated from the following files:
• DOTOptimizer.hpp
• DOTOptimizer.cpp
A subclass of the JEGA front end driver that exposes the individual protected methods to execute the algorithm. This is necessary because DAKOTA requires that all problem information be extracted from the problem description DB at the time of Optimizer construction, while the front end does it all in the execute algorithm method, which must be called in find_optimum.
Parameters:
probConfig The definition of the problem to be solved by this Driver whenever ExecuteAlgorithm is called.
The problem can be solved in multiple ways by multiple algorithms even using multiple different evaluators by
issuing multiple calls to ExecuteAlgorithm with different AlgorithmConfigs.
Reads all required data from the problem description database stored in the supplied algorithm config. The
returned GA is fully configured and ready to be run. It must also be destroyed at some later time. You MUST call
DestroyAlgorithm for this purpose. Failure to do so could result in a memory leak and an eventual segmentation
fault! Be sure to call DestroyAlgorithm prior to destroying the algorithm config that was used to create it!
This is just here to expose the base class method to users.
Parameters:
algConfig The fully loaded configuration object containing the database of parameters for the algorithm to
be run on the known problem.
Returns:
The fully configured and loaded GA ready to be run using the PerformIterations method.
Referenced by JEGAOptimizer::find_optimum().
Performs the required iterations on the supplied GA. This includes the calls to AlgorithmInitialize and AlgorithmFinalize and logs some information if appropriate.
This is just here to expose the base class method to users.
Parameters:
theGA The GA on which to perform iterations. This parameter must be non-null.
Returns:
The final solutions reported by the supplied GA after all iterations and call to AlgorithmFinalize.
Referenced by JEGAOptimizer::find_optimum().
Deletes the supplied GA. Use this method to destroy a GA after all iterations have been run. This method knows
if the log associated with the GA was created here and needs to be destroyed as well or not.
This is just here to expose the base class method to users.
Be sure to use this prior to destroying the algorithm config object which contains the target. The GA destructor needs the target to be intact.
Parameters:
theGA The algorithm that is no longer needed and thus must be destroyed.
Referenced by JEGAOptimizer::find_optimum().
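The extract/iterate/destroy lifecycle warned about above lends itself to an RAII guard so that DestroyAlgorithm can never be skipped. The sketch below uses hypothetical mock stand-ins (MockGA, GAGuard, run_ga_once are illustrations, not JEGA types) to show the pairing:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-ins for JEGA's GeneticAlgorithm and the optimizer's
// Extract/Perform/Destroy trio; the names mirror the documentation above,
// but these types are mocks, not the real JEGA API.
struct MockGA {
    std::vector<double> solutions;
};

inline MockGA* ExtractAlgorithm() { return new MockGA; }
inline std::vector<double> PerformIterations(MockGA* ga) {
    ga->solutions = {1.0, 2.0};      // stands in for AlgorithmInitialize..AlgorithmFinalize
    return ga->solutions;
}
inline void DestroyAlgorithm(MockGA* ga) { delete ga; }  // MUST pair with ExtractAlgorithm

// RAII guard guaranteeing DestroyAlgorithm is called, preventing the memory
// leak / eventual segmentation fault the documentation warns about.
class GAGuard {
public:
    explicit GAGuard(MockGA* ga) : ga_(ga) {}
    ~GAGuard() { DestroyAlgorithm(ga_); }
    MockGA* get() const { return ga_; }
private:
    MockGA* ga_;
    GAGuard(const GAGuard&) = delete;
    GAGuard& operator=(const GAGuard&) = delete;
};

inline std::size_t run_ga_once() {
    GAGuard guard(ExtractAlgorithm());   // destroyed before any config teardown
    return PerformIterations(guard.get()).size();
}
```

The guard enforces the ordering constraint stated above: the GA is destroyed while the algorithm config (and its target) still exist.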
The documentation for this class was generated from the following file:
• JEGAOptimizer.cpp
Iterator
Minimizer
SurrBasedMinimizer
EffGlobalMinimizer
• ∼EffGlobalMinimizer ()
alternate constructor for instantiations "on the fly"
• void minimize_surrogates ()
Used for computing the optimal solution using a surrogate-based approach. Redefines the Iterator::run() virtual
function.
• void get_best_sample ()
called by minimize_surrogates for setUpType == "user_functions"
• void update_penalty ()
initialize and update the penaltyParameter
Private Attributes
• String setUpType
controls iteration mode: "model" (normal usage) or "user_functions" (user-supplied functions mode for "on the fly"
instantiations).
• Model fHatModel
GP model of response, one approximation per response function.
• Model eifModel
recast model which assimilates mean and variance to solve the max(EIF) sub-problem
• Real meritFnStar
minimum penalized response from among true function evaluations
• RealVector truthFnStar
true function values corresponding to the minimum penalized response
• RealVector varStar
point that corresponds to the optimal value meritFnStar
• short dataOrder
order of the data used for surrogate construction, in ActiveSet request vector 3-bit format; user may override
responses spec
Implementation of Efficient Global Optimization/Least Squares algorithms. The EffGlobalMinimizer class pro-
vides an implementation of the Efficient Global Optimization algorithm developed by Jones, Schonlau, & Welch
as well as adaptation of the concept to nonlinear least squares.
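The expected improvement criterion at the core of the Jones, Schonlau, & Welch algorithm can be written down directly. This is the standard textbook formula (fmin is the best observed value; mu and sigma are the GP posterior mean and standard deviation at a candidate point), not Dakota's implementation:

```cpp
#include <cmath>

// Standard normal pdf and cdf, used by the expected-improvement formula.
inline double normal_pdf(double z) {
    const double PI = 3.14159265358979323846;
    return std::exp(-0.5 * z * z) / std::sqrt(2.0 * PI);
}
inline double normal_cdf(double z) {
    return 0.5 * std::erfc(-z / std::sqrt(2.0));
}

// EI(x) = (fmin - mu) * Phi(z) + sigma * phi(z), with z = (fmin - mu) / sigma.
inline double expected_improvement(double fmin, double mu, double sigma) {
    if (sigma <= 0.0) return 0.0;   // no predictive uncertainty -> no expected improvement
    double z = (fmin - mu) / sigma;
    return (fmin - mu) * normal_cdf(z) + sigma * normal_pdf(z);
}
```

Maximizing this quantity over the surrogate is the max(EIF) sub-problem that eifModel (below) is constructed to solve.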
13.36.2.1 ∼EffGlobalMinimizer ()
called by minimize_surrogates for setUpType == "user_functions". Determines the best solution from among the
sample data for the expected improvement function.
References Model::approximation_data(), SurrBasedMinimizer::augmented_lagrangian_merit(),
Model::compute_response(), Model::continuous_variables(), Dakota::copy_data(), Model::current_response(),
EffGlobalMinimizer::fHatModel, Response::function_values(), Iterator::iteratedModel,
EffGlobalMinimizer::meritFnStar, Iterator::numFunctions, SurrBasedMinimizer::origNonlinEqTargets,
SurrBasedMinimizer::origNonlinIneqLowerBnds, SurrBasedMinimizer::origNonlinIneqUpperBnds,
Model::primary_response_fn_sense(), Model::primary_response_fn_weights(), EffGlobalMinimizer::truthFnStar,
and EffGlobalMinimizer::varStar.
Referenced by EffGlobalMinimizer::minimize_surrogates_on_model().
The documentation for this class was generated from the following files:
• EffGlobalMinimizer.hpp
• EffGlobalMinimizer.cpp
Iterator
Analyzer
NonD
EfficientSubspaceMethod
• ∼EfficientSubspaceMethod ()
Destructor.
• void quantify_uncertainty ()
ESM re-implementation of the virtual UQ iterator function.
• void init_fullspace_sampler ()
initialize the native problem space Monte Carlo sampler
• void print_svd_stats ()
print inner iteration stats after SVD
• void reduced_space_uq ()
experimental method to demonstrate creating a RecastModel and performing sampling-based UQ in the reduced space
Private Attributes
• int initialSamples
initial number of samples at which to query the truth model
• int batchSize
number of points to add at each iteration
• int subspaceSamples
number of UQ samples to perform in the reduced space
• double userSVTol
user-specified tolerance on singular value ratio
• double nullspaceTol
user-specified tolerance on nullspace
• double svRatio
current singular value ratio (sigma_k/sigma_0)
• RealMatrix reducedBasis
basis for the reduced subspace
• RealMatrix derivativeMatrix
matrix of derivative data with numFunctions columns per fullspace sample; each column contains the gradient of
one function at one sample point, so the total matrix size is numContinuousVars x (numFunctions * numSamples):
[ D1 | D2 | ... | Dnum_samples ] = [ dy1/dx(k=1) | dy2/dx(k=1) | ... | dyM/dx(k=1) | k=2 | ... | k=n_s ]
• RealMatrix varsMatrix
matrix of fullspace variable point samples; size numContinuousVars x (numSamples)
• Iterator fullSpaceSampler
Monte Carlo sampler for the full parameter space.
Efficient Subspace Method (ESM), as proposed by Hany S. Abdel-Khalik. ESM uses random sampling to con-
struct a low-dimensional subspace of the full dimensional parameter space, then performs UQ in the reduced
space.
Determine if the reduced basis yields acceptable reconstruction error, based on sampling in the orthogonal com-
plement of the reduced basis. This function is experimental and needs to be carefully reviewed and cleaned up.
References Iterator::activeSet, Model::aleatory_distribution_parameters(), Model::compute_response(),
Model::continuous_variables(), Model::current_response(), Response::function_values(), Iterator::iteratedModel,
Iterator::maxFunctionEvals, EfficientSubspaceMethod::nullspaceTol, Iterator::numContinuousVars,
Iterator::numFunctions, Iterator::outputLevel, EfficientSubspaceMethod::reducedBasis,
EfficientSubspaceMethod::reducedRank, ActiveSet::request_values(), EfficientSubspaceMethod::totalEvals,
EfficientSubspaceMethod::totalSamples, and EfficientSubspaceMethod::varsMatrix.
Referenced by EfficientSubspaceMethod::quantify_uncertainty().
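The rank-selection rule implied by svRatio and userSVTol above can be sketched as the following hypothetical helper, which picks the reduced rank from a decreasingly sorted singular value spectrum of the derivative matrix (an illustration, not Dakota's code):

```cpp
#include <cstddef>
#include <vector>

// Keep singular values while sigma_k / sigma_0 stays at or above the
// user tolerance; the count kept is the dimension of the reduced subspace.
// Input is assumed sorted in decreasing order, as returned by an SVD.
inline std::size_t reduced_rank(const std::vector<double>& sv, double sv_tol) {
    if (sv.empty() || sv[0] <= 0.0) return 0;
    std::size_t rank = 0;
    for (double s : sv) {
        if (s / sv[0] < sv_tol) break;   // current singular value ratio too small
        ++rank;
    }
    return rank;
}
```

A sharp drop in the spectrum (a large gap in sigma_k/sigma_0) is what signals that a low-dimensional subspace captures most of the response variation.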
experimental method to demonstrate creating a RecastModel and performing sampling-based UQ in the reduced
space. This function is experimental and needs to be reviewed and cleaned up. In particular, the translation of the
correlations from full to reduced space is likely wrong. The transformation may be correct for covariance, but
likely not for correlations.
References Model::assign_rep(), NonD::construct_lhs(), Model::init_communicators(), Iterator::iteratedModel,
EfficientSubspaceMethod::map_xi_to_x(), Iterator::numContinuousVars, Iterator::numFunctions,
Iterator::print_results(), EfficientSubspaceMethod::reducedRank, Iterator::run_iterator(),
Iterator::sampling_reset(), Iterator::sub_iterator_flag(), EfficientSubspaceMethod::subspaceSamples,
EfficientSubspaceMethod::uncertain_vars_to_subspace(), and Analyzer::vary_pattern().
Referenced by EfficientSubspaceMethod::quantify_uncertainty().
translate the characterization of uncertain variables in the native_model to the reduced space of the transformed
model; transform and set the distribution parameters in the reduced model
Convert the user-specified normal random variables to the appropriate reduced space variables, based on the
orthogonal transformation.
TODO: Generalize to convert other random variable types
References Dakota::abort_handler(), Model::aleatory_distribution_parameters(), Iterator::numContinuousVars,
Iterator::outputLevel, EfficientSubspaceMethod::reducedBasis, and EfficientSubspaceMethod::reducedRank.
Referenced by EfficientSubspaceMethod::reduced_space_uq().
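The orthogonal reduced-space transformation of normal random variables described above follows the standard rule: if x ~ N(mu, Sigma) and xi = W^T x with W the (n x r) reduced basis, then xi ~ N(W^T mu, W^T Sigma W). The helpers below are an illustrative transcription of that rule, not Dakota's code:

```cpp
#include <cstddef>
#include <vector>

using Mat = std::vector<std::vector<double>>;  // row-major dense matrix
using Vec = std::vector<double>;

// Reduced-space mean: (W^T mu)_j = sum_i W(i,j) * mu(i).
inline Vec reduced_mean(const Mat& W, const Vec& mu) {
    if (W.empty()) return Vec();
    std::size_t n = W.size(), r = W[0].size();
    Vec out(r, 0.0);
    for (std::size_t j = 0; j < r; ++j)
        for (std::size_t i = 0; i < n; ++i)
            out[j] += W[i][j] * mu[i];
    return out;
}

// Reduced-space covariance: (W^T Sigma W)_ab.
inline Mat reduced_covariance(const Mat& W, const Mat& Sigma) {
    if (W.empty()) return Mat();
    std::size_t n = W.size(), r = W[0].size();
    Mat out(r, Vec(r, 0.0));
    for (std::size_t a = 0; a < r; ++a)
        for (std::size_t b = 0; b < r; ++b)
            for (std::size_t i = 0; i < n; ++i)
                for (std::size_t k = 0; k < n; ++k)
                    out[a][b] += W[i][a] * Sigma[i][k] * W[k][b];
    return out;
}
```

As the TODO above notes, this congruence transformation is exact for the covariance of jointly normal variables; other random variable types would need a more general treatment.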
13.37.2.4 void map_xi_to_x (const Variables & recast_xi_vars, Variables & sub_model_x_vars)
[static, private]
map the active continuous recast variables to the active submodel variables (linear transformation). Performs the
variables mapping from the recast reduced-dimension variables xi to the original model x variables via linear
transformation.
The documentation for this class was generated from the following files:
• EfficientSubspaceMethod.hpp
• EfficientSubspaceMethod.cpp
Strategy
HybridStrategy
EmbeddedHybridStrategy
• ∼EmbeddedHybridStrategy ()
destructor
Private Attributes
• Real localSearchProb
the probability of running a local search refinement within phases of the global minimization for coupled hybrids
Strategy for closely-coupled hybrid minimization, typically involving the embedding of local search methods
within global search methods. This strategy uses multiple methods in close coordination, generally using a local
search minimizer repeatedly within a global minimizer (the local search minimizer refines candidate minima
which are fed back to the global minimizer).
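The embedded hybrid idea (running a local refinement inside a global search with probability localSearchProb) can be sketched as follows. The objective and the crude pattern-search refinement are hypothetical stand-ins for the real global and local minimizers:

```cpp
#include <functional>
#include <random>

// Global random search over [lo, hi] that, with probability local_search_prob,
// refines each candidate with a simple pattern search before feeding it back
// as the incumbent. Purely illustrative of the coupling, not Dakota's strategy.
inline double embedded_hybrid_minimize(const std::function<double(double)>& f,
                                       double lo, double hi,
                                       double local_search_prob,
                                       int iters, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> sample(lo, hi);
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    double best_x = sample(rng);
    for (int i = 0; i < iters; ++i) {
        double x = sample(rng);                      // global exploration step
        if (coin(rng) < local_search_prob) {         // embedded local refinement
            for (double step = (hi - lo) / 4.0; step > 1e-9; step *= 0.5) {
                while (f(x + step) < f(x)) x += step;   // simple pattern search
                while (f(x - step) < f(x)) x -= step;
            }
        }
        if (f(x) < f(best_x)) best_x = x;            // refined candidate fed back
    }
    return best_x;
}
```

Setting local_search_prob near 0 recovers a pure global search; near 1, every global candidate is polished locally, which is the close coordination the strategy description refers to.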
The documentation for this class was generated from the following files:
• EmbeddedHybridStrategy.hpp
• EmbeddedHybridStrategy.cpp
• ∼Evaluator (void)
Destructor.
• bool eval_x (NOMAD::Eval_Point &x, const NOMAD::Double &h_max, bool &count_eval) const
Main Evaluation Method.
Private Attributes
• Model & _model
• int n_cont
• int n_disc_int
• int n_disc_real
• int numNomadNonlinearIneqConstr
• int numNomadNonlinearEqConstr
• std::vector< int > constrMapIndices
map from Dakota constraint number to APPS constraint number
NOMAD-based Evaluator class. The NOMAD process requires an evaluation step, which calls the Simulation
program. In the simplest version of this call, NOMAD executes the black box executable, which proceeds to write
a file in a NOMAD-compatible format, which NOMAD reads to continue the process.
Because DAKOTA files are different from NOMAD files, and the simulations processed by DAKOTA already
produce DAKOTA-compatible files, we cannot use this method for NOMAD. Instead, we implement the
NomadEvaluator class, which takes the NOMAD inputs and passes them to DAKOTA’s Interface for process-
ing. The evaluator then passes the evaluation Responses into the NOMAD objects for further analysis.
Parameters:
p NOMAD Parameters object
model DAKOTA Model object
13.39.3.1 bool eval_x (NOMAD::Eval_Point & x, const NOMAD::Double & h_max, bool & count_eval)
const
Main Evaluation Method. Method that handles the communication between the NOMAD search process and the
Black Box Evaluation managed by DAKOTA’s Interface.
Parameters:
x Object that contains the points that need to be evaluated. Once the evaluation is completed, this object also
stores the output back to be read by NOMAD.
h_max Current value of the barrier parameter. Not used in this implementation.
count_eval Flag that indicates whether this evaluation counts towards the max number of evaluations, often
set to false when the evaluation does not meet certain costs during expensive evaluations. Not used
in this implementation.
Returns:
true if the evaluation was successful; false otherwise.
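The adapter role played by eval_x can be sketched with mock types. MockEvalPoint and eval_point below are hypothetical, standing in for NOMAD::Eval_Point and the pass-through to DAKOTA's Interface:

```cpp
#include <functional>
#include <vector>

// Hypothetical stand-in for NOMAD::Eval_Point: inputs proposed by the search,
// outputs written back for the optimizer to read.
struct MockEvalPoint {
    std::vector<double> x;        // point proposed by the search
    std::vector<double> outputs;  // objective followed by constraint values
};

// Take the proposed point, run it through a black-box evaluation function
// (DAKOTA's Interface in the real code), copy the responses back, and flag
// the evaluation as counting toward the budget.
inline bool eval_point(MockEvalPoint& pt,
                       const std::function<std::vector<double>(const std::vector<double>&)>& simulate,
                       bool& count_eval) {
    pt.outputs = simulate(pt.x);
    count_eval = true;            // this evaluation counts toward max evaluations
    return !pt.outputs.empty();   // success if the simulation produced responses
}
```

The real eval_x follows the same contract: read x, evaluate through the Interface, write responses into the NOMAD objects, and return success or failure.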
• NomadOptimizer.hpp
• NomadOptimizer.cpp
Private Attributes
An evaluator specialization that knows how to interact with Dakota. This evaluator knows how to use the model
to do evaluations both in synchronous and asynchronous modes.
Constructs an Evaluator for use by algorithm. The optimizer is needed for purposes of variable scaling.
Parameters:
algorithm The GA for which the new evaluator is to be used.
model The model through which evaluations will be done.
Referenced by Evaluator::Clone().
Parameters:
copy The evaluator from which properties are to be duplicated into this.
13.40.2.3 Evaluator (const Evaluator & copy, GeneticAlgorithm & algorithm, Model & model)
[inline]
Copy constructs an Evaluator for use by algorithm. The optimizer is needed for purposes of variable scaling.
Parameters:
copy The existing Evaluator from which to retrieve properties.
algorithm The GA for which the new evaluator is to be used.
model The model through which evaluations will be done.
This constructor has no implementation and can never be used. It is provided only so that this operator can still
be registered in an operator registry even though it can never be instantiated from there.
Parameters:
algorithm The GA for which the new evaluator is to be used.
Returns:
The string "DAKOTA JEGA Evaluator".
Referenced by Evaluator::GetName().
Returns a full description of what this operator does and how. The returned text is:
Returns:
A description of the operation of this operator.
Referenced by Evaluator::GetDescription().
13.40.3.3 void SeparateVariables (const Design & from, RealVector & intoCont, IntVector & intoDiscInt,
RealVector & intoDiscReal) const [protected]
This method fills intoCont, intoDiscInt and intoDiscReal appropriately using the values of from. The discrete
integer design variable values are placed in intoDiscInt, the discrete real design variable values are placed in
intoDiscReal, and the continuous values are placed into intoCont. The values are written into the vectors from the
beginning so any previous contents of the vectors will be overwritten.
Parameters:
from The Design class object from which to extract the discrete design variable values.
intoDiscInt The vector into which to place the extracted discrete integer values.
intoDiscReal The vector into which to place the extracted discrete real values.
intoCont The vector into which to place the extracted continuous values.
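The splitting performed by SeparateVariables can be sketched as below, assuming for illustration a flat [continuous | discrete-int | discrete-real] layout. This flat layout is a hypothetical simplification; JEGA's Design class stores the values differently:

```cpp
#include <cstddef>
#include <vector>

// Split a flat design vector into typed continuous, discrete-integer, and
// discrete-real vectors, overwriting any previous contents (as the
// documentation above specifies).
inline void separate_variables(const std::vector<double>& from,
                               std::size_t num_cont, std::size_t num_disc_int,
                               std::vector<double>& into_cont,
                               std::vector<int>& into_disc_int,
                               std::vector<double>& into_disc_real) {
    into_cont.assign(from.begin(), from.begin() + num_cont);
    into_disc_int.clear();
    for (std::size_t i = num_cont; i < num_cont + num_disc_int; ++i)
        into_disc_int.push_back(static_cast<int>(from[i]));   // discrete ints stored as doubles
    into_disc_real.assign(from.begin() + num_cont + num_disc_int, from.end());
}
```

RecordResponses is the inverse direction of the same bridging: objective values followed by nonlinear constraint values are copied from the Dakota response vector back into the Design.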
13.40.3.4 void RecordResponses (const RealVector & from, Design & into) const [protected]
Records the computed objective and constraint function values into into. This method takes the response values
stored in from and properly transfers them into the into design.
The response vector from is expected to contain values for each objective function followed by values for each
non-linear constraint in the order in which the info objects were loaded into the target by the optimizer class.
Parameters:
from The vector of responses to install into into.
into The Design to which the responses belong and into which they must be written.
References Evaluator::GetNumberNonLinearConstraints().
Referenced by Evaluator::Evaluate().
Returns the number of non-linear constraints for the problem. This is computed by adding the number of non-
linear equality constraints to the number of non-linear inequality constraints. These values are obtained from the
model.
Returns:
The total number of non-linear constraints.
Returns the number of linear constraints for the problem. This is computed by adding the number of linear equality
constraints to the number of linear inequality constraints. These values are obtained from the model.
Returns:
The total number of linear constraints.
Does evaluation of each design in group. This method uses the Model known by this class to get Designs eval-
uated. It properly formats the Design class information in a way that Dakota will understand and then interprets
the Dakota results and puts them back into the Design class object. It respects the asynchronous flag in the Model
so evaluations may occur synchronously or asynchronously.
Prior to evaluating a Design, this class checks to see if it is marked as already evaluated. If it is, then the evaluation
of that Design is not carried out. This is not strictly necessary because Dakota keeps track of evaluated designs
and does not re-evaluate. An exception is the case of a population read in from a file complete with responses
where Dakota is unaware of the evaluations.
Parameters:
group The group of Design class objects to be evaluated.
Returns:
true if all evaluations completed and false otherwise.
This method cannot be used!! This method does nothing and cannot be called. This is because in the case of
asynchronous evaluation, this method would be unable to conform. It would require that each evaluation be done
in a synchronous fashion.
Parameters:
des A Design that would be evaluated if this method worked.
Returns:
Would return true if the Design were evaluated and false otherwise. In practice this method never returns: it
issues a fatal error.
References Evaluator::GetName().
Returns:
See Name().
References Evaluator::Name().
Referenced by Evaluator::Evaluate().
Returns:
See Description().
References Evaluator::Description().
Parameters:
algorithm The GA for which the clone is being created.
Returns:
A clone of this operator.
The Model known by this evaluator. It is through this model that evaluations will take place.
Referenced by Evaluator::Clone(), Evaluator::Evaluate(), Evaluator::GetNumberLinearConstraints(),
Evaluator::GetNumberNonLinearConstraints(), and Evaluator::SeparateVariables().
The documentation for this class was generated from the following file:
• JEGAOptimizer.cpp
Private Attributes
• Model & _theModel
The user defined model to be passed to the constructor of the Evaluator.
Parameters:
theModel The Dakota::Model this creator will pass to the created evaluator.
Overridden to return a newly created Evaluator. The GA will assume ownership of the evaluator so we needn’t
worry about keeping track of it for destruction. The additional parameters needed by the Evaluator are stored as
members of this class at construction time.
Parameters:
alg The GA for which the evaluator is to be created.
Returns:
A pointer to a newly created Evaluator.
References EvaluatorCreator::_theModel.
The documentation for this class was generated from the following file:
• JEGAOptimizer.cpp
Public Attributes
• std::vector< ExpDataPerResponse > allExperiments
At the outer level, ExperimentData will just be a vector of ExpDataPerResponse.
The ExperimentData class is used to read and populate data (currently from user-specified files and/or the input
spec) relating to experimental (physical observations) data for the purposes of calibration. Such data may include
(for example): number of experiments, number of replicates, configuration variables, type of data (scalar vs.
functional), treatment of sigma (experimental uncertainties). This class also provides an interpolation capability to
interpolate between simulation or experimental data so that the differencing between simulation and experimental
data may be performed properly.
13.42.2.1 void load_scalar (const std::string & expDataFilename, const std::string & context_message,
size_t numExperiments, IntVector & numReplicates, size_t numExpConfigVars, size_t
numFunctions, size_t numExpStdDeviationsRead, bool expDataFileAnnotated, bool
calc_sigma_from_data, short verbosity)
• ExperimentData.hpp
• ExperimentData.cpp
Interface
ApplicationInterface
ProcessApplicInterface
ProcessHandleApplicInterface
ForkApplicInterface
• ∼ForkApplicInterface ()
destructor
• size_t wait_local_analyses ()
wait for asynchronous analyses on the local processor, completing at least one job
test for asynchronous analysis completions on the local processor and return results for any completions by sending
messages
Private Attributes
• pid_t evalProcGroupId
the process group id used to identify a set of child evaluation processes used by this interface instance (to distinguish
from other interface instances that could be running at the same time)
• pid_t analysisProcGroupId
the process group id used to identify a set of child analysis processes used by this interface instance (to distinguish
from other interface instances that could be running at the same time)
Derived application interface class which spawns simulation codes using fork/execvp/waitpid. ForkApplicInter-
face is used on Unix systems and is a peer to SpawnApplicInterface for Windows systems.
The documentation for this class was generated from the following files:
• ForkApplicInterface.hpp
• ForkApplicInterface.cpp
Iterator
Analyzer
PStudyDACE
FSUDesignCompExp
• FSUDesignCompExp (Model &model, int samples, int seed, const String &sampling_method)
alternate constructor for building a DACE iterator on-the-fly
• ∼FSUDesignCompExp ()
destructor
• void pre_run ()
pre-run portion of run_iterator (optional); re-implemented by Iterators which can generate all Variables (parameter
sets) a priori
• void extract_trends ()
Redefines the run_iterator virtual function for the PStudy/DACE branch.
• void post_input ()
read tabular data for post-run mode
Private Attributes
• int samplesSpec
initial specification of number of samples
• int numSamples
current number of samples to be evaluated
• bool allDataFlag
flag which triggers the update of allVars/allResponses for use by Iterator::all_variables() and
Iterator::all_responses()
• size_t numDACERuns
counter for number of run() executions for this object
• bool latinizeFlag
flag which specifies latinization of QMC or CVT sample sets
• IntVector sequenceStart
Integer vector defining a starting index into the sequence for each random variable sampled. Default is 0 0 0 (e.g.,
for three random variables).
• IntVector sequenceLeap
Integer vector defining the leap number for each sequence being generated. Default is 1 1 1 (e.g. for three random
vars.).
• IntVector primeBase
Integer vector defining the prime base for each sequence being generated. Default is 2 3 5 (e.g., for three random
vars.).
• int seedSpec
the user seed specification for the random number generator (allows repeatable results)
• int randomSeed
current seed for the random number generator
• bool varyPattern
flag for continuing the random number or QMC sequence from a previous run() execution (e.g., for surrogate-based
optimization) so that multiple executions are repeatable but not identical.
• int numCVTTrials
specifies the number of sample points taken at internal CVT iteration
• int trialType
Trial type in CVT. Specifies where the points are placed for consideration relative to the centroids. Choices are grid
(2), halton (1), uniform (0), or random (-1). Default is random.
Wrapper class for the FSUDace QMC/CVT library. The FSUDesignCompExp class provides a wrapper for
FSUDace, a C++ design of experiments library from Florida State University. This class uses quasi Monte Carlo
(QMC) and Centroidal Voronoi Tessellation (CVT) methods to uniformly sample the parameter space spanned by
the active bounds of the current Model. It returns all generated samples and their corresponding responses as well
as the best sample found.
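The leaped sequences that sequenceStart, sequenceLeap, and primeBase parameterize can be sketched with a van der Corput/Halton generator. This is the textbook construction; FSUDace's generator differs in details such as latinization:

```cpp
#include <cstddef>
#include <vector>

// Radical-inverse (van der Corput) value of an index in a given prime base:
// the digits of the index are mirrored across the radix point.
inline double van_der_corput(unsigned index, unsigned base) {
    double result = 0.0, f = 1.0 / base;
    for (unsigned i = index; i > 0; i /= base) {
        result += f * (i % base);
        f /= base;
    }
    return result;
}

// One Halton dimension: element k uses sequence index start + k*leap in the
// given prime base (defaults in the documentation above: start 0, leap 1,
// bases 2, 3, 5, ... with one base per random variable).
inline std::vector<double> leaped_halton(unsigned start, unsigned leap,
                                         unsigned base, std::size_t n) {
    std::vector<double> seq(n);
    for (std::size_t k = 0; k < n; ++k)
        seq[k] = van_der_corput(start + static_cast<unsigned>(k) * leap, base);
    return seq;
}
```

A multidimensional QMC sample is formed by pairing one such sequence per variable, each with its own prime base, which is why primeBase holds one prime per random variable.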
primary constructor for building a standard DACE iterator. This constructor is called for a standard iterator built
with data from probDescDB.
References Dakota::abort_handler(), ProblemDescDB::get_bool(), ProblemDescDB::get_int(),
ProblemDescDB::get_iv(), ProblemDescDB::get_string(), Iterator::maxConcurrency, Iterator::methodName,
Iterator::numContinuousVars, FSUDesignCompExp::numCVTTrials, FSUDesignCompExp::numSamples,
FSUDesignCompExp::primeBase, Iterator::probDescDB, FSUDesignCompExp::randomSeed,
FSUDesignCompExp::seedSpec, FSUDesignCompExp::sequenceLeap, FSUDesignCompExp::sequenceStart,
FSUDesignCompExp::trialType, and FSUDesignCompExp::varyPattern.
13.44.2.2 FSUDesignCompExp (Model & model, int samples, int seed, const String & sampling_method)
alternate constructor for building a DACE iterator on-the-fly. This alternate constructor is used for instantiations
on-the-fly, using only the incoming data. No problem description database queries are used.
pre-run portion of run_iterator (optional); re-implemented by Iterators which can generate all Variables (parameter
sets) a priori. Pre-run phase, which a derived iterator may optionally reimplement; when not present, pre-run is
likely integrated into the derived run function. This is a virtual function; when re-implementing, a derived class
must call its nearest parent’s pre_run(), if implemented, typically _before_ performing its own implementation
steps.
Reimplemented from Iterator.
References FSUDesignCompExp::get_parameter_sets(), Iterator::iteratedModel, and
PStudyDACE::varBasedDecompFlag.
post-run portion of run_iterator (optional); verbose to print results; re-implemented by Iterators that can read all
Variables/Responses and perform the final analysis phase in a standalone way. Post-run phase, which a derived
iterator may optionally reimplement; when not present, post-run is likely integrated into run. This is a virtual
function; when re-implementing, a derived class must call its nearest parent’s post_run(), typically _after_
performing its own implementation steps.
Reimplemented from Iterator.
References Analyzer::allResponses, Analyzer::allSamples, SensAnalysisGlobal::compute_correlations(),
PStudyDACE::pStudyDACESensGlobal, Iterator::subIteratorFlag, and PStudyDACE::varBasedDecompFlag.
get the current number of samples. Return the current number of evaluation points. Since the calculation of
samples, collocation points, etc. might be costly, provide a default implementation here that backs out from the
maxConcurrency. May be (is) overridden by derived classes.
Reimplemented from Iterator.
References FSUDesignCompExp::numSamples.
enforce sanity checks/modifications for the user input specification. Users may input a variety of quantities, but
this function must enforce any restrictions imposed by the sampling algorithms.
References Dakota::abort_handler(), Iterator::methodName, Iterator::numContinuousVars,
FSUDesignCompExp::numSamples, and FSUDesignCompExp::primeBase.
Referenced by FSUDesignCompExp::get_parameter_sets().
The documentation for this class was generated from the following files:
• FSUDesignCompExp.hpp
• FSUDesignCompExp.cpp
Approximation
GaussProcApproximation
• ∼GaussProcApproximation ()
destructor
• void build ()
find the covariance parameters governing the Gaussian process response
• void normalize_training_data ()
Normalizes the initial inputs upon which the GP surface is based.
• void get_trend ()
Gets the trend (basis) functions for the calculation of the mean of the GP. If the order = 0, the trend is constant;
if the order = 1, the trend is linear; if the order = 2, the trend is quadratic.
• void get_beta_coefficients ()
Gets the beta coefficients for the calculation of the mean of the GP.
• int get_cholesky_factor ()
Gets the Cholesky factorization of the covariance matrix, with error checking.
• void get_process_variance ()
Gets the estimate of the process variance given the values of beta and the correlation lengthscales.
• void get_cov_matrix ()
calculates the covariance matrix for a given set of input points
• void get_cov_vector ()
calculates the covariance vector between a new point x and the set of inputs upon which the GP is based
• void optimize_theta_global ()
sets up and performs the optimization of the negative log likelihood to determine the optimal values of the covari-
ance parameters using NCSUDirect
• void optimize_theta_multipoint ()
sets up and performs the optimization of the negative log likelihood to determine the optimal values of the covari-
ance parameters using a gradient-based solver and multiple starting points
• Real calc_nll ()
calculates the negative log likelihood function (based on covariance matrix)
• void calc_grad_nll ()
Gets the gradient of the negative log likelihood function with respect to the correlation lengthscales, theta.
• void get_grad_cov_vector ()
Calculate the derivatives of the covariance vector, with respect to each component of x.
• void run_point_selection ()
Runs the point selection algorithm, which will choose a subset of the training set with which to construct the GP
model, and estimate the necessary parameters.
• void initialize_point_selection ()
Initializes the point selection routine by choosing a small initial subset of the training points.
• void pointsel_write_points ()
Writes out the training set before and after point selection.
• void lhood_2d_grid_eval ()
For problems with 2D input, evaluates the negative log likelihood on a grid.
• static void constraint_eval (int mode, int n, const Teuchos::SerialDenseVector< int, double > &X,
Teuchos::SerialDenseVector< int, double > &g, Teuchos::SerialDenseMatrix< int, double > &gradC, int
&result_mode)
static function used by OPT++ as the constraint function in the optimization of the negative log likelihood. Cur-
rently this function is empty: it is an unconstrained optimization.
Private Attributes
• Real approxValue
value of the approximation returned by value()
• Real approxVariance
value of the approximation returned by prediction_variance()
• RealMatrix trainPoints
A 2-D array (num sample sites = rows, num vars = columns) used to create the Gaussian process.
• RealMatrix trainValues
An array of response values; one response value per sample site.
• RealVector trainMeans
The mean of the input columns of trainPoints.
• RealVector trainStdvs
The standard deviation of the input columns of trainPoints.
• RealMatrix normTrainPoints
Current working set of normalized points upon which the GP is based.
• RealMatrix trendFunction
matrix to hold the trend function
• RealMatrix betaCoeffs
matrix to hold the beta coefficients for the trend function
• RealSymMatrix covMatrix
The covariance matrix where each element (i,j) is the covariance between points Xi and Xj in the initial set of
samples.
• RealMatrix covVector
The covariance vector where each element (j,0) is the covariance between a new point X and point Xj from the
initial set of samples.
• RealMatrix approxPoint
Point at which a prediction is requested. This is currently a single point, but it could be generalized to be a vector
of points.
• RealMatrix gradNegLogLikTheta
matrix to hold the gradient of the negative log likelihood with respect to the theta correlation terms
• RealMatrix gradCovVector
A matrix, where each column is the derivative of the covVector with respect to a particular component of X.
• RealMatrix normTrainPointsAll
Set of all original samples available.
• RealMatrix trainValuesAll
All original samples available.
• RealMatrix trendFunctionAll
Trend function values corresponding to all original samples.
• RealMatrix Rinv_YFb
Matrix for storing inverse of correlation matrix Rinv∗(Y-FB).
• size_t numObs
The number of observations on which the GP surface is built.
• size_t numObsAll
The original number of observations.
• short trendOrder
The order of the trend (basis) function for the mean of the GP (0 = constant, 1 = linear, 2 = quadratic).
• RealVector thetaParams
Theta is the vector of covariance parameters for the GP. We determine the values of theta by optimization.
Currently, the covariance function is theta[0]*exp(-0.5*sume)+delta*pow(sige,2). sume is the weighted sum of
squared distances; it involves a sum of theta[1]*(Xi(1)-Xj(1))^2 + theta[2]*(Xi(2)-Xj(2))^2 + ... where Xi(1) is
the first dimension value of the multi-dimensional variable Xi. delta*pow(sige,2) is a jitter term used to improve
matrix computations. delta is zero for the covariance between different points and 1 for the covariance between
the same point. sige is the underlying process error.
• Real procVar
The process variance, the multiplier of the correlation matrix.
• IntArray pointsAddedIndex
Used by the point selection algorithm, this vector keeps track of all points which have been added.
• int cholFlag
A global indicator for success of the Cholesky factorization.
• bool usePointSelection
a flag to indicate the use of point selection
Derived approximation class for Gaussian Process implementation. The GaussProcApproximation class provides
a global approximation (surrogate) based on a Gaussian process. The Gaussian process is built after normalizing
the function values, with zero mean. Opt++ is used to determine the optimal values of the covariance parameters,
those which minimize the negative log likelihood function.
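The covariance model documented for thetaParams above can be transcribed directly. The sketch below is a literal reading of that formula (theta[0] amplitude, theta[1..n] per-dimension weights, delta/sige jitter), not Dakota's implementation:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// cov(Xi, Xj) = theta[0]*exp(-0.5*sume) + delta*sige^2, where sume is the
// theta-weighted sum of squared coordinate differences, and delta is 1 only
// for the covariance of a point with itself (the jitter term).
inline double gp_covariance(const std::vector<double>& xi,
                            const std::vector<double>& xj,
                            const std::vector<double>& theta,  // theta[0], theta[1..n]
                            double sige, bool same_point) {
    double sume = 0.0;
    for (std::size_t d = 0; d < xi.size(); ++d)
        sume += theta[d + 1] * (xi[d] - xj[d]) * (xi[d] - xj[d]);
    double delta = same_point ? 1.0 : 0.0;
    return theta[0] * std::exp(-0.5 * sume) + delta * sige * sige;
}
```

Evaluating this kernel over all sample pairs yields covMatrix; evaluating it between a new point and the samples yields covVector. The jitter on the diagonal is what keeps the Cholesky factorization in get_cholesky_factor() well conditioned.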
default constructor. Alternate constructor used by EffGlobalOptimization and NonDGlobalReliability that does
not use a problem database; defaults here are no point selection and a quadratic trend function.
13.45.3.1 void GPmodel_apply (const RealVector & new_x, bool variance_flag, bool gradients_flag)
[private]
Function returns a response value using the GP surface. The response value is computed at the design point
specified by the RealVector function argument.
References Dakota::abort_handler(), GaussProcApproximation::approxPoint,
GaussProcApproximation::get_cov_vector(), Approximation::numVars, GaussProcApproximation::predict(),
GaussProcApproximation::trainMeans, and GaussProcApproximation::trainStdvs.
Referenced by GaussProcApproximation::gradient(), GaussProcApproximation::pointsel_get_errors(),
GaussProcApproximation::prediction_variance(), and GaussProcApproximation::value().
The order of the basis function for the mean of the GP. If the order = 0, the trend is constant; if the order = 1, the
trend is linear; if the order = 2, the trend is quadratic.
Referenced by GaussProcApproximation::GaussProcApproximation(),
GaussProcApproximation::get_beta_coefficients(), GaussProcApproximation::get_trend(),
GaussProcApproximation::GPmodel_build(), and GaussProcApproximation::predict().
The documentation for this class was generated from the following files:
• GaussProcApproximation.hpp
• GaussProcApproximation.cpp
Inheritance diagram for GetLongOpt::
GetLongOpt
CommandLineHandler
Public Types
• enum OptType { Valueless, OptionalValue, MandatoryValue }
enum for different types of values associated with command line options.
• ∼GetLongOpt ()
Destructor.
• int enroll (const char ∗const opt, const OptType t, const char ∗const desc, const char ∗const val)
Add an option to the list of valid command options.
• int setcell (Cell ∗c, char ∗valtoken, char ∗nexttoken, const char ∗p)
internal convenience function for setting Cell::value
Private Attributes
• Cell ∗ table
option table
• char ∗ pname
program basename
• char optmarker
option marker
• int enroll_done
finished enrolling
• Cell ∗ last
last entry in option table
GetLongOpt is a general command line utility from S. Manoharan (Advanced Computer Research Institute, Lyon, France). GetLongOpt manages the definition and parsing of "long options." Command line options can be abbreviated as long as there is no ambiguity. If an option requires a value, the value should be separated from the option either by whitespace or an "=".
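The abbreviation rule just described can be sketched as a prefix match against the enrolled option names. This is not GetLongOpt's actual code; here an ambiguous abbreviation is rejected, as parse() requires, whereas retrieve() (documented below) instead resolves ambiguity by taking the last enrolled match.

```cpp
#include <string>
#include <vector>

// Sketch of long-option matching: a name given on the command line may
// be abbreviated as long as it is a prefix of exactly one enrolled
// option.  Returns the index of the unique match, or -1 on no match or
// an ambiguous abbreviation.  An exact match always wins.
int match_option(const std::vector<std::string>& enrolled,
                 const std::string& given)
{
    int found = -1;
    for (std::size_t i = 0; i < enrolled.size(); ++i) {
        if (enrolled[i].compare(0, given.size(), given) == 0) {
            if (enrolled[i] == given) return static_cast<int>(i); // exact
            if (found >= 0) return -1;  // ambiguous abbreviation
            found = static_cast<int>(i);
        }
    }
    return found;
}
```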
enum for different types of values associated with command line options.
Enumerator:
Valueless option that may never have a value
Constructor. Constructor for GetLongOpt takes an optional argument: the option marker. If unspecified, this defaults to '-', the standard Unix option marker.
References GetLongOpt::enroll_done, GetLongOpt::last, GetLongOpt::optmarker, GetLongOpt::table, and GetLongOpt::ustring.
parse the command line args (argc, argv). A return value < 1 represents a parse error. Appropriate error messages are printed when errors are seen. On success, parse returns the optind (see getopt(3)).
References GetLongOpt::basename(), GetLongOpt::enroll_done, GetLongOpt::optmarker, GetLongOpt::pname,
GetLongOpt::setcell(), and GetLongOpt::table.
Referenced by CommandLineHandler::check_usage().
parse a string of options (typically given from the environment). A return value < 1 represents a parse error.
Appropriate error messages are printed when errors are seen. parse takes two strings: the first one is the string to
be parsed and the second one is a string to be prefixed to the parse errors.
References GetLongOpt::enroll_done, GetLongOpt::optmarker, GetLongOpt::setcell(), and GetLongOpt::table.
13.46.4.3 int enroll (const char ∗const opt, const OptType t, const char ∗const desc, const char ∗const
val)
Add an option to the list of valid command options. enroll adds option specifications to its internal database. The first argument is the option string. The second is an enum saying whether the option is a flag (Valueless), requires a mandatory value (MandatoryValue), or takes an optional value (OptionalValue). The third argument is a string giving a brief description of the option. This description will be used by GetLongOpt::usage, which writes $val to represent values needed by the options: <$val> denotes a mandatory value and [$val] an optional value. The final argument to enroll is the default string to be returned if the option is not specified. For flags (Valueless options), use "" (the empty string, or in fact any arbitrary string) to specify TRUE and 0 (the null pointer) to specify FALSE.
References GetLongOpt::enroll_done, GetLongOpt::last, and GetLongOpt::table.
Referenced by CommandLineHandler::initialize_options().
Retrieve value of option. The values of options enrolled in the database can be retrieved using retrieve. This returns a string, which should be converted to whatever type you want (see atoi, atof, atol, etc.). If a parse is not done before retrieving, all you will get are the default values you gave while enrolling. Ambiguities while retrieving (which may happen when options are abbreviated) are resolved by taking the matching option that was enrolled last; for example, -v will expand to -verify. If you try to retrieve something you didn't enroll, you will get a warning message.
References GetLongOpt::optmarker, and GetLongOpt::table.
Referenced by CommandLineHandler::assign_streams(), CommandLineHandler::check_usage(), CommandLineHandler::instantiate_flag(), main(), ProblemDescDB::manage_inputs(), ParallelLibrary::manage_run_modes(), CommandLineHandler::read_restart_evals(), CommandLineHandler::reset_streams(), CommandLineHandler::run_flag(), and ParallelLibrary::specify_outputs_restart().
Change header of usage output to str. GetLongOpt::usage is overloaded. If passed a string "str", it sets the internal
usage string to "str". Otherwise it simply prints the command usage.
References GetLongOpt::ustring.
The documentation for this class was generated from the following files:
• CommandLineHandler.hpp
• CommandLineHandler.cpp
• ∼Graphics ()
destructor
• void create_tabular_datastream (const Variables &vars, const Response &response, const std::string
&tabular_data_file)
opens the tabular data file stream and prints the headings
• void close ()
close graphics windows and tabular datastream
Private Attributes
• Graphics2D ∗ graphics2D
pointer to the 2D graphics object
• bool win2dOn
flag to indicate if 2D graphics window is active
• bool tabularDataFlag
flag to indicate if tabular data stream is active
• int graphicsCntr
used for x axis values in 2D graphics and for 1st column in tabular data
• std::string tabularCntrLabel
label for counter used in first line comment w/i the tabular data file
• std::ofstream tabularDataFStream
file stream for tabulation of graphics data within compute_response
The Graphics class provides a single interface to 2D (motif) and 3D (PLPLOT) graphics as well as tabular cataloguing of data for post-processing with Matlab, Tecplot, etc. There is only one Graphics object (dakotaGraphics) and it is global (for convenient access from strategies, models, and approximations).
13.47.2.1 void create_plots_2d (const Variables & vars, const Response & response)
creates the 2d graphics window and initializes the plots Sets up a single event loop for the duration of the dakotaGraphics object, continuously adding data to a single window. There is no reset. To start over with a new data set, you need a new object (delete old and instantiate new).
13.47.2.2 void create_tabular_datastream (const Variables & vars, const Response & response, const
std::string & tabular_data_file)
opens the tabular data file stream and prints the headings Opens the tabular data file stream and prints headings,
one for each continuous and discrete variable and one for each response function, using the variable and response
function labels. This tabular data is used for post-processing of DAKOTA results in Matlab, Tecplot, etc.
References Graphics::tabularCntrLabel, Graphics::tabularDataFlag, and Graphics::tabularDataFStream.
Referenced by SurrBasedMinimizer::initialize_graphics(), and Iterator::initialize_graphics().
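The tabular format described above (a heading line followed by one row per evaluation, with a counter column preceding the variable and response columns) might be sketched as follows. The counter label, the '%' comment marker, and the function names are illustrative assumptions, not the exact format written by create_tabular_datastream().

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch of a tabular data header: a commented counter
// label plus one column heading per variable and per response function.
std::string tabular_header(const std::string& cntr_label,
                           const std::vector<std::string>& var_labels,
                           const std::vector<std::string>& fn_labels)
{
    std::ostringstream os;
    os << '%' << cntr_label;                     // leading comment marker
    for (const auto& v : var_labels) os << ' ' << v;
    for (const auto& f : fn_labels)  os << ' ' << f;
    return os.str();
}

// One data row: the evaluation counter, then variable values, then
// response function values, space-separated.
std::string tabular_row(int eval_id,
                        const std::vector<double>& vars,
                        const std::vector<double>& fns)
{
    std::ostringstream os;
    os << eval_id;
    for (double v : vars) os << ' ' << v;
    for (double f : fns)  os << ' ' << f;
    return os.str();
}
```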
13.47.2.3 void add_datapoint (const Variables & vars, const Response & response)
adds data to each window in the 2d graphics and adds a row to the tabular data file based on the results of a model
evaluation Adds data to each 2d plot and each tabular data column (one for each active variable and for each
response function). graphicsCntr is used for the x axis in the graphics and the first column in the tabular data.
References Response::active_set_request_vector(), Variables::continuous_variables(), Variables::discrete_int_variables(), Variables::discrete_real_variables(), Response::function_values(), Graphics::graphics2D, Graphics::graphicsCntr, Graphics::tabularDataFlag, Graphics::tabularDataFStream, Graphics::win2dOn, and Dakota::write_data_tabular().
Referenced by Model::compute_response(), NonDLocalReliability::mean_value(),
SurrBasedLocalMinimizer::minimize_surrogates(), Model::synchronize(), Model::synchronize_nowait(),
and NonDLocalReliability::update_level_data().
adds data to a single window in the 2d graphics Adds data to a single 2d plot. Allows complete flexibility in
defining other kinds of x-y plotting in the 2D graphics.
References Graphics::graphics2D, and Graphics::win2dOn.
creates a separate line graphic for subsequent data points for a single window in the 2d graphics Used for displaying multiple data sets within the same plot.
References Graphics::graphics2D, and Graphics::win2dOn.
Referenced by NonDLocalReliability::update_level_data().
The documentation for this class was generated from the following files:
• DakotaGraphics.hpp
• DakotaGraphics.cpp
Inheritance diagram for GridApplicInterface::
Interface
ApplicationInterface
ProcessApplicInterface
SysCallApplicInterface
GridApplicInterface
• ∼GridApplicInterface ()
destructor
• void derived_map (const Variables &vars, const ActiveSet &set, Response &response, int fn_eval_id)
Called by map() and other functions to execute the simulation in synchronous mode. The portion of performing an
evaluation that is specific to a derived class.
Protected Attributes
• IntSet idSet
Set of function evaluation id’s for active asynchronous system call evaluations.
• IntShortMap failCountMap
map linking function evaluation id’s to number of response read failures
• start_grid_computing_t start_grid_computing
handle to dynamically linked start_grid_computing function
• perform_analysis_t perform_analysis
handle to dynamically linked perform_analysis grid function
• get_jobs_completed_t get_jobs_completed
handle to dynamically linked get_jobs_completed grid function
• stop_grid_computing_t stop_grid_computing
handle to dynamically linked stop_grid_computing function
Derived application interface class which spawns simulation codes using grid services such as Condor or Globus. This class is currently a modified copy of SysCallApplicInterface adapted for use with an external grid services library that is dynamically linked using dlopen() services.
Check for completion of active asynch jobs (tracked with sysCallSet). Wait for at least one completion and complete all jobs that have returned. This satisfies a "fairness" principle, in the sense that a completed job will _always_ be processed (whereas accepting only a single completion could always accept the same completion - the case of very inexpensive fn. evals. - and starve some servers).
Reimplemented from SysCallApplicInterface.
References ApplicationInterface::completionSet, and GridApplicInterface::test_local_evaluations().
Referenced by GridApplicInterface::derived_map().
References SysCallApplicInterface::spawn_analysis_to_shell().
The documentation for this class was generated from the following files:
• GridApplicInterface.hpp
• GridApplicInterface.cpp
Inheritance diagram for HierarchSurrModel::
Model
SurrogateModel
HierarchSurrModel
• ∼HierarchSurrModel ()
destructor
• void build_approximation ()
use highFidelityModel to compute the truth values needed for correction of lowFidelityModel results
• void derived_init_serial ()
set up lowFidelityModel and highFidelityModel for serial operations.
• void stop_servers ()
Executed by the master to terminate lowFidelityModel and highFidelityModel server operations when iteration on
the HierarchSurrModel is complete.
• void set_evaluation_reference ()
set the evaluation counter reference points for the HierarchSurrModel (request forwarded to lowFidelityModel and
highFidelityModel)
• void fine_grained_evaluation_counters ()
request fine-grained evaluation reporting within lowFidelityModel and highFidelityModel
Private Attributes
• int hierModelEvalCntr
number of calls to derived_compute_response()/ derived_asynch_compute_response()
• IntResponseMap cachedTruthRespMap
map of high-fidelity responses retrieved in derived_synchronize_nowait() that could not be returned since corresponding low-fidelity response portions were still pending.
• Model lowFidelityModel
provides approximate low fidelity function evaluations. Model is of arbitrary type and supports recursions (e.g.,
lowFidelityModel can be a data fit surrogate on a low fidelity model).
• Model highFidelityModel
provides truth evaluations for computing corrections to the low fidelity results. Model is of arbitrary type and
supports recursions.
• Response highFidRefResponse
the reference high fidelity response computed in build_approximation() and used for calculating corrections.
• String evalTagPrefix
cached evalTag Prefix from parents to use at compute_response time
Derived model class within the surrogate model branch for managing hierarchical surrogates (models of varying
fidelity). The HierarchSurrModel class manages hierarchical models of varying fidelity. In particular, it uses a
low fidelity model as a surrogate for a high fidelity model. The class contains a lowFidelityModel which performs
the approximate low fidelity function evaluations and a highFidelityModel which provides truth evaluations for
computing corrections to the low fidelity results.
portion of compute_response() specific to HierarchSurrModel Compute the response synchronously using lowFidelityModel, highFidelityModel, or both (mixed case). For the lowFidelityModel portion, compute the high fidelity response if needed with build_approximation(), and, if correction is active, correct the low fidelity results.
Reimplemented from Model.
References Response::active_set(), DiscrepancyCorrection::apply(), SurrogateModel::approxBuilds, SurrogateModel::asv_mapping(), HierarchSurrModel::build_approximation(), HierarchSurrModel::component_parallel_mode(), DiscrepancyCorrection::compute(), Model::compute_response(), DiscrepancyCorrection::computed(), Response::copy(), Model::current_response(), Model::currentResponse, Model::currentVariables, SurrogateModel::deltaCorr, Model::eval_tag_prefix(), HierarchSurrModel::evalTagPrefix, SurrogateModel::force_rebuild(), Model::hierarchicalTagging, HierarchSurrModel::hierModelEvalCntr, HierarchSurrModel::highFidelityModel, HierarchSurrModel::highFidRefResponse, HierarchSurrModel::lowFidelityModel, Model::outputLevel, ActiveSet::request_vector(), SurrogateModel::response_mapping(), SurrogateModel::responseMode, Response::update(), and HierarchSurrModel::update_model().
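The simplest case of the correction applied above is a zeroth-order additive discrepancy: the offset between high- and low-fidelity responses at a reference point is added to subsequent low-fidelity results. The sketch below is a hypothetical illustration of that idea, not Dakota's DiscrepancyCorrection class (which also supports multiplicative and higher-order corrections).

```cpp
#include <vector>
#include <cstddef>

// Hypothetical zeroth-order additive correction: compute() records the
// per-function offset at a reference point; apply() adds that offset to
// a low-fidelity response vector.
struct AdditiveCorrection {
    std::vector<double> offset;  // hifi_ref - lofi_ref, per response fn

    void compute(const std::vector<double>& hifi_ref,
                 const std::vector<double>& lofi_ref)
    {
        offset.resize(hifi_ref.size());
        for (std::size_t i = 0; i < hifi_ref.size(); ++i)
            offset[i] = hifi_ref[i] - lofi_ref[i];
    }

    void apply(std::vector<double>& lofi_response) const
    {
        for (std::size_t i = 0; i < lofi_response.size(); ++i)
            lofi_response[i] += offset[i];
    }
};
```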
portion of synchronize() specific to HierarchSurrModel Blocking retrieval of asynchronous evaluations from lowFidelityModel, highFidelityModel, or both (mixed case). For the lowFidelityModel portion, apply correction (if active) to each response in the array. derived_synchronize() is designed for the general case where derived_asynch_compute_response() may be inconsistent in its use of low fidelity evaluations, high fidelity evaluations, or both.
Reimplemented from Model.
References DiscrepancyCorrection::apply(), SurrogateModel::cachedApproxRespMap, HierarchSurrModel::cachedTruthRespMap, HierarchSurrModel::component_parallel_mode(), DiscrepancyCorrection::compute(), DiscrepancyCorrection::computed(), SurrogateModel::deltaCorr, HierarchSurrModel::derived_synchronize_nowait(), HierarchSurrModel::highFidelityModel, HierarchSurrModel::highFidRefResponse, HierarchSurrModel::lowFidelityModel, Model::outputLevel, SurrogateModel::rawVarsMap, SurrogateModel::response_mapping(), SurrogateModel::responseMode, SurrogateModel::surrIdMap, SurrogateModel::surrResponseMap, Model::synchronize(), and SurrogateModel::truthIdMap.
Return the current evaluation id for the HierarchSurrModel. Returns the hierarchical model evaluation count. Due to possibly intermittent use of surrogate bypass, this is not the same as either the loFi or hiFi model evaluation counts. It also does not distinguish duplicate evals.
Reimplemented from Model.
References HierarchSurrModel::hierModelEvalCntr.
The documentation for this class was generated from the following files:
• HierarchSurrModel.hpp
• HierarchSurrModel.cpp
Base class for hybrid minimization strategies. Inheritance diagram for HybridStrategy::
Strategy
HybridStrategy
• ∼HybridStrategy ()
destructor
• void allocate_methods ()
initialize selectedIterators and userDefinedModels
• void deallocate_methods ()
free communicators for selectedIterators and userDefinedModels
Protected Attributes
• StringArray methodList
the list of method identifiers
• int numIterators
number of methods in methodList
• IteratorArray selectedIterators
the set of iterators, one for each entry in methodList
• ModelArray userDefinedModels
the set of models, one for each iterator
Base class for hybrid minimization strategies. This base class shares code for three approaches to hybrid minimization: (1) the sequential hybrid; (2) the embedded hybrid; and (3) the collaborative hybrid.
The documentation for this class was generated from the following files:
• HybridStrategy.hpp
• HybridStrategy.cpp
Inheritance diagram for Interface:: Interface is the base of ApplicationInterface and ApproximationInterface. ApplicationInterface is in turn the base of DirectApplicInterface (with derived classes MatlabInterface, PythonInterface, ScilabInterface, TestDriverInterface, ParallelDirectApplicInterface, and SerialDirectApplicInterface) and ProcessApplicInterface (with derived classes ProcessHandleApplicInterface and SysCallApplicInterface).
• virtual ∼Interface ()
destructor
• virtual void map (const Variables &vars, const ActiveSet &set, Response &response, bool asynch_flag=false)
the function evaluator: provides a "mapping" from the variables to the responses.
• virtual void build_approximation (const RealVector &c_l_bnds, const RealVector &c_u_bnds, const IntVector &di_l_bnds, const IntVector &di_u_bnds, const RealVector &dr_l_bnds, const RealVector &dr_u_bnds)
• void set_evaluation_reference ()
set evaluation count reference points for the interface
• void algebraic_mappings (const Variables &vars, const ActiveSet &algebraic_set, Response &algebraic_response)
evaluate the algebraic_response using the AMPL solver library and the data extracted from the algebraic_mappings file
Protected Attributes
• String interfaceType
the interface type: system, fork, direct, grid, or approximation
• String interfaceId
the interface specification identifier string from the DAKOTA input file
• bool algebraicMappings
flag for the presence of algebraic_mappings that define the subset of an Interface’s parameter to response mapping
that is explicit and algebraic.
• bool coreMappings
flag for the presence of non-algebraic mappings that define the core of an Interface's parameter to response mapping (using analysis_drivers for ApplicationInterface or functionSurfaces for ApproximationInterface).
• int currEvalId
identifier for the current evaluation, which may differ from the evaluation counters in the case of evaluation scheduling; used on iterator master as well as server processors. Currently, this is set prior to all invocations of derived_map() for all processors.
• bool fineGrainEvalCounters
controls use of fn val/grad/hess counters
• int evalIdCntr
total interface evaluation counter
• int newEvalIdCntr
new (non-duplicate) interface evaluation counter
• int evalIdRefPt
iteration reference point for evalIdCntr
• int newEvalIdRefPt
iteration reference point for newEvalIdCntr
• IntArray fnValCounter
number of value evaluations by resp fn
• IntArray fnGradCounter
number of gradient evaluations by resp fn
• IntArray fnHessCounter
number of Hessian evaluations by resp fn
• IntArray newFnValCounter
number of new value evaluations by resp fn
• IntArray newFnGradCounter
number of new gradient evaluations by resp fn
• IntArray newFnHessCounter
number of new Hessian evaluations by resp fn
• IntArray fnValRefPt
iteration reference point for fnValCounter
• IntArray fnGradRefPt
iteration reference point for fnGradCounter
• IntArray fnHessRefPt
iteration reference point for fnHessCounter
• IntArray newFnValRefPt
iteration reference point for newFnValCounter
• IntArray newFnGradRefPt
iteration reference point for newFnGradCounter
• IntArray newFnHessRefPt
iteration reference point for newFnHessCounter
• IntResponseMap rawResponseMap
Set of responses returned after either a blocking or nonblocking schedule of asynchronous evaluations.
• StringArray fnLabels
response function descriptors from the DAKOTA input file (used in print_evaluation_summary() and derived direct
interface classes)
• bool multiProcEvalFlag
flag for multiprocessor evaluation partitions (evalComm)
• bool ieDedMasterFlag
flag for dedicated master partitioning at the iterator level
• short outputLevel
output verbosity level: {SILENT,QUIET,NORMAL,VERBOSE,DEBUG}_OUTPUT
• String evalTagPrefix
set of period-delimited evaluation ID tags to use in evaluation tagging
• bool appendIfaceId
whether to append the interface ID to the prefix during map (default true)
Private Attributes
• StringArray algebraicVarTags
set of variable tags from AMPL stub.col
• SizetArray algebraicACVIndices
set of indices mapping AMPL algebraic variables to DAKOTA all continuous variables
• SizetArray algebraicACVIds
set of ids mapping AMPL algebraic variables to DAKOTA all continuous variables
• StringArray algebraicFnTags
set of function tags from AMPL stub.row
• IntArray algebraicFnTypes
function type: > 0 = objective, < 0 = constraint; |value| - 1 is the objective (constraint) index when making AMPL objval (conival) calls
• SizetArray algebraicFnIndices
set of indices mapping AMPL algebraic objective functions to DAKOTA response functions
• RealArray algebraicConstraintWeights
set of weights for computing Hessian matrices for algebraic constraints;
• int numAlgebraicResponses
number of algebraic responses (objectives+constraints)
• Interface ∗ interfaceRep
pointer to the letter (initialized only for the envelope)
• int referenceCount
number of objects sharing interfaceRep
• ASL ∗ asl
pointer to an AMPL solver library (ASL) object
Base class for the interface class hierarchy. The Interface class hierarchy provides the part of a Model that is
responsible for mapping a set of Variables into a set of Responses. The mapping is performed using either a
simulation-based application interface or a surrogate-based approximation interface. For memory efficiency and
enhanced polymorphism, the interface hierarchy employs the "letter/envelope idiom" (see Coplien "Advanced
C++", p. 133), for which the base class (Interface) serves as the envelope and one of the derived classes (selected
in Interface::get_interface()) serves as the letter.
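The letter/envelope (handle-body) idiom named above can be sketched with manual reference counting as follows. The names here are generic, not Dakota's; Interface additionally layers a problem-database-driven get_interface() factory on top of this pattern, which is omitted.

```cpp
// Minimal sketch of the letter/envelope idiom: many Envelope objects may
// share one Letter, which is deleted only when the last envelope goes
// away.
class Envelope {
    struct Letter { };   // the shared representation (the "letter")

    Letter* rep;         // pointer to the letter
    int*    refCount;    // number of envelopes sharing rep

    void release() { if (--*refCount == 0) { delete rep; delete refCount; } }

public:
    Envelope() : rep(new Letter), refCount(new int(1)) {}

    // copy constructor shares the letter and increments the count
    Envelope(const Envelope& e) : rep(e.rep), refCount(e.refCount)
    { ++*refCount; }

    // assignment releases the old letter, then shares the new one
    Envelope& operator=(const Envelope& e) {
        if (rep != e.rep) {
            release();
            rep = e.rep; refCount = e.refCount; ++*refCount;
        }
        return *this;
    }

    // destructor deletes the letter only for the last sharing envelope
    ~Envelope() { release(); }

    int use_count() const { return *refCount; }
};
```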
13.51.2.1 Interface ()
standard constructor for envelope Used in Model instantiation to build the envelope. This constructor only needs
to extract enough data to properly execute get_interface, since Interface::Interface(BaseConstructor, problem_db)
builds the actual base class data inherited by the derived interfaces.
References Dakota::abort_handler(), Interface::get_interface(), and Interface::interfaceRep.
copy constructor Copy constructor manages sharing of interfaceRep and incrementing of referenceCount.
References Interface::interfaceRep, and Interface::referenceCount.
destructor Destructor decrements referenceCount and only deletes interfaceRep if referenceCount is zero.
References Interface::interfaceRep, and Interface::referenceCount.
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion in the derived class constructors - Coplien, p. 139) This constructor is the one which must build the base class data for all inherited interfaces. get_interface() instantiates a derived class letter and the derived constructor selects this base class constructor in its initialization list (to avoid the recursion of the base class constructor calling get_interface() again). Since this is the letter and the letter IS the representation, interfaceRep is set to NULL (an uninitialized pointer causes problems in ∼Interface).
References Dakota::abort_handler(), Interface::algebraic_function_type(), Interface::algebraicConstraintWeights, Interface::algebraicFnTags, Interface::algebraicFnTypes, Interface::algebraicMappings, Interface::algebraicVarTags, Interface::asl, Interface::fineGrainEvalCounters, Interface::fnLabels,
assignment operator Assignment operator decrements referenceCount for old interfaceRep, assigns new interfaceRep, and increments referenceCount for new interfaceRep.
References Interface::interfaceRep, and Interface::referenceCount.
replaces existing letter with a new one Similar to the assignment operator, the assign_rep() function decrements referenceCount for the old interfaceRep and assigns the new interfaceRep. It is different in that it is used for publishing derived class letters to existing envelopes, as opposed to sharing representations among multiple envelopes (in particular, assign_rep is passed a letter object and operator= is passed an envelope object). Letter assignment supports two models as governed by ref_count_incr:
• ref_count_incr = true (default): the incoming letter belongs to another envelope. In this case, increment the
reference count in the normal manner so that deallocation of the letter is handled properly.
• ref_count_incr = false: the incoming letter is instantiated on the fly and has no envelope. This case is
modeled after get_interface(): a letter is dynamically allocated using new and passed into assign_rep, the
letter’s reference count is not incremented, and the letter is not remotely deleted (its memory management
is passed over to the envelope).
13.51.3.3 void eval_tag_prefix (const String & eval_id_str, bool append_iface_id = true)
set the evaluation tag prefix (does not recurse) default implementation just sets the list of eval ID tags; derived
classes containing additional models or interfaces should override (currently no use cases)
References Interface::appendIfaceId, Interface::eval_tag_prefix(), Interface::evalTagPrefix, and Interface::interfaceRep.
Referenced by NestedModel::derived_compute_response(), SingleModel::eval_tag_prefix(), and Interface::eval_tag_prefix().
13.51.3.4 void response_mapping (const Response & algebraic_response, const Response &
core_response, Response & total_response) [protected]
combine the response from algebraic_mappings() with the response from derived_map() to create the total response This function will get invoked even when only algebraic mappings are active (no core mappings from derived_map), since the AMPL algebraic_response may be ordered differently from the total_response. In this case, the core_response object is unused.
References Dakota::_NPOS, Dakota::abort_handler(), Response::active_set_derivative_vector(), Response::active_set_request_vector(), Interface::algebraicACVIds, Interface::algebraicFnIndices, Interface::coreMappings, Dakota::find_index(), Response::function_gradient(), Response::function_gradient_view(), Response::function_gradients(), Response::function_hessian(), Response::function_hessian_view(), Response::function_hessians(), Response::function_value(), Response::function_values(), Response::function_values_view(), Interface::outputLevel, Response::reset(), and Response::reset_inactive().
Referenced by ApproximationInterface::map(), ApplicationInterface::map(), ApplicationInterface::synch(), and
ApplicationInterface::synch_nowait().
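As a loose illustration of the reordering problem response_mapping() addresses, merging an algebraic response with a core response might look like a scatter/fill pass. merge_responses and its index-map argument are hypothetical; they do not reflect Dakota's actual signature or the gradient/Hessian handling listed above.

```cpp
#include <vector>
#include <cstddef>

// Hypothetical sketch: values from the algebraic response are scattered
// into the total response at the positions given by an index map, and
// the remaining slots are filled from the core response in order.
std::vector<double> merge_responses(const std::vector<double>& algebraic,
                                    const std::vector<std::size_t>& alg_indices,
                                    const std::vector<double>& core,
                                    std::size_t total_size)
{
    std::vector<double> total(total_size, 0.0);
    std::vector<bool> filled(total_size, false);
    for (std::size_t i = 0; i < alg_indices.size(); ++i) {
        total[alg_indices[i]] = algebraic[i];   // scatter algebraic values
        filled[alg_indices[i]] = true;
    }
    std::size_t c = 0;
    for (std::size_t j = 0; j < total_size && c < core.size(); ++j)
        if (!filled[j]) total[j] = core[c++];   // fill from core response
    return total;
}
```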
Used by the envelope to instantiate the correct letter class. used only by the envelope constructor to initialize
interfaceRep to the appropriate derived type.
References ProblemDescDB::get_string(), and Interface::interface_type().
Referenced by Interface::Interface().
Set of responses returned after either a blocking or nonblocking schedule of asynchronous evaluations. The map
is a full/partial set of completions which are identified through their evalIdCntr key. The raw set is postprocessed
(i.e., finite diff grads merged) in Model::synchronize() where it becomes responseMap.
Referenced by ApplicationInterface::asynchronous_local_evaluations(), ApplicationInterface::process_asynch_local(), ApplicationInterface::process_synch_local(), ApplicationInterface::receive_evaluation(), ApproximationInterface::synch(), ApplicationInterface::synch(), ApproximationInterface::synch_nowait(), ApplicationInterface::synch_nowait(), ApplicationInterface::test_local_backfill(), and ApplicationInterface::test_receives_backfill().
The documentation for this class was generated from the following files:
• DakotaInterface.hpp
• DakotaInterface.cpp
Inheritance diagram for Iterator (partial, as rendered):: derived classes include Analyzer (with NonDInterval, NonDPOFDarts, NonDReliability, and NonDSampling among its derived classes) and Minimizer (with JEGAOptimizer, NCSUOptimizer, NLPQLPOptimizer, NomadOptimizer, NonlinearCGOptimizer, NPSOLOptimizer, and SNLLOptimizer among its derived classes).
• virtual ∼Iterator ()
destructor
• virtual void initialize_graphics (bool graph_2d, bool tabular_data, const String &tabular_file)
initialize the 2D graphics window and the tabular graphics data
• void active_variable_mappings (const SizetArray &c_index1, const SizetArray &di_index1, const SizetArray &dr_index1, const ShortArray &c_target2, const ShortArray &di_target2, const ShortArray &dr_target2)
set primaryA{CV,DIV,DRV}MapIndices, secondaryA{CV,DIV,DRV}MapTargets
• Iterator (NoDBBaseConstructor)
alternate constructor for base iterator classes constructed on the fly
Protected Attributes
• Model iteratedModel
shallow copy of the model passed into the constructor or a thin RecastModel wrapped around it
• String methodName
name of the iterator (the user’s method spec)
• Real convergenceTol
iteration convergence tolerance
• int maxIterations
maximum number of iterations for the iterator
• int maxFunctionEvals
maximum number of fn evaluations for the iterator
• int maxConcurrency
maximum coarse-grained concurrency
• size_t numFunctions
number of response functions
• size_t numContinuousVars
number of active continuous vars
• size_t numDiscreteIntVars
number of active discrete integer vars
• size_t numDiscreteRealVars
number of active discrete real vars
• size_t numFinalSolutions
number of solutions to retain in best variables/response arrays
• ActiveSet activeSet
tracks the response data requirements on each function evaluation
• VariablesArray bestVariablesArray
collection of N best solution variables found during the study; always in context of Model originally passed to the
Iterator (any in-flight Recasts must be undone)
• ResponseArray bestResponseArray
collection of N best solution responses found during the study; always in context of Model originally passed to the
Iterator (any in-flight Recasts must be undone)
• bool subIteratorFlag
flag indicating if this Iterator is a sub-iterator (NestedModel::subIterator or DataFitSurrModel::daceIterator)
• SizetArray primaryACVarMapIndices
"primary" all continuous variable mapping indices flowed down from higher level iteration
• SizetArray primaryADIVarMapIndices
"primary" all discrete int variable mapping indices flowed down from higher level iteration
• SizetArray primaryADRVarMapIndices
"primary" all discrete real variable mapping indices flowed down from higher level iteration
• ShortArray secondaryACVarMapTargets
"secondary" all continuous variable mapping targets flowed down from higher level iteration
• ShortArray secondaryADIVarMapTargets
"secondary" all discrete int variable mapping targets flowed down from higher level iteration
• ShortArray secondaryADRVarMapTargets
"secondary" all discrete real variable mapping targets flowed down from higher level iteration
• String gradientType
type of gradient data: analytic, numerical, mixed, or none
• String methodSource
source of numerical gradient routine: dakota or vendor
• String intervalType
type of numerical gradient interval: central or forward
• String hessianType
type of Hessian data: analytic, numerical, quasi, mixed, or none
• Real fdGradStepSize
relative finite difference step size for numerical gradients
• String fdGradStepType
type of finite difference step to use for numerical gradient: relative (step length is relative to x), absolute (step length is what is specified), or bounds (step length is relative to the range of x)
• Real fdHessByGradStepSize
relative finite difference step size for numerical Hessians estimated using first-order differences of gradients
• Real fdHessByFnStepSize
relative finite difference step size for numerical Hessians estimated using second-order differences of function values
• String fdHessStepType
type of finite difference step to use for numerical Hessian: relative (step length is relative to x), absolute (step length is what is specified), or bounds (step length is relative to the range of x)
• short outputLevel
output verbosity level: {SILENT,QUIET,NORMAL,VERBOSE,DEBUG}_OUTPUT
• bool summaryOutputFlag
flag for summary output (evaluation stats, final results); default true, but false for on-the-fly (helper) iterators and
sub-iterator use cases
• bool asynchFlag
copy of the model’s asynchronous evaluation flag
• int writePrecision
write precision as specified by the user
• ResultsNames resultsNames
valid names for iterator results
Private Attributes
• String methodId
method identifier string from the input file
• Iterator ∗ iteratorRep
pointer to the letter (initialized only for the envelope)
• int referenceCount
number of objects sharing iteratorRep
• size_t execNum
an execution number for this instance of the class, unique across all instances of same methodName/methodId
Base class for the iterator class hierarchy. The Iterator class is the base class for one of the primary class hierar-
chies in DAKOTA. The iterator hierarchy contains all of the iterative algorithms which use repeated execution of
simulations as function evaluations. For memory efficiency and enhanced polymorphism, the iterator hierarchy
employs the "letter/envelope idiom" (see Coplien "Advanced C++", p. 133), for which the base class (Iterator)
serves as the envelope and one of the derived classes (selected in Iterator::get_iterator()) serves as the letter.
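The reference-counted letter/envelope sharing described here can be sketched with a minimal self-contained class. Names and structure below are hypothetical simplifications: the real Iterator uses a single class for both roles, stores referenceCount directly, and also handles the NULL-rep default-constructed case.

```cpp
#include <cassert>
#include <string>

// Minimal sketch of the letter/envelope idiom: the envelope holds a pointer
// to a shared letter plus a shared reference count (illustrative only).
class Envelope {
public:
  explicit Envelope(const std::string& name)
    : rep_(new Letter{name}), refCount_(new int(1)) {}
  Envelope(const Envelope& other)               // share the letter, bump count
    : rep_(other.rep_), refCount_(other.refCount_) { ++*refCount_; }
  Envelope& operator=(const Envelope& other) {
    if (this != &other) {
      release_();                               // drop the old letter
      rep_ = other.rep_; refCount_ = other.refCount_;
      ++*refCount_;                             // adopt the new letter
    }
    return *this;
  }
  ~Envelope() { release_(); }
  int use_count() const { return *refCount_; }
private:
  struct Letter { std::string name; };          // stand-in for a derived class
  void release_() { if (--*refCount_ == 0) { delete rep_; delete refCount_; } }
  Letter* rep_;       // analogous to Iterator::iteratorRep
  int*    refCount_;  // analogous to Iterator::referenceCount
};
```

As in the Iterator copy constructor, assignment operator, and destructor documented below, the letter is deleted only when the last envelope sharing it goes away.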
13.52.2.1 Iterator ()
default constructor The default constructor is used in Vector<Iterator> instantiations and for initialization of
Iterator objects contained in Strategy derived classes (see derived class header files). iteratorRep is NULL in this
case (a populated problem_db is needed to build a meaningful Iterator object). This makes it necessary to check
for NULL pointers in the copy constructor, assignment operator, and destructor.
standard envelope constructor Used in iterator instantiations within strategy constructors. Envelope constructor
only needs to extract enough data to properly execute get_iterator(), since letter holds the actual base class data.
References Dakota::abort_handler(), Iterator::get_iterator(), and Iterator::iteratorRep.
alternate envelope constructor for instantiations by name Used in sub-iterator instantiations within iterator con-
structors. Envelope constructor only needs to extract enough data to properly execute get_iterator(), since letter
holds the actual base class data.
References Dakota::abort_handler(), Iterator::get_iterator(), and Iterator::iteratorRep.
copy constructor Copy constructor manages sharing of iteratorRep and incrementing of referenceCount.
References Iterator::iteratorRep, and Iterator::referenceCount.
destructor Destructor decrements referenceCount and only deletes iteratorRep when referenceCount reaches zero.
References Iterator::iteratorRep, and Iterator::referenceCount.
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion
in the derived class constructors - Coplien, p. 139) This constructor builds the base class data for all inherited
iterators. get_iterator() instantiates a derived class and the derived class selects this base class constructor in
its initialization list (to avoid the recursion of the base class constructor calling get_iterator() again). Since the
letter IS the representation, its representation pointer is set to NULL (an uninitialized pointer causes problems in
∼Iterator).
References Dakota::abort_handler(), Iterator::fdGradStepSize, Iterator::fdHessByFnStepSize, Itera-
tor::fdHessByGradStepSize, ProblemDescDB::get_is(), ProblemDescDB::get_rv(), Iterator::gradientType,
Iterator::hessianType, Iterator::intervalType, Iterator::methodName, Iterator::methodSource, Itera-
tor::numContinuousVars, Iterator::numDiscreteIntVars, Iterator::numDiscreteRealVars, Iterator::numFunctions,
and Iterator::probDescDB.
alternate constructor for base iterator classes constructed on the fly This alternate constructor builds base class
data for inherited iterators. It is used for on-the-fly instantiations for which DB queries cannot be used. Therefore
it only sets attributes taken from the incoming model. Since there are no iterator-specific redefinitions of maxIter-
ations or numFinalSolutions in NoDBBaseConstructor mode, go ahead and assign default value for all iterators.
alternate constructor for base iterator classes constructed on the fly This alternate constructor builds base class
data for inherited iterators. It is used for on-the-fly instantiations for which DB queries cannot be used. It has no
incoming model, so only sets up a minimal set of defaults. However, its use is preferable to the default constructor,
which should remain as minimal as possible. Since there are no iterator-specific redefinitions of maxIterations or
numFinalSolutions in NoDBBaseConstructor mode, go ahead and assign default value for all iterators.
assignment operator Assignment operator decrements referenceCount for old iteratorRep, assigns new iterator-
Rep, and increments referenceCount for new iteratorRep.
References Iterator::iteratorRep, and Iterator::referenceCount.
utility function to perform common operations prior to pre_run(); typically memory initialization; setting of in-
stance pointers Perform initialization phases of run sequence, like allocating memory and setting instance pointers.
Commonly used in sub-iterator executions. This is a virtual function; when re-implementing, a derived class must
call its nearest parent’s initialize_run(), typically _before_ performing its own implementation steps.
Reimplemented in CONMINOptimizer, LeastSq, Minimizer, NonD, Optimizer, DOTOptimizer, NLPQLPOpti-
mizer, SNLLLeastSq, and SNLLOptimizer.
References Model::asynch_flag(), Iterator::asynchFlag, Iterator::initialize_run(), Model::is_null(), Itera-
tor::iteratedModel, Iterator::iteratorRep, Iterator::maxConcurrency, Model::set_communicators(), Model::set_-
evaluation_reference(), and Iterator::summaryOutputFlag.
Referenced by Iterator::initialize_run(), and Iterator::run_iterator().
pre-run portion of run_iterator (optional); re-implemented by Iterators which can generate all Variables (parameter
sets) a priori pre-run phase, which a derived iterator may optionally reimplement; when not present, pre-run is
likely integrated into the derived run function. This is a virtual function; when re-implementing, a derived class
must call its nearest parent’s pre_run(), if implemented, typically _before_ performing its own implementation
steps.
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post Virtual run function for the iterator class hierarchy. All derived classes need to redefine it.
Reimplemented in LeastSq, NonD, Optimizer, PStudyDACE, Verification, and SurrBasedMinimizer.
References Dakota::abort_handler(), Iterator::iteratorRep, and Iterator::run().
Referenced by Iterator::run(), and Iterator::run_iterator().
post-run portion of run_iterator (optional); verbose to print results; re-implemented by Iterators that can read all
Variables/Responses and perform final analysis phase in a standalone way Post-run phase, which a derived iterator
may optionally reimplement; when not present, post-run is likely integrated into run. This is a virtual function;
when re-implementing, a derived class must call its nearest parent’s post_run(), typically _after_ performing its
own implementation steps.
Reimplemented in COLINOptimizer, LeastSq, Optimizer, DDACEDesignCompExp, FSUDesignCompExp,
NonDLHSSampling, ParamStudy, PSUADEDesignCompExp, SNLLLeastSq, and SNLLOptimizer.
References Model::is_null(), Iterator::iteratedModel, Iterator::iteratorRep, Iterator::post_run(), Model::print_-
evaluation_summary(), Iterator::print_results(), Iterator::resultsDB, Iterator::summaryOutputFlag, and
ResultsManager::write_databases().
Referenced by Iterator::post_run(), and Iterator::run_iterator().
utility function to perform common operations following post_run(); deallocation and resetting of instance point-
ers Optional: perform finalization phases of run sequence, like deallocating memory and resetting instance point-
ers. Commonly used in sub-iterator executions. This is a virtual function; when re-implementing, a derived class
must call its nearest parent’s finalize_run(), typically _after_ performing its own implementation steps.
Reimplemented in LeastSq, Minimizer, NonD, Optimizer, SNLLLeastSq, and SNLLOptimizer.
References Iterator::finalize_run(), and Iterator::iteratorRep.
Referenced by Iterator::finalize_run(), and Iterator::run_iterator().
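The five phases above (initialize_run, pre_run, run, post_run, finalize_run) are invoked in a fixed order by run_iterator(). A hedged sketch of that sequencing as a template method follows; this is an illustrative simplification, not Dakota's actual control flow:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the fixed call sequence around the virtual run phases.
struct IteratorSketch {
  std::vector<std::string> trace;               // records the phase order
  virtual ~IteratorSketch() {}
  virtual void initialize_run() { trace.push_back("initialize_run"); }
  virtual void pre_run()        { trace.push_back("pre_run"); }
  virtual void run()            { trace.push_back("run"); } // pure in Dakota
  virtual void post_run()       { trace.push_back("post_run"); }
  virtual void finalize_run()   { trace.push_back("finalize_run"); }
  void run_iterator() {   // the template method tying the phases together
    initialize_run(); pre_run(); run(); post_run(); finalize_run();
  }
};
```

Derived classes override individual phases and, per the notes above, call their nearest parent's implementation before (initialize_run, pre_run) or after (post_run, finalize_run) their own steps.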
13.52.3.7 void initialize_graphics (bool graph_2d, bool tabular_data, const String & tabular_file)
[virtual]
initialize the 2D graphics window and the tabular graphics data This is a convenience function for encapsulating
graphics initialization operations. It does not require a strategyRep forward since it is only used by letter objects.
print the final iterator results This virtual function provides additional iterator-specific final results outputs beyond
the function evaluation summary printed in finalize_run().
Reimplemented in Analyzer, LeastSq, Optimizer, PStudyDACE, Verification, NonDAdaptImpSampling, Non-
DAdaptiveSampling, NonDExpansion, NonDGlobalReliability, NonDGPImpSampling, NonDIncremLHSSam-
pling, NonDInterval, NonDLHSSampling, NonDLocalReliability, NonDPOFDarts, RichExtrapVerification, and
SurrBasedMinimizer.
References Iterator::iteratorRep, and Iterator::print_results().
Referenced by Iterator::post_run(), Iterator::print_results(), and EfficientSubspaceMethod::reduced_space_uq().
get the current number of samples Return the current number of evaluation points. Since the calculation of samples,
collocation points, etc. might be costly, a default implementation is provided here that backs the count out of the
maxConcurrency. This default may be (and in practice is) overridden by derived classes.
Reimplemented in DDACEDesignCompExp, FSUDesignCompExp, NonDCubature, NonDQuadrature, NonD-
Sampling, NonDSparseGrid, and PSUADEDesignCompExp.
References Model::derivative_concurrency(), Iterator::iteratedModel, Iterator::iteratorRep, Itera-
tor::maxConcurrency, and Iterator::num_samples().
Referenced by DataFitSurrModel::build_global(), NonDGlobalReliability::get_best_sample(), Iterator::num_-
samples(), Analyzer::samples_to_variables_array(), and Analyzer::variables_array_to_samples().
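Since maxConcurrency was originally sized as a product of the sample count and the model's derivative concurrency, the back-out described above amounts to a single division. A hedged sketch with illustrative names (not Dakota's exact signatures):

```cpp
#include <cassert>

// Recover the sample count from the stored concurrency (illustrative).
inline int default_num_samples(int max_concurrency, int derivative_concurrency)
{
  // maxConcurrency ~ samples * derivative_concurrency, so invert the product.
  return max_concurrency / derivative_concurrency;
}
```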
replaces existing letter with a new one Similar to the assignment operator, the assign_rep() function decrements
referenceCount for the old iteratorRep and assigns the new iteratorRep. It is different in that it is used for publish-
ing derived class letters to existing envelopes, as opposed to sharing representations among multiple envelopes
(in particular, assign_rep is passed a letter object and operator= is passed an envelope object). Letter assignment
supports two models as governed by ref_count_incr:
• ref_count_incr = true (default): the incoming letter belongs to another envelope. In this case, increment the
reference count in the normal manner so that deallocation of the letter is handled properly.
• ref_count_incr = false: the incoming letter is instantiated on the fly and has no envelope. This case is
modeled after get_iterator(): a letter is dynamically allocated using new and passed into assign_rep, the
letter’s reference count is not incremented, and the letter is not remotely deleted (its memory management
is passed over to the envelope).
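The two ownership modes governed by ref_count_incr can be sketched with a toy letter type; names and details below are hypothetical, and the real assign_rep carries additional guards:

```cpp
#include <cassert>

// Toy letter that tracks how many letters are currently alive (illustrative).
struct LetterSketch {
  static int liveCount;
  int refCount = 1;                 // a letter is born with one notional owner
  LetterSketch()  { ++liveCount; }
  ~LetterSketch() { --liveCount; }
};
int LetterSketch::liveCount = 0;

struct EnvelopeSketch {
  LetterSketch* rep = nullptr;
  void assign_rep(LetterSketch* letter, bool ref_count_incr = true) {
    if (rep && --rep->refCount == 0) delete rep;  // release the old letter
    rep = letter;
    if (ref_count_incr) ++rep->refCount; // letter shared with another envelope
    // else: on-the-fly letter; its birth count of 1 now belongs to us
  }
  ~EnvelopeSketch() { if (rep && --rep->refCount == 0) delete rep; }
};
```

With ref_count_incr = false the envelope becomes the sole owner and eventually deletes the dynamically allocated letter; with ref_count_incr = true the count is bumped so the other envelope's deallocation still works.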
set the hierarchical eval ID tag prefix This prepend may need to become a virtual function if the tagging should
propagate to other subModels or helper Iterators an Iterator may contain.
Used by the envelope to instantiate the correct letter class. Used only by the envelope constructor to initialize
iteratorRep to the appropriate derived type, as given by the methodName attribute.
References ProblemDescDB::get_string(), Iterator::method_name(), Iterator::methodName, Itera-
tor::probDescDB, Dakota::strbegins(), and Dakota::strends().
Referenced by Iterator::Iterator().
13.52.3.14 Iterator ∗ get_iterator (const String & method_name, Model & model) [private]
Used by the envelope to instantiate the correct letter class. Used only by the envelope constructor to initialize
iteratorRep to the appropriate derived type, as given by the passed method_name.
References Dakota::strbegins(), and Dakota::strends().
relative finite difference step size for numerical gradients A scalar value (instead of the vector fd_gradient_step_-
size spec) is used within the iterator hierarchy since this attribute is only used to publish a step size to vendor
numerical gradient algorithms.
Referenced by DOTOptimizer::initialize(), CONMINOptimizer::initialize(), Iterator::Iterator(), NLSSOL-
LeastSq::NLSSOLLeastSq(), NPSOLOptimizer::NPSOLOptimizer(), SNLLLeastSq::SNLLLeastSq(), and SNL-
LOptimizer::SNLLOptimizer().
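As a hedged illustration of how the relative/absolute/bounds step types listed earlier could map a user-specified step size to an actual finite difference offset (this is not Dakota's actual implementation; the zero-guard in the relative branch is an assumption):

```cpp
#include <algorithm>
#include <cmath>
#include <string>

// Map a step-type setting and user step size h to an offset for variable x
// with bounds [lower, upper] (illustrative sketch only).
double fd_step(const std::string& step_type, double h,
               double x, double lower, double upper)
{
  if (step_type == "relative")       // step length is relative to x
    return h * std::max(std::fabs(x), 1.0);  // assumed guard against x == 0
  else if (step_type == "absolute")  // step length is exactly as specified
    return h;
  else /* "bounds" */                // step length is relative to the range of x
    return h * (upper - lower);
}
```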
relative finite difference step size for numerical Hessians estimated using first-order differences of gradients A
scalar value (instead of the vector fd_hessian_step_size spec) is used within the iterator hierarchy since this
attribute is only used to publish a step size to vendor numerical Hessian algorithms.
Referenced by Iterator::Iterator().
relative finite difference step size for numerical Hessians estimated using second-order differences of function
values A scalar value (instead of the vector fd_hessian_step_size spec) is used within the iterator hierarchy since
this attribute is only used to publish a step size to vendor numerical Hessian algorithms.
Referenced by Iterator::Iterator().
The documentation for this class was generated from the following files:
• DakotaIterator.hpp
• DakotaIterator.cpp
Iterator
Minimizer
Optimizer
JEGAOptimizer
Classes
• class Driver
A subclass of the JEGA front end driver that exposes the individual protected methods to execute the algorithm.
• class Evaluator
An evaluator specialization that knows how to interact with Dakota.
• class EvaluatorCreator
A specialization of the JEGA::FrontEnd::EvaluatorCreator that creates a new instance of an Evaluator.
• ∼JEGAOptimizer ()
Destructs a JEGAOptimizer.
• void ReCreateTheParameterDatabase ()
Destroys the current parameter database and creates a new empty one.
• void LoadTheParameterDatabase ()
Reads information out of the known Dakota::ProblemDescDB and puts it into the current parameter database.
Private Attributes
• EvaluatorCreator ∗ _theEvalCreator
A pointer to an EvaluatorCreator used to create the evaluator used by JEGA in Dakota (a JEGAEvaluator).
• JEGA::Utilities::ParameterDatabase ∗ _theParamDB
A pointer to the ParameterDatabase from which all parameters are retrieved by the created algorithms.
• VariablesArray _initPts
An array of initial points to use as an initial population.
A version of Dakota::Optimizer for instantiation of John Eddy’s Genetic Algorithms (JEGA). This class encapsu-
lates the necessary functionality for creating and properly initializing the JEGA algorithms (MOGA and SOGA).
Constructs a JEGAOptimizer class object. This method does some of the initialization work for the algorithm. In
particular, it initialized the JEGA core.
Parameters:
model The Dakota::Model that will be used by this optimizer for problem information, etc.
13.53.3.1 void LoadDakotaResponses (const JEGA::Utilities::Design & from, Dakota::Variables & vars,
Dakota::Response & resp) const [protected]
Loads the JEGA-style Design class into equivalent Dakota-style Variables and Response objects. This version is
meant for the case where a Variables and a Response object exist and just need to be loaded.
Parameters:
from The JEGA Design class object from which to extract the variable and response information for Dakota.
vars The Dakota::Variables object into which to load the design variable values of from.
resp The Dakota::Response object into which to load the objective function and constraint values of from.
Reads information out of the known Dakota::ProblemDescDB and puts it into the current parameter database.
This should be called from the JEGAOptimizer constructor since it is the only time when the problem description
database is certain to be configured to supply data for this optimizer.
References JEGAOptimizer::_theParamDB, ProblemDescDB::get_bool(), ProblemDescDB::get_int(),
ProblemDescDB::get_real(), ProblemDescDB::get_rv(), ProblemDescDB::get_short(), ProblemDescDB::get_-
sizet(), ProblemDescDB::get_string(), Iterator::methodName, JEGAOptimizer::MOGA_METHOD_TXT,
Iterator::probDescDB, JEGAOptimizer::ReCreateTheParameterDatabase(), and JEGAOptimizer::SOGA_-
METHOD_TXT.
Referenced by JEGAOptimizer::JEGAOptimizer().
Completely initializes the supplied algorithm configuration. This loads the supplied configuration object with
appropriate data retrieved from the parameter database.
Parameters:
aConfig The algorithm configuration object to load.
Completely initializes the supplied problem configuration. This loads the fresh configuration object using the
LoadTheDesignVariables, LoadTheObjectiveFunctions, and LoadTheConstraints methods.
Parameters:
pConfig The problem configuration object to load.
Adds DesignVariableInfo objects into the problem configuration object. This retrieves design variable information
from the ParameterDatabase and creates DesignVariableInfo’s from it.
Parameters:
pConfig The problem configuration object to load.
Adds ObjectiveFunctionInfo objects into the problem configuration object. This retrieves objective function in-
formation from the ParameterDatabase and creates ObjectiveFunctionInfo’s from it.
Parameters:
pConfig The problem configuration object to load.
Adds ConstraintInfo objects into the problem configuration object. This retrieves constraint function information
from the ParameterDatabase and creates ConstraintInfo’s from it.
Parameters:
pConfig The problem configuration object to load.
Returns up to _numBest designs sorted by DAKOTA’s fitness (L2 constraint violation, then utopia or objective),
taking into account the algorithm type. The front of the returned map can be viewed as a single "best".
Parameters:
from The full set of designs returned by the solver.
designSortMap Map of best solutions with key pair<constraintViolation, fitness>
Eventually this functionality must be moved into a separate post-processing application for MO datasets.
References JEGAOptimizer::GetBestMOSolutions(), JEGAOptimizer::GetBestSOSolutions(), Itera-
tor::methodName, JEGAOptimizer::MOGA_METHOD_TXT, and JEGAOptimizer::SOGA_METHOD_TXT.
Referenced by JEGAOptimizer::find_optimum().
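The pair<constraintViolation, fitness> key described above yields the desired lexicographic order automatically when used with a std::multimap: least-violating designs come first and ties break on fitness. A sketch with illustrative string stand-ins for the real Design objects:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// <constraintViolation, fitness>; std::pair compares lexicographically.
using SortKey = std::pair<double, double>;

// Rank a population of (key, design) entries; the multimap sorts on insert.
inline std::multimap<SortKey, std::string>
rank_designs(const std::vector<std::pair<SortKey, std::string>>& designs)
{
  return {designs.begin(), designs.end()};
}
```

The front of the resulting map, as the text notes, can then be viewed as the single "best" design.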
Retrieve the best Designs from a set of solutions, assuming that they were generated by a multi-objective algorithm.
Eventually this functionality must be moved into a separate post-processing application for MO datasets.
References Iterator::numFinalSolutions.
Referenced by JEGAOptimizer::GetBestSolutions().
Retrieve the best Designs from a set of solutions, assuming that they were generated by a single-objective algorithm.
Eventually this functionality must be moved into a separate post-processing application for MO datasets.
References JEGAOptimizer::_theParamDB, and Iterator::numFinalSolutions.
Referenced by JEGAOptimizer::GetBestSolutions().
Converts the items in a VariablesArray into a DoubleMatrix whereby the items in the matrix are the design
variables. The matrix will not contain responses, but when used by Dakota this doesn't matter. JEGA will attempt
to re-evaluate these points, but Dakota will recognize that they do not require re-evaluation, so this will be a cheap
operation.
Parameters:
variables The array of DakotaVariables objects to use as the contents of the returned matrix.
Returns:
The matrix created using the supplied VariablesArray.
Referenced by JEGAOptimizer::find_optimum().
Performs the iterations to determine the optimal set of solutions. Override of pure virtual method in Optimizer
base class.
The extraction of parameter values actually occurs in this method when
JEGA::FrontEnd::Driver::ExecuteAlgorithm is called. The loading of the problem and algorithm configurations
also occurs in this method. That way, if it is called more than once and the algorithm or problem has changed,
the change will be accounted for.
Implements Optimizer.
References JEGAOptimizer::_initPts, JEGAOptimizer::_theEvalCreator, JEGAOptimizer::_-
theParamDB, Driver::DestroyAlgorithm(), Driver::ExtractAllData(), JEGAOptimizer::GetBestSolutions(),
JEGAOptimizer::initial_points(), JEGAOptimizer::LoadAlgorithmConfig(), JEGAOpti-
mizer::LoadDakotaResponses(), JEGAOptimizer::LoadProblemConfig(), Driver::PerformIterations(),
Minimizer::resize_best_resp_array(), Minimizer::resize_best_vars_array(), and JEGAOpti-
mizer::ToDoubleMatrix().
Overridden to return true since JEGA algorithms can accept multiple initial points.
Returns:
true, always.
Overridden to return true since JEGA algorithms can return multiple final points.
Returns:
true, always.
Overridden to assign the _initPts member variable to the passed in collection of Dakota::Variables.
Parameters:
pts The array of initial points for the JEGA algorithm created and run by this JEGAOptimizer.
Overridden to return the collection of initial points for the JEGA algorithm created and run by this JEGAOpti-
mizer.
Returns:
The collection of initial points for the JEGA algorithm created and run by this JEGAOptimizer.
An array of initial points to use as an initial population. This member is here to help support the use of JEGA
algorithms in Dakota strategies. If this array is populated, then whatever initializer is specified will be ignored
and the DoubleMatrix initializer will be used instead on a matrix created from the data in this array.
Referenced by JEGAOptimizer::find_optimum(), and JEGAOptimizer::initial_points().
The documentation for this class was generated from the following files:
• JEGAOptimizer.hpp
• JEGAOptimizer.cpp
Iterator
Minimizer
LeastSq
• ∼LeastSq ()
destructor
• void initialize_run ()
• void run ()
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post
• void get_confidence_intervals ()
Calculate confidence intervals on estimated parameters.
Protected Attributes
• int numLeastSqTerms
number of least squares terms
• LeastSq ∗ prevLSqInstance
pointer containing previous value of leastSqInstance
• bool weightFlag
flag indicating whether weighted least squares is active
• RealVector confBoundsLower
lower bounds for confidence intervals on calibration parameters
• RealVector confBoundsUpper
upper bounds for confidence intervals on calibration parameters
• void weight_model ()
Wrap iteratedModel in a RecastModel that weights the residuals.
Base class for the nonlinear least squares branch of the iterator hierarchy. The LeastSq class provides common
data and functionality for least squares solvers (including NL2SOLLeastSq, NLSSOLLeastSq, and SNLLLeastSq).
standard constructor This constructor extracts the inherited data for the least squares branch and performs sanity
checking on gradient and constraint settings.
References Dakota::abort_handler(), Iterator::bestVariablesArray, Variables::copy(), Model::current_-
variables(), Minimizer::data_transform_model(), Model::init_communicators(), Iterator::iteratedModel,
Iterator::maxConcurrency, Minimizer::minimizerRecasts, Minimizer::numIterPrimaryFns,
LeastSq::numLeastSqTerms, Minimizer::numRowsExpData, Minimizer::numUserPrimaryFns, Mini-
mizer::obsDataFlag, Minimizer::optimizationFlag, Minimizer::scale_model(), Minimizer::scaleFlag,
LeastSq::weight_model(), and LeastSq::weightFlag.
This function should be invoked (or reimplemented) by any derived implementations of initialize_run() (which
would otherwise hide it).
Reimplemented from Minimizer.
Reimplemented in SNLLLeastSq.
References Iterator::iteratedModel, LeastSq::leastSqInstance, Minimizer::obsDataFlag,
LeastSq::prevLSqInstance, Minimizer::scaleFlag, and Model::update_from_subordinate_model().
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post Virtual run function for the iterator class hierarchy. All derived classes need to redefine it.
Reimplemented from Iterator.
References LeastSq::minimize_residuals().
Implements portions of post_run specific to LeastSq for scaling back to native variables and functions. This func-
tion should be invoked (or reimplemented) by any derived implementations of post_run() (which would otherwise
hide it).
Reimplemented from Iterator.
Reimplemented in SNLLLeastSq.
References Dakota::abort_handler(), Response::active_set_request_vector(), Iterator::bestResponseArray, Iter-
ator::bestVariablesArray, Variables::continuous_variables(), Response::copy(), Minimizer::cvScaleMultipliers,
Minimizer::cvScaleOffsets, Minimizer::cvScaleTypes, Response::function_value(), Response::function_-
values(), Iterator::iteratedModel, Minimizer::modify_s2n(), Minimizer::need_resp_trans_byvars(),
LeastSq::numLeastSqTerms, Minimizer::numNonlinearConstraints, Model::primary_response_fn_weights(),
utility function to perform common operations following post_run(); deallocation and resetting of instance point-
ers Optional: perform finalization phases of run sequence, like deallocating memory and resetting instance point-
ers. Commonly used in sub-iterator executions. This is a virtual function; when re-implementing, a derived class
must call its nearest parent’s finalize_run(), typically _after_ performing its own implementation steps.
Reimplemented from Minimizer.
Reimplemented in SNLLLeastSq.
References LeastSq::leastSqInstance, and LeastSq::prevLSqInstance.
Redefines default iterator results printing to include nonlinear least squares results (residual terms and constraints).
Reimplemented from Iterator.
References Iterator::activeSet, Minimizer::archive_allocate_best(), Minimizer::archive_best(), Itera-
tor::bestResponseArray, Iterator::bestVariablesArray, LeastSq::confBoundsLower, LeastSq::confBoundsUpper,
Model::continuous_variable_labels(), Dakota::data_pairs, Model::interface_id(), Iterator::iteratedModel,
Dakota::lookup_by_val(), Iterator::numContinuousVars, Iterator::numFunctions, LeastSq::numLeastSqTerms,
Model::primary_response_fn_weights(), ActiveSet::request_values(), Model::subordinate_model(),
Dakota::write_data_partial(), and Dakota::write_precision.
Calculate confidence intervals on estimated parameters. Calculate individual confidence intervals for each param-
eter. These bounds are based on a linear approximation of the nonlinear model.
References Iterator::activeSet, Iterator::bestResponseArray, Iterator::bestVariablesArray, Model::compute_-
response(), LeastSq::confBoundsLower, LeastSq::confBoundsUpper, Model::continuous_-
variables(), Model::current_response(), Response::function_gradients(), Iterator::iteratedModel, Itera-
tor::numContinuousVars, LeastSq::numLeastSqTerms, ActiveSet::request_values(), Minimizer::scaleFlag,
and Minimizer::vendorNumericalGradFlag.
Referenced by NLSSOLLeastSq::minimize_residuals(), NL2SOLLeastSq::minimize_residuals(), and
SNLLLeastSq::post_run().
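The linearized intervals described above correspond to the standard nonlinear-regression form, shown here as a sketch under the usual assumptions (the exact quantile and scaling conventions used by Dakota are not confirmed by this text). With $n$ residuals $r_k$, $p$ parameters $\hat{\theta}$, and $J$ the $n \times p$ Jacobian of the residuals at the solution:

```latex
\hat{\theta}_i \;\pm\; t_{1-\alpha/2,\,n-p}\;
\hat{\sigma}\,\sqrt{\bigl[(J^{\mathsf T} J)^{-1}\bigr]_{ii}},
\qquad
\hat{\sigma}^2 \;=\; \frac{\sum_{k=1}^{n} r_k(\hat{\theta})^2}{n-p}
```

The dependence on $J$ is why, per the references above, Response::function_gradients() is needed to form these bounds.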
Wrap iteratedModel in a RecastModel that weights the residuals. Sets up the recast for the weighting model. The
weighting transformation doesn't resize, so numUserPrimaryFns is used. There is no variables, active set, or
secondary mapping; all indices are mapped one-to-one (no change in counts).
References Model::assign_rep(), Iterator::iteratedModel, Iterator::numContinuousVars,
LeastSq::numLeastSqTerms, Minimizer::numNonlinearConstraints, Minimizer::numNonlinearIneqConstraints,
13.54.3.8 void primary_resp_weighter (const Variables & unweighted_vars, const Variables &
weighted_vars, const Response & unweighted_response, Response & weighted_response)
[static, private]
Recast callback function to weight least squares residuals, gradients, and Hessians. Applies the weights to the
least squares residuals.
References Dakota::_NPOS, Response::active_set_derivative_vector(), Response::active_set_request_-
vector(), Variables::acv(), Variables::all_continuous_variable_ids(), Variables::continuous_variable_ids(),
Variables::cv(), Dakota::find_index(), Response::function_gradients(), Response::function_gradients_-
view(), Response::function_hessian(), Response::function_hessian_view(), Response::function_values(),
Response::function_values_view(), Variables::icv(), Variables::inactive_continuous_variable_ids(), It-
erator::iteratedModel, LeastSq::leastSqInstance, LeastSq::numLeastSqTerms, Iterator::outputLevel,
Model::primary_response_fn_weights(), and Model::subordinate_model().
Referenced by LeastSq::weight_model().
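A hedged sketch of the residual weighting: scaling each residual by the square root of its weight makes the recast sum of squares equal the weighted sum $\sum_i w_i r_i^2$. This sqrt-of-weights convention is a common one, assumed here rather than confirmed by the text; gradient rows and Hessians would be scaled by the same factor.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Scale residuals by sqrt(w_i) so that || weighted ||^2 = sum_i w_i r_i^2.
inline std::vector<double>
weight_residuals(const std::vector<double>& residuals,
                 const std::vector<double>& weights)
{
  std::vector<double> weighted(residuals.size());
  for (std::size_t i = 0; i < residuals.size(); ++i)
    weighted[i] = std::sqrt(weights[i]) * residuals[i];
  return weighted;
}
```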
The documentation for this class was generated from the following files:
• DakotaLeastSq.hpp
• DakotaLeastSq.cpp
Interface
ApplicationInterface
DirectApplicInterface
MatlabInterface
• ∼MatlabInterface ()
Destructor: close Matlab engine.
Protected Attributes
• engine ∗ matlabEngine
pointer to the MATLAB engine used for direct evaluations
Specialization of DirectApplicInterface to link to Matlab analysis drivers. Includes convenience functions to map
data to/from Matlab
execute an analysis code portion of a direct evaluation invocation Matlab specialization of derived analysis com-
ponents.
Reimplemented from DirectApplicInterface.
References ApplicationInterface::analysisServerId, and MatlabInterface::matlab_engine_run().
Helper function supporting derived_map_ac. Sends data to Matlab, executes the analysis, and collects return data.
Direct interface to Matlab through the MathWorks external API. The m-file executed is specified through
analysis_drivers, extra strings through analysis_components. (Original BMA 11/28/2005)
Special thanks to Lee Peterson for substantial enhancements 12/15/2007:
• Added an output buffer for the MATLAB command response and error messages
• Made the Dakota variable persistent in the MATLAB engine workspace
• Added robustness to the user deleting required Dakota fields
References Dakota::abort_handler(), DirectApplicInterface::analysisComponents, DirectApplicInter-
face::analysisDriverIndex, Interface::currEvalId, DirectApplicInterface::directFnASV, DirectApplicIn-
terface::directFnDVV, Dakota::FIELD_NAMES, DirectApplicInterface::fnGrads, DirectApplicInter-
face::fnHessians, Interface::fnLabels, DirectApplicInterface::fnVals, DirectApplicInterface::gradFlag, DirectAp-
plicInterface::hessFlag, MatlabInterface::matlab_field_prep(), MatlabInterface::matlabEngine, DirectApplicIn-
terface::numACV, DirectApplicInterface::numADIV, DirectApplicInterface::numADRV, Dakota::NUMBER_-
OF_FIELDS, DirectApplicInterface::numDerivVars, DirectApplicInterface::numFns, DirectApplicInter-
face::numVars, Interface::outputLevel, DirectApplicInterface::xC, DirectApplicInterface::xCLabels, Direc-
tApplicInterface::xDI, DirectApplicInterface::xDILabels, DirectApplicInterface::xDR, and DirectApplicInter-
face::xDRLabels.
Referenced by MatlabInterface::derived_map_ac().
The documentation for this class was generated from the following files:
• MatlabInterface.hpp
• MatlabInterface.cpp
Minimizer
DOTOptimizer
JEGAOptimizer
NCSUOptimizer
NLPQLPOptimizer
NomadOptimizer
NonlinearCGOptimizer
NPSOLOptimizer
SNLLOptimizer
• ∼Minimizer ()
destructor
• void initialize_run ()
utility function to perform common operations prior to pre_run(); typically memory initialization; setting of in-
stance pointers
• void finalize_run ()
utility function to perform common operations following post_run(); deallocation and resetting of instance pointers
• void scale_model ()
Wrap iteratedModel in a RecastModel that performs variable and/or response scaling.
• RealVector modify_s2n (const RealVector &scaled_vars, const IntArray &scale_types, const RealVector &multipliers, const RealVector &offsets) const
general RealVector mapping from scaled to native variables (and values)
• Real objective (const RealVector &fn_vals, const BoolDeque &max_sense, const RealVector &primary_wts) const
compute a composite objective value from one or more primary functions
• Real objective (const RealVector &fn_vals, size_t num_fns, const BoolDeque &max_sense, const RealVector &primary_wts) const
compute a composite objective with specified number of source primary functions, instead of userPrimaryFns
• void objective_gradient (const RealVector &fn_vals, const RealMatrix &fn_grads, const BoolDeque &max_sense, const RealVector &primary_wts, RealVector &obj_grad) const
compute the gradient of the composite objective function
• void objective_gradient (const RealVector &fn_vals, size_t num_fns, const RealMatrix &fn_grads, const BoolDeque &max_sense, const RealVector &primary_wts, RealVector &obj_grad) const
compute the gradient of the composite objective function
• void objective_hessian (const RealVector &fn_vals, const RealMatrix &fn_grads, const RealSymMatrixArray &fn_hessians, const BoolDeque &max_sense, const RealVector &primary_wts, RealSymMatrix &obj_hess) const
compute the Hessian of the composite objective function
• void objective_hessian (const RealVector &fn_vals, size_t num_fns, const RealMatrix &fn_grads, const RealSymMatrixArray &fn_hessians, const BoolDeque &max_sense, const RealVector &primary_wts, RealSymMatrix &obj_hess) const
compute the Hessian of the composite objective function
• void archive_best (size_t index, const Variables &best_vars, const Response &best_resp)
archive the best point into the results array
• static void replicate_set_recast (const Variables &recast_vars, const ActiveSet &recast_set, ActiveSet &sub_model_set)
conversion of request vector values for Least Squares
• static void secondary_resp_copier (const Variables &input_vars, const Variables &output_vars, const Response &input_response, Response &output_response)
copy the partial response for secondary functions when needed (data and reduction transforms)
Protected Attributes
• Real constraintTol
optimizer/least squares constraint tolerance
• Real bigRealBoundSize
• int bigIntBoundSize
cutoff value for discrete variable bounds
• size_t numNonlinearIneqConstraints
number of nonlinear inequality constraints
• size_t numNonlinearEqConstraints
number of nonlinear equality constraints
• size_t numLinearIneqConstraints
number of linear inequality constraints
• size_t numLinearEqConstraints
number of linear equality constraints
• int numNonlinearConstraints
total number of nonlinear constraints
• int numLinearConstraints
total number of linear constraints
• int numConstraints
total number of linear and nonlinear constraints
• bool optimizationFlag
flag for use where optimization and NLS must be distinguished
• size_t numUserPrimaryFns
number of objective functions or least squares terms in the user’s model; always initialized at Minimizer, even if overridden later
• size_t numIterPrimaryFns
number of objective functions or least squares terms in the iterator’s view; always initialized at Minimizer, even if overridden later
• bool boundConstraintFlag
convenience flag for denoting the presence of user-specified bound constraints. Used for method selection and error
checking.
• bool speculativeFlag
flag for speculative gradient evaluations
• String obsDataFilename
filename from which to read observed data
• bool obsDataFlag
flag indicating whether user-supplied data is active
• ExperimentData expData
Container for experimental data to which to calibrate model using least squares or other formulations which mini-
mize SSE.
• size_t numExperiments
number of experiments
• IntVector numReplicates
number of replicates
• size_t numRowsExpData
total number of rows of data, since varying numbers of experiments and replicates per experiment are allowed
• bool scaleFlag
flag for overall scaling status
• bool varsScaleFlag
flag for variables scaling
• bool primaryRespScaleFlag
flag for primary response scaling
• bool secondaryRespScaleFlag
flag for secondary response scaling
• IntArray cvScaleTypes
scale flags for continuous vars.
• RealVector cvScaleMultipliers
scales for continuous variables
• RealVector cvScaleOffsets
offsets for continuous variables
• IntArray responseScaleTypes
scale flags for all responses
• RealVector responseScaleMultipliers
scales for all responses
• RealVector responseScaleOffsets
offsets for all responses (zero for functions, not for nonlinear constraints)
• IntArray linearIneqScaleTypes
scale flags for linear ineq
• RealVector linearIneqScaleMultipliers
scales for linear ineq constrs.
• RealVector linearIneqScaleOffsets
offsets for linear ineq constrs.
• IntArray linearEqScaleTypes
scale flags for linear eq.
• RealVector linearEqScaleMultipliers
scales for linear constraints
• RealVector linearEqScaleOffsets
offsets for linear constraints
• Minimizer ∗ prevMinInstance
pointer containing previous value of minimizerInstance
• bool vendorNumericalGradFlag
convenience flag for gradType == numerical && methodSource == vendor
• void initialize_scaling ()
initialize scaling types, multipliers, and offsets; perform error checking
• void compute_scaling (int object_type, int auto_type, int num_vars, RealVector &lbs, RealVector &ubs,
RealVector &targets, const StringArray &scale_strings, const RealVector &scales, IntArray &scale_types,
RealVector &scale_mults, RealVector &scale_offsets)
general helper function for initializing scaling types and factors on a vector of variables, functions, constraints,
etc.
• bool compute_scale_factor (const Real lower_bound, const Real upper_bound, Real ∗multiplier, Real
∗offset)
automatically compute a single scaling factor -- bounds case
• void response_scaler_core (const Variables &native_vars, const Variables &scaled_vars, const Response
&native_response, Response &iterator_response, size_t start_offset, size_t num_responses)
Core of response scaling, which doesn’t perform any output.
• RealVector modify_n2s (const RealVector &native_vars, const IntArray &scale_types, const RealVector
&multipliers, const RealVector &offsets) const
general RealVector mapping from native to scaled variables vectors:
• void print_scaling (const String &info, const IntArray &scale_types, const RealVector &scale_mults, const
RealVector &scale_offsets, const StringArray &labels)
print scaling information for a particular response type in tabular form
• static void primary_resp_scaler (const Variables &native_vars, const Variables &scaled_vars, const Re-
sponse &native_response, Response &iterator_response)
RecastModel callback for primary response scaling: transform responses (grads, Hessians) from native (user) to
scaled space.
• static void secondary_resp_scaler (const Variables &native_vars, const Variables &scaled_vars, const Re-
sponse &native_response, Response &scaled_response)
RecastModel callback for secondary response scaling: transform constraints (grads, Hessians) from native (user)
to scaled space.
Friends
• class SOLBase
the SOLBase class is not derived from the iterator hierarchy but still needs access to iterator hierarchy data (to avoid attribute replication)
• class SNLLBase
the SNLLBase class is not derived from the iterator hierarchy but still needs access to iterator hierarchy data (to avoid attribute replication)
Base class for the optimizer and least squares branches of the iterator hierarchy. The Minimizer class provides
common data and functionality for Optimizer and LeastSq.
standard constructor This constructor extracts inherited data for the optimizer and least squares branches and
performs sanity checking on constraint settings.
References Dakota::abort_handler(), Response::active_set_request_vector(), Iterator::bestResponseArray, Minimizer::bigIntBoundSize, Minimizer::bigRealBoundSize, Minimizer::boundConstraintFlag, Model::continuous_lower_bounds(), Model::continuous_upper_bounds(), Response::copy(), Model::current_response(), Model::discrete_int_lower_bounds(), Model::discrete_int_upper_bounds(), Model::discrete_real_lower_bounds(), Model::discrete_real_upper_bounds(), Iterator::gradientType, Iterator::hessianType, Iterator::iteratedModel, Iterator::maxIterations, Iterator::methodName, Iterator::methodSource, Iterator::numContinuousVars, Iterator::numDiscreteIntVars, Iterator::numDiscreteRealVars, Iterator::numFinalSolutions, Iterator::numFunctions, Minimizer::numLinearEqConstraints, Minimizer::numLinearIneqConstraints, Minimizer::numNonlinearEqConstraints, Minimizer::numNonlinearIneqConstraints, Dakota::strbegins(), Dakota::strends(), and Minimizer::vendorNumericalGradFlag.
utility function to perform common operations prior to pre_run(); typically memory initialization; setting of instance pointers Perform initialization phases of run sequence, like allocating memory and setting instance pointers. Commonly used in sub-iterator executions. This is a virtual function; when re-implementing, a derived class must call its nearest parent’s initialize_run(), typically _before_ performing its own implementation steps.
utility function to perform common operations following post_run(); deallocation and resetting of instance pointers Optional: perform finalization phases of run sequence, like deallocating memory and resetting instance pointers. Commonly used in sub-iterator executions. This is a virtual function; when re-implementing, a derived class must call its nearest parent’s finalize_run(), typically _after_ performing its own implementation steps.
Reimplemented from Iterator.
Reimplemented in LeastSq, Optimizer, SNLLLeastSq, and SNLLOptimizer.
References Minimizer::minimizerInstance, and Minimizer::prevMinInstance.
Wrap iteratedModel in a RecastModel that subtracts provided observed data from the primary response functions (variables and secondary responses are unchanged). Reads observation data to compute least squares residuals. Does not change the size of responses, and is the first wrapper, therefore sizes are based on userDefinedModel. This will set weights to sigma[i]^-2 if appropriate. weight_flag is true if there already exist user-specified weights in the calling context.
References Dakota::abort_handler(), Iterator::activeSet, Model::assign_rep(), Minimizer::expData, ProblemDescDB::get_bool(), ProblemDescDB::get_iv(), ProblemDescDB::get_sizet(), Iterator::iteratedModel, ExperimentData::load_scalar(), Iterator::numContinuousVars, Minimizer::numExperiments, Iterator::numFunctions, Minimizer::numNonlinearConstraints, Minimizer::numNonlinearIneqConstraints, Minimizer::numReplicates, Minimizer::numRowsExpData, Minimizer::numUserPrimaryFns, Minimizer::obsDataFilename, Iterator::outputLevel, Minimizer::primary_resp_differencer(), Model::primary_response_fn_sense(), Model::primary_response_fn_weights(), Iterator::probDescDB, Minimizer::replicate_set_recast(), ActiveSet::request_vector(), Minimizer::secondary_resp_copier(), and Model::subordinate_model().
Referenced by LeastSq::LeastSq(), and Optimizer::Optimizer().
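As a rough illustration of the data transform described above, the following sketch differences primary responses with observed data and applies the sigma[i]^-2 weighting. This is not Dakota source: the container, the function name, and the use of std::vector in place of RealVector are all stand-ins for illustration.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical container for the transformed least squares data.
struct ResidualData {
  std::vector<double> residuals;
  std::vector<double> weights;
};

// Sketch of the residual differencing: residual_i = f_i - data_i, with
// weights set to sigma_i^-2 so that minimizing the weighted sum of squared
// residuals corresponds to a standard-deviation-scaled misfit.
ResidualData difference_with_data(const std::vector<double>& primary_fns,
                                  const std::vector<double>& observed,
                                  const std::vector<double>& sigma)
{
  ResidualData out;
  for (std::size_t i = 0; i < primary_fns.size(); ++i) {
    out.residuals.push_back(primary_fns[i] - observed[i]);
    out.weights.push_back(1.0 / (sigma[i] * sigma[i])); // sigma^-2 weighting
  }
  return out;
}
```

In the actual code path, this transform is applied inside a RecastModel so that downstream least squares iterators see residuals rather than raw responses.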
Wrap iteratedModel in a RecastModel that performs variable and/or response scaling. Wrap the iteratedModel in a scaling transformation, such that iteratedModel now contains a scaling recast model. Potentially affects variables, primary, and secondary responses.
References Model::assign_rep(), Minimizer::cvScaleTypes, RecastModel::initialize(), Minimizer::initialize_scaling(), Iterator::iteratedModel, Model::model_rep(), Iterator::numContinuousVars, Minimizer::numNonlinearConstraints, Minimizer::numNonlinearIneqConstraints, Minimizer::numUserPrimaryFns, Iterator::outputLevel, Minimizer::primary_resp_scaler(), Model::primary_response_fn_sense(),
13.56.3.5 void gnewton_set_recast (const Variables & recast_vars, const ActiveSet & recast_set,
ActiveSet & sub_model_set) [static, protected]
conversion of request vector values for the Gauss-Newton Hessian approximation For Gauss-Newton Hessian
requests, activate the 2 bit and mask the 4 bit.
References ActiveSet::request_value(), and ActiveSet::request_vector().
Referenced by Optimizer::reduce_model(), and SurrBasedLocalMinimizer::SurrBasedLocalMinimizer().
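The bit manipulation above can be sketched as a stand-alone function (hypothetical, not the actual ActiveSet interface), using the usual ASV convention of 1 = value, 2 = gradient, 4 = Hessian:

```cpp
// Sketch of the Gauss-Newton ASV conversion: when a Hessian (4 bit) is
// requested of the recast model, the sub-model request activates the
// gradient (2 bit) and masks out the Hessian bit, since the Gauss-Newton
// approximation is built from residual gradients alone.
short gnewton_request(short recast_request)
{
  short sub_request = recast_request;
  if (sub_request & 4) {   // Hessian requested
    sub_request |= 2;      // activate the 2 (gradient) bit
    sub_request &= ~4;     // mask the 4 (Hessian) bit
  }
  return sub_request;
}
```

For example, a full request of 7 (value + gradient + Hessian) becomes 3 (value + gradient) at the sub-model.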
13.56.3.6 void secondary_resp_copier (const Variables & input_vars, const Variables & output_vars,
const Response & input_response, Response & output_response) [static, protected]
copy the partial response for secondary functions when needed (data and reduction transforms) Constraint function
map from user/native space to iterator/scaled/combined space using a RecastModel.
References Minimizer::minimizerInstance, Minimizer::numIterPrimaryFns, Minimizer::numNonlinearConstraints, Minimizer::numUserPrimaryFns, and Response::update_partial().
Referenced by Minimizer::data_transform_model(), Optimizer::reduce_model(), and LeastSq::weight_model().
13.56.3.7 bool need_resp_trans_byvars (const ShortArray & asv, int start_index, int num_resp)
[protected]
determine if response transformation is needed due to variable transformations Determine if variable transformations are present and derivatives are requested, which implies a response transformation is necessary
References Minimizer::varsScaleFlag.
Referenced by SNLLLeastSq::post_run(), Optimizer::post_run(), LeastSq::post_run(), and
Minimizer::response_scaler_core().
13.56.3.8 RealVector modify_s2n (const RealVector & scaled_vars, const IntArray & scale_types, const
RealVector & multipliers, const RealVector & offsets) const [protected]
general RealVector mapping from scaled to native variables (and values) general RealVector mapping from scaled to native variables and/or vals; loosely, in greatest generality: native_var = (LOG_BASE^scaled_var) * multiplier + offset
Referenced by SNLLLeastSq::post_run(), Optimizer::post_run(), LeastSq::post_run(), and
Minimizer::variables_scaler().
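A minimal sketch of this scaled-to-native mapping, under assumed scale type codes (0 = none, 1 = value scaling, 2 = log scaling with base 10; these codes and the function name are illustrative stand-ins, not Dakota's actual enum or API):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Scaled-to-native mapping: for log-scaled entries, exponentiate first;
// for any scaled entry, then apply the affine map
//   native = value * multiplier + offset.
std::vector<double> modify_s2n_sketch(const std::vector<double>& scaled_vars,
                                      const std::vector<int>& scale_types,
                                      const std::vector<double>& multipliers,
                                      const std::vector<double>& offsets)
{
  const double LOG_BASE = 10.0; // assumed base
  std::vector<double> native(scaled_vars.size());
  for (std::size_t i = 0; i < scaled_vars.size(); ++i) {
    double v = scaled_vars[i];
    if (scale_types[i] == 2)                 // log scaling: undo the log
      v = std::pow(LOG_BASE, v);
    if (scale_types[i] != 0)                 // any scaling: affine map
      v = v * multipliers[i] + offsets[i];
    native[i] = v;
  }
  return native;
}
```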
13.56.3.9 void response_modify_s2n (const Variables & native_vars, const Response & scaled_response,
Response & native_response, int start_offset, int num_responses) const [protected]
map responses from scaled to native space Unscaling response mapping: modifies response from scaled (iterator)
to native (user) space. Maps num_responses starting at response_offset
References Response::active_set(), Variables::acv(), Variables::all_continuous_variable_ids(), Variables::all_continuous_variables(), Variables::continuous_variable_ids(), Variables::continuous_variables(), Dakota::copy_data(), Variables::cv(), Minimizer::cvScaleMultipliers, Minimizer::cvScaleOffsets, Minimizer::cvScaleTypes, Dakota::find_index(), Response::function_gradient_view(), Response::function_gradients(), Response::function_hessian_view(), Response::function_hessians(), Response::function_labels(), Response::function_value(), Response::function_values(), Variables::icv(), Variables::inactive_continuous_variable_ids(), Variables::inactive_continuous_variables(), Minimizer::numUserPrimaryFns, Iterator::outputLevel, Minimizer::responseScaleMultipliers, Minimizer::responseScaleOffsets, Minimizer::responseScaleTypes, Dakota::write_col_vector_trans(), Dakota::write_data(), and Dakota::write_precision.
Referenced by Optimizer::post_run(), and LeastSq::post_run().
13.56.3.10 Real objective (const RealVector & fn_vals, const BoolDeque & max_sense, const RealVector
& primary_wts) const [protected]
compute a composite objective value from one or more primary functions The composite objective computation sums up the contributions from one or more primary functions using the primary response fn weights.
References Minimizer::numUserPrimaryFns.
Referenced by SurrBasedLocalMinimizer::approx_subprob_objective_eval(), SurrBasedMinimizer::augmented_lagrangian_merit(), EffGlobalMinimizer::expected_improvement(), SurrBasedMinimizer::lagrangian_merit(), Optimizer::objective_reduction(), SurrBasedMinimizer::penalty_merit(), COLINOptimizer::post_run(), SurrBasedMinimizer::update_filter(), and SurrBasedLocalMinimizer::update_penalty().
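A minimal sketch of the weighted-sum computation for the optimization case (stand-in names and types; std::vector replaces RealVector/BoolDeque, unit weights are assumed when none are given, and the least squares branch, which additionally squares residuals, is omitted):

```cpp
#include <cstddef>
#include <vector>

// Weighted sum of primary functions, negating any function whose sense is
// "maximize" so that the composite objective is always minimized.
double composite_objective(const std::vector<double>& fn_vals,
                           const std::vector<bool>& max_sense,
                           const std::vector<double>& primary_wts)
{
  double obj = 0.0;
  for (std::size_t i = 0; i < fn_vals.size(); ++i) {
    double w = primary_wts.empty() ? 1.0 : primary_wts[i]; // default weight 1
    double sign = (!max_sense.empty() && max_sense[i]) ? -1.0 : 1.0;
    obj += sign * w * fn_vals[i];
  }
  return obj;
}
```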
13.56.3.11 Real objective (const RealVector & fn_vals, size_t num_fns, const BoolDeque & max_sense,
const RealVector & primary_wts) const [protected]
compute a composite objective with specified number of source primary functions, instead of userPrimaryFns
This "composite" objective is a more general case of the previous objective(), but doesn’t presume a reduction
map from user to iterated space. Used to apply weights and sense in COLIN results sorting. Leaving as a duplicate
implementation pending resolution of COLIN lookups.
References Minimizer::optimizationFlag.
13.56.3.12 void objective_gradient (const RealVector & fn_vals, size_t num_fns, const RealMatrix &
fn_grads, const BoolDeque & max_sense, const RealVector & primary_wts, RealVector &
obj_grad) const [protected]
compute the gradient of the composite objective function The composite objective gradient computation combines the contributions from one or more primary function gradients, including the effect of any primary function weights. In the case of a linear mapping (MOO), only the primary function gradients are required, but in the case of a nonlinear mapping (NLS), primary function values are also needed. Within RecastModel::set_mapping(), the active set requests are automatically augmented to make values available when needed, based on nonlinearRespMapping settings.
References Iterator::numContinuousVars, and Minimizer::optimizationFlag.
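The two cases above can be sketched as follows (hypothetical stand-in names; for the linear MOO case obj = Σ_i w_i f_i so ∂obj/∂x_j = Σ_i w_i ∂f_i/∂x_j, while for the NLS case obj = Σ_i w_i f_i² so the chain rule brings in the function values: ∂obj/∂x_j = Σ_i 2 w_i f_i ∂f_i/∂x_j; maximization senses are omitted for brevity):

```cpp
#include <cstddef>
#include <vector>

// Composite objective gradient sketch. fn_grads[i][j] is df_i/dx_j,
// standing in for Dakota's RealMatrix column access.
std::vector<double> objective_gradient_sketch(
    const std::vector<double>& fn_vals,
    const std::vector<std::vector<double>>& fn_grads, // [fn][deriv var]
    const std::vector<double>& primary_wts,
    bool least_squares)
{
  std::size_t num_vars = fn_grads.empty() ? 0 : fn_grads[0].size();
  std::vector<double> obj_grad(num_vars, 0.0);
  for (std::size_t i = 0; i < fn_grads.size(); ++i) {
    double w = primary_wts.empty() ? 1.0 : primary_wts[i];
    // NLS needs the function value (residual) via the chain rule; MOO does not.
    double coeff = least_squares ? 2.0 * w * fn_vals[i] : w;
    for (std::size_t j = 0; j < num_vars; ++j)
      obj_grad[j] += coeff * fn_grads[i][j];
  }
  return obj_grad;
}
```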
13.56.3.13 void objective_hessian (const RealVector & fn_vals, size_t num_fns, const RealMatrix &
fn_grads, const RealSymMatrixArray & fn_hessians, const BoolDeque & max_sense, const
RealVector & primary_wts, RealSymMatrix & obj_hess) const [protected]
compute the Hessian of the composite objective function The composite objective Hessian computation combines the contributions from one or more primary function Hessians, including the effect of any primary function weights. In the case of a linear mapping (MOO), only the primary function Hessians are required, but in the case of a nonlinear mapping (NLS), primary function values and gradients are also needed in general (gradients only in the case of a Gauss-Newton approximation). Within the default RecastModel::set_mapping(), the active set requests are automatically augmented to make values and gradients available when needed, based on nonlinearRespMapping settings.
References Dakota::abort_handler(), Iterator::numContinuousVars, and Minimizer::optimizationFlag.
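The Gauss-Newton special case mentioned above (gradients only) can be sketched as follows. For obj = Σ_i w_i f_i², the exact Hessian is Σ_i 2 w_i (∇f_i ∇f_iᵀ + f_i ∇²f_i); dropping the residual-times-Hessian term leaves the Gauss-Newton approximation built from first-order information alone. Names and types are illustrative stand-ins:

```cpp
#include <cstddef>
#include <vector>

// Gauss-Newton Hessian approximation: H ~= sum_i 2 w_i grad(f_i) grad(f_i)^T.
std::vector<std::vector<double>> gauss_newton_hessian(
    const std::vector<std::vector<double>>& fn_grads, // [fn][deriv var]
    const std::vector<double>& primary_wts)
{
  std::size_t n = fn_grads.empty() ? 0 : fn_grads[0].size();
  std::vector<std::vector<double>> H(n, std::vector<double>(n, 0.0));
  for (std::size_t i = 0; i < fn_grads.size(); ++i) {
    double w = primary_wts.empty() ? 1.0 : primary_wts[i];
    for (std::size_t j = 0; j < n; ++j)          // rank-one outer product
      for (std::size_t k = 0; k < n; ++k)
        H[j][k] += 2.0 * w * fn_grads[i][j] * fn_grads[i][k];
  }
  return H;
}
```

The approximation is symmetric positive semidefinite by construction, which is what makes it attractive for least squares solvers.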
Safely resize the best variables array to newsize taking into account the envelope-letter design pattern and any
recasting. Uses data from the innermost model, should any Minimizer recasts be active. Called by multipoint
return solvers. Do not directly call resize on the bestVariablesArray object unless you intend to share the internal
content (letter) with other objects after assignment.
References Iterator::bestVariablesArray, Variables::copy(), Model::current_variables(), Iterator::iteratedModel,
Minimizer::minimizerRecasts, and Model::subordinate_model().
Referenced by JEGAOptimizer::find_optimum(), and COLINOptimizer::post_run().
Safely resize the best response array to newsize taking into account the envelope-letter design pattern and any
recasting. Uses data from the innermost model, should any Minimizer recasts be active. Called by multipoint
return solvers. Do not directly call resize on the bestResponseArray object unless you intend to share the internal
content (letter) with other objects after assignment.
References Iterator::bestResponseArray, Response::copy(), Model::current_response(), Iterator::iteratedModel,
Minimizer::minimizerRecasts, and Model::subordinate_model().
Referenced by JEGAOptimizer::find_optimum(), and COLINOptimizer::post_run().
13.56.3.16 void primary_resp_differencer (const Variables & raw_vars, const Variables & residual_vars,
const Response & raw_response, Response & residual_response) [static, private]
Recast callback function to difference residuals with observed data. Difference the primary responses with observed data
initialize scaling types, multipliers, and offsets; perform error checking Initialize scaling types, multipliers, and
offsets. Update the iteratedModel appropriately
References Dakota::abort_handler(), Minimizer::compute_scaling(), Model::continuous_lower_bounds(), Model::continuous_upper_bounds(), Model::continuous_variable_labels(), Model::continuous_variables(), Dakota::copy_data(), Minimizer::cvScaleMultipliers, Minimizer::cvScaleOffsets, Minimizer::cvScaleTypes, ProblemDescDB::get_rv(), ProblemDescDB::get_sa(), Iterator::iteratedModel, Minimizer::lin_coeffs_modify_n2s(), Model::linear_eq_constraint_coeffs(), Model::linear_eq_constraint_targets(), Model::linear_ineq_constraint_coeffs(), Model::linear_ineq_constraint_lower_bounds(), Model::linear_ineq_constraint_upper_bounds(), Minimizer::linearEqScaleMultipliers, Minimizer::linearEqScaleOffsets, Minimizer::linearEqScaleTypes, Minimizer::linearIneqScaleMultipliers, Minimizer::linearIneqScaleOffsets, Minimizer::linearIneqScaleTypes, Model::model_rep(), Minimizer::modify_n2s(), Model::nonlinear_eq_constraint_targets(), Model::nonlinear_ineq_constraint_lower_bounds(), Model::nonlinear_ineq_constraint_upper_bounds(), Iterator::numContinuousVars, Iterator::numFunctions, Minimizer::numLinearEqConstraints, Minimizer::numLinearIneqConstraints, Minimizer::numNonlinearEqConstraints, Minimizer::numNonlinearIneqConstraints, Minimizer::numUserPrimaryFns, Iterator::outputLevel, Minimizer::primaryRespScaleFlag, Minimizer::print_scaling(), Iterator::probDescDB, Model::response_labels(), Minimizer::responseScaleMultipliers, Minimizer::responseScaleOffsets, Minimizer::responseScaleTypes, Minimizer::secondaryRespScaleFlag, RecastModel::submodel_supports_derivative_estimation(), Model::subordinate_model(), Model::supports_derivative_estimation(), and Minimizer::varsScaleFlag.
Referenced by Minimizer::scale_model().
13.56.3.18 void variables_scaler (const Variables & scaled_vars, Variables & native_vars) [static,
private]
RecastModel callback for variables scaling: transform variables from scaled to native (user) space. Variables map
from iterator/scaled space to user/native space using a RecastModel.
References Variables::continuous_variable_labels(), Variables::continuous_variables(), Minimizer::cvScaleMultipliers, Minimizer::cvScaleOffsets, Minimizer::cvScaleTypes, Minimizer::minimizerInstance, Minimizer::modify_s2n(), Iterator::outputLevel, and Dakota::write_data().
Referenced by Minimizer::scale_model().
13.56.3.19 void secondary_resp_scaler (const Variables & native_vars, const Variables & scaled_vars,
const Response & native_response, Response & iterator_response) [static, private]
RecastModel callback for secondary response scaling: transform constraints (grads, Hessians) from native (user)
to scaled space. Constraint function map from user/native space to iterator/scaled/combined space using a Recast-
Model.
13.56.3.20 RealVector modify_n2s (const RealVector & native_vars, const IntArray & scale_types, const
RealVector & multipliers, const RealVector & offsets) const [private]
general RealVector mapping from native to scaled variables vectors: general RealVector mapping from native to
scaled variables; loosely, in greatest generality: scaled_var = log( (native_var - offset) / multiplier )
Referenced by Minimizer::initialize_scaling().
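The native-to-scaled direction quoted above is the inverse of the scaled-to-native affine/log map: first undo the affine transformation, then (for log-scaled entries) take the logarithm. A per-entry sketch, with the same assumed type codes as before (0 = none, 1 = value scaling, 2 = log base 10; all names are illustrative stand-ins):

```cpp
#include <cmath>

// Native-to-scaled mapping for one entry:
//   scaled = log_BASE((native - offset) / multiplier)  for log scaling,
//   scaled = (native - offset) / multiplier            for value scaling.
double native_to_scaled(double native, int scale_type,
                        double multiplier, double offset)
{
  const double LOG_BASE = 10.0; // assumed base
  if (scale_type == 0) return native;        // no scaling
  double v = (native - offset) / multiplier; // undo the affine map
  if (scale_type == 2)
    v = std::log(v) / std::log(LOG_BASE);    // undo the exponentiation
  return v;
}
```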
13.56.3.21 void response_modify_n2s (const Variables & native_vars, const Response & native_response,
Response & recast_response, int start_offset, int num_responses) const [private]
map responses from native to scaled variable space Scaling response mapping: modifies response from a model (user/native) for use in iterators (scaled). Maps num_responses starting at response_offset
References Response::active_set(), Variables::acv(), Variables::all_continuous_variable_ids(), Variables::all_continuous_variables(), Variables::continuous_variable_ids(), Variables::continuous_variables(), Dakota::copy_data(), Variables::cv(), Minimizer::cvScaleMultipliers, Minimizer::cvScaleOffsets, Minimizer::cvScaleTypes, Dakota::find_index(), Response::function_gradient_view(), Response::function_gradients(), Response::function_hessian_view(), Response::function_hessians(), Response::function_labels(), Response::function_value(), Response::function_values(), Variables::icv(), Variables::inactive_continuous_variable_ids(), Variables::inactive_continuous_variables(), Minimizer::numUserPrimaryFns, Iterator::outputLevel, Minimizer::responseScaleMultipliers, Minimizer::responseScaleOffsets, Minimizer::responseScaleTypes, Dakota::write_col_vector_trans(), Dakota::write_data(), and Dakota::write_precision.
Referenced by Minimizer::response_scaler_core().
13.56.3.22 RealMatrix lin_coeffs_modify_n2s (const RealMatrix & src_coeffs, const RealVector &
cv_multipliers, const RealVector & lin_multipliers) const [private]
general linear coefficients mapping from native to scaled space compute scaled linear constraint matrix given
design variable multipliers and linear scaling multipliers. Only scales components corresponding to continuous
variables so for src_coeffs of size MxN, lin_multipliers.size() <= M, cv_multipliers.size() <= N
Referenced by Minimizer::initialize_scaling().
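One plausible realization of this mapping, stated here purely as an illustrative assumption (the actual sign/division convention of lin_coeffs_modify_n2s is not documented above): each column of the coefficient matrix is multiplied by the corresponding continuous variable multiplier, and each row is divided by the corresponding linear constraint multiplier, so that A_scaled x_scaled reproduces the scaled constraint values.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: scale an M x N linear constraint coefficient matrix.
// Columns pick up the variable multipliers; rows are normalized by the
// constraint multipliers.
std::vector<std::vector<double>> scale_lin_coeffs(
    const std::vector<std::vector<double>>& src_coeffs, // M x N
    const std::vector<double>& cv_multipliers,          // size >= N
    const std::vector<double>& lin_multipliers)         // size >= M
{
  std::vector<std::vector<double>> scaled = src_coeffs;
  for (std::size_t i = 0; i < scaled.size(); ++i)
    for (std::size_t j = 0; j < scaled[i].size(); ++j)
      scaled[i][j] = src_coeffs[i][j] * cv_multipliers[j] / lin_multipliers[i];
  return scaled;
}
```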
The documentation for this class was generated from the following files:
• DakotaMinimizer.hpp
• DakotaMinimizer.cpp
Derived class within the Constraints hierarchy which separates continuous and discrete variables (no domain type
array merging). Inheritance diagram for MixedVarConstraints::
Constraints
MixedVarConstraints
• ∼MixedVarConstraints ()
destructor
• void build_active_views ()
construct active views of all variables bounds arrays
• void build_inactive_views ()
construct inactive views of all variables bounds arrays
Derived class within the Constraints hierarchy which separates continuous and discrete variables (no domain
type array merging). Derived variable constraints classes take different views of the design, uncertain, and state
variable types and the continuous and discrete domain types. The MixedVarConstraints derived class separates
the continuous and discrete domain types (see Variables::get_variables(problem_db) for variables type selection;
variables type is passed to the Constraints constructor in Model).
standard constructor In this class, mixed continuous/discrete variables are used. Most iterators/strategies use this
approach, which is the default in Constraints::get_constraints().
References Constraints::allContinuousLowerBnds, Constraints::allContinuousUpperBnds, Constraints::allDiscreteIntLowerBnds, Constraints::allDiscreteIntUpperBnds, Constraints::allDiscreteRealLowerBnds, Constraints::allDiscreteRealUpperBnds, Constraints::build_views(), SharedVariablesData::components_totals(), Dakota::copy_data_partial(), ProblemDescDB::get_iv(), ProblemDescDB::get_rv(), Constraints::manage_linear_constraints(), Constraints::numLinearEqCons, Constraints::numLinearIneqCons, Constraints::sharedVarsData, SharedVariablesData::vc_lookup(), and SharedVariablesData::view().
reshape the lower/upper bound arrays within the Constraints hierarchy Resizes the derived bounds arrays.
Reimplemented from Constraints.
References Constraints::allContinuousLowerBnds, Constraints::allContinuousUpperBnds, Constraints::allDiscreteIntLowerBnds, Constraints::allDiscreteIntUpperBnds, Constraints::allDiscreteRealLowerBnds, Constraints::allDiscreteRealUpperBnds, and Constraints::build_views().
Referenced by MixedVarConstraints::MixedVarConstraints().
The documentation for this class was generated from the following files:
• MixedVarConstraints.hpp
• MixedVarConstraints.cpp
Variables
MixedVariables
• ∼MixedVariables ()
destructor
• void build_active_views ()
construct active views of all variables arrays
• void build_inactive_views ()
construct inactive views of all variables arrays
Derived class within the Variables hierarchy which separates continuous and discrete variables (no domain type
array merging). Derived variables classes take different views of the design, uncertain, and state variable types
and the continuous and discrete domain types. The MixedVariables derived class separates the continuous and
discrete domain types (see Variables::get_variables(problem_db)).
13.58.2.1 MixedVariables (const ProblemDescDB & problem_db, const std::pair< short, short > & view)
standard constructor In this class, the distinct approach is used (design, uncertain, and state variable types and
continuous and discrete domain types are distinct). Most iterators/strategies use this approach.
References Variables::allContinuousVars, Variables::allDiscreteIntVars, Variables::allDiscreteRealVars, Variables::build_views(), SharedVariablesData::components_totals(), Dakota::copy_data_partial(), ProblemDescDB::get_iv(), ProblemDescDB::get_rv(), Variables::sharedVarsData, and SharedVariablesData::vc_lookup().
• MixedVariables.hpp
• MixedVariables.cpp
Model
DataFitSurrModel HierarchSurrModel
• virtual ∼Model ()
destructor
• virtual void update_approximation (const Variables &vars, const IntResponsePair &response_pr, bool
rebuild_flag)
replace the anchor point data within an existing surrogate
• virtual void append_approximation (const Variables &vars, const IntResponsePair &response_pr, bool
rebuild_flag)
append a single point to an existing surrogate’s data
• void compute_response ()
• void asynch_compute_response ()
Spawn an asynchronous job (or jobs) that computes the value of the Response at currentVariables (default ActiveSet).
• void init_serial ()
for cases where init_communicators() will not be called, modify some default settings to behave properly in serial.
• void stop_configurations ()
called from Strategy::init_iterator() for iteratorComm rank 0 to terminate serve_configurations() on other iteratorComm processors
• int serve_configurations ()
called from Strategy::init_iterator() for iteratorComm rank != 0 to balance init_communicators() calls on iteratorComm rank 0
• void estimate_message_lengths ()
estimate messageLengths for a model
• size_t tv () const
returns total number of vars
• size_t cv () const
returns number of active continuous variables
• bool derivative_estimation ()
• Real initialize_h (Real x_j, Real lb_j, Real ub_j, Real step_size, String step_type)
function to determine initial finite difference h (before step length adjustment) based on type of step desired
• Real FDstep1 (Real x0_j, Real lb_j, Real ub_j, Real h_mag)
function returning finite-difference step size (affected by bounds)
Public Attributes
• bool shortStep
flags finite-difference step size adjusted by bounds
Protected Attributes
• Variables currentVariables
the set of current variables used by the model for performing function evaluations
• size_t numDerivVars
the number of active continuous variables used in computing most response derivatives (i.e., in places such as
quasi-Hessians and response corrections where only the active continuous variables are supported)
• Response currentResponse
the set of current responses that holds the results of model function evaluations
• size_t numFns
the number of functions in currentResponse
• Constraints userDefinedConstraints
Explicit constraints on variables are maintained in the Constraints class hierarchy. Currently, this includes linear
constraints and bounds, but could be extended in the future to include other explicit constraints which (1) have their
form specified by the user, and (2) are not catalogued in Response since their form and coefficients are published
to an iterator at startup.
• String modelType
type of model: single, nested, or surrogate
• String surrogateType
type of surrogate model: local_∗, multipoint_∗, global_∗, or hierarchical
• String gradType
grad type: none,numerical,analytic,mixed
• String methodSrc
method source: dakota,vendor
• String intervalType
interval type: forward,central
• bool ignoreBounds
option to ignore bounds when computing finite differences
• bool centralHess
option to use old 2nd-order finite diffs for Hessians
• RealVector fdGradSS
relative step sizes for numerical gradients
• String fdGradST
step type for numerical gradients
• IntSet gradIdAnalytic
analytic id’s for mixed gradients
• IntSet gradIdNumerical
numerical id’s for mixed gradients
• String hessType
Hess type: none,numerical,quasi,analytic,mixed.
• String quasiHessType
quasi-Hessian type: bfgs, damped_bfgs, sr1
• RealVector fdHessByGradSS
relative step sizes for numerical Hessians estimated with 1st-order grad differences
• RealVector fdHessByFnSS
relative step sizes for numerical Hessians estimated with 2nd-order fn differences
• String fdHessST
step type for numerical Hessians
• IntSet hessIdAnalytic
analytic id’s for mixed Hessians
• IntSet hessIdNumerical
numerical id’s for mixed Hessians
• IntSet hessIdQuasi
quasi id’s for mixed Hessians
• bool supportsEstimDerivs
whether model should perform or forward derivative estimation
• IntArray messageLengths
length of packed MPI buffers containing vars, vars/set, response, and PRPair
• ParConfigLIter modelPCIter
the ParallelConfiguration node used by this model instance
• short componentParallelMode
the component parallelism mode: 0 (none), 1 (INTERFACE/LF_MODEL), or 2 (SUB_MODEL/HF_MODEL/TRUTH_MODEL)
• bool asynchEvalFlag
flags asynch evaluations (local or distributed)
• int evaluationCapacity
capacity for concurrent evaluations supported by the Model
• short outputLevel
output verbosity level: {SILENT,QUIET,NORMAL,VERBOSE,DEBUG}_OUTPUT
• IntSetArray discreteDesignSetIntValues
array of IntSet’s, each containing the set of allowable integer values corresponding to a discrete design integer set
variable
• RealSetArray discreteDesignSetRealValues
array of RealSet’s, each containing the set of allowable real values corresponding to a discrete design real set
variable
• IntSetArray discreteStateSetIntValues
array of IntSet’s, each containing the set of allowable integer values corresponding to a discrete state integer set
variable
• RealSetArray discreteStateSetRealValues
array of RealSet’s, each containing the set of allowable real values corresponding to a discrete state real set
variable
• Pecos::AleatoryDistParams aleatDistParams
container for aleatory random variable distribution parameters
• Pecos::EpistemicDistParams epistDistParams
container for epistemic random variable distribution parameters
• BoolDeque primaryRespFnSense
array of flags (one per primary function) for switching the sense to maximize the primary function (default is
minimize)
• RealVector primaryRespFnWts
primary response function weightings (either weights for multiobjective optimization or weighted least squares)
• bool hierarchicalTagging
whether to perform hierarchical evalID tagging of params/results
• int estimate_derivatives (const ShortArray &map_asv, const ShortArray &fd_grad_asv, const ShortArray
&fd_hess_asv, const ShortArray &quasi_hess_asv, const ActiveSet &original_set, const bool asynch_-
flag)
evaluate numerical gradients using finite differences. This routine is selected with "method_source dakota" (the
default method_source) in the numerical gradient specification.
• void update_response (const Variables &vars, Response &new_response, const ShortArray &fd_grad_-
asv, const ShortArray &fd_hess_asv, const ShortArray &quasi_hess_asv, const ActiveSet &original_set,
Response &initial_map_response, const RealMatrix &new_fn_grads, const RealSymMatrixArray &new_-
fn_hessians)
overlay results to update a response object
Private Attributes
• String modelId
model identifier string from the input file
• int modelEvalCntr
evaluation counter for top-level compute_response() and asynch_compute_response() calls. Differs from lower
level counters in case of numerical derivative estimation (several lower level evaluations are assimilated into a
single higher level evaluation)
• bool estDerivsFlag
flags presence of estimated derivatives within a set of calls to asynch_compute_response()
• bool initCommsBcastFlag
flag for determining need to bcast the max concurrency from init_communicators(); set from Strategy::init_-
iterator()
• bool modelAutoGraphicsFlag
flag for posting of graphics data within compute_response (automatic graphics posting in the model as opposed to
graphics posting at the strategy level)
• ModelList modelList
used to collect sub-models for subordinate_models()
• VariablesList varsList
history of vars populated in asynch_compute_response() and used in synchronize().
• BoolList initialMapList
transfers initial_map flag values from estimate_derivatives() to synchronize_derivatives()
• BoolList dbCaptureList
transfers db_capture flag values from estimate_derivatives() to synchronize_derivatives()
• ResponseList dbResponseList
transfers database captures from estimate_derivatives() to synchronize_derivatives()
• RealList deltaList
transfers deltas from estimate_derivatives() to synchronize_derivatives()
• IntIntMap numFDEvalsMap
tracks the number of evaluations used within estimate_derivatives(). Used in synchronize() as a key for combining
finite difference responses into numerical gradients.
• IntIntMap rawEvalIdMap
maps from the raw evaluation ids returned by derived_synchronize() and derived_synchronize_nowait() to the cor-
responding modelEvalCntr id. Used for rekeying responseMap.
• RealVectorArray xPrev
previous parameter vectors used in computing s for quasi-Newton updates
• RealMatrix fnGradsPrev
previous gradient vectors used in computing y for quasi-Newton updates
• RealSymMatrixArray quasiHessians
quasi-Newton Hessian approximations
• SizetArray numQuasiUpdates
number of quasi-Newton Hessian updates applied
• IntResponseMap responseMap
used to return a map of responses for asynchronous evaluations in final concatenated form. The similar map in
Interface contains raw responses.
• IntResponseMap graphicsRespMap
used to cache the data returned from derived_synchronize_nowait() prior to sequential input into the graphics
• IntSetArray activeDiscSetIntValues
aggregation of the admissible value sets for all active discrete set integer variables
• RealSetArray activeDiscSetRealValues
aggregation of the admissible value sets for all active discrete set real variables
• BitArray discreteIntSets
key for identifying discrete integer set variables within the active discrete integer variables
• Model * modelRep
pointer to the letter (initialized only for the envelope)
• int referenceCount
number of objects sharing modelRep
Base class for the model class hierarchy. The Model class is the base class for one of the primary class hierarchies
in DAKOTA. The model hierarchy contains a set of variables, an interface, and a set of responses, and an iterator
operates on the model to map the variables into responses using the interface. For memory efficiency and enhanced
polymorphism, the model hierarchy employs the "letter/envelope idiom" (see Coplien "Advanced C++", p. 133),
for which the base class (Model) serves as the envelope and one of the derived classes (selected in Model::get_-
model()) serves as the letter.
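The idiom described above can be sketched as follows; Shape and Square are illustrative stand-ins for Model and a derived model class, not Dakota's actual declarations. The envelope holds a pointer to a dynamically selected letter, and the copy constructor and destructor manage a shared reference count.

```cpp
#include <cassert>
#include <string>

// Illustrative letter/envelope sketch; Shape/Square are hypothetical
// stand-ins for Model and a derived model class.
class Shape {
public:
  explicit Shape(const std::string& type);                  // envelope ctor
  Shape() : shapeRep(nullptr), referenceCount(1) {}         // rep is NULL
  Shape(const Shape& s) : shapeRep(s.shapeRep), referenceCount(1) {
    if (shapeRep) ++shapeRep->referenceCount;               // share the letter
  }
  virtual ~Shape() {
    if (shapeRep && --shapeRep->referenceCount == 0)
      delete shapeRep;                                      // last envelope
  }
  virtual double area() const { return shapeRep->area(); }  // forward to letter
protected:
  explicit Shape(int /*BaseConstructor*/) : shapeRep(nullptr), referenceCount(1) {}
private:
  Shape* shapeRep;     // pointer to the letter (non-null only in the envelope)
  int referenceCount;  // number of envelopes sharing this letter
};

class Square : public Shape {  // a "letter" class
public:
  explicit Square(double s) : Shape(0), side(s) {}
  double area() const override { return side * side; }
private:
  double side;
};

// plays the role of Model::get_model(): select the letter from input data
Shape::Shape(const std::string& type) : shapeRep(nullptr), referenceCount(1) {
  if (type == "square") shapeRep = new Square(2.0);
}
```

Copies of the envelope share the same letter; the letter is deleted only when the last sharing envelope is destroyed.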
13.59.2.1 Model ()
default constructor The default constructor is used in vector<Model> instantiations and for initialization of Model
objects contained in Iterator and derived Strategy classes. modelRep is NULL in this case (a populated problem_-
db is needed to build a meaningful Model object). This makes it necessary to check for NULL in the copy
constructor, assignment operator, and destructor.
standard constructor for envelope Used in model instantiations within strategy constructors. Envelope constructor
only needs to extract enough data to properly execute get_model, since Model(BaseConstructor, problem_db)
builds the actual base class data for the derived models.
References Dakota::abort_handler(), Model::get_model(), and Model::modelRep.
copy constructor Copy constructor manages sharing of modelRep and incrementing of referenceCount.
References Model::modelRep, and Model::referenceCount.
destructor Destructor decrements referenceCount and only deletes modelRep when referenceCount reaches zero.
References Model::modelRep, and Model::referenceCount.
constructor initializing the base class part of letter classes (BaseConstructor overloading avoids infinite recursion
in the derived class constructors - Coplien, p. 139) This constructor builds the base class data for all inherited
models. get_model() instantiates a derived class and the derived class selects this base class constructor in its
initialization list (to avoid the recursion of the base class constructor calling get_model() again). Since the letter IS
the representation, its representation pointer is set to NULL (an uninitialized pointer causes problems in ~Model).
References Dakota::abort_handler(), Model::currentResponse, Model::fdGradSS, Model::fdHessByFnSS,
Model::fdHessByGradSS, ProblemDescDB::get_sa(), Model::gradIdNumerical, Model::gradType,
constructor initializing base class for recast model class instances constructed on the fly This constructor also
builds the base class data for inherited models. However, it is used for recast models which are instantiated on
the fly. Therefore it only initializes a small subset of attributes. Note that parallel_lib is managed separately
from problem_db since parallel_lib is needed even in cases where problem_db is an empty envelope (i.e., use of
dummy_db in Model(NoDBBaseConstructor) above).
assignment operator Assignment operator decrements referenceCount for old modelRep, assigns new modelRep,
and increments referenceCount for new modelRep.
References Model::modelRep, and Model::referenceCount.
return the sub-iterator in nested and surrogate models return by reference requires use of dummy objects, but is
important to allow use of assign_rep() since this operation must be performed on the original envelope object.
Reimplemented in DataFitSurrModel, NestedModel, and RecastModel.
References Dakota::dummy_iterator, Model::modelRep, and Model::subordinate_iterator().
Referenced by NonDExpansion::compute_expansion(), NonDExpansion::compute_print_-
converged_results(), NonDExpansion::compute_print_iteration_results(), NonDExpansion::finalize_-
sets(), NonDGlobalReliability::get_best_sample(), NonDPolynomialChaos::increment_order(),
NonDExpansion::increment_sets(), NonDPolynomialChaos::increment_specification_sequence(),
NonDExpansion::increment_specification_sequence(), NLPQLPOptimizer::initialize(), NCSUOpti-
mizer::initialize(), DOTOptimizer::initialize(), CONMINOptimizer::initialize(), NonDExpansion::initialize_-
expansion(), NonDExpansion::initialize_sets(), NonDStochCollocation::initialize_u_space_-
model(), NonDPolynomialChaos::initialize_u_space_model(), NonDExpansion::initialize_u_space_-
model(), SurrBasedLocalMinimizer::minimize_surrogates(), SurrBasedGlobalMinimizer::minimize_-
surrogates(), NonDLocalInterval::NonDLocalInterval(), NonDLocalReliability::NonDLocalReliability(),
NonDGlobalReliability::optimize_gaussian_process(), NonDExpansion::refine_expansion(), SOL-
Base::SOLBase(), RecastModel::subordinate_iterator(), Model::subordinate_iterator(), and
NonDStochCollocation::update_expansion().
return a single sub-model defined from subModel in nested and recast models and truth_model() in surrogate
models; used for a directed dive through model recursions that may bypass some components. return by refer-
ence requires use of dummy objects, but is important to allow use of assign_rep() since this operation must be
performed on the original envelope object.
Reimplemented in NestedModel, RecastModel, and SurrogateModel.
References Dakota::dummy_model, Model::modelRep, and Model::subordinate_model().
Referenced by Minimizer::data_transform_model(), NonDGlobalReliability::expected_-
feasibility(), NonDGlobalReliability::expected_improvement(), SurrogateModel::force_rebuild(),
NonDExpansion::initialize_expansion(), Minimizer::initialize_run(), Minimizer::initialize_scaling(),
NonDExpansion::initialize_u_space_model(), NonDGlobalReliability::optimize_gaussian_process(),
LeastSq::post_run(), COLINOptimizer::post_run(), Optimizer::primary_resp_reducer(), LeastSq::primary_-
resp_weighter(), Optimizer::print_results(), LeastSq::print_results(), Minimizer::resize_best_resp_-
array(), Minimizer::resize_best_vars_array(), Minimizer::scale_model(), Model::subordinate_model(),
DataFitSurrModel::update_global(), and LeastSq::weight_model().
return the approximation sub-model in surrogate models return by reference requires use of dummy objects, but
is important to allow use of assign_rep() since this operation must be performed on the original envelope object.
Reimplemented in DataFitSurrModel, HierarchSurrModel, and RecastModel.
References Dakota::dummy_model, Model::modelRep, and Model::surrogate_model().
Referenced by NonDAdaptiveSampling::calc_score_delta_y(), NonDAdaptiveSampling::calc_score_topo_alm_-
hybrid(), NonDAdaptiveSampling::calc_score_topo_avg_persistence(), NonDAdaptiveSampling::calc_score_-
topo_bottleneck(), SurrBasedLocalMinimizer::find_center_approx(), SurrBasedLocalMinimizer::minimize_-
surrogates(), SurrBasedGlobalMinimizer::minimize_surrogates(), NonDAdaptiveSampling::output_round_-
data(), SurrBasedLocalMinimizer::SurrBasedLocalMinimizer(), RecastModel::surrogate_model(), and
Model::surrogate_model().
return the truth sub-model in surrogate models return by reference requires use of dummy objects, but is important
to allow use of assign_rep() since this operation must be performed on the original envelope object.
Reimplemented in DataFitSurrModel, HierarchSurrModel, and RecastModel.
References Dakota::dummy_model, Model::modelRep, and Model::truth_model().
Referenced by SurrogateModel::force_rebuild(), SurrBasedMinimizer::initialize_graphics(),
SurrBasedLocalMinimizer::minimize_surrogates(), SurrBasedGlobalMinimizer::minimize_surrogates(),
SurrBasedMinimizer::print_results(), SurrogateModel::subordinate_model(), SurrBasedGlobalMin-
imizer::SurrBasedGlobalMinimizer(), SurrBasedLocalMinimizer::SurrBasedLocalMinimizer(),
RecastModel::truth_model(), and Model::truth_model().
propagate vars/labels/bounds/targets from the bottom up used only for instantiate-on-the-fly model recursions
(all RecastModel instantiations and alternate DataFitSurrModel instantiations). Single, Hierarchical, and Nested
Models do not redefine the function since they do not support instantiate-on-the-fly. This means that the re-
cursion will stop as soon as it encounters a Model that was instantiated normally, which is appropriate since
ProblemDescDB-constructed Models use top-down information flow and do not require bottom-up updating.
Reimplemented in DataFitSurrModel, and RecastModel.
References Model::modelRep, and Model::update_from_subordinate_model().
Referenced by NonDLocalReliability::initialize_class_data(), NonDExpansion::initialize_expansion(),
Optimizer::initialize_run(), LeastSq::initialize_run(), EffGlobalMinimizer::minimize_surrogates_-
on_model(), NonDGlobalReliability::optimize_gaussian_process(), NonDLocalInterval::quantify_-
uncertainty(), NonDGlobalInterval::quantify_uncertainty(), RecastModel::update_from_subordinate_model(),
DataFitSurrModel::update_from_subordinate_model(), and Model::update_from_subordinate_model().
return the interface employed by the derived model class, if present: SingleModel::userDefinedInterface, DataFit-
SurrModel::approxInterface, or NestedModel::optionalInterface return by reference requires use of dummy ob-
jects, but is important to allow use of assign_rep() since this operation must be performed on the original envelope
object.
Reimplemented in DataFitSurrModel, NestedModel, RecastModel, and SingleModel.
References Dakota::dummy_interface, Model::interface(), and Model::modelRep.
Referenced by RecastModel::interface(), Model::interface(), and SurrBasedGlobalMinimizer::minimize_-
surrogates().
return derived model synchronization setting SingleModels and HierarchSurrModels redefine this virtual function.
A default value of "synchronous" prevents asynch local operations for:
• NestedModels: a subIterator can support message passing parallelism, but not asynch local.
• DataFitSurrModels: while asynch evals on approximations will work due to some added bookkeeping,
avoiding them is preferable.
return derived model asynchronous evaluation concurrency SingleModels and HierarchSurrModels redefine this
virtual function.
Reimplemented in RecastModel, and SingleModel.
References Model::local_eval_concurrency(), and Model::modelRep.
return the interface identifier return by reference requires use of dummy objects, but is important to allow use of
assign_rep() since this operation must be performed on the original envelope object.
Reimplemented in DataFitSurrModel, NestedModel, RecastModel, and SingleModel.
References Dakota::dummy_interface, Interface::interface_id(), Model::interface_id(), and Model::modelRep.
Referenced by DataFitSurrModel::build_global(), DataFitSurrModel::DataFitSurrModel(), Model::estimate_-
derivatives(), Model::estimate_message_lengths(), SurrBasedLocalMinimizer::find_center_approx(),
RecastModel::interface_id(), Model::interface_id(), Optimizer::local_objective_recast_retrieve(),
SNLLLeastSq::post_run(), SurrBasedMinimizer::print_results(), Optimizer::print_results(), LeastSq::print_-
results(), SequentialHybridStrategy::run_sequential(), DiscrepancyCorrection::search_db(), Analyzer::update_-
best(), SequentialHybridStrategy::update_local_results(), ConcurrentStrategy::update_local_results(), and
NonDLocalReliability::update_mpp_search_data().
Indicates the usage of an evaluation cache by the Model. Only Models including ApplicationInterfaces support
an evaluation cache: surrogate, nested, and recast mappings are not stored in the cache. Possible exceptions:
HierarchSurrModel, NestedModel::optionalInterface.
Reimplemented in SingleModel.
References Model::evaluation_cache(), and Model::modelRep.
Referenced by DataFitSurrModel::DataFitSurrModel(), and Model::evaluation_cache().
set the hierarchical eval ID tag prefix Derived classes containing additional models or interfaces should implement
this function to pass along to their sub Models/Interfaces.
Reimplemented in DataFitSurrModel, HierarchSurrModel, NestedModel, RecastModel, and SingleModel.
References Model::eval_tag_prefix(), and Model::modelRep.
Referenced by HierarchSurrModel::build_approximation(), HierarchSurrModel::derived_asynch_compute_-
response(), DataFitSurrModel::derived_asynch_compute_response(), HierarchSurrModel::derived_compute_-
response(), DataFitSurrModel::derived_compute_response(), RecastModel::eval_tag_prefix(), Model::eval_tag_-
prefix(), and Iterator::eval_tag_prefix().
return the sub-models in nested and surrogate models since modelList is built with list insertions (using envelope
copies), these models may not be used for model.assign_rep() since this operation must be performed on the
original envelope object. They may, however, be used for letter-based operations (including assign_rep() on letter
contents such as an interface).
References Model::derived_subordinate_models(), Model::modelList, Model::modelRep, and
Model::subordinate_models().
Referenced by DataFitSurrModel::build_global(), NLPQLPOptimizer::initialize(), NCSUOptimizer::initialize(),
DOTOptimizer::initialize(), CONMINOptimizer::initialize(), NonDLocalInterval::NonDLocalInterval(), NonD-
LocalReliability::NonDLocalReliability(), SOLBase::SOLBase(), Model::subordinate_models(), and SurrBased-
LocalMinimizer::SurrBasedLocalMinimizer().
allocate communicator partitions for a model and store configuration in modelPCIterMap The init_communicators()
and derived_init_communicators() functions are structured to avoid performing the messageLengths estimation
more than once. init_communicators() (not virtual) performs the estimation and then forwards the results to
derived_init_communicators() (virtual), which uses the data in different contexts.
References ParallelLibrary::bcast_i(), Model::derived_init_communicators(), Model::estimate_-
message_lengths(), ParallelLibrary::increment_parallel_configuration(), Model::init_communicators(),
Model::initCommsBcastFlag, Model::messageLengths, Model::modelPCIter, Model::modelPCIterMap,
Model::modelRep, ParallelLibrary::parallel_configuration_iterator(), and Model::parallelLib.
Referenced by APPSOptimizer::APPSOptimizer(), COLINOptimizer::COLINOptimizer(),
NonDExpansion::construct_expansion_sampler(), RecastModel::derived_init_communicators(),
NestedModel::derived_init_communicators(), HierarchSurrModel::derived_init_communicators(),
DataFitSurrModel::derived_init_communicators(), EffGlobalMinimizer::EffGlobalMinimizer(),
Model::init_communicators(), EfficientSubspaceMethod::init_fullspace_sampler(), Strategy::init_-
iterator(), JEGAOptimizer::JEGAOptimizer(), LeastSq::LeastSq(), NonDLocalReliability::method_-
recourse(), NonDLocalInterval::method_recourse(), NonDAdaptiveSampling::NonDAdaptiveSampling(),
NonDBayesCalibration::NonDBayesCalibration(), NonDGlobalInterval::NonDGlobalInterval(),
NonDGlobalReliability::NonDGlobalReliability(), NonDGPImpSampling::NonDGPImpSampling(),
NonDGPMSABayesCalibration::NonDGPMSABayesCalibration(), NonDLHSInterval::NonDLHSInterval(),
NonDLocalInterval::NonDLocalInterval(), NonDLocalReliability::NonDLocalReliability(), NonD-
PolynomialChaos::NonDPolynomialChaos(), NonDStochCollocation::NonDStochCollocation(), Opti-
mizer::Optimizer(), EfficientSubspaceMethod::reduced_space_uq(), Model::serve_configurations(), SNL-
LOptimizer::SNLLOptimizer(), SurrBasedGlobalMinimizer::SurrBasedGlobalMinimizer(), and SurrBasedLo-
calMinimizer::SurrBasedLocalMinimizer().
for cases where init_communicators() will not be called, modify some default settings to behave properly in serial.
The init_serial() and derived_init_serial() functions are structured to separate base class (common) operations from
derived class (specialized) operations.
References Model::asynchEvalFlag, Model::derived_init_serial(), Model::init_serial(), Model::local_eval_-
synchronization(), and Model::modelRep.
Referenced by RecastModel::derived_init_serial(), NestedModel::derived_init_serial(),
HierarchSurrModel::derived_init_serial(), DataFitSurrModel::derived_init_serial(), and Model::init_serial().
estimate messageLengths for a model This functionality has been pulled out of init_communicators() and defined
separately so that it may be used in those cases when messageLengths is needed but model.init_communicators()
is not called, e.g., for the master processor in the self-scheduling of a concurrent iterator strategy.
References Response::active_set_derivative_vector(), Response::copy(), Model::currentResponse,
Model::currentVariables, Model::estimate_message_lengths(), Model::interface_id(), Model::messageLengths,
Model::modelRep, Model::numFns, Model::parallelLib, MPIPackBuffer::reset(), MPIPackBuffer::size(), and
ParallelLibrary::world_size().
Referenced by ConcurrentStrategy::ConcurrentStrategy(), Model::estimate_message_lengths(), and Model::init_-
communicators().
replaces existing letter with a new one Similar to the assignment operator, the assign_rep() function decrements
referenceCount for the old modelRep and assigns the new modelRep. It is different in that it is used for publishing
derived class letters to existing envelopes, as opposed to sharing representations among multiple envelopes (in
particular, assign_rep is passed a letter object and operator= is passed an envelope object). Letter assignment
supports two modes as governed by ref_count_incr:
• ref_count_incr = true (default): the incoming letter belongs to another envelope. In this case, increment the
reference count in the normal manner so that deallocation of the letter is handled properly.
• ref_count_incr = false: the incoming letter is instantiated on the fly and has no envelope. This case is
modeled after get_model(): a letter is dynamically allocated using new and passed into assign_rep, the
letter’s reference count is not incremented, and the letter is not remotely deleted (its memory management
is passed over to the envelope).
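The two modes can be sketched with a minimal stand-in envelope and letter (Rep and Envelope are hypothetical names, not Dakota classes):

```cpp
#include <cassert>

// Illustrative stand-ins for a Dakota letter and envelope; not actual classes.
struct Rep { int referenceCount = 1; };

struct Envelope {
  Rep* rep = nullptr;
  void assign_rep(Rep* new_rep, bool ref_count_incr = true) {
    if (rep && --rep->referenceCount == 0) delete rep;  // release old letter
    rep = new_rep;
    if (ref_count_incr && rep) ++rep->referenceCount;   // letter owned elsewhere
    // ref_count_incr == false: an on-the-fly letter keeps its initial count
    // of 1; its memory management is passed over to this envelope
  }
  ~Envelope() { if (rep && --rep->referenceCount == 0) delete rep; }
};
```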
return the gradient concurrency for use in parallel configuration logic This function assumes derivatives with
respect to the active continuous variables. Therefore, concurrency with respect to the inactive continuous variables
is not captured.
References Dakota::contains(), Model::derivative_concurrency(), Model::gradIdAnalytic, Model::gradType,
Model::hessIdNumerical, Model::hessType, Model::intervalType, Model::methodSrc, Model::modelRep, and
Model::numDerivVars.
13.59.3.19 Real initialize_h (Real x_j, Real lb_j, Real ub_j, Real step_size, String step_type)
function to determine initial finite difference h (before step length adjustment) based on type of step desired
Auxiliary function to determine initial finite difference h (before step length adjustment) based on type of step
desired.
Referenced by Model::estimate_derivatives().
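As a hedged sketch, the step-type dispatch might look like the following; the step-type names follow Dakota's fd_step_type options (relative, absolute, bounds), but the scaling details, notably the floor guarding x_j near zero, are illustrative assumptions:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <string>

// Hedged sketch of step-type dispatch before bound adjustment; scaling
// details are illustrative, not Dakota's exact logic.
double init_h(double x_j, double lb_j, double ub_j,
              double step_size, const std::string& step_type) {
  if (step_type == "absolute")
    return step_size;                                 // raw user step
  if (step_type == "bounds")
    return step_size * (ub_j - lb_j);                 // scaled by bound range
  return step_size * std::max(std::fabs(x_j), 0.01);  // "relative" default
}
```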
13.59.3.20 Real FDstep1 (Real x0_j, Real lb_j, Real ub_j, Real h_mag)
function returning finite-difference step size (affected by bounds) Auxiliary function to compute forward or first
central-difference step size.
References Model::ignoreBounds, and Model::shortStep.
Referenced by Model::estimate_derivatives().
13.59.3.21 Real FDstep2 (Real x0_j, Real lb_j, Real ub_j, Real h)
function returning second central-difference step size (affected by bounds) Auxiliary function to compute the second
central-difference step size, honoring bounds.
References Model::ignoreBounds, and Model::shortStep.
Referenced by Model::estimate_derivatives().
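The bound-respecting step selection can be sketched as below, in the spirit of FDstep1; fd_step1 and short_step are illustrative names for the behavior flagged by Model::shortStep, and the actual Dakota logic differs in detail:

```cpp
#include <cassert>
#include <cmath>

// Hedged sketch of bound-respecting forward-step selection.
double fd_step1(double x0, double lb, double ub, double h_mag,
                bool ignore_bounds, bool& short_step) {
  short_step = false;
  if (ignore_bounds) return h_mag;        // bounds not enforced
  double h = h_mag;
  if (x0 + h > ub) {                      // forward step leaves the bounds
    if (x0 - h >= lb)
      h = -h_mag;                         // backward step fits: use it
    else {                                // interval too tight either way:
      double room_up = ub - x0, room_dn = x0 - lb;
      h = (room_up >= room_dn) ? room_up : -room_dn;  // largest feasible step
      short_step = true;                  // record the shortened step
    }
  }
  return h;
}
```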
Used by the envelope to instantiate the correct letter class. Used only by the envelope constructor to initialize
modelRep to the appropriate derived type, as given by the modelType attribute.
References ProblemDescDB::get_string(), Model::model_type(), and Model::modelType.
Referenced by Model::Model().
13.59.3.23 int estimate_derivatives (const ShortArray & map_asv, const ShortArray & fd_grad_asv,
const ShortArray & fd_hess_asv, const ShortArray & quasi_hess_asv, const ActiveSet &
original_set, const bool asynch_flag) [private]
evaluate numerical gradients using finite differences. This routine is selected with "method_source dakota" (the
default method_source) in the numerical gradient specification. Estimate derivatives by computing finite differ-
ence gradients, finite difference Hessians, and/or quasi-Newton Hessians. The total number of finite difference
evaluations is returned for use by synchronize() to track response arrays, and it could be used to improve manage-
ment of max_function_evaluations within the iterators.
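The core finite-difference loop can be illustrated by this simplified sketch, where fd_grad_ss plays the role of fdGradSS (relative step size); the real routine works through compute_response() and the ASV bookkeeping rather than a direct callback:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

// Simplified forward-difference gradient estimation sketch.
std::vector<double>
fd_gradient(const std::function<double(const std::vector<double>&)>& f,
            std::vector<double> x, double fd_grad_ss) {
  const double f0 = f(x);                         // initial map at x
  std::vector<double> grad(x.size());
  for (std::size_t j = 0; j < x.size(); ++j) {
    // relative step scaled by |x_j|, floored to handle x_j near zero
    const double h  = fd_grad_ss * std::max(std::fabs(x[j]), 1.0);
    const double xj = x[j];
    x[j] = xj + h;
    grad[j] = (f(x) - f0) / h;                    // forward difference
    x[j] = xj;                                    // restore coordinate
  }
  return grad;
}
```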
References Model::all_continuous_lower_bounds(), Model::all_continuous_upper_bounds(), Model::all_-
continuous_variable_ids(), Model::all_continuous_variable_types(), Variables::all_continuous_-
variables(), Model::centralHess, Model::continuous_lower_bounds(), Model::continuous_upper_bounds(),
Model::continuous_variable_ids(), Variables::continuous_variable_ids(), Model::continuous_variable_-
types(), Variables::continuous_variables(), Response::copy(), Dakota::copy_data(), Model::currentResponse,
Model::currentVariables, Dakota::data_pairs, Model::dbCaptureList, Model::dbResponseList, Model::deltaList,
ActiveSet::derivative_vector(), Model::derived_asynch_compute_response(), Model::derived_compute_-
response(), Model::fdGradSS, Model::fdGradST, Model::fdHessByFnSS, Model::fdHessByGradSS,
Model::fdHessST, Model::FDstep1(), Model::FDstep2(), Dakota::find_index(), Model::finite_difference_-
lower_bound(), Model::finite_difference_upper_bound(), Response::function_gradients(), Response::function_-
values(), Model::ignoreBounds, Model::inactive_continuous_lower_bounds(), Model::inactive_continuous_-
upper_bounds(), Model::inactive_continuous_variable_ids(), Variables::inactive_continuous_variable_ids(),
Model::inactive_continuous_variable_types(), Variables::inactive_continuous_variables(), Model::initialize_h(),
Model::initialMapList, Model::interface_id(), Model::intervalType, Dakota::lookup_by_val(), Model::numFns,
Model::outputLevel, ActiveSet::request_vector(), Model::shortStep, and Model::update_response().
Referenced by Model::asynch_compute_response(), and Model::compute_response().
13.59.3.24 void synchronize_derivatives (const Variables & vars, const IntResponseMap & fd_responses,
Response & new_response, const ShortArray & fd_grad_asv, const ShortArray &
fd_hess_asv, const ShortArray & quasi_hess_asv, const ActiveSet & original_set)
[private]
combine results from an array of finite difference response objects (fd_responses) into a single response
(new_response) Merge an array of fd_responses into a single new_response. This function is used both by
synchronous compute_response() for the case of asynchronous estimate_derivatives() and by synchronize() for the
case where one or more asynch_compute_response() calls have employed asynchronous estimate_derivatives().
References Response::active_set(), Model::acv(), Variables::all_continuous_variable_ids(), Model::centralHess,
Variables::continuous_variable_ids(), Response::copy(), Model::currentResponse, Model::currentVariables,
Model::cv(), Model::dbCaptureList, Model::dbResponseList, Model::deltaList, ActiveSet::derivative_-
vector(), Dakota::find_index(), Response::function_gradients(), Response::function_values(), Model::icv(),
Variables::inactive_continuous_variable_ids(), Model::initialMapList, Model::intervalType, Model::numFns,
ActiveSet::request_values(), Response::reset_inactive(), and Model::update_response().
Referenced by Model::compute_response(), and Model::synchronize().
13.59.3.25 void update_response (const Variables & vars, Response & new_response, const ShortArray
& fd_grad_asv, const ShortArray & fd_hess_asv, const ShortArray & quasi_hess_asv,
const ActiveSet & original_set, Response & initial_map_response, const RealMatrix &
new_fn_grads, const RealSymMatrixArray & new_fn_hessians) [private]
overlay results to update a response object Overlay the initial_map_response with numerically estimated new_fn_-
grads and new_fn_hessians to populate new_response as governed by asv vectors. Quasi-Newton secant Hessian
updates are also performed here, since this is where the gradient data needed for the updates is first consolidated.
Convenience function used by estimate_derivatives() for the synchronous case and by synchronize_derivatives()
for the asynchronous case.
References Response::active_set_request_vector(), Variables::continuous_variable_ids(), Response::copy(),
Model::currentResponse, Model::currentVariables, ActiveSet::derivative_vector(), Response::function_-
gradients(), Response::function_hessians(), Response::function_values(), Model::hessIdQuasi, Model::hessType,
Response::is_null(), Model::numFns, Model::outputLevel, Model::quasiHessians, ActiveSet::request_-
vector(), Response::reset_inactive(), Model::supportsEstimDerivs, Model::surrogate_response_mode(), and
Model::update_quasi_hessians().
Referenced by Model::estimate_derivatives(), and Model::synchronize_derivatives().
13.59.3.26 void update_quasi_hessians (const Variables & vars, Response & new_response, const
ActiveSet & original_set) [private]
perform quasi-Newton Hessian updates quasi-Newton updates are performed for approximating response function
Hessians using BFGS or SR1 formulations. These Hessians are supported only for the active continuous variables,
and a check is performed on the DVV prior to invoking the function.
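A single undamped BFGS secant update of the kind described here can be sketched as follows, with s = x - xPrev and y = grad - fnGradsPrev; a dense matrix stands in for RealSymMatrix, and damping and the SR1 alternative are omitted:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;   // dense stand-in for RealSymMatrix

// One undamped BFGS secant update: B += y y^T / (y^T s) - B s s^T B / (s^T B s).
void bfgs_update(Mat& B, const Vec& s, const Vec& y) {
  const std::size_t n = s.size();
  Vec Bs(n, 0.0);
  double sBs = 0.0, ys = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    for (std::size_t j = 0; j < n; ++j) Bs[i] += B[i][j] * s[j];
    ys += y[i] * s[i];
  }
  for (std::size_t i = 0; i < n; ++i) sBs += s[i] * Bs[i];
  if (std::fabs(sBs) < 1e-14 || ys <= 0.0) return;  // skip ill-posed updates
  for (std::size_t i = 0; i < n; ++i)
    for (std::size_t j = 0; j < n; ++j)
      B[i][j] += y[i] * y[j] / ys - Bs[i] * Bs[j] / sBs;  // rank-two update
}
```

After the update the approximation satisfies the secant condition B s = y.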
References Dakota::contains(), Variables::continuous_variables(), Dakota::copy_data(), Model::fnGradsPrev,
Response::function_gradients(), Model::hessIdQuasi, Model::hessType, Model::modelType,
Model::numDerivVars, Model::numFns, Model::numQuasiUpdates, Model::outputLevel, Model::quasiHessians,
Model::quasiHessType, ActiveSet::request_vector(), and Model::xPrev.
Referenced by Model::update_response().
13.59.3.27 bool manage_asv (const ShortArray & asv_in, ShortArray & map_asv_out, ShortArray
& fd_grad_asv_out, ShortArray & fd_hess_asv_out, ShortArray & quasi_hess_asv_out)
[private]
Coordinates usage of estimate_derivatives() calls based on asv_in. Splits the total request in asv_in into map_asv_out,
fd_grad_asv_out, fd_hess_asv_out, and quasi_hess_asv_out as governed by the responses specification. If the
returned use_est_deriv is true, then these asv outputs are used by estimate_derivatives() for the initial map, finite-
difference gradient evaluations, finite-difference Hessian evaluations, and quasi-Hessian updates, respectively. If the
returned use_est_deriv is false, then only map_asv_out is used.
References Dakota::abort_handler(), Dakota::contains(), Model::gradIdAnalytic, Model::gradIdNumerical,
Model::gradType, Model::hessIdAnalytic, Model::hessIdNumerical, Model::hessIdQuasi, Model::hessType,
Model::intervalType, Model::methodSrc, Model::supportsEstimDerivs, and Model::surrogate_response_mode().
Referenced by Model::asynch_compute_response(), and Model::compute_response().
The documentation for this class was generated from the following files:
• DakotaModel.hpp
• DakotaModel.cpp
• ∼MPIPackBuffer ()
Destructor.
• int size ()
The number of bytes of packed data.
• int capacity ()
The allocated size of Buffer.
• void reset ()
Resets the buffer index in order to reuse the internal buffer.
Protected Attributes
• char ∗ Buffer
The internal buffer for packing.
• int Index
The index into the current buffer.
• int Size
The total size that has been allocated for the buffer.
Class for packing MPI message buffers. A class that provides a facility for packing message buffers using the
MPI_Pack facility. The MPIPackBuffer class dynamically resizes the internal buffer to contain enough mem-
ory to pack the entire object. When deleted, the MPIPackBuffer object deletes this internal buffer. This class
is based on the Dakota_Version_3_0 version of utilib::PackBuffer from utilib/src/io/PackBuf.[cpp,h].
The documentation for this class was generated from the following files:
• MPIPackBuffer.hpp
• MPIPackBuffer.cpp
• MPIUnpackBuffer ()
Default constructor.
• ∼MPIUnpackBuffer ()
Destructor.
• int size ()
Returns the length of the buffer.
• int curr ()
Returns the number of bytes that have been unpacked from the buffer.
• void reset ()
Resets the index of the internal buffer.
Protected Attributes
• char ∗ Buffer
The internal buffer for unpacking.
• int Index
The index into the current buffer.
• int Size
The total size that has been allocated for the buffer.
• bool ownFlag
If TRUE, then this class owns the internal buffer.
Class for unpacking MPI message buffers. A class that provides a facility for unpacking message buffers using
the MPI_Unpack facility. This class is based on the Dakota_Version_3_0 version of utilib::UnPackBuffer from
utilib/src/io/PackBuf.[cpp,h].
The documentation for this class was generated from the following files:
• MPIPackBuffer.hpp
• MPIPackBuffer.cpp
Iterator
Minimizer
Optimizer
NCSUOptimizer
• NCSUOptimizer (Model &model, const int &max_iter, const int &max_eval, double min_box_size=-1.,
double vol_box_size=-1., double solution_target=-DBL_MAX)
alternate constructor for instantiations "on the fly"
• NCSUOptimizer (const RealVector &var_l_bnds, const RealVector &var_u_bnds, const int &max_iter,
const int &max_eval, double(∗user_obj_eval)(const RealVector &x), double min_box_size=-1., double
vol_box_size=-1., double solution_target=-DBL_MAX)
alternate constructor for instantiations "on the fly"
• ∼NCSUOptimizer ()
destructor
• void find_optimum ()
Used within the optimizer branch for computing the optimal solution. Redefines the run virtual function for the
optimizer branch.
• void check_inputs ()
• static int objective_eval (int ∗n, double c[ ], double l[ ], double u[ ], int point[ ], int ∗maxI, int ∗start, int
∗maxfunc, double fvec[ ], int iidata[ ], int ∗iisize, double ddata[ ], int ∗idsize, char cdata[ ], int ∗icsize)
’fep’ in Griffin-modified NCSUDirect: computes the value of the objective function (potentially at multiple points,
passed by function pointer to NCSUDirect). Includes unscaling from DIRECT.
Private Attributes
• short setUpType
controls iteration mode: SETUP_MODEL (normal usage) or SETUP_USERFUNC (user-supplied functions mode
for "on the fly" instantiations). see enum in NCSUOptimizer.cpp NonDGlobalReliability currently uses the model
mode. GaussProcApproximation currently uses the user_functions mode.
• Real minBoxSize
holds the minimum boxsize
• Real volBoxSize
holds the minimum volume boxsize
• Real solutionTarget
holds the solution target minimum to drive towards
• RealVector lowerBounds
holds variable lower bounds passed in for "user_functions" mode.
• RealVector upperBounds
holds variable upper bounds passed in for "user_functions" mode.
Wrapper class for the NCSU DIRECT optimization library. The NCSUOptimizer class provides a wrapper for
a Fortran 77 implementation of the DIRECT algorithm developed at North Carolina State University. It uses a
function pointer approach for which passed functions must be either global functions or static member functions.
Any attribute used within static member functions must be either local to that function or accessed through a static
pointer.
The user input mappings are as follows:
Standard constructor. This is the standard constructor with method specification support.
References NCSUOptimizer::check_inputs(), and NCSUOptimizer::initialize().
13.62.2.2 NCSUOptimizer (Model & model, const int & max_iter, const int & max_eval, double
min_box_size = -1., double vol_box_size = -1., double solution_target = -DBL_MAX)
Alternate constructor for instantiations "on the fly". This is an alternate constructor for instantiations on the fly
using a Model but no ProblemDescDB.
References NCSUOptimizer::check_inputs(), NCSUOptimizer::initialize(), Iterator::maxFunctionEvals, and Iter-
ator::maxIterations.
Alternate constructor for Iterator instantiations by name. This is an alternate constructor for Iterator instantiations
by name using a Model but no ProblemDescDB.
References NCSUOptimizer::check_inputs(), and NCSUOptimizer::initialize().
13.62.2.4 NCSUOptimizer (const RealVector & var_l_bnds, const RealVector & var_u_bnds, const int
& max_iter, const int & max_eval, double(∗)(const RealVector &x) user_obj_eval, double
min_box_size = -1., double vol_box_size = -1., double solution_target = -DBL_MAX)
Alternate constructor for instantiations "on the fly". This is an alternate constructor for performing an optimization
using the passed-in objective function pointer.
References NCSUOptimizer::check_inputs(), Iterator::maxFunctionEvals, and Iterator::maxIterations.
13.62.3.1 int objective_eval (int ∗ n, double c[ ], double l[ ], double u[ ], int point[ ], int ∗ maxI, int ∗
start, int ∗ maxfunc, double fvec[ ], int iidata[ ], int ∗ iisize, double ddata[ ], int ∗ idsize, char
cdata[ ], int ∗ icsize) [static, private]
’fep’ in Griffin-modified NCSUDirect: computes the value of the objective function (potentially at multiple points,
passed by function pointer to NCSUDirect). Includes unscaling from DIRECT. This modified batch evaluator ac-
cepts multiple points and returns the corresponding vector of function values in fvec. Must be used with the modified
DIRECT source (DIRbatch.f).
References Model::asynch_compute_response(), Model::asynch_flag(), Model::compute_response(),
Model::continuous_variables(), Model::current_response(), Response::function_value(), Iterator::iteratedModel,
NCSUOptimizer::ncsudirectInstance, Model::primary_response_fn_sense(), NCSUOptimizer::setUpType,
Model::synchronize(), and NCSUOptimizer::userObjectiveEval.
Referenced by NCSUOptimizer::find_optimum().
The documentation for this class was generated from the following files:
• NCSUOptimizer.hpp
• NCSUOptimizer.cpp
Model
NestedModel
• ∼NestedModel ()
destructor
• void derived_init_serial ()
set up optionalInterface and subModel for serial operations.
• void stop_servers ()
Executed by the master to terminate server operations for subModel and optionalInterface when iteration on the
NestedModel is complete.
• void set_evaluation_reference ()
set the evaluation counter reference points for the NestedModel (request forwarded to optionalInterface and sub-
Model)
• void fine_grained_evaluation_counters ()
request fine-grained evaluation reporting within optionalInterface and subModel
• void resolve_integer_variable_mapping (const String &map1, const String &map2, size_t curr_index, short
&inactive_sm_view)
for a named integer mapping, resolve primary index and secondary target
• void set_mapping (const ActiveSet &mapped_set, ActiveSet &interface_set, bool &opt_interface_map, Ac-
tiveSet &sub_iterator_set, bool &sub_iterator_map)
define the evaluation requirements for the optionalInterface (interface_set) and the subIterator (sub_iterator_set)
from the total model evaluation requirements (mapped_set)
• void update_sub_model ()
update subModel with current variable values/bounds/labels
Private Attributes
• int nestedModelEvalCntr
number of calls to derived_compute_response()/ derived_asynch_compute_response()
• Iterator subIterator
the sub-iterator that is executed on every evaluation of this model
• Model subModel
the sub-model used in sub-iterator evaluations
• size_t numSubIterFns
number of sub-iterator response functions prior to mapping
• size_t numSubIterMappedIneqCon
number of top-level inequality constraints mapped from the sub-iteration results
• size_t numSubIterMappedEqCon
number of top-level equality constraints mapped from the sub-iteration results
• Interface optionalInterface
the optional interface contributes nonnested response data to the total model response
• String optInterfacePointer
the optional interface pointer from the nested model specification
• Response optInterfaceResponse
the response object resulting from optional interface evaluations
• size_t numOptInterfPrimary
number of primary response functions (objective/least squares/generic functions) resulting from optional interface
evaluations
• size_t numOptInterfIneqCon
number of inequality constraints resulting from optional interface evaluations
• size_t numOptInterfEqCon
number of equality constraints resulting from the optional interface evaluations
• SizetArray active1ACVarMapIndices
"primary" variable mappings for inserting active continuous currentVariables within all continuous subModel vari-
ables. If there are no secondary mappings defined, then the insertions replace the subModel variable values.
• SizetArray active1ADIVarMapIndices
"primary" variable mappings for inserting active discrete int currentVariables within all discrete int subModel
variables. No secondary mappings are defined for discrete int variables, so the insertions replace the subModel
variable values.
• SizetArray active1ADRVarMapIndices
"primary" variable mappings for inserting active discrete real currentVariables within all discrete real subModel
variables. No secondary mappings are defined for discrete real variables, so the insertions replace the subModel
variable values.
• ShortArray active2ACVarMapTargets
"secondary" variable mappings for inserting active continuous currentVariables into sub-parameters (e.g., distri-
bution parameters for uncertain variables or bounds for continuous design/state variables) within all continuous
subModel variables.
• ShortArray active2ADIVarMapTargets
"secondary" variable mappings for inserting active discrete int currentVariables into sub-parameters (e.g., bounds
for discrete design/state variables) within all discrete int subModel variables.
• ShortArray active2ADRVarMapTargets
"secondary" variable mappings for inserting active discrete real currentVariables into sub-parameters (e.g., bounds
for discrete design/state variables) within all discrete real subModel variables.
• SizetArray complement1ACVarMapIndices
"primary" variable mappings for inserting the complement of the active continuous currentVariables within all
continuous subModel variables
• SizetArray complement1ADIVarMapIndices
"primary" variable mappings for inserting the complement of the active discrete int currentVariables within all
discrete int subModel variables
• SizetArray complement1ADRVarMapIndices
"primary" variable mappings for inserting the complement of the active discrete real currentVariables within all
discrete real subModel variables
• BoolDeque extraCVarsData
flags for updating subModel continuous bounds and labels, one for each active continuous variable in currentVari-
ables
• BoolDeque extraDIVarsData
flags for updating subModel discrete int bounds and labels, one for each active discrete int variable in currentVari-
ables
• BoolDeque extraDRVarsData
flags for updating subModel discrete real bounds and labels, one for each active discrete real variable in current-
Variables
• RealMatrix primaryRespCoeffs
"primary" response_mapping matrix applied to the sub-iterator response functions. For OUU, the matrix is applied
to UQ statistics to create contributions to the top-level objective functions/least squares/ generic response terms.
• RealMatrix secondaryRespCoeffs
"secondary" response_mapping matrix applied to the sub-iterator response functions. For OUU, the matrix is
applied to UQ statistics to create contributions to the top-level inequality and equality constraints.
• String evalTagPrefix
cached evalTag Prefix from parents to use at compute_response time
Derived model class which performs a complete sub-iterator execution within every evaluation of the model.
The NestedModel class nests a sub-iterator execution within every model evaluation. This capability is most
commonly used for optimization under uncertainty, in which a nondeterministic iterator is executed on every
optimization function evaluation. The NestedModel also contains an optional interface, for portions of the model
evaluation which are independent from the sub-iterator, and a set of mappings for combining sub-iterator and
optional interface data into a top level response for the model.
Portion of compute_response() specific to NestedModel. Update subModel’s inactive variables with active vari-
ables from currentVariables, compute the optional interface and sub-iterator responses, and map these to the total
model response.
Reimplemented from Model.
References Response::active_set(), NestedModel::component_parallel_mode(), Model::currentResponse,
Model::currentVariables, Iterator::eval_tag_prefix(), Interface::eval_tag_prefix(), NestedModel::evalTagPrefix,
Model::hierarchicalTagging, Interface::map(), NestedModel::nestedModelEvalCntr, Nested-
Model::optInterfaceResponse, NestedModel::optionalInterface, NestedModel::response_mapping(),
Iterator::response_results(), Iterator::response_results_active_set(), Iterator::run_iterator(), NestedModel::set_-
mapping(), NestedModel::subIterator, and NestedModel::update_sub_model().
Flag which prevents overloading the master with a multiprocessor evaluation (forwarded to optionalInterface).
Derived master overload for subModel is handled separately in subModel.compute_response() within subItera-
tor.run().
Reimplemented from Model.
References Interface::iterator_eval_dedicated_master_flag(), Interface::multi_proc_eval_flag(), Nested-
Model::optInterfacePointer, and NestedModel::optionalInterface.
Set up optionalInterface and subModel for parallel operations. Asynchronous flags need to be initialized for the
subModel. In addition, max_iterator_concurrency is the outer-level iterator concurrency, not the subIterator con-
currency that subModel will see, and recomputing the message_lengths on the subModel is probably not a bad
idea either. Therefore, recompute everything on subModel using init_communicators().
Reimplemented from Model.
References Model::init_communicators(), Interface::init_communicators(), Iterator::maximum_concurrency(),
Model::messageLengths, NestedModel::optInterfacePointer, NestedModel::optionalInterface, Nested-
Model::subIterator, and NestedModel::subModel.
Return the current evaluation id for the NestedModel. Returns the top-level nested evaluation count. To get the
lower-level evaluation count, the subModel must be explicitly queried. This is consistent with the evaluation counter
definitions in surrogate models.
Reimplemented from Model.
References NestedModel::nestedModelEvalCntr.
Offset cv_index to create an index into aggregated primary/secondary arrays. Maps an index within the active
continuous variables to an index within the aggregated active continuous/discrete-int/discrete-real variables.
Offset div_index to create an index into aggregated primary/secondary arrays. Maps an index within the active
discrete int variables to an index within the aggregated active continuous/discrete-int/discrete-real variables.
References Model::currentVariables, Variables::cv(), Variables::variables_components_totals(), and Vari-
ables::view().
Referenced by NestedModel::NestedModel(), and NestedModel::update_sub_model().
Offset drv_index to create an index into aggregated primary/secondary arrays. Maps an index within the active
discrete real variables to an index within the aggregated active continuous/discrete-int/discrete-real variables.
References Model::currentVariables, Variables::cv(), Variables::div(), Variables::variables_components_totals(),
and Variables::view().
Referenced by NestedModel::NestedModel(), and NestedModel::update_sub_model().
Offset active complement ccv_index to create an index into all continuous arrays. Maps an index within the
complement of active continuous variables to an index within all continuous variables.
References Dakota::abort_handler(), Model::currentVariables, Variables::variables_components_totals(), and
Variables::view().
Referenced by NestedModel::NestedModel(), and NestedModel::update_sub_model().
Offset active complement cdiv_index to create an index into all discrete int arrays. Maps an index within the
complement of active discrete int variables to an index within all discrete int variables.
References Dakota::abort_handler(), Model::currentVariables, Variables::variables_components_totals(), and
Variables::view().
Referenced by NestedModel::NestedModel(), and NestedModel::update_sub_model().
Offset active complement cdrv_index to create an index into all discrete real arrays. Maps an index within the
complement of active discrete real variables to an index within all discrete real variables.
References Dakota::abort_handler(), Model::currentVariables, Variables::variables_components_totals(), and
Variables::view().
13.63.2.12 void response_mapping (const Response & opt_interface_response, const Response &
sub_iterator_response, Response & mapped_response) [private]
Combine the response from the optional interface evaluation with the response from the sub-iteration using the
primaryRespCoeffs/secondaryRespCoeffs mappings to create the total response for the model. In the OUU case,
the total response is assembled as

primary fns: {f} + [W]{S}
inequality cons: {g_l} <= {g} <= {g_u} and {a_l} <= [A]{S} <= {a_u}
equality cons: {g} = {g_t} and [A]{S} = {a_t}

where [W] is the primary_mapping_matrix user input (primaryRespCoeffs class attribute), [A] is the secondary_-
mapping_matrix user input (secondaryRespCoeffs class attribute), {{g_l},{a_l}} are the top-level inequality con-
straint lower bounds, {{g_u},{a_u}} are the top-level inequality constraint upper bounds, and {{g_t},{a_t}} are
the top-level equality constraint targets.
NOTE: optionalInterface/subIterator primary fns (obj/lsq/generic fns) overlap but optionalInterface/subIterator
secondary fns (ineq/eq constraints) do not. The [W] matrix can be specified so as to allow
• some purely deterministic primary functions and some combined: [W] filled and [W].num_rows() <
{f}.length() [combined first] or [W].num_rows() == {f}.length() and [W] contains rows of zeros [combined
last]
• some combined and some purely stochastic primary functions: [W] filled and [W].num_rows() >
{f}.length()
• separate deterministic and stochastic primary functions: [W].num_rows() > {f}.length() and [W] contains
{f}.length() rows of zeros.
If the need arises, could change constraint definition to allow overlap as well: {g_l} <= {g} + [A]{S} <= {g_u}
with [A] usage the same as for [W] above.
In the UOO case, things are simpler, just compute statistics of each optimization response function: [W] = [I],
{f}/{g}/[A] are empty.
References Dakota::abort_handler(), Response::active_set_derivative_vector(), Response::active_set_-
request_vector(), Dakota::copy_data(), Response::function_gradient(), Response::function_gradient_-
view(), Response::function_gradients(), Response::function_hessian(), Response::function_hessian_-
view(), Response::function_hessians(), Response::function_value(), Response::function_values(),
Response::function_values_view(), Response::num_functions(), NestedModel::numOptInterfEqCon, Nest-
edModel::numOptInterfIneqCon, NestedModel::numOptInterfPrimary, NestedModel::numSubIterFns,
NestedModel::numSubIterMappedEqCon, NestedModel::numSubIterMappedIneqCon, Nested-
Model::optInterfacePointer, NestedModel::primaryRespCoeffs, Response::reset_inactive(), and Nested-
Model::secondaryRespCoeffs.
Referenced by NestedModel::derived_compute_response().
The sub-model used in sub-iterator evaluations. There are no restrictions on subModel, so arbitrary nestings are pos-
sible. This is commonly used to support surrogate-based optimization under uncertainty by having NestedModels
contain SurrogateModels and vice versa.
Referenced by NestedModel::component_parallel_mode(), NestedModel::derived_free_communicators(),
NestedModel::derived_init_communicators(), NestedModel::derived_init_serial(), NestedModel::derived_-
set_communicators(), NestedModel::derived_subordinate_models(), NestedModel::fine_grained_-
evaluation_counters(), NestedModel::integer_variable_mapping(), NestedModel::NestedModel(),
NestedModel::print_evaluation_summary(), NestedModel::real_variable_mapping(), NestedModel::resolve_-
integer_variable_mapping(), NestedModel::resolve_real_variable_mapping(), NestedModel::serve(),
NestedModel::set_mapping(), NestedModel::sm_acv_index_map(), NestedModel::sm_adiv_index_map(),
NestedModel::subordinate_model(), NestedModel::surrogate_response_mode(), NestedModel::update_-
inactive_view(), and NestedModel::update_sub_model().
The documentation for this class was generated from the following files:
• NestedModel.hpp
• NestedModel.cpp
ProblemDescDB
NIDRProblemDescDB
• ∼NIDRProblemDescDB ()
destructor
• void derived_broadcast ()
perform any data processing that must be coordinated with DB buffer broadcasting (performed prior to broadcast-
ing the DB buffer on rank 0 and after receiving the DB buffer on other processor ranks)
• void derived_post_process ()
perform any additional data post-processing
• KWH (iface_Rlit)
• KWH (iface_false)
• KWH (iface_ilit)
• KWH (iface_pint)
• KWH (iface_lit)
• KWH (iface_start)
• KWH (iface_stop)
• KWH (iface_str)
• KWH (iface_str2D)
• KWH (iface_strL)
• KWH (iface_true)
• KWH (method_Ii)
• KWH (method_Real)
• KWH (method_Real01)
• KWH (method_RealDL)
• KWH (method_RealLlit)
• KWH (method_Realp)
• KWH (method_Realz)
• KWH (method_Ri)
• KWH (method_false)
• KWH (method_szarray)
• KWH (method_ilit2)
• KWH (method_ilit2p)
• KWH (method_int)
• KWH (method_ivec)
• KWH (method_lit)
• KWH (method_lit2)
• KWH (method_litc)
• KWH (method_liti)
• KWH (method_litp)
• KWH (method_litr)
• KWH (method_litz)
• KWH (method_nnint)
• KWH (method_num_resplevs)
• KWH (method_piecewise)
• KWH (method_pint)
• KWH (method_pintz)
• KWH (method_resplevs)
• KWH (method_resplevs01)
• KWH (method_shint)
• KWH (method_sizet)
• KWH (method_slit2)
• KWH (method_start)
• KWH (method_stop)
• KWH (method_str)
• KWH (method_strL)
• KWH (method_true)
• KWH (method_tr_final)
• KWH (method_usharray)
• KWH (method_ushint)
• KWH (method_type)
• KWH (model_Real)
• KWH (model_RealDL)
• KWH (model_false)
• KWH (model_int)
• KWH (model_intsetm1)
• KWH (model_lit)
• KWH (model_order)
• KWH (model_shint)
• KWH (model_start)
• KWH (model_stop)
• KWH (model_str)
• KWH (model_strL)
• KWH (model_true)
• KWH (model_type)
• KWH (resp_RealDL)
• KWH (resp_RealL)
• KWH (resp_false)
• KWH (resp_intset)
• KWH (resp_ivec)
• KWH (resp_lit)
• KWH (resp_sizet)
• KWH (resp_start)
• KWH (resp_stop)
• KWH (resp_str)
• KWH (resp_strL)
• KWH (resp_true)
• KWH (strategy_Real)
• KWH (strategy_RealL)
• KWH (strategy_int)
• KWH (strategy_lit)
• KWH (strategy_start)
• KWH (strategy_str)
• KWH (strategy_strL)
• KWH (strategy_true)
• KWH (var_RealLb)
• KWH (var_RealUb)
• KWH (var_caulbl)
• KWH (var_dauilbl)
• KWH (var_daurlbl)
• KWH (var_ceulbl)
• KWH (var_deuilbl)
• KWH (var_deurlbl)
• KWH (var_pintz)
• KWH (var_start)
• KWH (var_stop)
• KWH (var_str)
• KWH (var_strL)
• KWH (var_true)
• KWH (var_newiarray)
• KWH (var_newivec)
• KWH (var_newrvec)
• KWH (var_ivec)
• KWH (var_rvec)
• KWH (var_type)
Private Attributes
• std::list< void ∗ > VIL
The derived input file database utilizing the new IDR parser. The NIDRProblemDescDB class is derived from
ProblemDescDB for use by the NIDR parser in processing DAKOTA input file data. For information on modi-
fying the NIDR input parsing procedures, refer to Dakota/docs/Dev_Spec_Change.dox. For more on the parsing
technology, see "Specifying and Reading Program Input with NIDR" by David M. Gay (report SAND2008-
2261P, which is available in PDF form as http://www.sandia.gov/~dmgay/nidr08.pdf). Source for
the routines declared herein is NIDRProblemDescDB.cpp, in which most routines are so short that a description
seems unnecessary.
Parses the input file and populates the problem description database using NIDR. Parse the input file using the
Input Deck Reader (IDR) parsing system. IDR populates the NIDRProblemDescDB object with the input file data.
Reimplemented from ProblemDescDB.
The documentation for this class was generated from the following files:
• NIDRProblemDescDB.hpp
• NIDRProblemDescDB.cpp
Public Attributes
• Real ∗ r
residual r = r(x)
• Real ∗ J
Jacobian J = J(x).
• Real ∗ x
corresponding parameter vector
• int nf
function invocation count for r(x)
The documentation for this struct was generated from the following file:
• NL2SOLLeastSq.cpp
Iterator
Minimizer
LeastSq
NL2SOLLeastSq
• ∼NL2SOLLeastSq ()
destructor
• void minimize_residuals ()
Used within the least squares branch for minimizing the sum of squares residuals. Redefines the run virtual function
for the least squares branch.
• static void calcj (int ∗np, int ∗pp, Real ∗x, int ∗nfp, Real ∗J, int ∗ui, void ∗ur, Vf vf)
evaluator function for residual Jacobian
Private Attributes
• int auxprt
auxiliary printing bits (see Dakota Ref Manual), a sum of: 1 = x0prt (print initial guess), 2 = solprt (print final
solution), 4 = statpr (print solution statistics), 8 = parprt (print nondefault parameters), 16 = dradpr (print
bound constraint drops/adds). debug/verbose/normal use the default = 31 (everything), quiet uses 3, silent uses 0.
• int outlev
frequency of output summary lines in number of iterations (debug/verbose/normal/quiet use the default = 1, silent
uses 0)
• Real dltfdj
finite-diff step size for computing the Jacobian approximation (fd_gradient_step_size)
• Real delta0
finite-diff step size for gradient differences for H, a component of some covariance approximations, if desired
(fd_hessian_step_size)
• Real dltfdc
finite-diff step size for function differences for H (fd_hessian_step_size)
• int mxfcal
function-evaluation limit (max_function_evaluations)
• int mxiter
iteration limit (max_iterations)
• Real rfctol
relative fn convergence tolerance (convergence_tolerance)
• Real afctol
absolute fn convergence tolerance (absolute_conv_tol)
• Real xctol
x-convergence tolerance (x_conv_tol)
• Real sctol
singular convergence tolerance (singular_conv_tol)
• Real lmaxs
radius for singular-convergence test (singular_radius)
• Real xftol
false-convergence tolerance (false_conv_tol)
• int covreq
kind of covariance required (covariance): 1 or -1 ==> sigma^2 H^-1 (J^T J) H^-1; 2 or -2 ==> sigma^2 H^-1;
3 or -3 ==> sigma^2 (J^T J)^-1; 1 or 2 ==> use gradient diffs to estimate H; -1 or -2 ==> use function diffs to
estimate H; default = 0 (no covariance)
• int rdreq
whether to compute the regression diagnostic vector (regression_diagnostics)
• Real fprec
expected response function precision (function_precision)
• Real lmax0
initial trust-region radius (initial_trust_radius)
Wrapper class for the NL2SOL nonlinear least squares library. The NL2SOLLeastSq class provides a wrapper
for NL2SOL (TOMS Algorithm 573), in the updated form of Port Library routines dn[fg][b] from Bell Labs;
see http://www.netlib.org/port/readme. The Fortran from Port has been turned into C by f2c.
NL2SOL uses a function pointer approach for which passed functions must be either global functions or static
member functions.
Used within the least squares branch for minimizing the sum of squares residuals. Redefines the run virtual
function for the least squares branch.
Details on the following subscript values appear in "Usage Summary for Selected Optimization Rou-
tines" by David M. Gay, Computing Science Technical Report No. 153, AT&T Bell Laboratories, 1990.
http://netlib.bell-labs.com/cm/cs/cstr/153.ps.gz
Implements LeastSq.
References NL2SOLLeastSq::afctol, NL2SOLLeastSq::auxprt, Iterator::bestResponseArray, Itera-
tor::bestVariablesArray, Minimizer::boundConstraintFlag, NL2SOLLeastSq::calcj(), NL2SOLLeastSq::calcr(),
Model::continuous_lower_bounds(), Model::continuous_upper_bounds(), Model::continuous_variables(),
Dakota::copy_data(), NL2SOLLeastSq::covreq, NL2SOLLeastSq::delta0, NL2SOLLeastSq::dltfdc,
NL2SOLLeastSq::dltfdj, NL2SOLLeastSq::fprec, LeastSq::get_confidence_intervals(), Iterator::gradientType,
Iterator::iteratedModel, NL2SOLLeastSq::lmax0, NL2SOLLeastSq::lmaxs, NL2SOLLeastSq::mxfcal,
NL2SOLLeastSq::mxiter, NL2SOLLeastSq::nl2solInstance, Iterator::numContinuousVars,
LeastSq::numLeastSqTerms, NL2SOLLeastSq::outlev, NL2SOLLeastSq::rdreq, NL2SOLLeastSq::rfctol,
NL2SOLLeastSq::sctol, Minimizer::speculativeFlag, Minimizer::vendorNumericalGradFlag,
NL2SOLLeastSq::xctol, and NL2SOLLeastSq::xftol.
The documentation for this class was generated from the following files:
• NL2SOLLeastSq.hpp
• NL2SOLLeastSq.cpp
Iterator
Minimizer
Optimizer
NLPQLPOptimizer
• ∼NLPQLPOptimizer ()
destructor
• void find_optimum ()
Used within the optimizer branch for computing the optimal solution. Redefines the run virtual function for the
optimizer branch.
• void allocate_workspace ()
Allocates workspace for the optimizer.
• void deallocate_workspace ()
Releases workspace memory.
• void allocate_constraints ()
Allocates constraint mappings.
Private Attributes
• int L
L : Number of parallel systems, i.e., function calls during line search at predetermined iterates. HINT: If fewer
than 10 parallel function evaluations are possible, it is recommended to apply the serial version by setting L=1.
• int numEqConstraints
numEqConstraints : Number of equality constraints.
• int MMAX
MMAX : Row dimension of array DG containing the Jacobian of constraints. MMAX must be at least one and
greater than or equal to M.
• int N
N : Number of optimization variables.
• int NMAX
NMAX : Row dimension of C. NMAX must be at least two and greater than N.
• int MNN2
MNN2 : Must be equal to M+N+N+2.
• double ∗ X
X(NMAX,L) : Initially, the first column of X has to contain starting values for the optimal solution. On return, X is
replaced by the current iterate. In the driving program the row dimension of X has to be equal to NMAX. X is used
internally to store L different arguments for which function values should be computed simultaneously.
• double ∗ F
F(L) : On return, F(1) contains the final objective function value. F is used also to store L different objective
function values to be computed from L iterates stored in X.
• double ∗ G
G(MMAX,L) : On return, the first column of G contains the constraint function values at the final iterate X. In the
driving program the row dimension of G has to be equal to MMAX. G is used internally to store L different sets of
constraint function values to be computed from L iterates stored in X.
• double ∗ DF
DF(NMAX) : DF contains the current gradient of the objective function. In case of numerical differentiation and a
distributed system (L>1), it is recommended to apply parallel evaluations of F to compute DF.
• double ∗ DG
DG(MMAX,NMAX) : DG contains the gradients of the active constraints (ACTIVE(J)=.true.) at a current iterate
X. The remaining rows are filled with previously computed gradients. In the driving program the row dimension of
DG has to be equal to MMAX.
• double ∗ U
U(MNN2) : U contains the multipliers with respect to the actual iterate stored in the first column of X. The first M
locations contain the multipliers of the M nonlinear constraints, the subsequent N locations the multipliers of the
lower bounds, and the final N locations the multipliers of the upper bounds. At an optimal solution, all multipliers
with respect to inequality constraints should be nonnegative.
• double ∗ C
C(NMAX,NMAX) : On return, C contains the last computed approximation of the Hessian matrix of the Lagrangian
function stored in form of an LDL decomposition. C contains the lower triangular factor of an LDL factorization
of the final quasi-Newton matrix (without diagonal elements, which are always one). In the driving program, the
row dimension of C has to be equal to NMAX.
• double ∗ D
D(NMAX) : The elements of the diagonal matrix of the LDL decomposition of the quasi-Newton matrix are stored
in the one-dimensional array D.
• double ACC
ACC : The user has to specify the desired final accuracy (e.g. 1.0D-7). The termination accuracy should not be
smaller than the accuracy by which gradients are computed.
• double ACCQP
ACCQP : The tolerance needed by the QP solver to perform several tests, for example whether optimality
conditions are satisfied or whether a number is considered to be zero. If ACCQP is less than or equal to zero, then
the machine precision is computed by NLPQLP and subsequently multiplied by 1.0D+4.
• double STPMIN
STPMIN : Minimum steplength in case of L>1. Any value on the order of the accuracy by which functions are
computed is recommended. The value is needed to compute a steplength reduction factor by STPMIN∗∗(1/L-1). If
STPMIN<=0, then STPMIN=ACC is used.
• int MAXFUN
MAXFUN : The integer variable defines an upper bound for the number of function calls during the line search
(e.g. 20). MAXFUN is only needed in case of L=1, and must not be greater than 50.
• int MAXIT
MAXIT : Maximum number of outer iterations, where one iteration corresponds to one formulation and solution of
the quadratic programming subproblem, or, alternatively, one evaluation of gradients (e.g. 100).
• int MAX_NM
MAX_NM : Stack size for storing merit function values at previous iterations for non-monotone line search (e.g.
10). In case of MAX_NM=0, monotone line search is performed.
• double TOL_NM
TOL_NM : Relative bound for increase of merit function value, if line search is not successful during the very first
step. Must be non-negative (e.g. 0.1).
• int IPRINT
IPRINT : Specification of the desired output level.
IPRINT = 0 : No output of the program.
IPRINT = 1 : Only a final convergence analysis is given.
IPRINT = 2 : One line of intermediate results is printed in each iteration.
IPRINT = 3 : More detailed information is printed in each iteration step, e.g., variable, constraint, and multiplier values.
IPRINT = 4 : In addition to IPRINT=3, merit function and steplength values are displayed during the line search.
• int MODE
MODE : The parameter specifies the desired version of NLPQLP. MODE = 0 : Normal execution (reverse commu-
nication!). MODE = 1 : The user wants to provide an initial guess for the multipliers in U and for the Hessian of
the Lagrangian function in C and D in form of an LDL decomposition.
• int IOUT
IOUT : Integer indicating the desired output unit number, i.e. all write-statements start with ’WRITE(IOUT,... ’.
• int IFAIL
IFAIL : The parameter shows the reason for terminating a solution process. Initially IFAIL must be set to zero. On
return IFAIL may contain the following values:
IFAIL = -2 : Compute gradient values with respect to the variables stored in the first column of X, and store them in DF and DG. Only derivatives for active constraints (ACTIVE(J)=.TRUE.) need to be computed. Then call NLPQLP again, see below.
IFAIL = -1 : Compute the objective function and all constraint values subject to the variables found in the first L columns of X, and store them in F and G. Then call NLPQLP again, see below.
IFAIL = 0 : The optimality conditions are satisfied.
IFAIL = 1 : The algorithm has been stopped after MAXIT iterations.
IFAIL = 2 : The algorithm computed an uphill search direction.
IFAIL = 3 : Underflow occurred when determining a new approximation matrix for the Hessian of the Lagrangian.
IFAIL = 4 : The line search could not be terminated successfully.
IFAIL = 5 : Length of a working array is too short. More detailed error information is obtained with IPRINT>0.
IFAIL = 6 : There are false dimensions, for example M>MMAX, N>=NMAX, or MNN2<>M+N+N+2.
IFAIL = 7 : The search direction is close to zero, but the current iterate is still infeasible.
IFAIL = 8 : The starting point violates a lower or upper bound.
IFAIL = 9 : Wrong input parameter, i.e., MODE, LDL decomposition in D and C (in case of MODE=1), IPRINT, or IOUT.
IFAIL = 10 : Internal inconsistency of the quadratic subproblem, division by zero.
IFAIL > 100 : The solution of the quadratic programming subproblem has been terminated with an error message, and IFAIL is set to IFQL+100, where IFQL denotes the index of an inconsistent constraint.
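The IFAIL = -1 / -2 codes imply a reverse-communication driver loop: the solver never calls user code directly; it returns a request code, the driver fills F or DF/DG, and calls the solver again until IFAIL >= 0. The sketch below illustrates only that control flow with a hypothetical stand-in solver (a plain fixed-step gradient descent, not NLPQLP's SQP method); all names are invented for illustration.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical reverse-communication "solver": returns -1 when it needs
// function values, -2 when it needs gradients, 0 on convergence.
struct RCState {
    std::vector<double> x;   // current iterate (first "column" of X)
    double f = 0.0;          // objective value supplied by the driver
    std::vector<double> df;  // gradient supplied by the driver
    int phase = 0;           // internal state of the request machine
};

inline int rc_solver_step(RCState& s, double acc) {
    switch (s.phase) {
    case 0:                  // request: objective at current x
        s.phase = 1;
        return -1;
    case 1:                  // request: gradient at current x
        s.phase = 2;
        return -2;
    default: {               // take a (toy) step, then test convergence
        double norm = 0.0;
        for (std::size_t i = 0; i < s.x.size(); ++i) {
            s.x[i] -= 0.5 * s.df[i];   // fixed-step descent, NOT SQP
            norm += s.df[i] * s.df[i];
        }
        s.phase = 0;
        return (std::sqrt(norm) < acc) ? 0 : -1;  // 0 == optimal
    }
    }
}

// Driver loop in the style NLPQLP requires: call, inspect ifail, refill
// F/DF, call again, until ifail >= 0.  Objective here is f(x) = sum x_i^2.
inline std::vector<double> rc_minimize(std::vector<double> x0, double acc) {
    RCState s;
    s.x = std::move(x0);
    s.df.assign(s.x.size(), 0.0);
    int ifail = -1;
    while (ifail < 0) {
        if (ifail == -1) {            // supply function value
            s.f = 0.0;
            for (double xi : s.x) s.f += xi * xi;
        } else if (ifail == -2) {     // supply gradient
            for (std::size_t i = 0; i < s.x.size(); ++i) s.df[i] = 2.0 * s.x[i];
        }
        ifail = rc_solver_step(s, acc);
    }
    return s.x;
}
```

The essential point is that the driving program owns the evaluation loop, which is what lets Dakota route the L simultaneous evaluations through its own parallel interface.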
• double ∗ WA
WA(LWA) : WA is a real working array of length LWA.
• int LWA
LWA : LWA value extracted from NLPQLP20.f.
• int ∗ KWA
KWA(LKWA) : The user has to provide working space for an integer array.
• int LKWA
LKWA : LKWA should be at least N+10.
• int ∗ ACTIVE
ACTIVE(LACTIV) : The logical array indicates which constraints NLPQLP considers to be active at the
last computed iterate, i.e., G(J,X) is active if and only if ACTIVE(J)=.TRUE., J=1,...,M.
• int LACTIVE
LACTIV : The length LACTIV of the logical array should be at least 2∗M+10.
• int LQL
LQL : If LQL = .TRUE., the quadratic programming subproblem is to be solved with a full positive definite quasi-
Newton matrix. Otherwise, a Cholesky decomposition is performed and updated, so that the subproblem matrix
contains only an upper triangular factor.
• int numNlpqlConstr
total number of constraints seen by NLPQL
• SizetList nonlinIneqConMappingIndices
a list of indices for referencing the DAKOTA nonlinear inequality constraints used in computing the corresponding
NLPQL constraints.
• RealList nonlinIneqConMappingMultipliers
a list of multipliers for mapping the DAKOTA nonlinear inequality constraints to the corresponding NLPQL con-
straints.
• RealList nonlinIneqConMappingOffsets
a list of offsets for mapping the DAKOTA nonlinear inequality constraints to the corresponding NLPQL constraints.
• SizetList linIneqConMappingIndices
a list of indices for referencing the DAKOTA linear inequality constraints used in computing the corresponding
NLPQL constraints.
• RealList linIneqConMappingMultipliers
a list of multipliers for mapping the DAKOTA linear inequality constraints to the corresponding NLPQL constraints.
• RealList linIneqConMappingOffsets
a list of offsets for mapping the DAKOTA linear inequality constraints to the corresponding NLPQL constraints.
Wrapper class for the NLPQLP optimization library, Version 2.0.
AN IMPLEMENTATION OF A SEQUENTIAL QUADRATIC PROGRAMMING METHOD FOR SOLVING
NONLINEAR OPTIMIZATION PROBLEMS BY DISTRIBUTED COMPUTING AND NON-MONOTONE
LINE SEARCH
This subroutine solves the general nonlinear programming problem
minimize F(X)
subject to G(J,X) = 0, J=1,...,ME
G(J,X) >= 0, J=ME+1,...,M
XL <= X <= XU
and is an extension of the code NLPQLD. NLPQLP is specifically tuned to run under distributed systems. A new
input parameter L is introduced for the number of parallel computers, that is, the number of function calls to be
executed simultaneously. In case of L=1, NLPQLP is identical to NLPQLD. Otherwise the line search is modified
to allow L parallel function calls in advance. Moreover, the user has the opportunity to use distributed function
calls for evaluating gradients.
The algorithm is a modification of the method of Wilson, Han, and Powell. In each iteration step, a linearly
constrained quadratic programming problem is formulated by approximating the Lagrangian function quadratically
and by linearizing the constraints. Subsequently, a one-dimensional line search is performed with respect to an
augmented Lagrangian merit function to obtain a new iterate. The modified line search algorithm guarantees
convergence under the same assumptions as before.
For the new version, a non-monotone line search is implemented which allows the merit function to increase in
case of instabilities, for example those caused by round-off errors or errors in gradient approximations.
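The non-monotone idea above can be sketched as a backtracking search whose Armijo test compares against the maximum merit value over the last MAX_NM iterates rather than only the current one. This is an illustration of the concept, not the NLPQLP routine; all names are invented:

```cpp
#include <algorithm>
#include <deque>
#include <functional>

// Non-monotone backtracking line search sketch: acceptance is relative to
// the MAXIMUM merit value on a bounded stack of recent iterates, so the
// merit function may temporarily increase.  With max_nm == 0 the stack
// stays empty and the search degenerates to the usual monotone Armijo test.
inline double nonmonotone_search(const std::function<double(double)>& merit,
                                 double phi0,               // merit at step 0
                                 double slope0,             // directional derivative (< 0)
                                 std::deque<double>& stack, // recent merit values
                                 std::size_t max_nm) {
    const double ref = stack.empty()
        ? phi0
        : std::max(phi0, *std::max_element(stack.begin(), stack.end()));
    double alpha = 1.0;
    for (int k = 0; k < 30; ++k) {
        if (merit(alpha) <= ref + 1e-4 * alpha * slope0) break;  // relaxed Armijo
        alpha *= 0.5;                                            // backtrack
    }
    stack.push_back(merit(alpha));          // record accepted merit value
    if (stack.size() > max_nm) stack.pop_front();
    return alpha;
}
```

The TOL_NM parameter above plays a similar role on the very first step, bounding the allowed relative increase.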
The subroutine contains the option to predetermine initial guesses for the multipliers or the Hessian of the La-
grangian function and is called by reverse communication.
The documentation for this class was generated from the following files:
• NLPQLPOptimizer.hpp
• NLPQLPOptimizer.cpp
Iterator
Minimizer
LeastSq SOLBase
NLSSOLLeastSq
• ∼NLSSOLLeastSq ()
destructor
• void minimize_residuals ()
Used within the least squares branch for minimizing the sum of squares residuals. Redefines the run virtual function
for the least squares branch.
Wrapper class for the NLSSOL nonlinear least squares library. The NLSSOLLeastSq class provides a wrapper
for NLSSOL, a Fortran 77 sequential quadratic programming library from Stanford University marketed by
Stanford Business Associates. It uses a function pointer approach for which passed functions must be either global
functions or static member functions. Any nonstatic attribute used within static member functions must be either
local to that function or accessed through a static pointer.
The user input mappings are as follows: max_function_evaluations is implemented directly in
NLSSOLLeastSq's evaluator functions since there is no NLSSOL parameter equivalent, and max_iterations,
convergence_tolerance, output verbosity, verify_level, function_precision, and
linesearch_tolerance are mapped into NLSSOL's "Major Iteration Limit", "Optimality Tolerance",
"Major Print Level" (verbose: Major Print Level = 20; quiet: Major Print Level = 10), "Verify Level", "Function
Precision", and "Linesearch Tolerance" parameters, respectively, using NLSSOL's npoptn() subroutine (as
wrapped by npoptn2() from the npoptn_wrapper.f file). Refer to [Gill, P.E., Murray, W., Saunders, M.A., and
Wright, M.H., 1986] for information on NLSSOL's optional input parameters and the npoptn() subroutine.
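The mapping above amounts to rendering Dakota settings as NLSSOL option strings of the form "Name = value", passed one at a time through npoptn()/npoptn2(). The sketch below shows only that string formatting (no Fortran is called); the helper name is invented for illustration:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Format a few Dakota settings as the option strings the wrapper would
// hand to NLSSOL's npoptn() subroutine, following the mapping above.
inline std::vector<std::string> nlssol_option_strings(int max_iterations,
                                                      double convergence_tol,
                                                      bool verbose) {
    std::vector<std::string> opts;
    std::ostringstream os;
    os << "Major Iteration Limit = " << max_iterations;
    opts.push_back(os.str());
    os.str("");
    os << "Optimality Tolerance = " << convergence_tol;
    opts.push_back(os.str());
    // verbose -> Major Print Level = 20, quiet -> 10 (per the mapping above)
    opts.push_back(std::string("Major Print Level = ") + (verbose ? "20" : "10"));
    return opts;
}
```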
alternate constructor This is an alternate constructor which accepts a Model but does not have a supporting method
specification from the ProblemDescDB.
References Minimizer::constraintTol, Iterator::convergenceTol, Iterator::fdGradStepSize, Iterator::gradientType,
Iterator::maxIterations, Iterator::outputLevel, SOLBase::set_options(), Minimizer::speculativeFlag, and Mini-
mizer::vendorNumericalGradFlag.
The documentation for this class was generated from the following files:
• NLSSOLLeastSq.hpp
• NLSSOLLeastSq.cpp
Dummy struct for overloading constructors used in on-the-fly instantiations. NoDBBaseConstructor is used to
overload the constructor used for on-the-fly instantiations in which ProblemDescDB queries cannot be used.
Putting this struct here avoids circular dependencies.
The documentation for this struct was generated from the following file:
• dakota_global_defs.hpp
Iterator
Minimizer
Optimizer
NomadOptimizer
Classes
• class Evaluator
NOMAD-based Evaluator class.
• ∼NomadOptimizer ()
Destructor.
• void find_optimum ()
Calls the NOMAD solver.
Private Attributes
• int numTotalVars
• int numNomadNonlinearIneqConstraints
• int randomSeed
• int maxBlackBoxEvals
• int maxIterations
maximum number of iterations for the iterator
• std::string outputFormat
• std::string historyFile
• bool displayAll
• Real epsilon
• Real vns
• NOMAD::Point initialPoint
• NOMAD::Point upperBound
• NOMAD::Point lowerBound
• std::vector< int > constraintMapIndices
map from Dakota constraint number to Nomad constraint number
Wrapper class for the NOMAD optimizer. NOMAD (Nonlinear Optimization by Mesh Adaptive Direct search)
is a simulation-based optimization package designed to efficiently explore a design space using Mesh Adaptive
Direct Search.
Mesh Adaptive Direct Search uses meshes, i.e., discretizations of the domain space of variables. It generates
multiple meshes and, as its name implies, adapts the refinement of the meshes in order to find the best solution of a
problem.
The objective of each iteration is to find points in a mesh that improve the current solution. If a better solution is
not found, the next iteration is done over a finer mesh.
Each iteration is composed of two steps: Search and Poll. The Search step evaluates any point in the mesh in an
attempt to find an improvement, while the Poll step generates trial mesh points surrounding the current best
solution.
The NomadOptimizer is a wrapper for the NOMAD library. It features the following attributes:
max_function_evaluations, display_format, display_all_evaluations,
function_precision, max_iterations.
Parameters:
model DAKOTA Model object
Convenience function for Parameter loading. This function takes the Parameters provided by the user in the
DAKOTA model.
Parameters:
model NOMAD Model object
The documentation for this class was generated from the following files:
• NomadOptimizer.hpp
• NomadOptimizer.cpp
Iterator
Analyzer
NonD
EfficientSubspaceMethod
NonDCalibration
NonDExpansion
NonDIntegration
NonDInterval
NonDPOFDarts
NonDReliability
NonDSampling
• ∼NonD ()
destructor
• void initialize_run ()
utility function to perform common operations prior to pre_run(); typically memory initialization; setting of
instance pointers
• void run ()
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post
• void finalize_run ()
utility function to perform common operations following post_run(); deallocation and resetting of instance pointers
initializes respCovariance
• int generate_system_seed ()
create a system-generated unique seed (when a seed is unspecified)
• void initialize_random_variable_transformation ()
instantiate natafTransform
• void initialize_random_variable_parameters ()
initializes ranVarMeansX, ranVarStdDevsX, ranVarLowerBndsX, ranVarUpperBndsX, and ranVarAddtlParamsX
within natafTransform
• void initialize_random_variable_correlations ()
propagate iteratedModel correlations to natafTransform
• void verify_correlation_support ()
verify that correlation warping supported by Der Kiureghian & Liu for given variable types
• void initialize_final_statistics_gradients ()
initializes finalStatistics::functionGradients
• void update_aleatory_final_statistics ()
update finalStatistics::functionValues from momentStats and computed{Prob,Rel,GenRel,Resp}Levels
• void update_system_final_statistics ()
update system metrics from component metrics within finalStatistics
• void update_system_final_statistics_gradients ()
update finalStatistics::functionGradients
• void initialize_distribution_mappings ()
size computed{Resp,Prob,Rel,GenRel}Levels
• void transform_model (Model &x_model, Model &u_model, bool global_bounds=false, Real bound=10.)
recast x_model from x-space to u-space to create u_model
• void construct_lhs (Iterator &u_space_sampler, Model &u_model, const String &sample_type, int
num_samples, int seed, const String &rng, bool vary_pattern, short sampling_vars_mode=ACTIVE)
assign a NonDLHSSampling instance within u_space_sampler
• void archive_allocate_mappings ()
allocate results array storage for distribution mappings
• static void set_u_to_x_mapping (const Variables &u_vars, const ActiveSet &u_set, ActiveSet &x_set)
static function for RecastModels used to map u-space ActiveSets from NonD Iterators to x-space ActiveSets for
Model evaluations
• static void resp_x_to_u_mapping (const Variables &x_vars, const Variables &u_vars, const Response
&x_response, Response &u_response)
static function for RecastModels used to map x-space responses from Model evaluations to u-space responses for
return to NonD Iterator.
Protected Attributes
• NonD ∗ prevNondInstance
pointer containing previous value of nondInstance
• Pecos::ProbabilityTransformation natafTransform
Nonlinear variable transformation that encapsulates the required data for performing transformations from X ->
Z -> U and back.
• size_t numContDesVars
number of continuous design variables (modeled using uniform distribution for All view modes)
• size_t numDiscIntDesVars
number of discrete integer design variables (modeled using discrete histogram distributions for All view modes)
• size_t numDiscRealDesVars
number of discrete real design variables (modeled using discrete histogram distributions for All view modes)
• size_t numDesignVars
total number of design variables
• size_t numContStateVars
number of continuous state variables (modeled using uniform distribution for All view modes)
• size_t numDiscIntStateVars
number of discrete integer state variables (modeled using discrete histogram distributions for All view modes)
• size_t numDiscRealStateVars
number of discrete real state variables (modeled using discrete histogram distributions for All view modes)
• size_t numStateVars
total number of state variables
• size_t numNormalVars
number of normal uncertain variables (native space)
• size_t numLognormalVars
number of lognormal uncertain variables (native space)
• size_t numUniformVars
number of uniform uncertain variables (native space)
• size_t numLoguniformVars
number of loguniform uncertain variables (native space)
• size_t numTriangularVars
number of triangular uncertain variables (native space)
• size_t numExponentialVars
number of exponential uncertain variables (native space)
• size_t numBetaVars
number of beta uncertain variables (native space)
• size_t numGammaVars
number of gamma uncertain variables (native space)
• size_t numGumbelVars
number of gumbel uncertain variables (native space)
• size_t numFrechetVars
number of frechet uncertain variables (native space)
• size_t numWeibullVars
number of weibull uncertain variables (native space)
• size_t numHistogramBinVars
number of histogram bin uncertain variables (native space)
• size_t numPoissonVars
number of Poisson uncertain variables (native space)
• size_t numBinomialVars
number of binomial uncertain variables (native space)
• size_t numNegBinomialVars
number of negative binomial uncertain variables (native space)
• size_t numGeometricVars
number of geometric uncertain variables (native space)
• size_t numHyperGeomVars
number of hypergeometric uncertain variables (native space)
• size_t numHistogramPtVars
number of histogram point uncertain variables (native space)
• size_t numContIntervalVars
number of continuous interval uncertain variables (native space)
• size_t numDiscIntervalVars
number of discrete interval uncertain variables (native space)
• size_t numDiscSetIntUncVars
number of discrete integer set uncertain variables (native space)
• size_t numDiscSetRealUncVars
number of discrete real set uncertain variables (native space)
• size_t numContAleatUncVars
total number of continuous aleatory uncertain variables (native space)
• size_t numDiscIntAleatUncVars
total number of discrete integer aleatory uncertain variables (native space)
• size_t numDiscRealAleatUncVars
total number of discrete real aleatory uncertain variables (native space)
• size_t numAleatoryUncVars
total number of aleatory uncertain variables (native space)
• size_t numContEpistUncVars
total number of continuous epistemic uncertain variables (native space)
• size_t numDiscIntEpistUncVars
total number of discrete integer epistemic uncertain variables (native space)
• size_t numDiscRealEpistUncVars
total number of discrete real epistemic uncertain variables (native space)
• size_t numEpistemicUncVars
total number of epistemic uncertain variables (native space)
• size_t numUncertainVars
total number of uncertain variables (native space)
• bool epistemicStats
flag for computing interval-type metrics instead of integrated metrics. If any epistemic variables are active in a
metric evaluation, then this flag is set.
• RealMatrix momentStats
moments of response functions (mean, std deviation, skewness, and kurtosis calculated in compute_moments()),
indexed as (moment,fn)
• RealVectorArray requestedRespLevels
requested response levels for all response functions
• RealVectorArray computedProbLevels
output probability levels for all response functions resulting from requestedRespLevels
• RealVectorArray computedRelLevels
output reliability levels for all response functions resulting from requestedRespLevels
• RealVectorArray computedGenRelLevels
output generalized reliability levels for all response functions resulting from requestedRespLevels
• short respLevelTarget
indicates mapping of z->p (PROBABILITIES), z->beta (RELIABILITIES), or z->beta∗ (GEN_RELIABILITIES)
• short respLevelTargetReduce
indicates component or system series/parallel failure metrics
• RealVectorArray requestedProbLevels
requested probability levels for all response functions
• RealVectorArray requestedRelLevels
requested reliability levels for all response functions
• RealVectorArray requestedGenRelLevels
requested generalized reliability levels for all response functions
• RealVectorArray computedRespLevels
output response levels for all response functions resulting from requestedProbLevels, requestedRelLevels, or re-
questedGenRelLevels
• size_t totalLevelRequests
total number of levels specified within requestedRespLevels, requestedProbLevels, and requestedRelLevels
• bool cdfFlag
flag for type of probabilities/reliabilities used in mappings: cumulative/CDF (true) or complementary/CCDF (false)
• bool pdfOutput
flag for managing output of response probability density functions (PDFs)
• Response finalStatistics
final statistics from the uncertainty propagation used in strategies: response means, standard deviations, and prob-
abilities of failure
Private Attributes
• bool distParamDerivs
flags calculation of derivatives with respect to distribution parameters s within resp_x_to_u_mapping() using the
chain rule df/dx dx/ds. The default is to calculate derivatives with respect to standard random variables u using the
chain rule df/dx dx/du.
Base class for all nondeterministic iterators (the DAKOTA/UQ branch). The base class for nondeterministic iterators
consolidates uncertain variable data and probabilistic utilities for inherited classes.
alternate form: initialize natafTransform based on incoming data This function is commonly used to publish
transformation data when the Model variables are in a transformed space (e.g., u-space) and
ProbabilityTransformation::ranVarTypes et al. may not be generated directly. This allows for the use of inverse
transformations to return the transformed space variables to their original states.
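For intuition about the forward and inverse maps being published here, consider the simplest special case the Nataf transformation reduces to: independent normal marginals, where u_i = (x_i - mu_i)/sigma_i. The full transformation additionally warps correlations (Der Kiureghian & Liu) and handles non-normal marginals via CDF matching; none of that appears in this deliberately minimal sketch:

```cpp
#include <vector>

// x-space -> u-space for INDEPENDENT normal marginals: standardize each
// coordinate.  This is only the degenerate case of the X -> Z -> U chain.
inline std::vector<double> x_to_u(const std::vector<double>& x,
                                  const std::vector<double>& mu,
                                  const std::vector<double>& sigma) {
    std::vector<double> u(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        u[i] = (x[i] - mu[i]) / sigma[i];
    return u;
}

// Inverse map u-space -> x-space, as used to return transformed-space
// variables to their original states.
inline std::vector<double> u_to_x(const std::vector<double>& u,
                                  const std::vector<double>& mu,
                                  const std::vector<double>& sigma) {
    std::vector<double> x(u.size());
    for (std::size_t i = 0; i < u.size(); ++i)
        x[i] = mu[i] + sigma[i] * u[i];
    return x;
}
```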
References NonD::initialize_random_variable_transformation(), NonD::natafTransform,
NonD::numContDesVars, and NonD::numContStateVars.
utility function to perform common operations prior to pre_run(); typically memory initialization; setting of
instance pointers Perform initialization phases of run sequence, like allocating memory and setting instance pointers.
Commonly used in sub-iterator executions. This is a virtual function; when re-implementing, a derived class must
call its nearest parent's initialize_run(), typically _before_ performing its own implementation steps.
Reimplemented from Iterator.
References NonD::nondInstance, and NonD::prevNondInstance.
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post Virtual run function for the iterator class hierarchy. All derived classes need to redefine it.
Reimplemented from Iterator.
References Analyzer::bestVarsRespMap, and NonD::quantify_uncertainty().
utility function to perform common operations following post_run(); deallocation and resetting of instance
pointers Optional: perform finalization phases of run sequence, like deallocating memory and resetting instance
pointers. Commonly used in sub-iterator executions. This is a virtual function; when re-implementing, a derived class
must call its nearest parent's finalize_run(), typically _after_ performing its own implementation steps.
Reimplemented from Iterator.
References NonD::nondInstance, and NonD::prevNondInstance.
initializes finalStatistics for storing NonD final results Default definition of virtual function (used by sampling,
reliability, and stochastic expansion methods) defines the set of statistical results to include means, standard
deviations, and level mappings.
Reimplemented in NonDInterval.
References Dakota::abort_handler(), NonD::cdfFlag, Model::cv(), ActiveSet::derivative_vector(),
NonD::epistemicStats, NonD::finalStatistics, Response::function_labels(), Model::inactive_continuous_-
variable_ids(), Iterator::iteratedModel, Iterator::numFunctions, NonD::requestedGenRelLevels,
NonD::requestedProbLevels, NonD::requestedRelLevels, NonD::requestedRespLevels, NonD::respLevelTarget,
NonD::respLevelTargetReduce, and NonD::totalLevelRequests.
Referenced by NonDExpansion::NonDExpansion(), NonDIntegration::NonDIntegration(), NonDReliabil-
ity::NonDReliability(), NonDSampling::NonDSampling(), and NonD::requested_levels().
13.71.2.10 void vars_u_to_x_mapping (const Variables & u_vars, Variables & x_vars) [static,
protected]
static function for RecastModels used for forward mapping of u-space variables from NonD Iterators to x-space
variables for Model evaluations Map the variables from iterator space (u) to simulation space (x).
References Variables::continuous_variables(), Variables::continuous_variables_view(), NonD::natafTransform,
and NonD::nondInstance.
Referenced by NonD::transform_model().
13.71.2.11 void vars_x_to_u_mapping (const Variables & x_vars, Variables & u_vars) [static,
protected]
static function for RecastModels used for inverse mapping of x-space variables from data import to u-space
variables for NonD Iterators Map the variables from simulation space (x) to iterator space (u).
References Variables::continuous_variables(), Variables::continuous_variables_view(), NonD::natafTransform,
and NonD::nondInstance.
Referenced by NonD::transform_model().
13.71.2.12 void set_u_to_x_mapping (const Variables & u_vars, const ActiveSet & u_set, ActiveSet &
x_set) [static, protected]
static function for RecastModels used to map u-space ActiveSets from NonD Iterators to x-space ActiveSets
for Model evaluations Define the DVV for x-space derivative evaluations by augmenting the iterator requests to
account for correlations.
References Dakota::_NPOS, Variables::all_continuous_variable_ids(), Dakota::contains(),
Variables::continuous_variable_ids(), ActiveSet::derivative_vector(), Dakota::find_index(), Variables::inactive_-
continuous_variable_ids(), NonD::natafTransform, and NonD::nondInstance.
Referenced by NonD::transform_model().
Print distribution mapping for a single response function to ostream. Print the distribution mapping for a single
response function to the passed output stream.
References NonD::cdfFlag, NonD::computedGenRelLevels, NonD::computedProbLevels,
NonD::computedRelLevels, NonD::computedRespLevels, Iterator::iteratedModel,
NonD::requestedGenRelLevels, NonD::requestedProbLevels, NonD::requestedRelLevels,
NonD::requestedRespLevels, NonD::respLevelTarget, Model::response_labels(), and Dakota::write_precision.
Referenced by NonD::distribution_mappings_file(), and NonD::print_distribution_mappings().
The documentation for this class was generated from the following files:
• DakotaNonD.hpp
• DakotaNonD.cpp
Iterator
Analyzer
NonD
NonDSampling
NonDAdaptImpSampling
• NonDAdaptImpSampling (Model &model, const String &sample_type, int samples, int seed, const String
&rng, bool vary_pattern, short is_type, bool cdf_flag, bool x_space_data, bool x_space_model, bool
bounded_model)
• ∼NonDAdaptImpSampling ()
destructor
• void quantify_uncertainty ()
performs an adaptive importance sampling and returns probability of failure.
• void initialize (const RealVectorArray &initial_points, int resp_fn, const Real &initial_prob, const Real
&failure_threshold)
initializes data needed for importance sampling: an initial set of points around which to sample, a failure threshold,
an initial probability to refine, and flags to control transformations
• void initialize (const RealVector &initial_point, int resp_fn, const Real &initial_prob, const Real
&failure_threshold)
initializes data needed for importance sampling: an initial point around which to sample, a failure threshold, an
initial probability to refine, and flags to control transformations
• void converge_probability ()
iteratively generate samples from final set of representative points until probability converges
• void calculate_rep_weights ()
calculate relative weights of representative points
Private Attributes
• short importanceSamplingType
integration type (is, ais, mmais) provided by input specification
• bool invertProb
flag for inversion of probability values using 1.-p
• size_t numRepPoints
the number of representative points around which to sample
• size_t respFn
the response function in the model to be sampled
• RealVectorArray initPoints
the original set of samples passed into the MMAIS routine
• RealVectorArray repPoints
the set of representative points around which to sample
• RealVector repWeights
the weight associated with each representative point
• RealVector designPoint
design point at which uncertain space is being sampled
• bool transInitPoints
flag to control if x->u transformation should be performed for initial points
• bool transPoints
flag to control if u->x transformation should be performed before evaluation
• bool useModelBounds
flag to control if the sampler should respect the model bounds
• bool initLHS
flag to identify if initial points are generated from an LHS sample
• Real initProb
the initial probability (from FORM or SORM)
• Real finalProb
the final calculated probability (p)
• Real failThresh
the failure threshold (z-bar) for the problem.
Class for the Adaptive Importance Sampling methods within DAKOTA. The NonDAdaptImpSampling class
implements the multi-modal adaptive importance sampling used for reliability calculations. (Eventually we will want
to broaden this.) Need to add more detail to this description.
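The core idea behind the class (sampling near representative failure points and reweighting by the density ratio) can be shown with a toy single-point importance sampler: estimate a small tail probability P(X > z) for standard normal X by drawing from a proposal recentered at the failure boundary. The adaptive, multi-modal machinery of the class is not reproduced; all names here are invented for illustration:

```cpp
#include <cmath>
#include <random>

// Toy importance sampling: estimate P(X > z), X ~ N(0,1), by sampling from
// the proposal N(z,1) ("representative point" at the failure boundary) and
// weighting each failure sample by the likelihood ratio phi(x)/phi(x; z).
inline double is_tail_probability(double z, int n, unsigned seed) {
    const double kPi = 3.141592653589793;
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> unif(1e-12, 1.0);
    auto normal_pdf = [&](double x, double mean) {
        return std::exp(-0.5 * (x - mean) * (x - mean)) / std::sqrt(2.0 * kPi);
    };
    double sum = 0.0;
    for (int i = 0; i < n; i += 2) {
        // Box-Muller: two independent standard normal draws per pass
        double u1 = unif(gen), u2 = unif(gen);
        double r = std::sqrt(-2.0 * std::log(u1));
        for (double g : {r * std::cos(2.0 * kPi * u2), r * std::sin(2.0 * kPi * u2)}) {
            double x = z + g;                     // sample from proposal N(z, 1)
            if (x > z)                            // failure indicator
                sum += normal_pdf(x, 0.0) / normal_pdf(x, z);  // likelihood ratio
        }
    }
    return sum / n;
}
```

Because roughly half the proposal samples land in the failure region, the estimator reaches small probabilities with far fewer samples than plain Monte Carlo would need.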
13.72.2.2 NonDAdaptImpSampling (Model & model, const String & sample_type, int samples, int seed,
const String & rng, bool vary_pattern, short is_type, bool cdf_flag, bool x_space_data, bool
x_space_model, bool bounded_model)
This is an alternate constructor for instantiations on the fly using a Model but no ProblemDescDB.
References NonD::cdfFlag.
13.72.3.1 void initialize (const RealVectorArray & initial_points, int resp_fn, const Real & initial_prob,
const Real & failure_threshold)
initializes data needed for importance sampling: an initial set of points around which to sample, a failure threshold,
an initial probability to refine, and flags to control transformations Initializes data using a set of starting points.
References NonDAdaptImpSampling::designPoint, NonDAdaptImpSampling::failThresh, NonDAdaptImpSam-
pling::initPoints, NonDAdaptImpSampling::initProb, NonD::natafTransform, NonD::numContDesVars, Itera-
tor::numContinuousVars, NonD::numUncertainVars, NonDAdaptImpSampling::respFn, and NonDAdaptImp-
Sampling::transInitPoints.
Referenced by NonDExpansion::compute_statistics(), NonDGlobalReliability::importance_sampling(), and
NonDAdaptImpSampling::quantify_uncertainty().
13.72.3.2 void initialize (const RealVector & initial_point, int resp_fn, const Real & initial_prob, const
Real & failure_threshold)
initializes data needed for importance sampling: an initial point around which to sample, a failure threshold, an
initial probability to refine, and flags to control transformations Initializes data using only one starting point.
References NonDAdaptImpSampling::designPoint, NonDAdaptImpSampling::failThresh, NonDAdaptImpSam-
pling::initPoints, NonDAdaptImpSampling::initProb, NonD::natafTransform, NonD::numContDesVars, Itera-
tor::numContinuousVars, NonD::numUncertainVars, NonDAdaptImpSampling::respFn, and NonDAdaptImp-
Sampling::transInitPoints.
The documentation for this class was generated from the following files:
• NonDAdaptImpSampling.hpp
• NonDAdaptImpSampling.cpp
Inheritance: Iterator → Analyzer → NonD → NonDSampling → NonDAdaptiveSampling
• ∼NonDAdaptiveSampling ()
alternate constructor for sample generation and evaluation "on the fly" has not been implemented
• void calc_score_delta_x ()
Function to compute the distance scores for the candidate points. The distance score is the shortest distance between the candidate and an existing training point.
• void calc_score_delta_y ()
Function to compute the gradient scores for the candidate points. The gradient score is the difference in function value between a candidate's surrogate response and its nearest evaluated true response from the training set.
• void calc_score_topo_bottleneck ()
Function to compute the bottleneck scores for the candidate points. The bottleneck score is computed by determining the bottleneck distance between the persistence diagrams of two approximate Morse-Smale complexes: one built from only the training data, and another built from the training data plus the single candidate.
• Real compute_rmspe ()
Using the validationSet, compute the root mean squared prediction error (RMSPE) over the surface.
• void parse_options ()
Parse misc_options specified in a user input deck.
• void construct_fsu_sampler (Iterator &u_space_sampler, Model &u_model, int num_samples, int seed,
const String &sample_type)
Copy of construct_lhs, except that it allows for the construction of FSU sample designs. This can break the fsu_cvt, so it is not used at the moment; these designs affect only the initial sample build, not the candidate sets constructed at each round.
• void pick_new_candidates ()
Pick new candidates from Emulator.
• void score_new_candidates ()
Score new candidates based on the chosen metrics.
Private Attributes
• Iterator gpBuild
• Iterator gpEval
LHS iterator for sampling on the GP.
• Iterator gpFinalEval
LHS iterator for sampling on the final GP.
• Model gpModel
GP model of response, one approximation per response function.
• int numRounds
the number of rounds of additions of size batchSize to the original set of LHS samples
• int numPtsTotal
the total number of points
• int numEmulEval
the number of points evaluated by the GP each iteration
• int numFinalEmulEval
number of points evaluated on the final GP
• int scoringMethod
the type of scoring metric to use for sampling
• Real finalProb
the final calculated probability (p)
• RealVectorArray gpCvars
Vector to hold the current sample input values on the GP.
• RealVectorArray gpMeans
Vector to hold the current mean estimates for the sample values on the GP.
• RealVectorArray gpVar
Vector to hold the current variance estimates for the sample values on the GP.
• RealVector emulEvalScores
Vector to hold the scored values for the current GP samples.
• RealVector predictionErrors
Vector to hold the RMSE after each round of adaptively fitting the model.
• RealVectorArray validationSet
Validation point set used to determine predictionErrors above.
• RealVector yTrue
True function responses at the values corresponding to validationSet.
• RealVector yModel
Surrogate function responses at the values corresponding to validationSet.
• int validationSetSize
Number of points used in the validationSet.
• int batchSize
Number of points to add each round, default = 1.
• String batchStrategy
String describing the type of batch addition to use. Allowable values are naive, distance, topology.
• String outputDir
Temporary string for dumping validation files used in TopoAS visualization.
• String scoringMetric
String describing the method for scoring candidate points. Options are: alm, distance, gradient, highest_persistence, avg_persistence, bottleneck, alm_topo_hybrid. Note: alm and alm_topo_hybrid will fail when used with surrogates other than global_kriging, as they rely on the variance of the surrogate; at the time of implementation, global_kriging is the only surrogate capable of yielding this information.
• String sampleDesign
String describing the design on which the initial sample is based. Options are: sampling_lhs, fsu_cvt, fsu_halton, fsu_hammersley.
• String approx_type
String describing the type of surrogate used to fit the data. Options are: global_kriging, global_mars, global_neural_network, global_polynomial, global_moving_least_squares, global_radial_basis.
• MS_Complex ∗ AMSC
The approximate Morse-Smale complex data structure.
• int numKneighbors
The number of approximate nearest neighbors to use in computing the AMSC.
• bool outputValidationData
Temporary variable for toggling writing of data files to be used by TopoAS.
Class for testing various adaptive sampling methods using geometric, statistical, and topological information of the surrogate. NonDAdaptiveSampling implements an adaptive sampling method based on the work presented in Adaptive Sampling with Topological Scores by Dan Maljovec, Bei Wang, Ana Kupresanin, Gardar Johannesson, Valerio Pascucci, and Peer-Timo Bremer, presented in IJUQ (insert issue). The method computes scores based on the topology of the known data and the topology of the surrogate model. A number of alternate adaptation strategies are offered as well.
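The distance score described for calc_score_delta_x() is the simplest of these metrics and can be sketched in a few lines (hypothetical helper names, not Dakota's API): score each candidate by its shortest Euclidean distance to the training set, then pick the farthest-away candidate for the next truth evaluation.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Hypothetical sketch of the "distance" scoring metric: the score of a
// candidate is its shortest Euclidean distance to any training point.
double distance_score(const std::vector<double>& cand,
                      const std::vector<std::vector<double>>& training)
{
  double best = std::numeric_limits<double>::max();
  for (const auto& t : training) {
    double d2 = 0.0;
    for (std::size_t i = 0; i < cand.size(); ++i)
      d2 += (cand[i] - t[i]) * (cand[i] - t[i]);
    double d = std::sqrt(d2);
    if (d < best) best = d;
  }
  return best; // larger score = farther from all existing training data
}

// One round of selection: evaluate every candidate's score and return
// the index of the highest-scoring (most space-filling) candidate.
std::size_t pick_candidate(const std::vector<std::vector<double>>& cands,
                           const std::vector<std::vector<double>>& training)
{
  std::size_t best_idx = 0;
  double best_score = -1.0;
  for (std::size_t c = 0; c < cands.size(); ++c) {
    double s = distance_score(cands[c], training);
    if (s > best_score) { best_score = s; best_idx = c; }
  }
  return best_idx;
}
```

The other metrics (gradient, bottleneck) replace distance_score with a surrogate-based or topology-based quantity but share this select-evaluate-augment loop.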
Standard constructor. This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification.
References NonDAdaptiveSampling::AMSC, NonDAdaptiveSampling::approx_type, Model::assign_rep(), Iterator::assign_rep(), NonDAdaptiveSampling::batchSize, NonDAdaptiveSampling::batchStrategy, NonDAdaptiveSampling::construct_fsu_sampler(), NonD::construct_lhs(), ProblemDescDB::get_bool(), ProblemDescDB::get_int(), ProblemDescDB::get_sa(), ProblemDescDB::get_string(), NonDAdaptiveSampling::gpBuild, NonDAdaptiveSampling::gpEval, NonDAdaptiveSampling::gpFinalEval, NonDAdaptiveSampling::gpModel, Iterator::gradientType, Iterator::hessianType, Model::init_communicators(), Iterator::iteratedModel, Iterator::maximum_concurrency(), Iterator::maxIterations, NonDAdaptiveSampling::numEmulEval, NonDAdaptiveSampling::numFinalEmulEval, NonDAdaptiveSampling::numKneighbors, NonDAdaptiveSampling::numRounds, NonDSampling::numSamples, NonDAdaptiveSampling::outputDir, Iterator::outputLevel, NonDAdaptiveSampling::outputValidationData, NonDAdaptiveSampling::parse_options(), Iterator::probDescDB, NonDSampling::randomSeed, NonDSampling::rngName, NonDAdaptiveSampling::sampleDesign, NonDAdaptiveSampling::scoringMetric, NonDSampling::vary_pattern(), and NonDSampling::varyPattern.
13.73.2.2 ∼NonDAdaptiveSampling ()
destructor
References Model::free_communicators(), NonDAdaptiveSampling::gpEval, NonDAdaptiveSampling::gpModel,
and Iterator::maximum_concurrency().
The documentation for this class was generated from the following files:
• NonDAdaptiveSampling.hpp
• NonDAdaptiveSampling.cpp
Inheritance: Iterator → Analyzer → NonD → NonDCalibration → NonDBayesCalibration
• ∼NonDBayesCalibration ()
destructor
Protected Attributes
• Model emulatorModel
Model instance employed in the likelihood function; provides response function values from Gaussian processes,
stochastic expansions (PCE/SC), or direct access to simulations (no surrogate option).
• bool standardizedSpace
flag indicating use of a variable transformation to standardized probability space
• Iterator stochExpIterator
NonDPolynomialChaos or NonDStochCollocation instance for defining a PCE/SC-based emulatorModel.
• Iterator lhsIterator
LHS iterator for generating samples for GP.
Private Attributes
• short emulatorType
the emulator type: NO_EMULATOR, GAUSSIAN_PROCESS, POLYNOMIAL_CHAOS, or STOCHASTIC_COLLOCATION
Base class for Bayesian inference: generates posterior distribution on model parameters given experimental data.
This class will eventually provide a general-purpose framework for Bayesian inference. In the short term, it only
collects shared code between QUESO and GPMSA implementations.
Standard constructor. This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification.
References Iterator::algorithm_space_model(), Model::assign_rep(), Iterator::assign_rep(), NonD::cdfFlag, NonDBayesCalibration::emulatorModel, NonDBayesCalibration::emulatorType, ProblemDescDB::get_bool(), ProblemDescDB::get_int(), ProblemDescDB::get_short(), ProblemDescDB::get_string(), ProblemDescDB::get_usa(), Iterator::gradientType, Iterator::hessianType, Model::init_communicators(), NonD::initialize_random_variable_correlations(), NonD::initialize_random_variable_transformation(), NonD::initialize_random_variable_types(), Iterator::iteratedModel, Iterator::iterator_rep(), NonDBayesCalibration::lhsIterator, Iterator::outputLevel, Iterator::probDescDB, NonD::requested_levels(), NonD::respLevelTarget, NonD::respLevelTargetReduce, NonDBayesCalibration::standardizedSpace, NonDBayesCalibration::stochExpIterator, NonD::transform_model(), and NonD::verify_correlation_support().
The documentation for this class was generated from the following files:
• NonDBayesCalibration.hpp
• NonDBayesCalibration.cpp
Inheritance: Iterator → Analyzer → NonD → NonDCalibration (derived: NonDBayesCalibration)
• ∼NonDCalibration ()
destructor
Protected Attributes
• RealVector expStdDeviations
1 or numFunctions standard deviations
• String expDataFileName
filename from which to read experimental data; optionally configuration vars x and standard deviations sigma
• bool expDataFileAnnotated
whether the data file is in annotated format
• size_t numExperiments
number of experiments to read from data file
• IntVector numReplicates
number of replicates per experiment
• size_t numExpConfigVars
number of columns in data file which are state variables
• size_t numExpStdDeviationsRead
how many sigmas to read from the data file (1 or numFunctions)
• ExperimentData expData
Container for experimental data to which to calibrate model.
Private Attributes
• size_t continuousConfigVars
number of continuous configuration variables
• size_t discreteIntConfigVars
number of discrete integer configuration variables
• size_t discreteRealConfigVars
number of discrete real configuration variables
• size_t continuousConfigStart
index of configuration variables in all continuous array
• size_t discreteIntConfigStart
index of configuration variables in all discrete integer array
• size_t discreteRealConfigStart
index of configuration variables in all discrete real array
Standard constructor. This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification.
References Dakota::abort_handler(), Model::all_continuous_variable_types(), Model::all_discrete_int_variable_types(), Model::all_discrete_real_variable_types(), NonDCalibration::continuousConfigStart, NonDCalibration::continuousConfigVars, NonDCalibration::discreteIntConfigStart, NonDCalibration::discreteIntConfigVars, NonDCalibration::discreteRealConfigStart, NonDCalibration::discreteRealConfigVars, NonDCalibration::expDataFileName, NonDCalibration::expStdDeviations, NonDCalibration::find_state_index(), ProblemDescDB::get_sizet(), Iterator::iteratedModel, NonDCalibration::numExpConfigVars, NonDCalibration::numExperiments, Iterator::numFunctions, NonDCalibration::numReplicates, and Iterator::probDescDB.
The documentation for this class was generated from the following files:
• NonDCalibration.hpp
• NonDCalibration.cpp
Inheritance: Iterator → Analyzer → NonD → NonDIntegration → NonDCubature
• ∼NonDCubature ()
destructor
• void increment_reference ()
increment each cubIntOrderRef entry by 1
Private Attributes
• Pecos::CubatureDriver ∗ cubDriver
convenience pointer to the numIntDriver representation
Derived nondeterministic class that generates N-dimensional numerical cubature points for evaluation of expectation integrals. This class is used by NonDPolynomialChaos, but could also be used for general numerical integration of moments. It employs Stroud cubature rules and extensions by D. Xiu.
13.76.2.1 NonDCubature (Model & model, const Pecos::ShortArray & u_types, unsigned short
cub_int_order)
This alternate constructor is used for on-the-fly generation and evaluation of numerical cubature points.
References Model::aleatory_distribution_parameters(), NonDCubature::check_integration(), NonDCubature::cubDriver, NonDCubature::cubIntOrderRef, Iterator::iteratedModel, and NonDIntegration::numIntDriver.
Constructor. This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification. It is not currently used, as there is not yet a separate nond_cubature method specification.
13.76.3.1 void sampling_reset (int min_samples, bool all_data_flag, bool stats_flag) [protected,
virtual]
used by DataFitSurrModel::build_global() to publish the minimum number of points needed from the cubature
routine in order to build a particular global approximation.
Reimplemented from Iterator.
References NonDCubature::cubDriver, and NonDCubature::cubIntOrderRef.
Return the current number of evaluation points. Since the calculation of samples, collocation points, etc. might be costly, a default implementation is provided here that backs out from the maxConcurrency; it may be (and is) overridden by derived classes.
Reimplemented from Iterator.
References NonDCubature::cubDriver.
• NonDCubature.hpp
• NonDCubature.cpp
Inheritance: Iterator → Analyzer → NonD → NonDCalibration → NonDBayesCalibration → NonDDREAMBayesCalibration
• ∼NonDDREAMBayesCalibration ()
destructor
Protected Attributes
• Real likelihoodScale
scale factor for proposal covariance
• int numSamples
number of samples in the chain (e.g. number of MCMC samples)
• bool calibrateSigmaFlag
flag to indicate if the sigma terms should be calibrated (default true)
• int randomSeed
random seed to pass to QUESO
• RealVector paramMins
lower bounds on calibrated parameters
• RealVector paramMaxs
upper bounds on calibrated parameters
• int numChains
number of concurrent chains
• int numGenerations
number of generations
• int numCR
number of CR-factors
• int crossoverChainPairs
number of crossover chain pairs
• Real grThreshold
threshold for the Gelman-Rubin statistic
• int jumpStep
• boost::mt19937 rnumGenerator
random number engine for sampling the prior
Private Attributes
• short emulatorType
the emulator type: NO_EMULATOR, GAUSSIAN_PROCESS, POLYNOMIAL_CHAOS, or STOCHASTIC_COLLOCATION
Bayesian inference using the DREAM approach. This class performs Bayesian calibration using the DREAM (Markov Chain Monte Carlo acceleration by Differential Evolution) implementation of John Burkhardt (FSU), adapted from that of Guannan Zhang (ORNL).
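The differential-evolution step at the heart of DREAM can be sketched as follows (a simplification that omits crossover and the small noise term; the helper name and the standard jump-rate formula gamma = 2.38/sqrt(2*d*pairs) follow the DREAM literature, not Dakota's code): a chain proposes a move along the difference of two other randomly chosen chains.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch of a DREAM differential-evolution proposal:
// chain i jumps along the difference of two other chains (a, b),
// scaled by the standard jump rate for d dimensions and `pairs`
// crossover chain pairs (cf. crossoverChainPairs above).
std::vector<double> dream_jump(const std::vector<double>& x_i,
                               const std::vector<double>& x_a,
                               const std::vector<double>& x_b,
                               int num_pairs)
{
  const int d = static_cast<int>(x_i.size());
  const double gamma = 2.38 / std::sqrt(2.0 * num_pairs * d);
  std::vector<double> prop(d);
  for (int k = 0; k < d; ++k)
    prop[k] = x_i[k] + gamma * (x_a[k] - x_b[k]); // noise term omitted
  return prop;
}
```

The proposal is then accepted or rejected with the usual Metropolis ratio of posterior densities, which is where the likelihood callback below enters.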
Standard constructor. This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification.
References NonDDREAMBayesCalibration::crossoverChainPairs, NonDDREAMBayesCalibration::grThreshold, NonDDREAMBayesCalibration::jumpStep, NonDDREAMBayesCalibration::numChains, NonDDREAMBayesCalibration::numCR, NonDDREAMBayesCalibration::numGenerations, and NonDDREAMBayesCalibration::numSamples.
13.77.3.1 void problem_size (int & chain_num, int & cr_num, int & gen_num, int & pair_num, int &
par_num) [static]
Initializer for problem size characteristics in DREAM. (See documentation in DREAM examples.)
References NonDDREAMBayesCalibration::crossoverChainPairs, NonDDREAMBayesCalibration::NonDDREAMInstance, NonDDREAMBayesCalibration::numChains, Iterator::numContinuousVars, NonDDREAMBayesCalibration::numCR, and NonDDREAMBayesCalibration::numGenerations.
Compute the prior density at specified point zp. (See documentation in DREAM examples.)
References NonDDREAMBayesCalibration::NonDDREAMInstance, and NonDDREAMBayesCalibration::priorDistributions.
Sample the prior and return an array of parameter values. (See documentation in DREAM examples.)
References NonDDREAMBayesCalibration::NonDDREAMInstance, NonDDREAMBayesCalibration::priorSamplers, and NonDDREAMBayesCalibration::rnumGenerator.
Likelihood function for call-back from DREAM to DAKOTA for evaluation. Static callback function to evaluate the likelihood.
References NonDDREAMBayesCalibration::calibrateSigmaFlag, Model::compute_response(), Model::continuous_variables(), Model::current_response(), NonDBayesCalibration::emulatorModel, NonDDREAMBayesCalibration::emulatorType, NonDCalibration::expData, Response::function_values(), NonDDREAMBayesCalibration::likelihoodScale, NonDDREAMBayesCalibration::NonDDREAMInstance, Iterator::numContinuousVars, NonDCalibration::numExperiments, Iterator::numFunctions, NonDCalibration::numReplicates, and Iterator::outputLevel.
• NonDDREAMBayesCalibration.hpp
• NonDDREAMBayesCalibration.cpp
Inheritance: Iterator → Analyzer → NonD → NonDExpansion (derived: NonDPolynomialChaos, NonDStochCollocation)
• ∼NonDExpansion ()
destructor
• void quantify_uncertainty ()
perform a forward uncertainty propagation using PCE/SC methods
• void initialize_response_covariance ()
set covarianceControl defaults and shape respCovariance
• void update_final_statistics ()
update function values within finalStatistics
• void update_final_statistics_gradients ()
update function gradients within finalStatistics
• void refine_expansion ()
refine the reference expansion found by compute_expansion() using uniform/adaptive p-/h-refinement strategies
• void construct_quadrature (Iterator &u_space_sampler, Model &g_u_model, int random_samples, int seed,
const UShortArray &quad_order_seq, const RealVector &dim_pref)
assign a NonDQuadrature instance within u_space_sampler that samples randomly from a tensor product multi-index
• void construct_expansion_sampler ()
construct the expansionSampler operating on uSpaceModel
• void compute_statistics ()
calculate analytic and numerical statistics from the expansion
• void archive_moments ()
archive the central moments (numerical and expansion) to ResultsDB
Protected Attributes
• Model uSpaceModel
Model representing the approximate response function in u-space, after u-space recasting and orthogonal polynomial data fit recursions.
• short expansionCoeffsApproach
method for collocation point generation and subsequent calculation of the expansion coefficients
• size_t numUncertainQuant
number of invocations of quantify_uncertainty()
• int numSamplesOnModel
number of truth samples performed on g_u_model to form the expansion
• int numSamplesOnExpansion
number of approximation samples performed on the polynomial expansion in order to estimate probabilities
• bool nestedRules
flag for indicating state of nested and non_nested overrides of default rule nesting, which depends on the type
of integration driver
• bool piecewiseBasis
flag for piecewise specification, indicating usage of local basis polynomials within the stochastic expansion
• bool useDerivs
flag for use_derivatives specification, indicating usage of derivative data (with respect to expansion variables) to enhance the calculation of the stochastic expansion.
• short refineType
refinement type: NO_REFINEMENT, P_REFINEMENT, or H_REFINEMENT
• short refineControl
refinement control: NO_CONTROL, UNIFORM_CONTROL, LOCAL_ADAPTIVE_CONTROL, DIMENSION_ADAPTIVE_CONTROL_SOBOL, DIMENSION_ADAPTIVE_CONTROL_DECAY, or DIMENSION_ADAPTIVE_CONTROL_GENERALIZED
• RealSymMatrix respCovariance
symmetric matrix of analytic response covariance (full response covariance option)
• RealVector respVariance
vector of response variances (diagonal response covariance option)
• RealVector initialPtU
stores the initial variables data in u-space
• void initialize_sets ()
initialization of adaptive refinement using generalized sparse grids
• Real increment_sets ()
perform an adaptive refinement increment using generalized sparse grids
• void compute_covariance ()
calculate the response covariance (diagonal or full matrix)
• void compute_diagonal_variance ()
calculate respVariance or diagonal terms respCovariance(i,i)
• void compute_off_diagonal_covariance ()
calculate respCovariance(i,j) for j<i
• void compute_print_increment_results ()
manage print of results following a refinement increment
Private Attributes
• short ruleNestingOverride
user override of default rule nesting: NO_NESTING_OVERRIDE, NESTED, or NON_NESTED
• short ruleGrowthOverride
user override of default rule growth: NO_GROWTH_OVERRIDE, RESTRICTED, or UNRESTRICTED
• Iterator expansionSampler
Iterator used for sampling on the uSpaceModel to generate approximate probability/reliability/response level statistics. Currently this is an LHS sampling instance, but AIS could also be used.
• Iterator importanceSampler
Iterator used to refine the approximate probability estimates generated by the expansionSampler using importance
sampling.
• bool expSampling
flag to indicate calculation of numerical statistics by sampling on the expansion
• bool impSampling
flag to use LHS sampling or MMAIS sampling on the expansion
• RealMatrix expGradsMeanX
derivative of the expansion with respect to the x-space variables evaluated at the means (used as uncertainty
importance metrics)
• bool vbdFlag
flag indicating the activation of variance-based decomposition for computing Sobol' indices
• Real vbdDropTol
tolerance for omitting output of small VBD indices
• short covarianceControl
enumeration for controlling response covariance calculation and output: {DEFAULT,DIAGONAL,FULL}_COVARIANCE
Base class for polynomial chaos expansions (PCE) and stochastic collocation (SC). The NonDExpansion class provides a base class for methods that use polynomial expansions to approximate the effect of parameter uncertainties on response functions of interest.
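The analytic statistics that such expansions provide follow directly from orthonormality of the basis: for R = sum_k c_k Psi_k(xi) in an orthonormal polynomial basis, the mean is c_0 and the variance is the sum of squares of the remaining coefficients. A minimal sketch (hypothetical helpers, not Dakota's API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: moments of a stochastic expansion in an
// orthonormal basis. coeffs[0] multiplies the constant basis function,
// so it is the mean; orthonormality makes the variance the sum of
// squared higher-order coefficients.
double pce_mean(const std::vector<double>& coeffs)
{
  return coeffs[0];
}

double pce_variance(const std::vector<double>& coeffs)
{
  double var = 0.0;
  for (std::size_t k = 1; k < coeffs.size(); ++k)
    var += coeffs[k] * coeffs[k];
  return var;
}
```

This is why compute_statistics() below can report moments analytically from the coefficients, reserving sampling (expansionSampler) for probability/reliability levels.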
Increment the input specification sequence (PCE only); the default implementation is overridden by PCE.
Reimplemented in NonDPolynomialChaos.
References NonDIntegration::increment_specification_sequence(), Iterator::iterator_rep(), Model::subordinate_iterator(), and NonDExpansion::uSpaceModel.
Referenced by NonDExpansion::quantify_uncertainty().
Compute the 2-norm of the change in response covariance; computes the default refinement metric based on the change in respCovariance.
Reimplemented in NonDStochCollocation.
References NonDExpansion::compute_covariance(), NonDExpansion::covarianceControl, NonDExpansion::respCovariance, NonDExpansion::respVariance, and Dakota::write_data().
Referenced by NonDExpansion::increment_sets(), and NonDExpansion::refine_expansion().
Compute the 2-norm of the change in final statistics; computes a "goal-oriented" refinement metric employing finalStatistics.
Reimplemented in NonDStochCollocation.
References NonDExpansion::compute_statistics(), NonD::finalStatistics, Response::function_values(), Iterator::numFunctions, NonD::requestedGenRelLevels, NonD::requestedProbLevels, NonD::requestedRelLevels, and NonD::requestedRespLevels.
Referenced by NonDExpansion::increment_sets().
Calculate analytic and numerical statistics from the expansion and log results within final_stats for use in OUU.
References Dakota::abort_handler(), ResultsManager::active(), Iterator::active_set(), Response::active_set_derivative_vector(), Response::active_set_request_vector(), Iterator::all_responses(), Iterator::all_samples(), Model::approximation_data(), Model::approximations(), NonD::archive_allocate_mappings(), NonDExpansion::archive_coefficients(), NonD::archive_from_resp(), NonDExpansion::archive_moments(), NonD::archive_to_resp(), NonD::cdfFlag, PecosApproximation::compute_component_effects(), PecosApproximation::compute_moments(), NonDExpansion::compute_off_diagonal_covariance(), PecosApproximation::compute_total_effects(), NonD::computedGenRelLevels, NonD::computedProbLevels, NonD::computedRelLevels, NonD::computedRespLevels, Model::continuous_variable_ids(), Model::continuous_variable_labels(), Model::continuous_variables(), Dakota::copy_data(), NonDExpansion::covarianceControl, Model::current_variables(), PecosApproximation::expansion_coefficient_flag(), NonDExpansion::expansionSampler, NonDExpansion::expGradsMeanX, NonDExpansion::expSampling, NonD::finalStatistics, Response::function_gradient(), Response::function_value(), Response::function_values(), NonDAdaptImpSampling::get_probability(), NonDExpansion::importanceSampler, NonDExpansion::impSampling, Iterator::initial_points(), NonDAdaptImpSampling::initialize(), NonD::initialize_distribution_mappings(), NonD::initialize_random_variables(), NonDExpansion::initialPtU, ResultsManager::insert(), Iterator::iteratedModel, Iterator::iterator_rep(), PecosApproximation::mean_gradient(), PecosApproximation::moments(), NonD::natafTransform, NonD::numContDesVars, NonD::numContEpistUncVars, Iterator::numContinuousVars, NonD::numContStateVars, Iterator::numFunctions, NonDExpansion::numSamplesOnExpansion, Iterator::outputLevel, ActiveSet::request_vector(), NonD::requestedGenRelLevels, NonD::requestedProbLevels, NonD::requestedRelLevels, NonD::requestedRespLevels, NonDExpansion::respCovariance, NonD::respLevelTarget, Model::response_labels(), Iterator::response_results(), NonDExpansion::respVariance, Iterator::resultsDB, Iterator::resultsNames,
flag for use_derivatives specification, indicating usage of derivative data (with respect to expansion variables) to enhance the calculation of the stochastic expansion. This is part of the method specification since the instantiation of the global data fit surrogate is implicit, with no user specification. This behavior is distinct from the usage of response derivatives with respect to auxiliary variables (design, epistemic) for computing derivatives of aleatory expansion statistics with respect to these variables.
Referenced by NonDExpansion::compute_expansion(), NonDStochCollocation::initialize_u_space_model(), NonDPolynomialChaos::initialize_u_space_model(), NonDStochCollocation::resolve_inputs(), NonDPolynomialChaos::resolve_inputs(), NonDPolynomialChaos::terms_ratio_to_samples(), and NonDPolynomialChaos::terms_samples_to_ratio().
The documentation for this class was generated from the following files:
• NonDExpansion.hpp
• NonDExpansion.cpp
Inheritance: Iterator → Analyzer → NonD → NonDInterval → NonDGlobalInterval → NonDGlobalEvidence
• ∼NonDGlobalEvidence ()
destructor
• void initialize ()
perform any required initialization
• void set_cell_bounds ()
set the optimization variable bounds for each cell
• void post_process_response_fn_results ()
post-process the interval computed for a response function
• void post_process_final_results ()
perform final post-processing
Class for the Dempster-Shafer Evidence Theory methods within DAKOTA/UQ. The NonDEvidence class implements the propagation of epistemic uncertainty using Dempster-Shafer theory of evidence. In this approach, one assigns a set of basic probability assignments (BPA) to intervals defined for the uncertain variables. Input interval combinations are calculated, along with their BPA. Currently, the response function is evaluated at a set of sample points, then a response surface is constructed and sampled extensively to find the minimum and maximum within each input interval cell, corresponding to the belief and plausibility within that cell, respectively. These data are then aggregated to calculate cumulative distribution functions for belief and plausibility.
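The final aggregation step can be sketched as follows (hypothetical types, not Dakota's API): each input cell carries a BPA and the output interval [ymin, ymax] found by the min/max search; cumulative belief counts cells whose entire output interval lies below a response level, while cumulative plausibility counts cells whose interval merely reaches it.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of Dempster-Shafer aggregation over interval cells.
struct Cell { double ymin, ymax, bpa; }; // output interval + basic probability assignment

// Belief that the response is <= y: a cell contributes only if its
// whole output interval lies at or below y.
double belief_cdf(const std::vector<Cell>& cells, double y)
{
  double b = 0.0;
  for (const Cell& c : cells)
    if (c.ymax <= y) b += c.bpa;
  return b;
}

// Plausibility that the response is <= y: a cell contributes if any
// part of its output interval lies at or below y.
double plausibility_cdf(const std::vector<Cell>& cells, double y)
{
  double p = 0.0;
  for (const Cell& c : cells)
    if (c.ymin <= y) p += c.bpa;
  return p;
}
```

Belief and plausibility bound the (unknown) true CDF from below and above, which is how the complementary cumulative functions reported by this class should be read.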
The documentation for this class was generated from the following files:
• NonDGlobalEvidence.hpp
• NonDGlobalEvidence.cpp
Inheritance: Iterator → Analyzer → NonD → NonDInterval → NonDGlobalInterval (derived: NonDGlobalEvidence, NonDGlobalSingleInterval)
• ∼NonDGlobalInterval ()
destructor
• void quantify_uncertainty ()
Performs an optimization to determine interval bounds for an entire function or interval bounds on a particular
statistical estimator.
• void evaluate_response_star_truth ()
evaluate the truth response at the optimal variables solution and update the GP with the new data
Protected Attributes
• Iterator daceIterator
LHS iterator for constructing initial GP for all response functions.
• Model fHatModel
GP model of response, one approximation per response function.
• Iterator intervalOptimizer
optimizer for solving surrogate-based subproblem: NCSU DIRECT optimizer for maximizing expected improvement
or mixed EA if discrete variables.
• Model intervalOptModel
recast model which formulates the surrogate-based optimization subproblem (recasts as design problem; may assimilate mean and variance to enable max(expected improvement))
• Real approxFnStar
approximate response corresponding to minimum/maximum truth response
• Real truthFnStar
minimum/maximum truth response function value
• static void EIF_objective_max (const Variables &sub_model_vars, const Variables &recast_vars, const Re-
sponse &sub_model_response, Response &recast_response)
static function used as the objective function in the Expected Improvement Function (EIF) for maximizing the GP
• static void extract_objective (const Variables &sub_model_vars, const Variables &recast_vars, const Re-
sponse &sub_model_response, Response &recast_response)
static function used to extract the active objective function when optimizing for an interval lower or upper bound
(non-EIF formulations). The sense of the optimization is set separately.
Private Attributes
• const int seedSpec
the user seed specification (default is 0)
• int numSamples
the number of samples used in the surrogate
• String rngName
name of the random number generator
• bool gpModelFlag
flag indicating use of GP surrogate emulation
• bool eifFlag
flag indicating use of maximized expected improvement for GP iterate selection
• Real distanceTol
tolerance for L_2 change in optimal solution
counter for number of successive iterations that the L_2 change in optimal solution is less than the convergenceTol
• RealVector prevCVStar
stores previous optimal point for continuous variables; used for assessing convergence
• IntVector prevDIVStar
stores previous optimal point for discrete integer variables; used for assessing convergence
• RealVector prevDRVStar
stores previous optimal point for discrete real variables; used for assessing convergence
• Real prevFnStar
stores previous solution value for assessing convergence
• size_t sbIterNum
surrogate-based minimization/maximization iteration count
• bool boundConverged
flag indicating convergence of a minimization or maximization cycle
• bool allResponsesPerIter
flag for maximal response extraction (all response values obtained on each function call)
• short dataOrder
order of the data used for surrogate construction, in ActiveSet request vector 3-bit format; user may override
responses spec
Class for using global nongradient-based optimization approaches to calculate interval bounds for epistemic uncertainty quantification. The NonDGlobalInterval class supports global nongradient-based optimization approaches to determining interval bounds for epistemic UQ. The interval bounds may be on the entire function in the case of pure interval analysis (e.g. intervals on input = intervals on output), or the intervals may be on statistics of an "inner loop" aleatory analysis, such as intervals on means, variances, or percentile levels. The preliminary implementation will use a Gaussian process surrogate to determine interval bounds.
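For the pure interval analysis case, "intervals on input = intervals on output" reduces to a global min/max of the response over the input cell. In the sketch below a crude grid search stands in for Dakota's GP-accelerated optimization (illustrative only; the helper name is hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <utility>

// Hypothetical sketch: propagate an input interval [lo, hi] through a
// 1-D response function f by locating its global min and max over the
// cell. A uniform grid replaces the surrogate-based optimizer used by
// NonDGlobalInterval.
std::pair<double, double>
interval_bounds(const std::function<double(double)>& f,
                double lo, double hi, int grid)
{
  double fmin = f(lo), fmax = fmin;
  for (int i = 1; i <= grid; ++i) {
    double x = lo + (hi - lo) * i / grid; // grid point in the cell
    double v = f(x);
    if (v < fmin) fmin = v;
    if (v > fmax) fmax = v;
  }
  return {fmin, fmax}; // lower/upper response bounds for the cell
}
```

The returned pair is exactly the per-cell [belief, plausibility] datum consumed by the evidence-theory aggregation in the derived NonDGlobalEvidence class.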
The documentation for this class was generated from the following files:
• NonDGlobalInterval.hpp
• NonDGlobalInterval.cpp
Inheritance: Iterator → Analyzer → NonD → NonDReliability → NonDGlobalReliability
• ∼NonDGlobalReliability ()
destructor
• void quantify_uncertainty ()
performs an uncertainty propagation using analytical reliability methods which solve constrained optimization
problems to obtain approximations of the cumulative distribution function of response
• void importance_sampling ()
perform multimodal adaptive importance sampling on the GP
• void get_best_sample ()
determine current best solution from among sample data for the expected improvement function in the Performance
Measure Approach (PMA)
• static void EFF_objective_eval (const Variables &sub_model_vars, const Variables &recast_vars, const
Response &sub_model_response, Response &recast_response)
static function used as the objective function in the Expected Feasibility (EFF) problem formulation for RIA
Private Attributes
• Real fnStar
minimum penalized response from among true function evaluations
• short meritFunctionType
type of merit function used to penalize sample data
• Real lagrangeMult
Lagrange multiplier for standard Lagrangian merit function.
• Real augLagrangeMult
Lagrange multiplier for augmented Lagrangian merit function.
• Real penaltyParameter
penalty parameter for augmented Lagrangian merit function
• Real lastConstraintViolation
constraint violation at last iteration, used to determine if the current iterate should be accepted (must reduce
violation)
• bool lastIterateAccepted
flag indicating whether the last iterate was accepted; this controls the update of parameters for the augmented Lagrangian merit fn
• short dataOrder
order of the data used for surrogate construction, in ActiveSet request vector 3-bit format; user may override
responses spec
Class for global reliability methods within DAKOTA/UQ. The NonDGlobalReliability class implements
EGO/SKO for global MPP search, which maximizes an expected improvement function derived from Gaussian
process models. Once the limit state has been characterized, a multimodal importance sampling approach is used
to compute probabilities.
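As an illustration of the expected improvement criterion that EGO/SKO maximizes over the Gaussian process model, the following minimal standalone sketch may help; the function names are illustrative, not Dakota's API, and a minimization convention is assumed:

```cpp
#include <cmath>

namespace {
const double SQRT_2PI = 2.5066282746310002;  // sqrt(2*pi)

// standard normal PDF and CDF
double norm_pdf(double z) { return std::exp(-0.5 * z * z) / SQRT_2PI; }
double norm_cdf(double z) { return 0.5 * std::erfc(-z / std::sqrt(2.0)); }
}

// Expected improvement at a candidate point for minimization, given the
// GP posterior mean `mu` and standard deviation `sigma` at that point
// and the best (lowest) objective value `f_best` observed so far.
double expected_improvement(double mu, double sigma, double f_best)
{
    if (sigma <= 0.0) return 0.0;            // no predictive uncertainty
    double z = (f_best - mu) / sigma;
    return (f_best - mu) * norm_cdf(z) + sigma * norm_pdf(z);
}
```

Maximizing this quantity balances exploitation (low predicted mean) against exploration (high predictive variance), which is what drives the global MPP search.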
The documentation for this class was generated from the following files:
• NonDGlobalReliability.hpp
• NonDGlobalReliability.cpp
Iterator
Analyzer
NonD
NonDInterval
NonDGlobalInterval
NonDGlobalSingleInterval
• ∼NonDGlobalSingleInterval ()
destructor
Private Attributes
• size_t statCntr
counter for finalStatistics
Class for using global nongradient-based optimization approaches to calculate interval bounds for epistemic un-
certainty quantification. The NonDGlobalSingleInterval class supports global nongradient-based optimization
approaches to determining interval bounds for epistemic UQ. The interval bounds may be on the entire function
in the case of pure interval analysis (e.g. intervals on input = intervals on output), or the intervals may be on
statistics of an "inner loop" aleatory analysis such as intervals on means, variances, or percentile levels. The
preliminary implementation will use a Gaussian process surrogate to determine interval bounds.
The documentation for this class was generated from the following files:
• NonDGlobalSingleInterval.hpp
• NonDGlobalSingleInterval.cpp
Iterator
Analyzer
NonD
NonDSampling
NonDGPImpSampling
• ∼NonDGPImpSampling ()
destructor
• void quantify_uncertainty ()
perform the GP importance sampling and return probability of failure.
• Real calcExpIndPoint (const int respFnCount, const Real respThresh, const RealVector this_mean, const
RealVector this_var)
function to calculate the expected indicator probabilities for one point
• void calcRhoDraw ()
function to update the rhoDraw data, adding x values and rho draw values
Private Attributes
• Iterator gpBuild
LHS iterator for building the initial GP.
• Iterator gpEval
LHS iterator for sampling on the GP.
• Model gpModel
GP model of response, one approximation per response function.
• Iterator sampleRhoOne
LHS iterator for sampling from the rhoOneDistribution.
• int numPtsAdd
the number of points added to the original set of LHS samples
• int numPtsTotal
the total number of points
• int numEmulEval
the number of points evaluated by the GP each iteration
• Real finalProb
the final calculated probability (p)
• RealVectorArray gpCvars
Vector to hold the current sample input values on the GP.
• RealVectorArray gpMeans
Vector to hold the current mean estimates for the sample values on the GP.
• RealVectorArray gpVar
Vector to hold the current variance estimates for the sample values on the GP.
• RealVector expIndicator
Vector to hold the expected indicator values for the current GP samples.
• RealVector rhoDraw
• RealVector normConst
Vector to hold the normalization constant calculated for each point added.
• RealVector indicator
Vector to hold the indicator for actual simulation values vs. the threshold.
• RealVectorArray xDrawThis
xDrawThis, appended to locally to hold the X values of emulator points chosen
• RealVector expIndThis
expIndThis, appended locally to hold the expected indicator
• RealVector rhoDrawThis
rhoDrawThis, appended locally to hold the rhoDraw density for calculating draws
• RealVector rhoMix
rhoMix, mixture density
• RealVector rhoOne
rhoOne, original importance density
Class for the Gaussian Process-based Importance Sampling method. The NonDGPImpSampling implements a
method developed by Keith Dalbey that uses a Gaussian process surrogate in the calculation of the importance
density. Specifically, the mean and variance of the GP prediction are used to calculate an expected value that a
particular point fails, and that is used as part of the computation of the "draw distribution." The normalization
constants and the mixture distribution used are defined in (need to get SAND report).
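The core quantity the description refers to, the expected value that a point fails, can be sketched as follows; this is a minimal standalone illustration using the Gaussian posterior of the GP prediction, not Dakota's calcExpIndPoint signature:

```cpp
#include <cmath>

// Probability that the GP-predicted response at a point exceeds a
// failure threshold, using the Gaussian posterior N(mean, variance).
// This is the "expected indicator" of failure for that point.
double expected_indicator(double mean, double variance, double threshold)
{
    if (variance <= 0.0)                     // degenerate (exact) prediction
        return (mean > threshold) ? 1.0 : 0.0;
    double z = (threshold - mean) / std::sqrt(variance);
    return 0.5 * std::erfc(z / std::sqrt(2.0));   // 1 - Phi(z)
}
```

Points with a high expected indicator contribute strongly to the draw distribution, concentrating new samples near the predicted failure region.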
standard constructor This constructor is called for a standard letter-envelope iterator instantiation. In this case,
set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification.
References Model::assign_rep(), Iterator::assign_rep(), NonD::construct_lhs(), ProblemDescDB::get_bool(),
ProblemDescDB::get_int(), ProblemDescDB::get_string(), NonDGPImpSampling::gpBuild, NonDGPImpSam-
pling::gpEval, NonDGPImpSampling::gpModel, Iterator::gradientType, Iterator::hessianType, Model::init_-
communicators(), Iterator::iteratedModel, Iterator::maximum_concurrency(), Iterator::maxIterations,
NonDGPImpSampling::numEmulEval, NonDGPImpSampling::numPtsAdd, NonDSampling::numSamples,
Iterator::outputLevel, Iterator::probDescDB, NonDSampling::randomSeed, NonDSampling::rngName,
NonDGPImpSampling::sampleRhoOne, NonDSampling::samplingVarsMode, NonDSampling::statsFlag,
NonDSampling::vary_pattern(), and NonDSampling::varyPattern.
13.83.2.2 ∼NonDGPImpSampling ()
destructor
References Model::free_communicators(), NonDGPImpSampling::gpEval, NonDGPImpSampling::gpModel,
and Iterator::maximum_concurrency().
perform the GP importance sampling and return probability of failure. Calculate the failure probabilities for
specified probability levels using Gaussian process based importance sampling.
Implements NonD.
References Model::acv(), Iterator::all_responses(), Iterator::all_samples(), Analyzer::all_samples(),
Model::append_approximation(), Model::approximation_data(), Model::approximation_variances(),
Model::build_approximation(), NonDGPImpSampling::calcExpIndicator(), NonDGPImpSam-
pling::calcExpIndPoint(), NonDGPImpSampling::calcRhoDraw(), NonD::cdfFlag, Model::compute_-
response(), NonD::computedProbLevels, Model::continuous_lower_bounds(), Model::continuous_upper_-
bounds(), Model::continuous_variables(), Model::current_response(), Model::current_variables(), NonDG-
PImpSampling::drawNewX(), Model::evaluation_id(), NonDGPImpSampling::expIndicator, NonDGPImp-
Sampling::expIndThis, NonDGPImpSampling::finalProb, Response::function_values(), NonDGPImp-
Sampling::gpCvars, NonDGPImpSampling::gpEval, NonDGPImpSampling::gpMeans, NonDGPImp-
Sampling::gpModel, NonDGPImpSampling::gpVar, NonDGPImpSampling::indicator, NonD::initialize_-
distribution_mappings(), Iterator::iteratedModel, NonDGPImpSampling::normConst, NonDGPImpSam-
pling::numEmulEval, Iterator::numFunctions, NonDGPImpSampling::numPtsAdd, NonDGPImpSam-
pling::numPtsTotal, NonDSampling::numSamples, Iterator::outputLevel, Model::pop_approximation(),
NonD::requestedRespLevels, NonDGPImpSampling::rhoDraw, NonDGPImpSampling::rhoDrawThis,
NonDGPImpSampling::rhoMix, NonDGPImpSampling::rhoOne, Iterator::run_iterator(), NonDGPImpSam-
pling::sampleRhoOne, and NonDGPImpSampling::xDrawThis.
The documentation for this class was generated from the following files:
• NonDGPImpSampling.hpp
• NonDGPImpSampling.cpp
Iterator
Analyzer
NonD
NonDCalibration
NonDBayesCalibration
NonDGPMSABayesCalibration
• ∼NonDGPMSABayesCalibration ()
destructor
Public Attributes
• String rejectionType
Rejection type (standard or delayed, in the DRAM framework).
• String metropolisType
Metropolis type (hastings or adaptive, in the DRAM framework).
• int numSamples
number of samples in the chain (e.g. number of MCMC samples)
• int emulatorSamples
number of samples of the simulation to construct the GP
• RealVector proposalCovScale
scale factor for proposal covariance
• Real likelihoodScale
scale factor for likelihood
• bool calibrateSigmaFlag
flag to indicate whether the sigma terms should be calibrated (default true)
• String approxImportFile
name of file from which to import build points to build GP
• bool approxImportAnnotated
annotate flag
• void quantify_uncertainty ()
performs a forward uncertainty propagation by using GPM/SA to generate a posterior distribution on parameters
given a set of simulation parameter/response data, a set of experimental data, and additional variables to be
specified here.
Protected Attributes
• int randomSeed
random seed for the MCMC process
Private Attributes
• short emulatorType
the emulator type: NO_EMULATOR, GAUSSIAN_PROCESS, POLYNOMIAL_CHAOS, or STOCHASTIC_-
COLLOCATION
• Iterator lhsIter
LHS iterator for generating samples for GP.
Generates posterior distribution on model parameters given experiment data. This class provides a wrapper for the
functionality provided in the Los Alamos National Laboratory code called GPM/SA (Gaussian Process Models
for Simulation Analysis). Although this is a code that provides input/output mapping, it DOES NOT provide
the mapping that we usually think of in the NonDeterministic class hierarchy in DAKOTA, where uncertainty in
parameter inputs are mapped to uncertainty in simulation responses. Instead, this class takes a pre-existing set
of simulation data as well as experimental data, and maps priors on input parameters to posterior distributions
on those input parameters, according to a likelihood function. The goal of the MCMC sampling is to produce
posterior values of parameter estimates which will produce simulation response values that "match well" to the
experimental data. The MCMC is an integral part of the calibration. The data structures in GPM/SA are fairly
detailed and nested. Part of this prototyping exercise is to determine what data structures need to be specified and
initialized in DAKOTA and sent to GPM/SA, and what data structures will be returned.
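The MCMC step at the heart of the calibration can be sketched with a toy one-dimensional random-walk Metropolis sampler; this is a stand-in for the DRAM machinery GPM/SA actually uses, and the function name and proposal scale are illustrative assumptions:

```cpp
#include <cmath>
#include <random>
#include <vector>

// One-dimensional random-walk Metropolis sampler: draws from a posterior
// whose log-density (log prior + log likelihood) is given by `log_post`.
// Each step proposes a Gaussian perturbation and accepts it with the
// Metropolis probability min(1, post(x')/post(x)).
std::vector<double> metropolis(double (*log_post)(double),
                               double x0, double prop_sd,
                               int n, unsigned seed)
{
    std::mt19937 rng(seed);
    std::normal_distribution<double> step(0.0, prop_sd);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::vector<double> chain;
    chain.reserve(n);
    double x = x0, lp = log_post(x0);
    for (int i = 0; i < n; ++i) {
        double xp  = x + step(rng);
        double lpp = log_post(xp);
        if (std::log(unif(rng)) < lpp - lp) { x = xp; lp = lpp; }  // accept
        chain.push_back(x);
    }
    return chain;
}
```

In the actual method the likelihood is evaluated through the GP emulator of the simulation rather than the simulation itself, which is what makes the long chains affordable.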
standard constructor This constructor is called for a standard letter-envelope iterator instantiation. In this case,
set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification.
References Iterator::assign_rep(), NonDGPMSABayesCalibration::emulatorSamples, ProblemDescDB::get_-
string(), Model::init_communicators(), Iterator::iteratedModel, NonDGPMSABayesCalibration::lhsIter,
Iterator::maximum_concurrency(), Iterator::probDescDB, and NonDGPMSABayesCalibration::randomSeed.
performs a forward uncertainty propagation by using GPM/SA to generate a posterior distribution on parameters
given a set of simulation parameter/response data, a set of experimental data, and additional variables to be
specified here. Perform the uncertainty quantification
Reimplemented from NonDBayesCalibration.
References Iterator::all_responses(), Iterator::all_samples(), Analyzer::all_samples(), NonDGPMSABayesCali-
bration::approxImportAnnotated, NonDGPMSABayesCalibration::approxImportFile, NonDGPMSABayesCal-
ibration::calibrateSigmaFlag, Model::continuous_lower_bounds(), Model::continuous_upper_bounds(),
Model::continuous_variables(), NonDGPMSABayesCalibration::emulatorSamples, NonDGPMSABayesCal-
ibration::emulatorType, NonDCalibration::expData, NonDCalibration::expDataFileAnnotated, NonD-
Calibration::expDataFileName, Iterator::iteratedModel, NonDGPMSABayesCalibration::lhsIter,
ExperimentData::load_scalar(), NonDGPMSABayesCalibration::metropolisType, NonDGPMSABayesCali-
bration::NonDGPMSAInstance, NonDCalibration::numExpConfigVars, NonDCalibration::numExperiments,
NonDCalibration::numExpStdDeviationsRead, Iterator::numFunctions, NonDCalibration::numReplicates,
NonDGPMSABayesCalibration::numSamples, NonD::numUncertainVars, Iterator::outputLevel, Dakota::read_-
data_tabular(), NonDGPMSABayesCalibration::rejectionType, and Iterator::run_iterator().
• NonDGPMSABayesCalibration.hpp
• NonDGPMSABayesCalibration.cpp
Iterator
Analyzer
NonD
NonDSampling
NonDIncremLHSSampling
• ∼NonDIncremLHSSampling ()
destructor
• void quantify_uncertainty ()
performs a forward uncertainty propagation by using LHS to generate a set of parameter samples, performing
function evaluations on these parameter samples, and computing statistics on the ensemble of results.
Private Attributes
• int previousSamples
number of samples in previous LHS run
• bool varBasedDecompFlag
flags computation of VBD
Performs incremental LHS sampling for uncertainty quantification. The Latin Hypercube Sampling (LHS) package
from Sandia Albuquerque's Risk and Reliability organization provides comprehensive capabilities for Monte
Carlo and Latin Hypercube sampling within a broad array of user-specified probabilistic parameter distributions.
The incremental LHS sampling capability allows one to supplement an initial sample of size n to size 2n while
maintaining the correct stratification of the 2n samples and also maintaining the specified correlation structure.
The incremental version of LHS will return a sample of size n, which, when combined with the original sample of
size n, allows one to double the size of the sample.
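The stratification argument can be made concrete with a one-dimensional illustration; the real capability lives in the LHS Fortran library and handles correlations and arbitrary distributions, so the function below is only a hypothetical sketch of the doubling idea:

```cpp
#include <random>
#include <vector>

// One-dimensional illustration of incremental LHS: given n points, one
// per stratum of width 1/n on [0,1), draw n new points so the combined
// 2n points occupy every stratum of width 1/(2n).  Each original
// stratum splits in two; the new point goes in the empty half.
std::vector<double> increment_lhs(const std::vector<double>& sample,
                                  unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    int n = static_cast<int>(sample.size());
    std::vector<double> added(n);
    double w = 1.0 / (2.0 * n);                   // refined stratum width
    for (int i = 0; i < n; ++i) {
        int fine  = static_cast<int>(sample[i] / w);         // occupied fine stratum
        int empty = (fine % 2 == 0) ? fine + 1 : fine - 1;   // its empty sibling
        added[i]  = (empty + unif(rng)) * w;      // draw within the empty half
    }
    return added;
}
```

Because each new point fills the fine stratum its parent left empty, the union of old and new samples is itself a valid Latin hypercube sample of size 2n.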
constructor This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_-
list_nodes has been called and probDescDB can be queried for settings from the method specification.
performs a forward uncertainty propagation by using LHS to generate a set of parameter samples, performing
function evaluations on these parameter samples, and computing statistics on the ensemble of results. Generate
incremental samples. Loop over the set of samples and compute responses. Compute statistics on the set of
responses if statsFlag is set.
Implements NonD.
References Dakota::abort_handler(), Model::aleatory_distribution_parameters(), Analyzer::allResponses,
Analyzer::allSamples, NonDSampling::compute_statistics(), Dakota::copy_data(), Dakota::data_pairs,
Analyzer::evaluate_parameter_sets(), NonDSampling::get_parameter_sets(), Iterator::iteratedModel,
NonD::numBetaVars, NonD::numBinomialVars, Iterator::numContinuousVars, NonD::numExponentialVars,
NonD::numFrechetVars, NonD::numGammaVars, NonD::numGeometricVars, NonD::numGumbelVars,
NonD::numHistogramBinVars, NonD::numHistogramPtVars, NonD::numHyperGeomVars,
NonD::numLognormalVars, NonD::numLoguniformVars, NonD::numNegBinomialVars,
NonD::numNormalVars, NonD::numPoissonVars, NonDSampling::numSamples, NonD::numTriangularVars,
NonD::numUniformVars, NonD::numWeibullVars, NonDIncremLHSSampling::previousSamples,
• NonDIncremLHSSampling.hpp
• NonDIncremLHSSampling.cpp
Iterator
Analyzer
NonD
NonDIntegration
• ∼NonDIntegration ()
destructor
• void quantify_uncertainty ()
performs a forward uncertainty propagation of parameter distributions into response statistics
Protected Attributes
• Pecos::IntegrationDriver numIntDriver
Pecos utility class for managing the interface to tensor-product grids and VPISparseGrid utilities for Smolyak sparse
grids and cubature.
• size_t numIntegrations
counter for number of integration executions for this object
• size_t sequenceIndex
index into NonDQuadrature::quadOrderSpec and NonDSparseGrid::ssgLevelSpec that defines the current instance
of several possible refinement levels
• RealVector dimPrefSpec
the user specification for anisotropic dimension preference
Derived nondeterministic class that generates N-dimensional numerical integration points for evaluation of expec-
tation integrals. This class provides a base class for shared code among NonDQuadrature and NonDSparseGrid.
constructor This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_-
list_nodes has been called and probDescDB can be queried for settings from the method specification. It is not
currently used, as there are not yet separate nond_quadrature/nond_sparse_grid method specifications.
References Dakota::abort_handler(), NonD::initialize_final_statistics(), NonD::initialize_random_variable_-
correlations(), NonD::initialize_random_variable_transformation(), NonD::initialize_random_variable_types(),
Iterator::numDiscreteIntVars, Iterator::numDiscreteRealVars, and NonD::verify_correlation_support().
alternate constructor for instantiations "on the fly" This alternate constructor is used for on-the-fly generation and
evaluation of numerical integration points.
13.86.2.3 NonDIntegration (NoDBBaseConstructor, Model & model, const RealVector & dim_pref)
[protected]
alternate constructor for instantiations "on the fly" This alternate constructor is used for on-the-fly generation and
evaluation of numerical integration points.
convert scalar_order_spec and vector dim_pref_spec to vector aniso_order Converts a scalar order specification
and a vector anisotropic dimension preference into an anisotropic order vector. It is used for initialization and
does not enforce a reference lower bound (see also NonDQuadrature::update_anisotropic_order()).
Referenced by NonDPolynomialChaos::increment_specification_sequence(), NonDQuadrature::initialize_-
dimension_quadrature_order(), and NonDPolynomialChaos::NonDPolynomialChaos().
convert vector aniso_order to scalar_order and vector dim_pref Converts a vector anisotropic order into a scalar
order and vector anisotropic dimension preference.
Referenced by NonDPolynomialChaos::NonDPolynomialChaos().
verify self-consistency of variables data Virtual function called from probDescDB-based constructors and from
NonDIntegration::quantify_uncertainty()
References Dakota::abort_handler(), NonD::numContAleatUncVars, NonD::numContDesVars,
NonD::numContEpistUncVars, Iterator::numContinuousVars, and NonD::numContStateVars.
Referenced by NonDCubature::NonDCubature(), NonDQuadrature::NonDQuadrature(), NonDSparseG-
rid::NonDSparseGrid(), and NonDIntegration::quantify_uncertainty().
The documentation for this class was generated from the following files:
• NonDIntegration.hpp
• NonDIntegration.cpp
Base class for interval-based methods within DAKOTA/UQ. Inheritance diagram for NonDInterval::
Iterator
Analyzer
NonD
NonDInterval
• ∼NonDInterval ()
destructor
• void initialize_final_statistics ()
initialize finalStatistics for belief/plausibility results sets
• void compute_evidence_statistics ()
method for computing belief and plausibility values for response levels or vice-versa
• void calculate_cells_and_bpas ()
computes the interval combinations (cells) and their BPAs; replaces CBPIIC_F77 from the wrapper calculate_basic_-
prob_intervals()
Protected Attributes
• bool singleIntervalFlag
flag for SingleInterval derived class
• RealVectorArray ccBelFn
Storage array to hold CCBF values.
• RealVectorArray ccPlausFn
Storage array to hold CCPF values.
• RealVectorArray ccBelVal
Storage array to hold CCB response values.
• RealVectorArray ccPlausVal
Storage array to hold CCP response values.
• RealVectorArray cellContLowerBounds
Storage array to hold cell lower bounds for continuous variables.
• RealVectorArray cellContUpperBounds
Storage array to hold cell upper bounds for continuous variables.
• IntVectorArray cellIntRangeLowerBounds
Storage array to hold cell lower bounds for discrete int range variables.
• IntVectorArray cellIntRangeUpperBounds
Storage array to hold cell upper bounds for discrete int range variables.
• IntVectorArray cellIntSetBounds
Storage array to hold cell values for discrete integer set variables.
• IntVectorArray cellRealSetBounds
Storage array to hold cell value for discrete real set variables.
• RealVectorArray cellFnLowerBounds
Storage array to hold cell min.
• RealVectorArray cellFnUpperBounds
Storage array to hold cell max.
• RealVector cellBPA
Storage array to hold cell bpa.
• size_t respFnCntr
response function counter
• size_t cellCntr
cell counter
• size_t numCells
total number of interval combinations
Base class for interval-based methods within DAKOTA/UQ. The NonDInterval class implements the propagation
of epistemic uncertainty using either pure interval propagation or Dempster-Shafer theory of evidence. In the
latter approach, one assigns a set of basic probability assignments (BPA) to intervals defined for the uncertain
variables. Input interval combinations are calculated, along with their BPA. Currently, the response function is
evaluated at a set of sample points, then a response surface is constructed which is sampled extensively to find
the minimum and maximum within each input interval cell, corresponding to the belief and plausibility within
that cell, respectively. This data is then aggregated to calculate cumulative distribution functions for belief and
plausibility.
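The aggregation step at the end of that process can be sketched directly; the struct and function names below are illustrative, not the class's actual members:

```cpp
#include <vector>

// One input-interval cell: the min/max of the response over the cell
// and the cell's basic probability assignment (BPA).
struct Cell { double fn_min, fn_max, bpa; };

// Belief and plausibility that the response lies below a level z:
// belief sums the BPA of cells whose entire response interval falls
// below z; plausibility sums the BPA of cells that merely intersect
// the region below z.  Sweeping z yields the belief/plausibility CDFs.
void belief_plausibility(const std::vector<Cell>& cells, double z,
                         double& bel, double& pl)
{
    bel = pl = 0.0;
    for (const Cell& c : cells) {
        if (c.fn_max <= z) bel += c.bpa;   // cell certainly below z
        if (c.fn_min <= z) pl  += c.bpa;   // cell possibly below z
    }
}
```

By construction belief never exceeds plausibility, and the two curves bracket any probability distribution consistent with the interval evidence.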
performs an epistemic uncertainty propagation using Dempster-Shafer evidence theory methods which solve for
cumulative distribution functions of belief and plausibility print the cumulative distribution functions for belief
and plausibility
Reimplemented from Analyzer.
References NonDInterval::ccBelFn, NonDInterval::ccBelVal, NonDInterval::ccPlausFn, NonD-
Interval::ccPlausVal, NonD::cdfFlag, NonDInterval::cellBPA, NonDInterval::cellFnLowerBounds,
NonDInterval::cellFnUpperBounds, NonD::computedGenRelLevels, NonD::computedProbLevels,
NonD::computedRespLevels, NonD::finalStatistics, Response::function_values(), Iterator::iteratedModel,
NonDInterval::numCells, Iterator::numFunctions, NonD::requestedGenRelLevels, NonD::requestedProbLevels,
NonD::requestedRespLevels, NonD::respLevelTarget, Model::response_labels(), NonDInter-
val::singleIntervalFlag, and Dakota::write_precision.
The documentation for this class was generated from the following files:
• NonDInterval.hpp
• NonDInterval.cpp
Iterator
Analyzer
NonD
NonDInterval
NonDLHSInterval
NonDLHSEvidence
• ∼NonDLHSEvidence ()
destructor
• void initialize ()
perform any required initialization
• void post_process_samples ()
post-process the output from executing lhsSampler
Class for the Dempster-Shafer Evidence Theory methods within DAKOTA/UQ. The NonDEvidence class imple-
ments the propagation of epistemic uncertainty using Dempster-Shafer theory of evidence. In this approach, one
assigns a set of basic probability assignments (BPA) to intervals defined for the uncertain variables. Input interval
combinations are calculated, along with their BPA. Currently, the response function is evaluated at a set of sample
points, then a response surface is constructed which is sampled extensively to find the minimum and maximum
within each input interval cell, corresponding to the belief and plausibility within that cell, respectively. This data
is then aggregated to calculate cumulative distribution functions for belief and plausibility.
The documentation for this class was generated from the following files:
• NonDLHSEvidence.hpp
• NonDLHSEvidence.cpp
Iterator
Analyzer
NonD
NonDInterval
NonDLHSInterval
NonDLHSEvidence NonDLHSSingleInterval
• ∼NonDLHSInterval ()
destructor
• void quantify_uncertainty ()
performs an epistemic uncertainty propagation using LHS samples
Protected Attributes
• Iterator lhsSampler
the LHS sampler instance
• int numSamples
the number of samples used
• String rngName
name of the random number generator
Class for the LHS-based interval methods within DAKOTA/UQ. The NonDLHSInterval class implements the
propagation of epistemic uncertainty using LHS-based methods.
The documentation for this class was generated from the following files:
• NonDLHSInterval.hpp
• NonDLHSInterval.cpp
Iterator
Analyzer
NonD
NonDSampling
NonDLHSSampling
• NonDLHSSampling (Model &model, const String &sample_type, int samples, int seed, const String &rng,
bool vary_pattern=true, short sampling_vars_mode=ACTIVE)
alternate constructor for sample generation and evaluation "on the fly"
• NonDLHSSampling (const String &sample_type, int samples, int seed, const String &rng, const RealVector
&lower_bnds, const RealVector &upper_bnds)
alternate constructor for sample generation "on the fly"
• ∼NonDLHSSampling ()
destructor
• void post_input ()
read tabular data for post-run mode
• void quantify_uncertainty ()
perform the evaluate parameter sets portion of run
Private Attributes
• size_t numResponseFunctions
number of response functions; used to distinguish NonD from opt/NLS usage
• bool varBasedDecompFlag
flags computation of variance-based decomposition indices
Performs LHS and Monte Carlo sampling for uncertainty quantification. The Latin Hypercube Sampling (LHS)
package from Sandia Albuquerque’s Risk and Reliability organization provides comprehensive capabilities for
Monte Carlo and Latin Hypercube sampling within a broad array of user-specified probabilistic parameter distri-
butions. It enforces user-specified rank correlations through use of a mixing routine. The NonDLHSSampling
class provides a C++ wrapper for the LHS library and is used for performing forward propagations of parameter
uncertainties into response statistics.
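A minimal sketch of the basic Latin hypercube construction may clarify what the library provides (the real package also handles arbitrary marginal distributions and rank-correlation mixing, which this toy version omits):

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Minimal Latin hypercube sample on [0,1)^d: each dimension is cut into
// n equal strata, the strata are randomly permuted per dimension, and
// one point is drawn uniformly inside each selected stratum.
std::vector<std::vector<double>> lhs(int n, int d, unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::vector<std::vector<double>> pts(n, std::vector<double>(d));
    std::vector<int> strata(n);
    for (int j = 0; j < d; ++j) {
        std::iota(strata.begin(), strata.end(), 0);
        std::shuffle(strata.begin(), strata.end(), rng);
        for (int i = 0; i < n; ++i)
            pts[i][j] = (strata[i] + unif(rng)) / n;  // point in stratum
    }
    return pts;
}
```

Each one-dimensional margin then contains exactly one point per stratum, which is the stratification property that distinguishes LHS from plain Monte Carlo.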
standard constructor This constructor is called for a standard letter-envelope iterator instantiation. In this case,
set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification.
13.90.2.2 NonDLHSSampling (Model & model, const String & sample_type, int samples, int seed, const
String & rng, bool vary_pattern = true, short sampling_vars_mode = ACTIVE)
alternate constructor for sample generation and evaluation "on the fly" This alternate constructor is used for
generation and evaluation of Model-based sample sets. A set_db_list_nodes has not been performed so required
data must be passed through the constructor. Its purpose is to avoid the need for a separate LHS specification
within methods that use LHS sampling.
13.90.2.3 NonDLHSSampling (const String & sample_type, int samples, int seed, const String & rng,
const RealVector & lower_bnds, const RealVector & upper_bnds)
alternate constructor for sample generation "on the fly" This alternate constructor is used by ConcurrentStrategy
for generation of uniform, uncorrelated sample sets. It is _not_ a letter-envelope instantiation and a set_db_list_-
nodes has not been performed. It is called with all needed data passed through the constructor and is designed
to allow more flexibility in variables set definition (i.e., relax connection to a variables specification and allow
sampling over parameter sets such as multiobjective weights). In this case, a Model is not used and the object
must only be used for sample generation (no evaluation).
References NonDSampling::get_parameter_sets().
perform the evaluate parameter sets portion of run Loop over the set of samples and compute responses. Compute
statistics on the set of responses if statsFlag is set.
Implements NonD.
References NonDSampling::allDataFlag, Analyzer::evaluate_parameter_sets(), Iterator::iteratedModel, It-
erator::numContinuousVars, Iterator::numDiscreteIntVars, Iterator::numDiscreteRealVars, NonDLHSSam-
pling::numResponseFunctions, NonDSampling::numSamples, NonDSampling::statsFlag, NonDLHSSam-
pling::varBasedDecompFlag, and Analyzer::variance_based_decomp().
The documentation for this class was generated from the following files:
• NonDLHSSampling.hpp
• NonDLHSSampling.cpp
Class for pure interval propagation using LHS. Inheritance diagram for NonDLHSSingleInterval::
Iterator
Analyzer
NonD
NonDInterval
NonDLHSInterval
NonDLHSSingleInterval
• ∼NonDLHSSingleInterval ()
destructor
• void initialize ()
perform any required initialization
• void post_process_samples ()
post-process the output from executing lhsSampler
Private Attributes
• size_t statCntr
counter for finalStatistics
Class for pure interval propagation using LHS. The NonDSingleInterval class implements the propagation of
epistemic uncertainty using ...
The documentation for this class was generated from the following files:
• NonDLHSSingleInterval.hpp
• NonDLHSSingleInterval.cpp
Iterator
Analyzer
NonD
NonDInterval
NonDLocalInterval
NonDLocalEvidence
• ∼NonDLocalEvidence ()
destructor
• void set_cell_bounds ()
set the optimization variable bounds for each cell
• void post_process_response_fn_results ()
post-process the interval computed for a response function
• void post_process_final_results ()
perform final post-processing
Class for the Dempster-Shafer Evidence Theory methods within DAKOTA/UQ. The NonDEvidence class imple-
ments the propagation of epistemic uncertainty using Dempster-Shafer theory of evidence. In this approach, one
assigns a set of basic probability assignments (BPA) to intervals defined for the uncertain variables. Input interval
combinations are calculated, along with their BPA. Currently, the response function is evaluated at a set of sample
points, then a response surface is constructed which is sampled extensively to find the minimum and maximum
within each input interval cell, corresponding to the belief and plausibility within that cell, respectively. This data
is then aggregated to calculate cumulative distribution functions for belief and plausibility.
The documentation for this class was generated from the following files:
• NonDLocalEvidence.hpp
• NonDLocalEvidence.cpp
Iterator
Analyzer
NonD
NonDInterval
NonDLocalInterval
NonDLocalEvidence NonDLocalSingleInterval
• ∼NonDLocalInterval ()
destructor
• void quantify_uncertainty ()
Performs an optimization to determine interval bounds for an entire function or interval bounds on a particular
statistical estimator.
• void method_recourse ()
perform an MPP optimizer method switch due to a detected conflict
Protected Attributes
• Iterator minMaxOptimizer
local gradient-based optimizer
• Model minMaxModel
recast model which extracts the active objective function
• static void extract_objective (const Variables &sub_model_vars, const Variables &recast_vars, const Re-
sponse &sub_model_response, Response &recast_response)
static function used to extract the active objective function when optimizing for an interval lower or upper bound
Private Attributes
• bool npsolFlag
flag representing the gradient-based optimization algorithm selection (NPSOL SQP or OPT++ NIP)
Class for using local gradient-based optimization approaches to calculate interval bounds for epistemic uncertainty
quantification. The NonDLocalInterval class supports local gradient-based optimization approaches to determining
interval bounds for epistemic UQ. The interval bounds may be on the entire function in the case of pure
interval analysis (e.g. intervals on input = intervals on output), or the intervals may be on statistics of an "inner
loop" aleatory analysis such as intervals on means, variances, or percentile levels.
The documentation for this class was generated from the following files:
• NonDLocalInterval.hpp
• NonDLocalInterval.cpp
Iterator
Analyzer
NonD
NonDReliability
NonDLocalReliability
• ∼NonDLocalReliability ()
destructor
• void quantify_uncertainty ()
performs an uncertainty propagation using analytical reliability methods which solve constrained optimization
problems to obtain approximations of the cumulative distribution function of response
• void method_recourse ()
perform an MPP optimizer method switch due to a detected conflict
• void mean_value ()
convenience function for encapsulating the simple Mean Value computation of approximate statistics and importance factors
• void mpp_search ()
convenience function for encapsulating the reliability methods that employ a search for the most probable point
(AMV, AMV+, FORM, SORM)
• void initialize_class_data ()
convenience function for initializing class scope arrays
• void initialize_level_data ()
convenience function for initializing/warm starting MPP search data for each response function prior to level 0
• void initialize_mpp_search_data ()
convenience function for initializing/warm starting MPP search data for each z/p/beta level for each response
function
• void update_level_data ()
convenience function for updating z/p/beta level data and final statistics following MPP convergence
• void update_pma_maximize (const RealVector &mpp_u, const RealVector &fn_grad_u, const RealSymMatrix &fn_hess_u)
update pmaMaximizeG from prescribed probabilities or prescribed generalized reliabilities by inverting second-
order integrations
• void update_limit_state_surrogate ()
convenience function for passing the latest variables/response data to the data fit embedded within uSpaceModel
• void assign_mean_data ()
update mostProbPointX/U, computedRespLevel, fnGradX/U, and fnHessX/U from ranVarMeansX/U, fnValsMeanX,
fnGradsMeanX, and fnHessiansMeanX
• void dg_ds_eval (const RealVector &x_vars, const RealVector &fn_grad_x, RealVector &final_stat_grad)
convenience function for evaluating dg/ds
• Real signed_norm (const RealVector &mpp_u, const RealVector &fn_grad_u, bool cdf_flag)
convert norm of mpp_u (u-space solution) to a signed reliability index
• Real signed_norm (Real norm_mpp_u, const RealVector &mpp_u, const RealVector &fn_grad_u, bool
cdf_flag)
shared helper function
• Real probability (bool cdf_flag, const RealVector &mpp_u, const RealVector &fn_grad_u, const RealSymMatrix &fn_hess_u)
Convert computed reliability to probability using either a first-order or second-order integration.
• Real probability (Real beta, bool cdf_flag, const RealVector &mpp_u, const RealVector &fn_grad_u, const
RealSymMatrix &fn_hess_u)
Convert provided reliability to probability using either a first-order or second-order integration.
• Real reliability (Real p, bool cdf_flag, const RealVector &mpp_u, const RealVector &fn_grad_u, const
RealSymMatrix &fn_hess_u)
Convert probability to reliability using the inverse of a first-order or second-order integration.
• bool reliability_residual (const Real &p, const Real &beta, const RealVector &kappa, Real &res)
compute the residual for inversion of second-order probability corrections using Newton's method (called by reliability(p))
• Real reliability_residual_derivative (const Real &p, const Real &beta, const RealVector &kappa)
compute the residual derivative for inversion of second-order probability corrections using Newton’s method (called
by reliability(p))
• void principal_curvatures (const RealVector &mpp_u, const RealVector &fn_grad_u, const RealSymMatrix
&fn_hess_u, RealVector &kappa_u)
Compute the kappaU vector of principal curvatures from fnHessU.
• void scale_curvature (Real beta, bool cdf_flag, const RealVector &kappa, RealVector &scaled_kappa)
scale copy of principal curvatures by -1 if needed; else take a view
• static void RIA_constraint_eval (const Variables &sub_model_vars, const Variables &recast_vars, const
Response &sub_model_response, Response &recast_response)
static function used as the constraint function in the Reliability Index Approach (RIA) problem formulation. This
equality-constrained optimization problem performs the search for the most probable point (MPP) with the constraint of G(u) = response level.
• static void PMA_objective_eval (const Variables &sub_model_vars, const Variables &recast_vars, const
Response &sub_model_response, Response &recast_response)
static function used as the objective function in the Performance Measure Approach (PMA) problem formulation.
This equality-constrained optimization problem performs the search for the most probable point (MPP) with the
objective function of G(u).
• static void PMA_constraint_eval (const Variables &sub_model_vars, const Variables &recast_vars, const
Response &sub_model_response, Response &recast_response)
static function used as the constraint function in the first-order Performance Measure Approach (PMA) problem
formulation. This optimization problem performs the search for the most probable point (MPP) with the equality
constraint of (norm u)^2 = (beta-bar)^2.
• static void PMA2_constraint_eval (const Variables &sub_model_vars, const Variables &recast_vars, const
Response &sub_model_response, Response &recast_response)
static function used as the constraint function in the second-order Performance Measure Approach (PMA) problem
formulation. This optimization problem performs the search for the most probable point (MPP) with the equality
constraint of beta* = beta*-bar.
• static void PMA2_set_mapping (const Variables &recast_vars, const ActiveSet &recast_set, ActiveSet
&sub_model_set)
static function used to augment the sub-model ASV requests for second-order PMA
Private Attributes
• Real computedRespLevel
output response level calculated
• Real computedRelLevel
output reliability level calculated for RIA and 1st-order PMA
• Real computedGenRelLevel
output generalized reliability level calculated for 2nd-order PMA
• RealVector fnGradX
actual x-space gradient for current function from most recent response evaluation
• RealVector fnGradU
u-space gradient for current function updated from fnGradX and Jacobian dx/du
• RealSymMatrix fnHessX
actual x-space Hessian for current function from most recent response evaluation
• RealSymMatrix fnHessU
u-space Hessian for current function updated from fnHessX and Jacobian dx/du
• RealVector kappaU
principal curvatures derived from eigenvalues of orthonormal transformation of fnHessU
• RealVector fnValsMeanX
response function values evaluated at mean x
• RealMatrix fnGradsMeanX
response function gradients evaluated at mean x
• RealSymMatrixArray fnHessiansMeanX
response function Hessians evaluated at mean x
• RealVector ranVarMeansU
vector of means for all uncertain random variables in u-space
• RealVector initialPtU
initial guess for MPP search in u-space
• RealVector mostProbPointX
location of MPP in x-space
• RealVector mostProbPointU
location of MPP in u-space
• RealVectorArray prevMPPULev0
array of converged MPPs in u-space for level 0. Used for warm-starting initialPtU within RBDO.
• RealMatrix prevFnGradDLev0
matrix of limit state sensitivities w.r.t. inactive/design variables for level 0. Used for warm-starting initialPtU
within RBDO.
• RealMatrix prevFnGradULev0
matrix of limit state sensitivities w.r.t. active/uncertain variables for level 0. Used for warm-starting initialPtU
within RBDO.
• RealVector prevICVars
previous design vector. Used for warm-starting initialPtU within RBDO.
• ShortArray prevCumASVLev0
accumulation (using |=) of all previous design ASVs from requested finalStatistics. Used to detect availability of prevFnGradDLev0 data for warm-starting initialPtU within RBDO.
• bool npsolFlag
flag representing the optimization MPP search algorithm selection (NPSOL SQP or OPT++ NIP)
• bool warmStartFlag
flag indicating the use of warm starts
• bool nipModeOverrideFlag
flag indicating the use of mode overrides within OPT++ NIP
• bool curvatureDataAvailable
flag indicating that sufficient data (i.e., fnGradU, fnHessU, mostProbPointU) is available for computing principal
curvatures
• bool kappaUpdated
track when kappaU requires updating via principal_curvatures()
• short integrationOrder
integration order (1 or 2) provided by integration specification
• short secondOrderIntType
type of second-order integration: Breitung, Hohenbichler-Rackwitz, or Hong
• Real curvatureThresh
cut-off value for 1/sqrt() term in second-order probability corrections.
• short taylorOrder
order of Taylor series approximations (1 or 2) in MV/AMV/AMV+ derived from hessianType
• RealMatrix impFactor
importance factors predicted by MV
• int npsolDerivLevel
derivative level for NPSOL executions (1 = analytic grads of objective fn, 2 = analytic grads of constraints, 3 =
analytic grads of both).
Class for the reliability methods within DAKOTA/UQ. The NonDLocalReliability class implements the following
reliability methods through the support of different limit state approximation and integration options: mean value
(MVFOSM/MVSOSM), advanced mean value method (AMV, AMV^2) in x- or u-space, iterated advanced mean value method (AMV+, AMV^2+) in x- or u-space, two-point adaptive nonlinearity approximation (TANA) in
x- or u-space, first order reliability method (FORM), and second order reliability method (SORM). All options
except mean value employ an optimizer (currently NPSOL SQP or OPT++ NIP) to solve an equality-constrained
optimization problem for the most probable point (MPP). The MPP search may be formulated as the reliability
index approach (RIA) for mapping response levels to reliabilities/probabilities or as the performance measure
approach (PMA) for performing the inverse mapping of reliability/probability levels to response levels.
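For a linear limit state in u-space, the RIA machinery described above reduces to closed form, which makes the reliability index and first-order probability mapping concrete. The sketch below is illustrative only (the names are not Dakota API; NonDLocalReliability handles the general nonlinear case with the SQP/NIP optimizers):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative RIA mapping for a *linear* limit state G(u) = a.u + b in
// standard-normal u-space, where the MPP is known in closed form:
//   u* = -b a / ||a||^2,  beta = b / ||a||,  p = Phi(-beta).
double std_normal_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

double form_probability_linear(const std::vector<double>& a, double b)
{
    double norm_a = 0.0;
    for (double ai : a) norm_a += ai * ai;
    norm_a = std::sqrt(norm_a);
    double beta = b / norm_a;       // signed reliability index, ||u*|| up to sign
    return std_normal_cdf(-beta);   // first-order (FORM) failure probability
}
```

For example, G(u) = u1 + u2 + 2 gives beta = 2/sqrt(2) = sqrt(2) and a first-order failure probability of about 0.0786.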
13.94.2.1 void RIA_objective_eval (const Variables & sub_model_vars, const Variables & recast_vars,
const Response & sub_model_response, Response & recast_response) [static, private]
static function used as the objective function in the Reliability Index Approach (RIA) problem formulation. This
equality-constrained optimization problem performs the search for the most probable point (MPP) with the objective function of (norm u)^2. This function recasts a G(u) response set (already transformed and approximated in
other recursions) into an RIA objective function.
References Response::active_set_request_vector(), Variables::continuous_variables(), Response::function_gradient_view(), Response::function_hessian_view(), and Response::function_value().
Referenced by NonDLocalReliability::mpp_search().
13.94.2.2 void RIA_constraint_eval (const Variables & sub_model_vars, const Variables & recast_vars,
const Response & sub_model_response, Response & recast_response) [static, private]
static function used as the constraint function in the Reliability Index Approach (RIA) problem formulation.
This equality-constrained optimization problem performs the search for the most probable point (MPP) with
the constraint of G(u) = response level. This function recasts a G(u) response set (already transformed and
approximated in other recursions) into an RIA equality constraint.
References Response::active_set_request_vector(), Response::function_gradient(), Response::function_gradient_view(), Response::function_hessian(), Response::function_value(), NonDLocalReliability::nondLocRelInstance, NonDReliability::requestedTargetLevel, and NonDReliability::respFnCount.
Referenced by NonDLocalReliability::mpp_search().
13.94.2.3 void PMA_objective_eval (const Variables & sub_model_vars, const Variables & recast_vars,
const Response & sub_model_response, Response & recast_response) [static, private]
static function used as the objective function in the Performance Measure Approach (PMA) problem formulation.
This equality-constrained optimization problem performs the search for the most probable point (MPP) with the
objective function of G(u). This function recasts a G(u) response set (already transformed and approximated in
other recursions) into a PMA objective function.
13.94.2.4 void PMA_constraint_eval (const Variables & sub_model_vars, const Variables & recast_vars,
const Response & sub_model_response, Response & recast_response) [static, private]
static function used as the constraint function in the first-order Performance Measure Approach (PMA) problem
formulation. This optimization problem performs the search for the most probable point (MPP) with the equality
constraint of (norm u)^2 = (beta-bar)^2. This function recasts a G(u) response set (already transformed and
approximated in other recursions) into a first-order PMA equality constraint on reliability index beta.
References Response::active_set_request_vector(), Variables::continuous_variables(), Response::function_gradient_view(), Response::function_hessian_view(), Response::function_value(), NonDLocalReliability::nondLocRelInstance, and NonDReliability::requestedTargetLevel.
Referenced by NonDLocalReliability::mpp_search().
13.94.2.5 void PMA2_constraint_eval (const Variables & sub_model_vars, const Variables & recast_vars,
const Response & sub_model_response, Response & recast_response) [static, private]
static function used as the constraint function in the second-order Performance Measure Approach (PMA) problem
formulation. This optimization problem performs the search for the most probable point (MPP) with the equality
constraint of beta* = beta*-bar. This function recasts a G(u) response set (already transformed and approximated
in other recursions) into a second-order PMA equality constraint on generalized reliability index beta-star.
References Dakota::abort_handler(), Response::active_set_request_vector(), NonD::cdfFlag, NonDLocalReliability::computedGenRelLevel, NonDLocalReliability::computedRelLevel, Variables::continuous_variables(), NonDLocalReliability::dp2_dbeta_factor(), NonDLocalReliability::fnGradU, NonDLocalReliability::fnHessU, Response::function_gradient_view(), Response::function_hessian(), Response::function_value(), NonDLocalReliability::mostProbPointU, NonDReliability::mppSearchType, NonDLocalReliability::nondLocRelInstance, NonDLocalReliability::probability(), NonDLocalReliability::reliability(), NonDReliability::requestedTargetLevel, NonDReliability::respFnCount, NonDLocalReliability::signed_norm(), and Dakota::write_data().
Referenced by NonDLocalReliability::mpp_search().
convenience function for performing the initial limit state Taylor-series approximation. An initial first- or second-order Taylor-series approximation is required for MV/AMV/AMV+/TANA, or for the case where momentStats (from MV) are required within finalStatistics for subIterator usage of NonDLocalReliability.
References Response::active_set_request_vector(), Iterator::activeSet, Model::component_parallel_mode(),
Model::compute_response(), Model::continuous_variables(), Model::current_response(), NonD::finalStatistics,
convenience function for initializing class scope arrays. Initialize class-scope arrays and perform other start-up activities, such as evaluating median limit state responses.
References Response::active_set_derivative_vector(), NonD::finalStatistics, NonDReliability::importanceSampler, NonD::initialize_random_variables(), NonDReliability::integrationRefinement, Iterator::iterator_rep(), NonDReliability::mppModel, NonD::natafTransform, Iterator::numFunctions, NonDReliability::numRelAnalyses, NonD::numUncertainVars, NonDLocalReliability::prevCumASVLev0, NonDLocalReliability::prevFnGradDLev0, NonDLocalReliability::prevFnGradULev0, NonDLocalReliability::prevMPPULev0, NonDLocalReliability::ranVarMeansU, Iterator::subIteratorFlag, Model::update_from_subordinate_model(), and NonDLocalReliability::warmStartFlag.
Referenced by NonDLocalReliability::mpp_search().
convenience function for initializing/warm starting MPP search data for each response function prior to level 0. For a particular response function prior to the first z/p/beta level, initialize/warm-start the optimizer initial guess (initialPtU), expansion point (mostProbPointX/U), and associated response data (computedRespLevel, fnGradX/U, and fnHessX/U).
References Iterator::activeSet, NonDLocalReliability::assign_mean_data(), Model::component_parallel_mode(), Model::compute_response(), NonDLocalReliability::computedRespLevel, Model::continuous_variable_ids(), Model::continuous_variables(), Dakota::copy_data(), Model::current_response(), NonDLocalReliability::curvatureDataAvailable, NonDLocalReliability::fnGradU, NonDLocalReliability::fnGradX, NonDLocalReliability::fnHessU, NonDLocalReliability::fnHessX, Response::function_gradient_copy(), Response::function_hessian(), Response::function_value(), Model::inactive_continuous_variables(), NonDLocalReliability::initialPtU, Iterator::iteratedModel, NonDLocalReliability::kappaUpdated, NonDLocalReliability::mostProbPointU, NonDLocalReliability::mostProbPointX, NonDReliability::mppSearchType, NonD::natafTransform, NonDReliability::numRelAnalyses, NonD::numUncertainVars, NonDLocalReliability::prevCumASVLev0, NonDLocalReliability::prevFnGradDLev0, NonDLocalReliability::prevFnGradULev0, NonDLocalReliability::prevICVars, NonDLocalReliability::prevMPPULev0, ActiveSet::request_value(), ActiveSet::request_values(), NonD::requestedRespLevels, NonDReliability::respFnCount, Iterator::subIteratorFlag, Model::surrogate_function_indices(), NonDLocalReliability::taylorOrder, NonDLocalReliability::update_limit_state_surrogate(), NonDReliability::uSpaceModel, and NonDLocalReliability::warmStartFlag.
Referenced by NonDLocalReliability::mpp_search().
convenience function for initializing/warm starting MPP search data for each z/p/beta level for each response function. For a particular response function at a particular z/p/beta level, warm-start or reset the optimizer initial guess (initialPtU), expansion point (mostProbPointX/U), and associated response data (computedRespLevel, fnGradX/U, and fnHessX/U).
References NonDLocalReliability::assign_mean_data(), NonD::computedGenRelLevels, NonD::computedRelLevels, NonDLocalReliability::fnGradU, Iterator::hessianType, NonDLocalReliability::initialPtU, NonDLocalReliability::integrationOrder, NonDReliability::levelCount, NonDLocalReliability::mostProbPointU, NonDReliability::mppSearchType, NonD::numUncertainVars, NonD::requestedProbLevels, NonD::requestedRelLevels, NonD::requestedRespLevels, NonDReliability::requestedTargetLevel, NonDReliability::respFnCount, NonDLocalReliability::taylorOrder, and NonDLocalReliability::warmStartFlag.
Referenced by NonDLocalReliability::mpp_search().
13.94.2.10 void update_mpp_search_data (const Variables & vars_star, const Response & resp_star)
[private]
convenience function for updating MPP search data for each z/p/beta level for each response function. Includes case-specific logic for updating MPP search data for the AMV/AMV+/TANA/NO_APPROX methods.
References Response::active_set(), Response::active_set_request_vector(), Iterator::activeSet, NonDReliability::approxConverged, NonDReliability::approxIters, Model::component_parallel_mode(), Model::compute_response(), NonDLocalReliability::computedRelLevel, NonDLocalReliability::computedRespLevel, Model::continuous_variable_ids(), Model::continuous_variables(), Variables::continuous_variables(), Iterator::convergenceTol, Variables::copy(), Dakota::copy_data(), Model::current_response(), Model::current_variables(), NonDLocalReliability::curvatureDataAvailable, Dakota::data_pairs, NonD::finalStatistics, NonDLocalReliability::fnGradU, NonDLocalReliability::fnGradX, NonDLocalReliability::fnHessU, NonDLocalReliability::fnHessX, Response::function_gradient_copy(), Response::function_hessian(), Response::function_value(), Response::function_values(), NonDLocalReliability::initialPtU, NonDLocalReliability::integrationOrder, Model::interface_id(), Iterator::iteratedModel, NonDLocalReliability::kappaUpdated, NonDReliability::levelCount, Dakota::lookup_by_val(), Iterator::maxIterations, NonDLocalReliability::mostProbPointU, NonDLocalReliability::mostProbPointX, NonDReliability::mppSearchType, NonD::natafTransform, Iterator::numFunctions, NonD::numNormalVars, NonD::numUncertainVars, NonDReliability::pmaMaximizeG, ActiveSet::request_value(), ActiveSet::request_values(), ActiveSet::request_vector(), NonD::requestedProbLevels, NonD::requestedRelLevels, NonD::requestedRespLevels, NonDReliability::requestedTargetLevel, NonDReliability::respFnCount, NonDLocalReliability::signed_norm(), NonDReliability::statCount, NonDLocalReliability::taylorOrder, NonDLocalReliability::update_limit_state_surrogate(), NonDLocalReliability::update_pma_maximize(), NonDReliability::uSpaceModel, NonDLocalReliability::warmStartFlag, and NonDLocalReliability::warningBits.
Referenced by NonDLocalReliability::mpp_search().
convenience function for updating z/p/beta level data and final statistics following MPP convergence. Updates computedRespLevels/computedProbLevels/computedRelLevels, finalStatistics, warm start, and graphics data.
References Response::active_set_derivative_vector(), Response::active_set_request_vector(), Graphics::add_-
13.94.2.12 void dg_ds_eval (const RealVector & x_vars, const RealVector & fn_grad_x, RealVector &
final_stat_grad) [private]
convenience function for evaluating dg/ds. Computes dg/ds, where s = design variables. Supports potentially overlapping cases of design variable augmentation and insertion.
References Response::active_set_derivative_vector(), Iterator::activeSet, Model::all_continuous_variable_ids(), Model::component_parallel_mode(), Model::compute_response(), Dakota::contains(), Model::continuous_variable_ids(), Model::continuous_variables(), Dakota::copy_data(), Model::current_response(), ActiveSet::derivative_vector(), NonD::finalStatistics, Response::function_gradient_copy(), Response::function_gradients(), Model::inactive_continuous_variable_ids(), Iterator::iteratedModel, NonDReliability::mppSearchType, NonD::natafTransform, Iterator::primaryACVarMapIndices, ActiveSet::request_value(), ActiveSet::request_values(), NonDReliability::respFnCount, Iterator::secondaryACVarMapTargets, and NonDReliability::uSpaceModel.
Referenced by NonDLocalReliability::mean_value(), NonDLocalReliability::mpp_search(), and
NonDLocalReliability::update_level_data().
compute factor for derivative of second-order probability with respect to reliability index (from differentiating BREITUNG or HOHENRACK expressions). Computes the sensitivity of second-order probability w.r.t. beta for use in derivatives of p_2 or beta* w.r.t. auxiliary parameters s (design, epistemic), or in derivatives of beta* w.r.t. u in PMA2_constraint_eval().
References Dakota::abort_handler(), NonDLocalReliability::curvatureDataAvailable, NonDLocalReliability::curvatureThresh, NonDLocalReliability::kappaU, NonD::numUncertainVars, NonDLocalReliability::probability(), NonDLocalReliability::scale_curvature(), NonDLocalReliability::secondOrderIntType, and NonDLocalReliability::warningBits.
Referenced by NonDLocalReliability::PMA2_constraint_eval(), and NonDLocalReliability::update_level_data().
13.94.2.14 Real probability (Real beta, bool cdf_flag, const RealVector & mpp_u, const RealVector &
fn_grad_u, const RealSymMatrix & fn_hess_u) [private]
Convert provided reliability to probability using either a first-order or second-order integration. Converts beta into
a probability using either first-order (FORM) or second-order (SORM) integration. The SORM calculation first
calculates the principal curvatures at the MPP (using the approach in Ch. 8 of Haldar & Mahadevan), and then
applies correction formulations from the literature (Breitung, Hohenbichler-Rackwitz, or Hong).
References NonDLocalReliability::curvatureDataAvailable, NonDLocalReliability::curvatureThresh, NonDAdaptImpSampling::get_probability(), NonDReliability::importanceSampler, NonDLocalReliability::integrationOrder, NonDReliability::integrationRefinement, Iterator::iterator_rep(), NonDLocalReliability::kappaU, NonDLocalReliability::kappaUpdated, NonD::numUncertainVars, Iterator::outputLevel, NonDLocalReliability::principal_curvatures(), NonDLocalReliability::probability(), NonDReliability::requestedTargetLevel, NonDReliability::respFnCount, Iterator::run_iterator(), NonDLocalReliability::scale_curvature(), NonDLocalReliability::secondOrderIntType, NonDLocalReliability::warningBits, Dakota::write_data(), and Dakota::write_precision.
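The Breitung variant of this second-order correction can be sketched directly: p2 = Phi(-beta) * prod_i (1 + beta*kappa_i)^(-1/2) over the principal curvatures, with a cut-off guarding the 1/sqrt() term (cf. curvatureThresh). The code below is an illustration under those assumptions, not the Dakota implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative Breitung second-order (SORM) correction: start from the
// first-order probability Phi(-beta), then divide by sqrt(1 + beta*kappa_i)
// for each principal curvature kappa_i at the MPP, guarding the singularity.
double std_normal_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

double breitung_probability(double beta, const std::vector<double>& kappa,
                            double curvature_thresh = 1.0e-10)
{
    double p = std_normal_cdf(-beta);       // first-order (FORM) result
    for (double k : kappa) {
        double term = 1.0 + beta * k;
        if (term > curvature_thresh)        // guard the 1/sqrt() singularity
            p /= std::sqrt(term);
    }
    return p;
}
```

With zero curvature the correction leaves the FORM probability unchanged; positive curvature (limit state curving away from the origin) reduces the estimated failure probability.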
The documentation for this class was generated from the following files:
• NonDLocalReliability.hpp
• NonDLocalReliability.cpp
Iterator
Analyzer
NonD
NonDInterval
NonDLocalInterval
NonDLocalSingleInterval
• ~NonDLocalSingleInterval ()
destructor
• void initialize ()
perform any required initialization
Private Attributes
• size_t statCntr
counter for finalStatistics
Class for using local gradient-based optimization approaches to calculate interval bounds for epistemic uncertainty quantification. The NonDLocalSingleInterval class supports local gradient-based optimization approaches to determining interval bounds for epistemic UQ. The interval bounds may be on the entire function in the case of pure interval analysis (e.g., intervals on input = intervals on output), or the intervals may be on statistics of an "inner loop" aleatory analysis, such as intervals on means, variances, or percentile levels.
The documentation for this class was generated from the following files:
• NonDLocalSingleInterval.hpp
• NonDLocalSingleInterval.cpp
Iterator
Analyzer
NonD
NonDPOFDarts
• ~NonDPOFDarts ()
destructor
• void quantify_uncertainty ()
perform POFDart analysis and return probability of failure
• double generate_a_random_number ()
• void init_pof_darts ()
• void exit_pof_darts ()
• void execute (size_t kd)
• void print_POF_results (double lower, double upper)
• void print_results (std::ostream &s)
print the final statistics
• void shrink_big_spheres ()
• void assign_sphere_radius_POF (double *x, size_t isample)
• void assign_sphere_radius_OPT (double *x, size_t isample)
• void resolve_overlap_POF (size_t ksample)
• void compute_response (double *x)
• void compute_response_for_FD_gradients (double *x)
• double get_dart_radius (double f, double fgrad, double fcurv)
• void retrieve_POF_bounds (double &lower, double &upper)
• double estimate_spheres_volume_0d (double **spheres, size_t num_spheres, size_t num_dim, double *xmin, double *xmax)
• double get_sphere_volume (double r, size_t num_dim)
• void sample_uniformly_from_unit_sphere (double *dart, size_t num_dim)
• double area_triangle (double x1, double y1, double x2, double y2, double x3, double y3)
• double f_true (double *x)
• void plot_vertices_2d ()
• void plot_vertices_2d ()
Protected Attributes
• int samples
• int seed
• double Q [1220]
• int indx
• double cc
• double c
• double zc
• double zx
• double zy
• size_t qlen
• size_t _n_dim
• double * _xmin
• double * _xmax
• bool _global_optimization
• double _failure_threshold
• double _global_minima
• double _num_darts
• double _num_successive_misses_p
• double _num_successive_misses_m
• double _max_num_successive_misses
• double _max_radius
• double _accepted_void_ratio
• size_t _num_inserted_points
• size_t _total_budget
• double ** _sample_points
• double * _dart
• double * _grad_vec
• size_t _flat_dim
• size_t * _line_flat
• size_t _num_flat_segments
• double * _line_flat_start
• double * _line_flat_end
• double * _line_flat_length
• double ** _fval
• size_t _ieval
• size_t _active_response_function
• double _dx
• size_t _num_sample_eval
• size_t _num_sample_eval
Base class for POF Dart methods within DAKOTA/UQ. The NonDPOFDarts class implements the calculation of a failure probability for a specified threshold of a specified response function, using the concepts developed by Mohamed Ebeida. The approach works by throwing down a number of Poisson disk samples of varying radii and identifying each disk as lying in either the failure or the safe region. The center of each disk represents a "true" function evaluation. kd-darts are used to place additional points in such a way as to target the failure region. When the disks cover the space sufficiently, either Monte Carlo methods or a box-volume approach is used to calculate both the lower and upper bounds on the failure probability.
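The bound logic described above can be illustrated with a minimal sketch (the names and data layout are illustrative, not the NonDPOFDarts API): a Monte Carlo point covered only by failure disks is provably in the failure region and raises the lower bound, while a point not covered by any safe disk might fail and raises the upper bound:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Illustrative sketch of disk-based failure-probability bounds. Each disk
// center is a true evaluation labeled failure or safe; the disk radius is
// a region over which that label is trusted.
struct Disk { std::vector<double> c; double r; bool fail; };

std::pair<double, double>
pof_bounds(const std::vector<Disk>& disks,
           const std::vector<std::vector<double>>& samples)
{
    std::size_t lower = 0, upper = 0;
    for (const auto& x : samples) {
        bool in_fail = false, in_safe = false;
        for (const auto& d : disks) {
            double dist2 = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i)
                dist2 += (x[i] - d.c[i]) * (x[i] - d.c[i]);
            if (dist2 <= d.r * d.r) (d.fail ? in_fail : in_safe) = true;
        }
        if (in_fail && !in_safe) ++lower;   // certainly in the failure region
        if (!in_safe) ++upper;              // not excluded by any safe disk
    }
    double n = static_cast<double>(samples.size());
    return { lower / n, upper / n };
}
```

As the disks cover more of the domain, the gap between the two bounds shrinks toward the true failure probability.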
The documentation for this class was generated from the following files:
• NonDPOFDarts.hpp
• NonDPOFDarts.cpp
Iterator
Analyzer
NonD
NonDExpansion
NonDPolynomialChaos
• ~NonDPolynomialChaos ()
destructor
• void initialize_u_space_model ()
initialize uSpaceModel polynomial approximations with PCE/SC data
• void increment_specification_sequence ()
increment the input specification sequence (PCE only)
• void compute_expansion ()
form or import an orthogonal polynomial expansion using PCE methods
• void increment_order ()
uniformly increment the order of the polynomial chaos expansion
• void archive_coefficients ()
archive the PCE coefficient array for the orthogonal basis
• void order_to_dim_preference (const UShortArray &order, unsigned short &p, RealVector &dim_pref)
convert an isotropic/anisotropic expansion_order vector into a scalar plus a dimension preference vector
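The order_to_dim_preference conversion can be made concrete with a small sketch. The normalization below (scalar order = maximum entry, per-dimension preference = entry divided by that maximum) is one plausible convention and may differ from Dakota's exact definition; the function name is illustrative, not the Dakota API:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative conversion of an anisotropic expansion_order vector into a
// scalar order plus a dimension preference vector, under the assumption
// that preferences are expressed relative to the maximum order.
void order_to_dim_pref(const std::vector<unsigned short>& order,
                       unsigned short& p, std::vector<double>& dim_pref)
{
    p = 0;
    for (unsigned short o : order) p = std::max(p, o);
    dim_pref.assign(order.size(), 1.0);
    for (std::size_t i = 0; i < order.size(); ++i)
        dim_pref[i] = (p > 0) ? static_cast<double>(order[i]) / p : 1.0;
}
```

An isotropic input (all entries equal) collapses to a uniform preference vector, recovering the scalar-order special case.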
Private Attributes
• String expansionImportFile
filename for import of chaos coefficients
• Real collocRatio
factor applied to terms^termsOrder in computing number of regression points, either user specified or inferred
• Real termsOrder
exponent applied to number of expansion terms for computing number of regression points
• bool tensorRegression
option for regression PCE using a filtered set of tensor-product points
• bool crossValidation
flag for use of cross-validation for selection of parameter settings in regression approaches
• RealVector noiseTols
noise tolerance for compressive sensing algorithms; vector form used in cross-validation
• Real l2Penalty
L2 penalty for LASSO algorithm (elastic net variant).
• UShortArray expOrderSeqSpec
user specification for expansion_order (array for multifidelity)
• RealVector dimPrefSpec
user specification for dimension_preference
• SizetArray collocPtsSeqSpec
user specification for collocation_points (array for multifidelity)
• SizetArray expSamplesSeqSpec
user specification for expansion_samples (array for multifidelity)
• size_t sequenceIndex
sequence index for {expOrder,collocPts,expSamples}SeqSpec
• RealMatrix pceGradsMeanX
derivative of the PCE with respect to the x-space variables evaluated at the means (used as uncertainty importance
metrics)
• bool normalizedCoeffOutput
user request for use of normalization when outputting PCE coefficients
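The interplay of collocRatio and termsOrder described above can be sketched as follows. Assuming a total-order basis with C(n+p, p) terms (the helper names are illustrative, not Dakota API), the number of regression points is collocRatio times the term count raised to termsOrder:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// Illustrative sketch: a total-order expansion of order p in n dimensions
// has C(n+p, p) basis terms; the regression point count is then
// colloc_ratio * terms^terms_order.
std::size_t pce_terms(std::size_t n, std::size_t p)
{
    std::size_t t = 1;                       // running product equals C(n+i, i)
    for (std::size_t i = 1; i <= p; ++i) t = t * (n + i) / i;
    return t;
}

std::size_t regression_points(std::size_t n, std::size_t p,
                              double colloc_ratio, double terms_order)
{
    double terms = static_cast<double>(pce_terms(n, p));
    return static_cast<std::size_t>(
        std::ceil(colloc_ratio * std::pow(terms, terms_order)));
}
```

For example, a second-order expansion in two variables has 6 terms, so a collocation ratio of 2 with a terms order of 1 requests 12 regression points.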
standard constructor. This constructor is called for a standard letter-envelope iterator instantiation using the ProblemDescDB.
References Dakota::abort_handler(), NonDIntegration::anisotropic_order_to_dimension_preference(), Model::assign_rep(), NonDPolynomialChaos::collocPtsSeqSpec, NonDPolynomialChaos::collocRatio, NonDExpansion::construct_cubature(), NonDExpansion::construct_expansion_sampler(), NonD::construct_lhs(), NonDExpansion::construct_quadrature(), NonDExpansion::construct_sparse_grid(), Model::derivative_concurrency(), NonDIntegration::dimension_preference_to_anisotropic_order(), NonDPolynomialChaos::dimPrefSpec, NonDExpansion::expansionCoeffsApproach, NonDPolynomialChaos::expansionImportFile, NonDPolynomialChaos::expOrderSeqSpec, NonDPolynomialChaos::expSamplesSeqSpec, ProblemDescDB::get_bool(), ProblemDescDB::get_int(), ProblemDescDB::get_real(), ProblemDescDB::get_short(), ProblemDescDB::get_string(),
alternate constructor. This constructor is used for helper iterator instantiation on the fly.
References Model::assign_rep(), NonDExpansion::construct_cubature(), NonDExpansion::construct_quadrature(), NonDExpansion::construct_sparse_grid(), NonDExpansion::expansionCoeffsApproach, NonDExpansion::initialize(), NonDPolynomialChaos::initialize_u_space_model(), Iterator::iteratedModel, NonD::numContDesVars, NonD::numContEpistUncVars, NonD::numContStateVars, Iterator::outputLevel, NonDPolynomialChaos::resolve_inputs(), NonD::transform_model(), and NonDExpansion::uSpaceModel.
increment the input specification sequence (PCE only). The default implementation is overridden by PCE.
Reimplemented from NonDExpansion.
References Model::approximations(), NonDPolynomialChaos::collocPtsSeqSpec, NonDPolynomialChaos::collocRatio, NonDIntegration::dimension_preference_to_anisotropic_order(), NonDPolynomialChaos::dimPrefSpec, PecosApproximation::expansion_order(), NonDExpansion::expansionCoeffsApproach, NonDPolynomialChaos::expOrderSeqSpec, NonDPolynomialChaos::expSamplesSeqSpec, Iterator::iterator_rep(), NonDQuadrature::mode(), Model::model_rep(), Iterator::numContinuousVars, Iterator::numFunctions, NonDExpansion::numSamplesOnModel, NonDQuadrature::quadrature_order(), NonDQuadrature::samples(), NonDSampling::sampling_reference(), NonDPolynomialChaos::sequenceIndex, Model::subordinate_iterator(), NonDPolynomialChaos::tensorRegression, NonDPolynomialChaos::terms_ratio_to_samples(), NonDPolynomialChaos::termsOrder, DataFitSurrModel::total_points(), NonDQuadrature::update(), PecosApproximation::update_order(), and NonDExpansion::uSpaceModel.
uniformly increment the order of the polynomial chaos expansion. Used for uniform refinement of regression-based PCE.
Reimplemented from NonDExpansion.
References Model::approximations(), NonDPolynomialChaos::collocRatio, PecosApproximation::expansion_-
terms(), NonDQuadrature::increment_grid(), PecosApproximation::increment_order(), Iterator::iterator_-
rep(), NonDQuadrature::mode(), Model::model_rep(), Iterator::numFunctions, NonDExpan-
• NonDPolynomialChaos.hpp
• NonDPolynomialChaos.cpp
Iterator
Analyzer
NonD
NonDIntegration
NonDQuadrature
• NonDQuadrature (Model &model, int num_rand_samples, int seed, const UShortArray &quad_order_seq,
const RealVector &dim_pref)
alternate constructor for instantiations "on the fly" that sample randomly from a tensor product multi-index
• void increment_grid ()
increment SSG level/TPQ order
• void update ()
propagate any numSamples updates and/or grid updates/increments
• ∼NonDQuadrature ()
destructor
• void reset ()
restore initial state for repeated sub-iterator executions
• void increment_specification_sequence ()
increment sequenceIndex and update active orders/levels
• void filter_parameter_sets ()
prune allSamples back to size numSamples, retaining points with highest product weight
Private Attributes
• Pecos::TensorProductDriver ∗ tpqDriver
convenience pointer to the numIntDriver representation
• bool nestedRules
for studies involving refinement strategies, allow for use of nested quadrature rules such as Gauss-Patterson
• UShortArray quadOrderSeqSpec
a sequence of scalar quadrature orders, one per refinement level
• UShortArray dimQuadOrderRef
reference point for Pecos::TensorProductDriver::quadOrder: the original user specification for the number of
Gauss points per dimension, plus any refinements posted by increment_grid()
• short quadMode
point generation mode: FULL_TENSOR, FILTERED_TENSOR, RANDOM_TENSOR
• size_t numSamples
size of a subset of tensor quadrature points (filtered based on product weight or sampled uniformly from the tensor
multi-index); used by the regression PCE approach known as "probabilistic collocation"
• int randomSeed
seed for the random number generator used in sampling of the tensor multi-index
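The FILTERED_TENSOR behavior described above (prune back to numSamples points, retaining those with the highest product weight) can be sketched as follows. This is an illustrative stand-in, not the Pecos driver API; the function name and weight representation are assumptions:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Return the indices of the num_samples tensor-product points with the
// largest product weights, mimicking filter_parameter_sets() pruning
// allSamples down to size numSamples.
std::vector<std::size_t>
filter_by_weight(const std::vector<double>& product_weights,
                 std::size_t num_samples) {
  std::vector<std::size_t> idx(product_weights.size());
  for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
  // Partial sort so the highest-weight points come first.
  std::partial_sort(idx.begin(), idx.begin() + num_samples, idx.end(),
                    [&](std::size_t a, std::size_t b)
                    { return product_weights[a] > product_weights[b]; });
  idx.resize(num_samples);
  return idx;
}
```

A partial sort suffices here because only the retained subset needs ordering, which matters when the full tensor grid is much larger than numSamples.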
Derived nondeterministic class that generates N-dimensional numerical quadrature points for evaluation of ex-
pectation integrals over uncorrelated standard normals/uniforms/exponentials/betas/gammas. This class is used
by NonDPolynomialChaos, but could also be used for general numerical integration of moments. It employs
13.98.2.1 NonDQuadrature (Model & model, const UShortArray & quad_order_seq, const RealVector
& dim_pref)
alternate constructor for instantiations "on the fly" based on a quadrature order specification. This alternate constructor is used for on-the-fly generation and evaluation of numerical quadrature points.
References NonDIntegration::numIntDriver, and NonDQuadrature::tpqDriver.
13.98.2.2 NonDQuadrature (Model & model, int num_filt_samples, const RealVector & dim_pref)
alternate constructor for instantiations "on the fly" that generate a filtered tensor product sample set. This alternate constructor is used for on-the-fly generation and evaluation of filtered tensor quadrature points.
References NonDIntegration::numIntDriver, and NonDQuadrature::tpqDriver.
13.98.2.3 NonDQuadrature (Model & model, int num_rand_samples, int seed, const UShortArray &
quad_order_seq, const RealVector & dim_pref)
alternate constructor for instantiations "on the fly" that sample randomly from a tensor product multi-index. This alternate constructor is used for on-the-fly generation and evaluation of random samples drawn from a tensor quadrature multi-index.
References NonDIntegration::numIntDriver, and NonDQuadrature::tpqDriver.
constructor. This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification. It is not currently used, as there is not yet a separate nond_quadrature method specification.
References NonDIntegration::check_variables(), ProblemDescDB::get_bool(), ProblemDescDB::get_-
short(), Iterator::maxConcurrency, NonD::natafTransform, NonDQuadrature::nestedRules, NonDIntegra-
tion::numIntDriver, Iterator::probDescDB, NonDQuadrature::reset(), and NonDQuadrature::tpqDriver.
Implements NonDIntegration.
References Iterator::maxConcurrency, NonDQuadrature::nestedRules, Iterator::numContinuousVars,
NonDQuadrature::numSamples, NonDQuadrature::quadMode, NonDQuadrature::reset(), NonDQuadra-
ture::tpqDriver, and NonDQuadrature::update().
13.98.3.2 void sampling_reset (int min_samples, bool all_data_flag, bool stats_flag) [protected,
virtual]
used by DataFitSurrModel::build_global() to publish the minimum number of points needed from the quadrature
routine in order to build a particular global approximation.
Reimplemented from Iterator.
References NonDQuadrature::compute_minimum_quadrature_order(), NonDIntegration::dimPrefSpec,
NonDQuadrature::dimQuadOrderRef, NonDQuadrature::nestedRules, Iterator::numContinuousVars, and
NonDQuadrature::tpqDriver.
Referenced by NonDQuadrature::update().
get the current number of samples. Returns the current number of evaluation points. Since the calculation of samples, collocation points, etc. might be costly, a default implementation is provided here that backs the count out from maxConcurrency; derived classes override it as needed.
Reimplemented from Iterator.
References NonDQuadrature::numSamples, NonDQuadrature::quadMode, and NonDQuadrature::tpqDriver.
The documentation for this class was generated from the following files:
• NonDQuadrature.hpp
• NonDQuadrature.cpp
Iterator
Analyzer
NonD
NonDCalibration
NonDBayesCalibration
NonDQUESOBayesCalibration
• ∼NonDQUESOBayesCalibration ()
destructor
Public Attributes
• String rejectionType
Rejection type (standard or delayed, in the DRAM framework).
• String metropolisType
Metropolis type (hastings or adaptive, in the DRAM framework).
• int numSamples
number of samples in the chain (e.g. number of MCMC samples)
• RealVector proposalCovScale
scale factor for proposal covariance
• Real likelihoodScale
scale factor for likelihood
• bool calibrateSigmaFlag
flag to indicate whether the sigma terms should be calibrated (default true)
Protected Attributes
• int randomSeed
random seed to pass to QUESO
Private Attributes
• short emulatorType
the emulator type: NO_EMULATOR, GAUSSIAN_PROCESS, POLYNOMIAL_CHAOS, or STOCHASTIC_COLLOCATION
Bayesian inference using the QUESO library from UT Austin. This class provides a wrapper to the QUESO li-
brary developed as part of the Predictive Science Academic Alliance Program (PSAAP), specifically the PECOS
(Predictive Engineering and Computational Sciences) Center at UT Austin. The name QUESO stands for Quan-
tification of Uncertainty for Estimation, Simulation, and Optimization.
standard constructor. This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification.
• NonDQUESOBayesCalibration.hpp
• NonDQUESOBayesCalibration.cpp
Iterator
Analyzer
NonD
NonDReliability
NonDGlobalReliability NonDLocalReliability
• ∼NonDReliability ()
destructor
Protected Attributes
• Model uSpaceModel
Model representing the limit state in u-space, after any recastings and data fits.
• Model mppModel
RecastModel which formulates the optimization subproblem: RIA, PMA, EGO.
• Iterator mppOptimizer
Iterator which optimizes the mppModel.
• short mppSearchType
the MPP search type selection: MV, x/u-space AMV, x/u-space AMV+, x/u-space TANA, x/u-space EGO, or NO_APPROX
• Iterator importanceSampler
importance sampling instance used to compute/refine probabilities
• short integrationRefinement
integration refinement type (NO_INT_REFINE, IS, AIS, or MMAIS) provided by refinement specification
• size_t numRelAnalyses
number of invocations of quantify_uncertainty()
• size_t approxIters
number of approximation cycles for the current respFnCount/levelCount
• bool approxConverged
indicates convergence of approximation-based iterations
• int respFnCount
counter for which response function is being analyzed
• size_t levelCount
counter for which response/probability level is being analyzed
• size_t statCount
counter for which final statistic is being computed
• bool pmaMaximizeG
flag indicating maximization of G(u) within PMA formulation
• Real requestedTargetLevel
the {response,reliability,generalized reliability} level target for the current response function
Base class for the reliability methods within DAKOTA/UQ. The NonDReliability class provides a base class for
NonDLocalReliability, which implements traditional MPP-based reliability methods, and NonDGlobalReliabil-
ity, which implements global limit state search using Gaussian process models in combination with multimodal
importance sampling.
The documentation for this class was generated from the following files:
• NonDReliability.hpp
• NonDReliability.cpp
Iterator
Analyzer
NonD
NonDSampling
• void update_final_statistics ()
update finalStatistics from minValues/maxValues, momentStats, and computedProbLevels/computedRelLevels/computedRespLevels
• NonDSampling (NoDBBaseConstructor, Model &model, const String &sample_type, int samples, int seed,
const String &rng, bool vary_pattern, short sampling_vars_mode)
alternate constructor for sample generation and evaluation "on the fly"
• NonDSampling (NoDBBaseConstructor, const String &sample_type, int samples, int seed, const String
&rng, const RealVector &lower_bnds, const RealVector &upper_bnds)
alternate constructor for sample generation "on the fly"
• ∼NonDSampling ()
destructor
• void view_design_counts (const Model &model, size_t &num_cdv, size_t &num_didv, size_t &num_drdv)
const
compute sampled subsets (all, active, uncertain) within all variables (acv/adiv/adrv) from samplingVarsMode and
model
• void view_uncertain_counts (const Model &model, size_t &num_cuv, size_t &num_diuv, size_t &num_-
druv) const
compute sampled subsets (all, active, uncertain) within all variables (acv/adiv/adrv) from samplingVarsMode and
model
• void view_state_counts (const Model &model, size_t &num_csv, size_t &num_disv, size_t &num_drsv)
const
compute sampled subsets (all, active, uncertain) within all variables (acv/adiv/adrv) from samplingVarsMode and
model
• void mode_counts (const Model &model, size_t &cv_start, size_t &num_cv, size_t &div_start, size_t
&num_div, size_t &drv_start, size_t &num_drv) const
compute sampled subsets (all, active, uncertain) within all variables (acv/adiv/adrv) from samplingVarsMode and
model
Protected Attributes
• const int seedSpec
the user seed specification (default is 0)
• int randomSeed
the current seed
• int samplesRef
reference number of samples updated for refinement
• int numSamples
the current number of samples to evaluate
• String rngName
name of the random number generator to use
• String sampleType
the sample type: random, lhs, or incremental_lhs
• Pecos::LHSDriver lhsDriver
the C++ wrapper for the F90 LHS library
• bool statsFlag
flags computation/output of statistics
• bool allDataFlag
flags update of allResponses (allVariables or allSamples are already defined)
• short samplingVarsMode
the sampling mode: ALEATORY_UNCERTAIN{,_UNIFORM}, EPISTEMIC_UNCERTAIN{,_UNIFORM},
UNCERTAIN{,_UNIFORM}, ACTIVE{,_UNIFORM}, or ALL{,_UNIFORM}. This is a secondary control on top of
the variables view that allows sampling over subsets of variables that may differ from the view.
• short sampleRanksMode
mode for input/output of LHS sample ranks: IGNORE_RANKS, GET_RANKS, SET_RANKS, or SET_GET_RANKS
• bool varyPattern
flag for generating a sequence of seed values within multiple get_parameter_sets() calls so that these executions
(e.g., for SBO/SBNLS) are not repeated, but are still repeatable
• RealMatrix sampleRanks
data structure to hold the sample ranks
• SensAnalysisGlobal nonDSampCorr
initialize statistical post processing
Private Attributes
• size_t numLHSRuns
counter for number of executions of get_parameter_sets() for this object
• RealMatrix momentCIs
Matrix of confidence intervals on moments, with rows for mean_lower, mean_upper, sd_lower, sd_upper (calculated in compute_moments()).
• RealMatrix extremeValues
Minimum (row 0) and maximum (row 1) values of response functions for epistemic calculations (calculated in compute_intervals()).
• RealVectorArray computedPDFAbscissas
sorted response PDF intervals bounds extracted from min/max sample and requested/computedRespLevels (vector
lengths = num bins + 1)
• RealVectorArray computedPDFOrdinates
response PDF densities computed from bin counts divided by (unequal) bin widths (vector lengths = num bins)
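The PDF ordinates above (bin counts divided by bin widths, over possibly unequal bins) amount to a normalized histogram density. A minimal stand-alone sketch, with hypothetical names mirroring computedPDFAbscissas/computedPDFOrdinates:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Given sorted bin edges (length = num bins + 1, like computedPDFAbscissas)
// and per-bin sample counts, return densities so the PDF integrates to one:
// density[b] = count[b] / (N * width[b]).
std::vector<double> pdf_ordinates(const std::vector<double>& edges,
                                  const std::vector<int>& counts,
                                  int total_samples) {
  std::vector<double> density(counts.size());
  for (std::size_t b = 0; b < counts.size(); ++b) {
    double width = edges[b + 1] - edges[b];   // bins may be unequal
    density[b] = counts[b] / (total_samples * width);
  }
  return density;
}
```

Dividing by the individual bin width (rather than a common width) is what makes the unequal-bin case integrate correctly.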
Base class for common code between NonDLHSSampling, NonDIncremLHSSampling, and NonDAdaptImpSampling. This base class provides common code for sampling methods which employ the Latin Hypercube Sampling (LHS) package from Sandia Albuquerque's Risk and Reliability organization. NonDSampling now exclusively utilizes the 1998 Fortran 90 LHS version as documented in SAND98-0210, which was converted to a UNIX link library in 2001. The 1970s-vintage LHS (which had been f2c'd and converted to incomplete classes) has been removed.
constructor. This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification.
References Dakota::abort_handler(), NonD::epistemicStats, NonD::initialize_final_statistics(),
Iterator::maxConcurrency, NonDSampling::numSamples, NonDSampling::sampleType, and
NonD::totalLevelRequests.
13.101.2.2 NonDSampling (NoDBBaseConstructor, Model & model, const String & sample_type, int
samples, int seed, const String & rng, bool vary_pattern, short sampling_vars_mode)
[protected]
alternate constructor for sample generation and evaluation "on the fly". This alternate constructor is used for generation and evaluation of on-the-fly sample sets.
References NonD::epistemicStats, Iterator::maxConcurrency, NonD::numEpistemicUncVars, NonD-
Sampling::numSamples, NonDSampling::sampleType, NonDSampling::samplingVarsMode, and Itera-
tor::subIteratorFlag.
13.101.2.3 NonDSampling (NoDBBaseConstructor, const String & sample_type, int samples, int seed,
const String & rng, const RealVector & lower_bnds, const RealVector & upper_bnds)
[protected]
alternate constructor for sample generation "on the fly". This alternate constructor is used by ConcurrentStrategy for generation of uniform, uncorrelated sample sets.
References Iterator::maxConcurrency, NonDSampling::numSamples, NonDSampling::sampleType, and Itera-
tor::subIteratorFlag.
get the current number of samples. Returns the current number of evaluation points. Since the calculation of samples, collocation points, etc. might be costly, a default implementation is provided here that backs the count out from maxConcurrency; derived classes override it as needed.
Reimplemented from Iterator.
References NonDSampling::numSamples.
Referenced by NonDAdaptImpSampling::generate_samples(), NonDAdaptImpSampling::select_init_rep_-
points(), and NonDAdaptImpSampling::select_rep_points().
13.101.3.2 void sampling_reset (int min_samples, bool all_data_flag, bool stats_flag) [inline,
protected, virtual]
resets number of samples and sampling flags. Used by DataFitSurrModel::build_global() to publish the minimum number of samples needed from the sampling routine (to build a particular global approximation) and to set allDataFlag and statsFlag. In this case, allDataFlag is set to true (vectors of variable and response sets must be returned to build the global approximation) and statsFlag is set to false (statistics computations are not needed).
Reimplemented from Iterator.
References NonDSampling::allDataFlag, NonDSampling::numSamples, NonDSampling::samplesRef, and
NonDSampling::statsFlag.
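The semantics just described can be sketched in isolation. The field names mirror the attributes listed above; the policy of never dropping below the user's reference specification is an assumption for illustration, not a statement of the exact Dakota logic:

```cpp
#include <algorithm>
#include <cassert>

// Minimal stand-in for the sampling state touched by sampling_reset().
struct SamplingState {
  int  samplesRef  = 100;    // reference number of samples (user spec)
  int  numSamples  = 100;    // current number of samples to evaluate
  bool allDataFlag = false;
  bool statsFlag   = true;

  void sampling_reset(int min_samples, bool all_data_flag, bool stats_flag) {
    // Honor the surrogate's minimum without shrinking below the user spec.
    numSamples  = std::max(min_samples, samplesRef);
    allDataFlag = all_data_flag;   // true: return all variables/responses
    statsFlag   = stats_flag;      // false: skip statistics post-processing
  }
};
```

A surrogate build then calls sampling_reset(min_points, true, false) before invoking the sampler, matching the allDataFlag/statsFlag settings described in the text.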
Uses lhsDriver to generate a set of samples from the distributions/bounds defined in the incoming model. This
version of get_parameter_sets() extracts data from the user-defined model in any of the four sampling modes.
Reimplemented from Analyzer.
References Dakota::abort_handler(), Model::acv(), Model::adiv(), Model::aleatory_distribution_-
parameters(), Model::all_continuous_lower_bounds(), Model::all_continuous_upper_bounds(),
Model::all_discrete_int_lower_bounds(), Model::all_discrete_int_upper_bounds(), Analyzer::allSamples,
Model::continuous_lower_bounds(), Model::continuous_upper_bounds(), Model::current_variables(),
Model::discrete_design_set_int_values(), Model::discrete_design_set_real_values(), Model::discrete_-
state_set_int_values(), Model::discrete_state_set_real_values(), Model::epistemic_distribution_-
13.101.3.4 void get_parameter_sets (const RealVector & lower_bnds, const RealVector & upper_bnds)
[protected]
Uses lhsDriver to generate a set of uniform samples over lower_bnds/upper_bnds. This version of get_parameter_sets() does not extract data from the user-defined model, but instead relies on the incoming bounded region definition. It only supports a UNIFORM sampling mode; the distinction of ACTIVE_UNIFORM vs. ALL_UNIFORM is handled elsewhere.
References Analyzer::allSamples, NonDSampling::initialize_lhs(), NonDSampling::lhsDriver, and NonDSam-
pling::numSamples.
13.101.3.5 void view_design_counts (const Model & model, size_t & num_cdv, size_t & num_didv,
size_t & num_drdv) const [protected]
compute sampled subsets (all, active, uncertain) within all variables (acv/adiv/adrv) from samplingVarsMode and model. This function computes total design variable counts, not active counts, for use in defining offsets and counts within all variables arrays.
References Model::all_continuous_variable_types(), Model::all_discrete_int_variable_types(), Model::all_-
discrete_real_variable_types(), Model::current_variables(), Variables::cv_start(), Variables::div_start(),
Variables::drv_start(), NonD::numContDesVars, NonD::numDiscIntDesVars, NonD::numDiscRealDesVars,
and Variables::view().
Referenced by NonDSampling::mode_counts().
13.101.3.6 void view_aleatory_uncertain_counts (const Model & model, size_t & num_cauv, size_t &
num_diauv, size_t & num_drauv) const [protected]
compute sampled subsets (all, active, uncertain) within all variables (acv/adiv/adrv) from samplingVarsMode and model. This function computes total aleatory uncertain variable counts, not active counts, for use in defining offsets and counts within all variables arrays.
References Model::aleatory_distribution_parameters(), Model::current_variables(), Variables::cv(),
Variables::div(), Variables::drv(), NonD::numContAleatUncVars, NonD::numDiscIntAleatUncVars,
NonD::numDiscRealAleatUncVars, and Variables::view().
Referenced by NonDSampling::mode_counts().
13.101.3.7 void view_epistemic_uncertain_counts (const Model & model, size_t & num_ceuv, size_t &
num_dieuv, size_t & num_dreuv) const [protected]
compute sampled subsets (all, active, uncertain) within all variables (acv/adiv/adrv) from samplingVarsMode and model. This function computes total epistemic uncertain variable counts, not active counts, for use in defining offsets and counts within all variables arrays.
References Model::current_variables(), Variables::cv(), Variables::div(), Variables::drv(), Model::epistemic_-
distribution_parameters(), NonD::numContEpistUncVars, NonD::numDiscIntEpistUncVars,
NonD::numDiscRealEpistUncVars, and Variables::view().
Referenced by NonDSampling::mode_counts().
13.101.3.8 void view_uncertain_counts (const Model & model, size_t & num_cuv, size_t & num_diuv,
size_t & num_druv) const [protected]
compute sampled subsets (all, active, uncertain) within all variables (acv/adiv/adrv) from samplingVarsMode and model. This function computes total uncertain variable counts, not active counts, for use in defining offsets and counts within all variables arrays.
References Model::aleatory_distribution_parameters(), Model::current_variables(), Variables::cv(), Vari-
ables::div(), Variables::drv(), Model::epistemic_distribution_parameters(), NonD::numContAleatUncVars,
NonD::numContEpistUncVars, NonD::numDiscIntAleatUncVars, NonD::numDiscIntEpistUncVars,
NonD::numDiscRealAleatUncVars, NonD::numDiscRealEpistUncVars, and Variables::view().
Referenced by NonDSampling::mode_counts().
The documentation for this class was generated from the following files:
• NonDSampling.hpp
• NonDSampling.cpp
Iterator
Analyzer
NonD
NonDIntegration
NonDSparseGrid
• void increment_specification_sequence ()
advance to the next level in the ssgLevelSeqSpec sequence
• void initialize_sets ()
invokes SparseGridDriver::initialize_sets()
• void update_reference ()
invokes SparseGridDriver::update_reference()
• void restore_set ()
invokes SparseGridDriver::restore_set()
• void evaluate_set ()
invokes SparseGridDriver::compute_trial_grid()
• void decrement_set ()
invokes SparseGridDriver::pop_trial_set()
• void finalize_sets ()
invokes SparseGridDriver::finalize_sets()
• void evaluate_grid_increment ()
invokes SparseGridDriver::evaluate_grid_increment()
• ∼NonDSparseGrid ()
destructor
• void reset ()
restore initial state for repeated sub-iterator executions
Private Attributes
• Pecos::SparseGridDriver ∗ ssgDriver
convenience pointer to the numIntDriver representation
• UShortArray ssgLevelSeqSpec
the user specification for the Smolyak sparse grid level, defining a sequence of refinement levels.
Derived nondeterministic class that generates N-dimensional Smolyak sparse grids for numerical evaluation of
expectation integrals over independent standard random variables. This class is used by NonDPolynomialChaos
and NonDStochCollocation, but could also be used for general numerical integration of moments. It employs 1-D
Clenshaw-Curtis and Gaussian quadrature rules within Smolyak sparse grids.
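The Smolyak construction that ssgDriver evaluates combines tensor products of these 1-D rules. In the standard isotropic formulation (sketched here for orientation; Dakota/Pecos also support anisotropic and growth-restricted variants), for dimension $N$ and level $w$:

```latex
\mathcal{A}(w,N) \;=\; \sum_{w+1 \,\le\, |\mathbf{i}| \,\le\, w+N}
  (-1)^{\,w+N-|\mathbf{i}|}\,\binom{N-1}{\,w+N-|\mathbf{i}|\,}
  \left( U^{i_1} \otimes \cdots \otimes U^{i_N} \right),
```

where $U^{i_k}$ denotes the 1-D quadrature rule (Clenshaw–Curtis or Gaussian) at level $i_k$ and $|\mathbf{i}| = i_1 + \cdots + i_N$. The alternating binomial coefficients are what keep the point count far below the full tensor product while preserving polynomial exactness.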
13.102.2.1 NonDSparseGrid (Model & model, short exp_coeffs_approach, const UShortArray &
ssg_level_seq, const RealVector & dim_pref, short growth_rate = Pecos::MODERATE_-
RESTRICTED_GROWTH, short refine_control = Pecos::NO_CONTROL, bool
track_uniq_prod_wts = true, bool track_colloc_indices = true)
This alternate constructor is used for on-the-fly generation and evaluation of sparse grids within PCE and SC.
References NonDIntegration::numIntDriver, and NonDSparseGrid::ssgDriver.
constructor. This constructor is called for a standard letter-envelope iterator instantiation. In this case, set_db_list_nodes has been called and probDescDB can be queried for settings from the method specification. It is not currently used, as there is not a separate sparse_grid method specification.
References Model::aleatory_distribution_parameters(), NonDIntegration::check_variables(), NonDIntegra-
tion::dimPrefSpec, ProblemDescDB::get_bool(), ProblemDescDB::get_short(), Iterator::iteratedModel,
Iterator::maxConcurrency, NonD::natafTransform, NonDIntegration::numIntDriver, Iterator::probDescDB,
NonDSparseGrid::ssgDriver, and NonDSparseGrid::ssgLevelRef.
get the current number of samples. Returns the current number of evaluation points. Since the calculation of samples, collocation points, etc. might be costly, a default implementation is provided here that backs the count out from maxConcurrency; derived classes override it as needed.
Reimplemented from Iterator.
References NonDSparseGrid::ssgDriver.
13.102.3.2 void sampling_reset (int min_samples, bool all_data_flag, bool stats_flag) [protected,
virtual]
used by DataFitSurrModel::build_global() to publish the minimum number of points needed from the sparse grid
routine in order to build a particular global approximation.
Reimplemented from Iterator.
References NonDSparseGrid::ssgDriver, and NonDSparseGrid::ssgLevelRef.
The documentation for this class was generated from the following files:
• NonDSparseGrid.hpp
• NonDSparseGrid.cpp
Iterator
Analyzer
NonD
NonDExpansion
NonDStochCollocation
• ∼NonDStochCollocation ()
destructor
• void initialize_u_space_model ()
initialize uSpaceModel polynomial approximations with PCE/SC data
• void update_expansion ()
update an expansion; avoids overhead in compute_expansion()
• Real compute_covariance_metric ()
compute 2-norm of change in response covariance
• Real compute_final_statistics_metric ()
compute 2-norm of change in final statistics
Private Attributes
• short sgBasisType
Type of interpolant (from enum in DataMethod.hpp).
standard constructor. This constructor is called for a standard letter-envelope iterator instantiation using the ProblemDescDB.
References Model::assign_rep(), NonDExpansion::construct_expansion_sampler(), NonDExpansion::construct_-
quadrature(), NonDExpansion::construct_sparse_grid(), Model::derivative_concurrency(), Non-
DExpansion::expansionCoeffsApproach, ProblemDescDB::get_bool(), ProblemDescDB::get_rv(),
ProblemDescDB::get_short(), ProblemDescDB::get_string(), ProblemDescDB::get_usa(), Model::init_-
communicators(), NonDExpansion::initialize(), NonDStochCollocation::initialize_u_space_model(), Itera-
tor::iteratedModel, NonDExpansion::nestedRules, NonD::numContDesVars, NonD::numContEpistUncVars,
NonD::numContStateVars, NonDExpansion::numSamplesOnExpansion, Iterator::outputLevel, NonDExpan-
sion::piecewiseBasis, Iterator::probDescDB, NonDExpansion::refineControl, NonDStochCollocation::resolve_-
inputs(), NonDStochCollocation::sgBasisType, NonD::transform_model(), and NonDExpansion::uSpaceModel.
alternate constructor. This constructor is used for helper iterator instantiation on the fly.
References Model::assign_rep(), NonDExpansion::construct_quadrature(), NonDExpansion::construct_-
sparse_grid(), NonDExpansion::expansionCoeffsApproach, NonDExpansion::initialize(),
NonDStochCollocation::initialize_u_space_model(), Iterator::iteratedModel, NonD::numContDesVars,
NonD::numContEpistUncVars, NonD::numContStateVars, Iterator::outputLevel, NonDExpan-
sion::piecewiseBasis, NonDStochCollocation::resolve_inputs(), NonDStochCollocation::sgBasisType,
NonD::transform_model(), and NonDExpansion::uSpaceModel.
compute 2-norm of change in response covariance. Computes the default refinement metric based on the change in respCovariance.
Reimplemented from NonDExpansion.
References Model::approximations(), PecosApproximation::delta_covariance(),
PecosApproximation::expansion_coefficient_flag(), NonDExpansion::initialPtU, NonD::numContDesVars,
NonD::numContEpistUncVars, NonD::numContStateVars, Iterator::numFunctions, NonDExpan-
sion::respCovariance, NonDStochCollocation::sgBasisType, and NonDExpansion::uSpaceModel.
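Schematically, this default refinement metric is the norm of the change in the response covariance between successive refinement iterations (the code computes the change incrementally via delta_covariance() rather than forming both matrices, so this is only a conceptual sketch):

```latex
\Delta \;=\; \left\lVert \Sigma^{(k)} - \Sigma^{(k-1)} \right\rVert_2 ,
\qquad
\Sigma^{(k)}_{pq} \;=\; \mathrm{Cov}\!\left[ R_p^{(k)},\, R_q^{(k)} \right],
```

where $\Sigma^{(k)}$ is the covariance among the response functions $R_p$ computed from the expansion at refinement iteration $k$. Refinement continues while $\Delta$ exceeds the convergence tolerance.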
compute 2-norm of change in final statistics. Computes a "goal-oriented" refinement metric employing finalStatistics.
Reimplemented from NonDExpansion.
References Model::approximations(), NonD::cdfFlag, NonDExpansion::compute_statistics(),
PecosApproximation::delta_beta(), PecosApproximation::delta_z(), PecosApproximation::expansion_-
coefficient_flag(), NonD::finalStatistics, Response::function_values(), NonDExpansion::initialPtU,
Response::num_functions(), NonD::numContDesVars, NonD::numContEpistUncVars,
NonD::numContStateVars, Iterator::numFunctions, NonD::requestedGenRelLevels,
NonD::requestedProbLevels, NonD::requestedRelLevels, NonD::requestedRespLevels, NonD::respLevelTarget,
NonDStochCollocation::sgBasisType, NonDExpansion::uSpaceModel, and Dakota::write_data().
The documentation for this class was generated from the following files:
• NonDStochCollocation.hpp
• NonDStochCollocation.cpp
Iterator
Minimizer
Optimizer
NonlinearCGOptimizer
• ∼NonlinearCGOptimizer ()
destructor
• void compute_direction ()
compute next direction via choice of method
• bool compute_step ()
compute step: fixed, simple decrease, sufficient decrease
• void bracket_min (Real &xa, Real &xb, Real &xc, Real &fa, Real &fb, Real &fc)
bracket the 1-D minimum in the linesearch
Private Attributes
• Real initialStep
initial step length
• Real linesearchTolerance
approximate accuracy of the abscissa in the line search
• unsigned linesearchType
type of line search (if any)
• unsigned maxLinesearchIters
maximum evaluations in line search
• Real relFunctionTol
stopping criterion for rel change in fn
• Real relGradientTol
stopping criterion for rel reduction in g
• bool resetStep
whether to reset step with each linesearch
• unsigned restartIter
iter at which to reset to steepest descent
• unsigned updateType
type of CG direction update
• unsigned iterCurr
current iteration number
• RealVector designVars
current decision variables in the major iteration
• RealVector trialVars
decision variables in the linesearch
• Real functionCurr
current function value
• Real functionPrev
previous function value
• RealVector gradCurr
current gradient
• RealVector gradPrev
previous gradient
• RealVector gradDiff
temporary for gradient difference (gradCurr - gradPrev)
• RealVector searchDirection
current aggregate search direction
• Real stepLength
current step length parameter alpha
• Real gradDotGrad_init
initial gradient norm squared
• Real gradDotGrad_curr
gradCurr dot gradCurr
• Real gradDotGrad_prev
gradPrev dot gradPrev
Perform 1-D minimization for the stepLength using Brent's method. This is a C translation of fmin.f from Netlib.
References NonlinearCGOptimizer::linesearch_eval(), NonlinearCGOptimizer::maxLinesearchIters, and Itera-
tor::outputLevel.
Referenced by NonlinearCGOptimizer::compute_step().
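The CG direction update supported by the attributes above (searchDirection, gradDotGrad_curr, gradDotGrad_prev) can be sketched as follows, using the Fletcher–Reeves formula; the actual updateType selection in NonlinearCGOptimizer may use other formulas (e.g. Polak–Ribière), so this is illustrative only:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// d_{k+1} = -g_{k+1} + beta * d_k, with the Fletcher-Reeves choice
// beta = (g_{k+1}.g_{k+1}) / (g_k.g_k), mirroring gradDotGrad_curr/prev.
std::vector<double> cg_direction(const std::vector<double>& grad_curr,
                                 const std::vector<double>& dir_prev,
                                 double grad_dot_grad_curr,
                                 double grad_dot_grad_prev) {
  double beta = grad_dot_grad_curr / grad_dot_grad_prev;
  std::vector<double> d(grad_curr.size());
  for (std::size_t i = 0; i < d.size(); ++i)
    d[i] = -grad_curr[i] + beta * dir_prev[i];
  return d;
}
```

Setting beta to zero recovers steepest descent, which is what the restartIter attribute triggers periodically to discard stale conjugacy information.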
The documentation for this class was generated from the following files:
• NonlinearCGOptimizer.hpp
• NonlinearCGOptimizer.cpp
Iterator
Minimizer
Optimizer SOLBase
NPSOLOptimizer
• ∼NPSOLOptimizer ()
destructor
• void find_optimum ()
Used within the optimizer branch for computing the optimal solution. Redefines the run virtual function for the
optimizer branch.
• void find_optimum_on_user_functions ()
called by find_optimum for setUpType == "user_functions"
• static void objective_eval (int &mode, int &n, double ∗x, double &f, double ∗gradf, int &nstate)
OBJFUN in NPSOL manual: computes the value and first derivatives of the objective function (passed by function
pointer to NPSOL).
Private Attributes
• String setUpType
controls iteration mode: "model" (normal usage) or "user_functions" (user-supplied functions mode for "on the fly"
instantiations). NonDReliability currently uses the user_functions mode.
• RealVector initialPoint
holds initial point passed in for "user_functions" mode.
• RealVector lowerBounds
holds variable lower bounds passed in for "user_functions" mode.
• RealVector upperBounds
holds variable upper bounds passed in for "user_functions" mode.
• void(∗ userObjectiveEval )(int &, int &, double ∗, double &, double ∗, int &)
holds function pointer for objective function evaluator passed in for "user_functions" mode.
• void(∗ userConstraintEval )(int &, int &, int &, int &, int ∗, double ∗, double ∗, double ∗, int &)
holds function pointer for constraint function evaluator passed in for "user_functions" mode.
Wrapper class for the NPSOL optimization library. The NPSOLOptimizer class provides a wrapper for NPSOL,
a Fortran 77 sequential quadratic programming library from Stanford University marketed by Stanford Business
Associates. It uses a function pointer approach for which passed functions must be either global functions or
static member functions. Any attribute used within static member functions must be either local to that function
or accessed through a static pointer.
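The static-callback constraint described above can be sketched as follows; this is an illustrative miniature, not Dakota's actual API (WrapperOptimizer, scaleFactor, and the simplified objective_eval signature are all hypothetical):

```cpp
#include <cassert>

// Sketch of the function-pointer callback pattern: a wrapper class hands a
// plain (static) function to a Fortran library, and that static function
// reaches per-instance data only through a static instance pointer.
class WrapperOptimizer {
public:
  explicit WrapperOptimizer(double scale) : scaleFactor(scale)
  { instancePtr = this; }          // register the active instance

  // signature shaped like what a Fortran library expects (plain fn pointer);
  // static members may only touch locals or the static pointer
  static void objective_eval(int& n, const double* x, double& f) {
    f = 0.0;
    for (int i = 0; i < n; ++i)
      f += instancePtr->scaleFactor * x[i] * x[i];
  }

private:
  double scaleFactor;              // per-instance data, reached via the pointer
  static WrapperOptimizer* instancePtr;
};

WrapperOptimizer* WrapperOptimizer::instancePtr = nullptr;
```

The same pattern would apply to a global function holding a file-scope static pointer; the key point is that no non-static member is visible from the callback.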
The user input mappings are as follows: max_function_evaluations is enforced directly within NPSOLOptimizer's evaluator functions, since NPSOL has no equivalent parameter. The max_iterations, convergence_tolerance, output verbosity, verify_level, function_precision, and linesearch_tolerance inputs are mapped to NPSOL's "Major Iteration Limit", "Optimality Tolerance", "Major Print Level" (verbose: Major Print Level = 20; quiet: Major Print Level = 10), "Verify Level", "Function Precision", and "Linesearch Tolerance" parameters, respectively, via NPSOL's npoptn() subroutine (as wrapped by npoptn2() from the npoptn_wrapper.f file). Refer to [Gill, P.E., Murray, W., Saunders, M.A., and Wright, M.H., 1986] for information on NPSOL's optional input parameters and the npoptn() subroutine.
alternate constructor for Iterator instantiations by name This is an alternate constructor which accepts a Model but
does not have a supporting method specification from the ProblemDescDB.
References Minimizer::constraintTol, Iterator::convergenceTol, Iterator::fdGradStepSize, Iterator::gradientType,
Iterator::maxIterations, Iterator::outputLevel, SOLBase::set_options(), Minimizer::speculativeFlag, and Mini-
mizer::vendorNumericalGradFlag.
13.105.2.3 NPSOLOptimizer (Model & model, const int & derivative_level, const Real & conv_tol)
alternate constructor for instantiations "on the fly" This is an alternate constructor for instantiations on the fly
using a Model but no ProblemDescDB.
13.105.2.4 NPSOLOptimizer (const RealVector & initial_point, const RealVector & var_lower_bnds,
const RealVector & var_upper_bnds, const RealMatrix & lin_ineq_coeffs, const
RealVector & lin_ineq_lower_bnds, const RealVector & lin_ineq_upper_bnds, const
RealMatrix & lin_eq_coeffs, const RealVector & lin_eq_targets, const RealVector &
nonlin_ineq_lower_bnds, const RealVector & nonlin_ineq_upper_bnds, const RealVector &
nonlin_eq_targets, void(∗)(int &, int &, double ∗, double &, double ∗, int &) user_obj_eval,
void(∗)(int &, int &, int &, int &, int ∗, double ∗, double ∗, double ∗, int &) user_con_eval,
const int & derivative_level, const Real & conv_tol)
alternate constructor for instantiations "on the fly" This is an alternate constructor for performing an optimization
using the passed in objective function and constraint function pointers.
References SOLBase::allocate_arrays(), SOLBase::allocate_workspace(), SOLBase::augment_bounds(),
NPSOLOptimizer::lowerBounds, Iterator::numContinuousVars, Minimizer::numLinearConstraints, Mini-
mizer::numNonlinearConstraints, and NPSOLOptimizer::upperBounds.
The documentation for this class was generated from the following files:
• NPSOLOptimizer.hpp
• NPSOLOptimizer.cpp
Inheritance diagram for Optimizer: Iterator -> Minimizer -> Optimizer, with derived classes APPSOptimizer, COLINOptimizer, CONMINOptimizer, DOTOptimizer, JEGAOptimizer, NCSUOptimizer, NLPQLPOptimizer, NomadOptimizer, NonlinearCGOptimizer, NPSOLOptimizer, and SNLLOptimizer.
• Optimizer (NoDBBaseConstructor, size_t num_cv, size_t num_div, size_t num_drv, size_t num_lin_ineq,
size_t num_lin_eq, size_t num_nln_ineq, size_t num_nln_eq)
alternate constructor for "on the fly" instantiations
• ∼Optimizer ()
destructor
• void initialize_run ()
• void run ()
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post
Protected Attributes
• size_t numObjectiveFns
number of objective functions (iterator view)
• bool localObjectiveRecast
flag indicating whether local recasting to a single objective is used
• Optimizer ∗ prevOptInstance
pointer containing previous value of optimizerInstance
• void objective_reduction (const Response &full_response, const BoolDeque &sense, const RealVector
&full_wts, Response &reduced_response) const
forward mapping: maps multiple primary response functions to a single weighted objective for single-objective
optimizers
• static void primary_resp_reducer (const Variables &full_vars, const Variables &reduced_vars, const Re-
sponse &full_response, Response &reduced_response)
Recast callback to reduce multiple objectives or residuals to a single objective, with gradients and Hessians as
needed.
Base class for the optimizer branch of the iterator hierarchy. The Optimizer class provides common data and
functionality for DOTOptimizer, CONMINOptimizer, NPSOLOptimizer, SNLLOptimizer, NLPQLPOptimizer,
COLINOptimizer, and JEGAOptimizer.
standard constructor This constructor extracts the inherited data for the optimizer branch and performs sanity
checking on gradient and constraint settings.
References Dakota::abort_handler(), Iterator::bestVariablesArray, Minimizer::boundConstraintFlag, Vari-
ables::copy(), Model::current_variables(), Minimizer::data_transform_model(), ProblemDescDB::get_sizet(),
Iterator::gradientType, Iterator::hessianType, Model::init_communicators(), Iterator::iteratedModel, Opti-
mizer::localObjectiveRecast, Iterator::maxConcurrency, Iterator::methodName, Minimizer::minimizerRecasts,
Model::model_type(), Minimizer::numIterPrimaryFns, Optimizer::numObjectiveFns, Mini-
mizer::numUserPrimaryFns, Minimizer::obsDataFlag, Minimizer::optimizationFlag, Model::primary_-
response_fn_weights(), Iterator::probDescDB, Optimizer::reduce_model(), Minimizer::scale_model(), Min-
imizer::scaleFlag, Minimizer::speculativeFlag, and Dakota::strbegins().
Implements portions of initialize_run specific to Optimizers. This function should be invoked (or reimplemented)
by any derived implementations of initialize_run() (which would otherwise hide it).
Reimplemented from Minimizer.
Reimplemented in CONMINOptimizer, DOTOptimizer, NLPQLPOptimizer, and SNLLOptimizer.
References Iterator::iteratedModel, Minimizer::minimizerRecasts, Optimizer::optimizerInstance, Opti-
mizer::prevOptInstance, and Model::update_from_subordinate_model().
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post Virtual run function for the iterator class hierarchy. All derived classes need to redefine it.
Reimplemented from Iterator.
References Optimizer::find_optimum().
Implements portions of post_run specific to Optimizers. This function should be invoked (or reimplemented) by
any derived implementations of post_run() (which would otherwise hide it).
Reimplemented from Iterator.
Reimplemented in COLINOptimizer, and SNLLOptimizer.
References Dakota::abort_handler(), Response::active_set_request_vector(), Iterator::bestResponseArray, Iter-
ator::bestVariablesArray, Variables::continuous_variables(), Response::copy(), Minimizer::cvScaleMultipliers,
Minimizer::cvScaleOffsets, Minimizer::cvScaleTypes, Minimizer::expData, Response::function_value(),
Response::function_values(), Optimizer::local_objective_recast_retrieve(), Optimizer::localObjectiveRecast,
Minimizer::modify_s2n(), Minimizer::need_resp_trans_byvars(), Minimizer::numExperiments, Mini-
mizer::numNonlinearConstraints, Minimizer::numReplicates, Minimizer::numUserPrimaryFns, Min-
imizer::obsDataFlag, Minimizer::primaryRespScaleFlag, Minimizer::response_modify_s2n(), Mini-
mizer::secondaryRespScaleFlag, Response::update_partial(), and Minimizer::varsScaleFlag.
utility function to perform common operations following post_run(); deallocation and resetting of instance point-
ers Optional: perform finalization phases of run sequence, like deallocating memory and resetting instance point-
ers. Commonly used in sub-iterator executions. This is a virtual function; when re-implementing, a derived class
must call its nearest parent’s finalize_run(), typically _after_ performing its own implementation steps.
Reimplemented from Minimizer.
Reimplemented in SNLLOptimizer.
References Optimizer::optimizerInstance, and Optimizer::prevOptInstance.
Redefines default iterator results printing to include optimization results (objective functions and constraints).
Reimplemented from Iterator.
References Dakota::abort_handler(), Minimizer::archive_allocate_best(), Minimizer::archive_best(), Iter-
ator::bestResponseArray, Iterator::bestVariablesArray, Dakota::data_pairs, Model::interface_id(), Itera-
tor::iteratedModel, Dakota::lookup_by_val(), Iterator::numContinuousVars, Iterator::numFunctions, Min-
imizer::numNonlinearConstraints, Minimizer::optimizationFlag, Model::primary_response_fn_weights(),
Model::subordinate_model(), Dakota::write_data_partial(), and Dakota::write_precision.
Wrap iteratedModel in a RecastModel that performs a (weighted) multi-objective or sum-of-squared-residuals transformation. Reduces the model for a least-squares or multi-objective transformation; it does not map variables or secondary responses. Maps the active set for Gauss-Newton. Maps the primary responses to a single objective, so the distinction between user and iterated response counts matters.
References Iterator::activeSet, Model::assign_rep(), Model::current_response(), Minimizer::gnewton_-
set_recast(), Iterator::hessianType, Iterator::iteratedModel, Iterator::numContinuousVars, Mini-
mizer::numNonlinearConstraints, Minimizer::numNonlinearIneqConstraints, Optimizer::numObjectiveFns, Min-
imizer::numRowsExpData, Minimizer::numUserPrimaryFns, Minimizer::obsDataFlag, Iterator::outputLevel,
Optimizer::primary_resp_reducer(), Model::primary_response_fn_sense(), Model::primary_response_fn_-
weights(), ActiveSet::request_vector(), Response::reshape(), and Minimizer::secondary_resp_copier().
Referenced by Optimizer::Optimizer().
13.106.3.7 void primary_resp_reducer (const Variables & full_vars, const Variables & reduced_vars,
const Response & full_response, Response & reduced_response) [static, private]
Recast callback to reduce multiple objectives or residuals to a single objective, with gradients and Hessians as
needed. Objective function map from multiple primary responses (objective or residuals) to a single objective.
Currently a weighted sum is supported; more general transformations (e.g., goal-oriented) may be added later.
References Iterator::iteratedModel, Optimizer::objective_reduction(), Optimizer::optimizerInstance, It-
erator::outputLevel, Model::primary_response_fn_sense(), Model::primary_response_fn_weights(), and
Model::subordinate_model().
Referenced by Optimizer::reduce_model().
13.106.3.8 void objective_reduction (const Response & full_response, const BoolDeque & sense, const
RealVector & full_wts, Response & reduced_response) const [private]
forward mapping: maps multiple primary response functions to a single weighted objective for single-objective
optimizers This function is responsible for the mapping of multiple objective functions into a single objective
for publishing to single-objective optimizers. Used in DOTOptimizer, NPSOLOptimizer, SNLLOptimizer, and
SGOPTApplication on every function evaluation. The simple weighting approach (using primaryRespFnWts) is
the only technique supported currently. The weightings are used to scale function values, gradients, and Hessians
as needed.
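A minimal sketch of the weighted-sum reduction just described, scaling and summing function values into a single objective (reduce_objectives and its argument names are illustrative, not Dakota's; the sense flip for maximization is an assumption about how the BoolDeque is applied):

```cpp
#include <cassert>
#include <vector>
#include <cstddef>

// Weighted-sum reduction: map multiple primary response values to a single
// objective. A maximization sense negates that term before weighting, since
// minimizing -f maximizes f. By linearity the same weights would apply to
// gradients and Hessians.
double reduce_objectives(const std::vector<double>& fns,
                         const std::vector<bool>&   max_sense,
                         const std::vector<double>& weights)
{
  double obj = 0.0;
  for (std::size_t i = 0; i < fns.size(); ++i) {
    double f = max_sense[i] ? -fns[i] : fns[i];
    obj += weights[i] * f;
  }
  return obj;
}
```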
13.106.3.9 void local_objective_recast_retrieve (const Variables & vars, Response & response) const
[private]
infers MOO/NLS solution from the solution of a single-objective optimizer Retrieve a MOO/NLS response based
on the data returned by a single objective optimizer by performing a data_pairs search. This may get called even
for a single user-specified function, since we may be recasting a single NLS residual into a squared objective.
References Response::active_set(), Dakota::data_pairs, Response::function_value(), Model::interface_id(), Iter-
ator::iteratedModel, Dakota::lookup_by_val(), Minimizer::numRowsExpData, Minimizer::numUserPrimaryFns,
Minimizer::obsDataFlag, and Response::update().
Referenced by Optimizer::post_run().
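The cache-retrieval idea can be illustrated with a hypothetical miniature of a data_pairs-style lookup, keyed on interface id and parameter values (EvalCache and all of its members are invented for illustration and do not mirror Dakota's actual containers):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Evaluations are keyed by (interface id, parameter values) so that a best
// point reported by the optimizer can be matched back to the full response
// computed earlier in the run.
using EvalKey = std::pair<std::string, std::vector<double>>;

struct EvalCache {
  std::map<EvalKey, std::vector<double>> pairs; // key -> response values

  void store(const std::string& id, const std::vector<double>& vars,
             const std::vector<double>& resp)
  { pairs[{id, vars}] = resp; }

  // returns true and fills resp if this point was evaluated before
  bool lookup(const std::string& id, const std::vector<double>& vars,
              std::vector<double>& resp) const
  {
    auto it = pairs.find({id, vars});
    if (it == pairs.end()) return false;
    resp = it->second;
    return true;
  }
};
```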
The documentation for this class was generated from the following files:
• DakotaOptimizer.hpp
• DakotaOptimizer.cpp
• ∼ParallelConfiguration ()
destructor
Private Attributes
• short numParallelLevels
number of parallel levels
• ParLevLIter wPLIter
list iterator for MPI_COMM_WORLD (not strictly required, but improves modularity by avoiding explicit usage of
MPI_COMM_WORLD)
• ParLevLIter siPLIter
list iterator for concurrent iterator partitions (there may be more than one per parallel configuration instance)
• ParLevLIter iePLIter
list iterator identifying the iterator-evaluation parallelLevel (there can only be one)
• ParLevLIter eaPLIter
list iterator identifying the evaluation-analysis parallelLevel (there can only be one)
Friends
• class ParallelLibrary
the ParallelLibrary class has special access privileges in order to streamline implementation
Container class for a set of ParallelLevel list iterators that collectively identify a particular multilevel parallel
configuration. Rather than containing the multilevel parallel configuration directly, ParallelConfiguration instead
provides a set of list iterators which point into a combined list of ParallelLevels. This approach allows different
configurations to reuse ParallelLevels without copying them. A list of ParallelConfigurations is contained in
ParallelLibrary (ParallelLibrary::parallelConfigurations).
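The iterator-sharing scheme can be sketched as follows; Level, LevelIter, and Config are illustrative stand-ins for ParallelLevel, ParLevLIter, and ParallelConfiguration. Because std::list iterators remain valid under insertion, multiple configurations can safely point into one growing list of levels:

```cpp
#include <cassert>
#include <iterator>
#include <list>

// Two configurations reference entries of one shared std::list<Level>, so a
// level is stored once and pointed to many times rather than copied.
struct Level { int numServers; };
using LevelIter = std::list<Level>::iterator;
struct Config { LevelIter worldLevel; LevelIter evalLevel; };

bool levels_are_shared()
{
  std::list<Level> levels;
  levels.push_back({1});                    // world-level partition
  LevelIter world = levels.begin();

  levels.push_back({4});                    // evaluation level for config A
  Config a{world, std::prev(levels.end())};

  levels.push_back({8});                    // evaluation level for config B
  Config b{world, std::prev(levels.end())};

  // both configs reference the identical world-level object, not copies
  return &*a.worldLevel == &*b.worldLevel
      && a.evalLevel->numServers == 4 && b.evalLevel->numServers == 8;
}
```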
The documentation for this class was generated from the following file:
• ParallelLibrary.hpp
Inheritance diagram for ParallelDirectApplicInterface: Interface -> ApplicationInterface -> DirectApplicInterface -> ParallelDirectApplicInterface.
• ∼ParallelDirectApplicInterface ()
destructor
Sample derived interface class for testing parallel simulator plug-ins using assign_rep(). The plug-in Paral-
lelDirectApplicInterface resides in namespace SIM and uses a copy of textbook() to perform parallel parameter
to response mappings. It may be activated by specifying the --with-plugin configure option, which turns on the DAKOTA_PLUGIN macro in dakota_config.h used by main.cpp (enabling the plug-in code block within that file) and enables the PLUGIN_S declaration defined in Makefile.include and used in Makefile.source (which adds this class to the build). Test input files should then use an analysis_driver of "plugin_textbook".
The documentation for this class was generated from the following files:
• PluginParallelDirectApplicInterface.hpp
• PluginParallelDirectApplicInterface.cpp
• ∼ParallelLevel ()
destructor
Private Attributes
• bool dedicatedMasterFlag
signals dedicated master partitioning
• bool commSplitFlag
signals a communicator split was used
• bool serverMasterFlag
identifies master server processors
• bool messagePass
flag for message passing at this level
• int numServers
number of servers
• int procsPerServer
processors per server
• int procRemainder
proc remainder after equal distribution
• MPI_Comm serverIntraComm
intracomm. for each server partition
• int serverCommRank
rank in serverIntraComm
• int serverCommSize
size of serverIntraComm
• MPI_Comm hubServerIntraComm
intracomm for all serverCommRank==0 within the next higher level serverIntraComm
• int hubServerCommRank
rank in hubServerIntraComm
• int hubServerCommSize
size of hubServerIntraComm
• MPI_Comm hubServerInterComm
intercomm. between a server and the hub (on server partitions only)
• MPI_Comm ∗ hubServerInterComms
intercomm. array on hub processor
• int serverId
server identifier
Friends
• class ParallelLibrary
the ParallelLibrary class has special access privileges in order to streamline implementation
Container class for the data associated with a single level of communicator partitioning. A list of these levels
is contained in ParallelLibrary (ParallelLibrary::parallelLevels), which defines all of the parallelism levels across
one or more multilevel parallelism configurations.
The documentation for this class was generated from the following file:
• ParallelLibrary.hpp
• ParallelLibrary ()
default library mode constructor (assumes MPI_COMM_WORLD)
• ∼ParallelLibrary ()
destructor
• const ParallelLevel & init_iterator_communicators (const int &iterator_servers, const int &procs_-
per_iterator, const int &max_iterator_concurrency, const std::string &default_config, const std::string
&iterator_scheduling)
split MPI_COMM_WORLD into iterator communicators
• const ParallelLevel & init_evaluation_communicators (const int &evaluation_servers, const int &procs_-
per_evaluation, const int &max_evaluation_concurrency, const int &asynch_local_evaluation_concurrency,
const std::string &default_config, const std::string &evaluation_scheduling)
split an iterator communicator into evaluation communicators
• const ParallelLevel & init_analysis_communicators (const int &analysis_servers, const int &procs_per_-
analysis, const int &max_analysis_concurrency, const int &asynch_local_analysis_concurrency, const
std::string &default_config, const std::string &analysis_scheduling)
split an evaluation communicator into analysis communicators
• void free_iterator_communicators ()
deallocate iterator communicators
• void free_evaluation_communicators ()
deallocate evaluation communicators
• void free_analysis_communicators ()
deallocate analysis communicators
• void print_configuration ()
• void close_streams ()
close streams, files, and any other services
• void recv_si (int &recv_int, int source, int tag, MPI_Status &status)
blocking receive at the strategy-iterator communication level
• void isend_si (MPIPackBuffer &send_buff, int dest, int tag, MPI_Request &send_req)
nonblocking send at the strategy-iterator communication level
• void recv_si (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Status &status)
blocking receive at the strategy-iterator communication level
• void irecv_si (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Request &recv_req)
nonblocking receive at the strategy-iterator communication level
• void isend_ie (MPIPackBuffer &send_buff, int dest, int tag, MPI_Request &send_req)
nonblocking send at the iterator-evaluation communication level
• void recv_ie (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Status &status)
blocking receive at the iterator-evaluation communication level
• void irecv_ie (MPIUnpackBuffer &recv_buff, int source, int tag, MPI_Request &recv_req)
nonblocking receive at the iterator-evaluation communication level
• void isend_ea (int &send_int, int dest, int tag, MPI_Request &send_req)
nonblocking send at the evaluation-analysis communication level
• void recv_ea (int &recv_int, int source, int tag, MPI_Status &status)
blocking receive at the evaluation-analysis communication level
• void irecv_ea (int &recv_int, int source, int tag, MPI_Request &recv_req)
nonblocking receive at the evaluation-analysis communication level
• void barrier_w ()
enforce MPI_Barrier on MPI_COMM_WORLD
• void barrier_i ()
enforce MPI_Barrier on an iterator communicator
• void barrier_e ()
enforce MPI_Barrier on an evaluation communicator
• void barrier_a ()
enforce MPI_Barrier on an analysis communicator
• void waitsome (const int &num_sends, MPI_Request ∗&recv_requests, int &num_recvs, int ∗&index_-
array, MPI_Status ∗&status_array)
wait for at least one message from a series of nonblocking receives but complete all that are available
return worldSize
• bool parallel_configuration_is_complete ()
identifies if the current ParallelConfiguration has been fully populated
• void increment_parallel_configuration ()
add a new node to parallelConfigurations and increment currPCIter
• void initialize_timers ()
initialize DAKOTA and UTILIB timers
• void output_timers ()
conditionally output timers in destructor
• void init_communicators (const ParallelLevel &parent_pl, const int &num_servers, const int &procs_per_-
server, const int &max_concurrency, const int &asynch_local_concurrency, const std::string &default_-
config, const std::string &scheduling_override)
split a parent communicator into child server communicators
• bool resolve_inputs (int &num_servers, int &procs_per_server, const int &avail_procs, int &proc_-
remainder, const int &max_concurrency, const int &capacity_multiplier, const std::string &default_config,
const std::string &scheduling_override, bool print_rank)
resolve user inputs into a sensible partitioning scheme
• void send (MPIPackBuffer &send_buff, const int &dest, const int &tag, ParallelLevel &parent_pl, Paral-
lelLevel &child_pl)
blocking buffer send at the current communication level
• void send (int &send_int, const int &dest, const int &tag, ParallelLevel &parent_pl, ParallelLevel &child_-
pl)
blocking integer send at the current communication level
• void isend (MPIPackBuffer &send_buff, const int &dest, const int &tag, MPI_Request &send_req, ParallelLevel &parent_pl, ParallelLevel &child_pl)
nonblocking buffer send at the current communication level
• void isend (int &send_int, const int &dest, const int &tag, MPI_Request &send_req, ParallelLevel
&parent_pl, ParallelLevel &child_pl)
nonblocking integer send at the current communication level
• void recv (MPIUnpackBuffer &recv_buff, const int &source, const int &tag, MPI_Status &status, Paral-
lelLevel &parent_pl, ParallelLevel &child_pl)
blocking buffer receive at the current communication level
• void recv (int &recv_int, const int &source, const int &tag, MPI_Status &status, ParallelLevel &parent_pl,
ParallelLevel &child_pl)
blocking integer receive at the current communication level
• void irecv (MPIUnpackBuffer &recv_buff, const int &source, const int &tag, MPI_Request &recv_req,
ParallelLevel &parent_pl, ParallelLevel &child_pl)
nonblocking buffer receive at the current communication level
• void irecv (int &recv_int, const int &source, const int &tag, MPI_Request &recv_req, ParallelLevel
&parent_pl, ParallelLevel &child_pl)
nonblocking integer receive at the current communication level
• void reduce_sum (double ∗local_vals, double ∗sum_vals, const int &num_vals, const MPI_Comm &comm)
Private Attributes
• std::ofstream output_ofstream
tagged file redirection of stdout
• std::ofstream error_ofstream
tagged file redirection of stderr
• MPI_Comm dakotaMPIComm
MPI_Comm on which DAKOTA is running.
• int worldRank
rank in MPI_Comm in which DAKOTA is running
• int worldSize
size of MPI_Comm in which DAKOTA is running
• bool mpirunFlag
flag for a parallel mpirun/yod launch
• bool ownMPIFlag
flag for ownership of MPI_Init/MPI_Finalize
• bool dummyFlag
prevents multiple MPI_Finalize calls due to dummy_lib
• bool stdOutputToFile
flags redirection of DAKOTA std output to a file
• bool stdErrorToFile
flags redirection of DAKOTA std error to a file
• std::string startupMessage
cached startup message for use in check_inputs
• bool checkFlag
flags invocation with command line option -check
• bool preRunFlag
flags invocation with command line option -pre_run
• bool runFlag
flags invocation with command line option -run
• bool postRunFlag
flags invocation with command line option -post_run
• bool userModesFlag
whether user run modes are active
• bool outputTimings
timing info only beyond help/version/check
• std::string preRunInput
filename for pre_run input
• std::string preRunOutput
filename for pre_run output
• std::string runInput
filename for run input
• std::string runOutput
filename for run output
• std::string postRunInput
filename for post_run input
• std::string postRunOutput
filename for post_run output
• Real startCPUTime
start reference for UTILIB CPU timer
• Real startWCTime
start reference for UTILIB wall clock timer
• Real startMPITime
start reference for MPI wall clock timer
• long startClock
start reference for local clock() timer measuring parent+child CPU
• std::string stdOutputFilename
filename for redirection of stdout
• std::string stdErrorFilename
filename for redirection of stderr
• std::string readRestartFilename
input filename for restart
• std::string writeRestartFilename
output filename for restart
• int stopRestartEvals
number of evals at which to stop restart processing
• ParLevLIter currPLIter
list iterator identifying the current node in parallelLevels
• ParConfigLIter currPCIter
list iterator identifying the current node in parallelConfigurations
Class for partitioning multiple levels of parallelism and managing message passing within these levels. The
ParallelLibrary class encapsulates all of the details of performing message passing within multiple levels of par-
allelism. It provides functions for partitioning of levels according to user configuration input and functions for
passing messages within and across MPI communicators for each of the parallelism levels. If support for other
message-passing libraries beyond MPI becomes needed (PVM, ...), then ParallelLibrary would be promoted to a
base class with virtual functions to encapsulate the library-specific syntax.
stand-alone mode constructor This constructor is the one used by main.cpp. It calls MPI_Init conditionally based
on whether a parallel launch is detected.
References ParallelLibrary::detect_parallel_launch(), ParallelLibrary::init_mpi_comm(),
ParallelLibrary::initialize_timers(), ParallelLibrary::mpirunFlag, and ParallelLibrary::ownMPIFlag.
13.110.2.2 ParallelLibrary ()
default library mode constructor (assumes MPI_COMM_WORLD) This constructor provides a library mode
default ParallelLibrary. It does not call MPI_Init, but rather gathers data from MPI_COMM_WORLD if MPI_-
Init has been called elsewhere.
References ParallelLibrary::init_mpi_comm(), ParallelLibrary::initialize_timers(), and ParallelLi-
brary::mpirunFlag.
library mode constructor accepting communicator This constructor provides a library mode ParallelLibrary, ac-
cepting an MPI communicator that might not be MPI_COMM_WORLD. It does not call MPI_Init, but rather
gathers data from dakota_mpi_comm if MPI_Init has been called elsewhere.
References ParallelLibrary::init_mpi_comm(), ParallelLibrary::initialize_timers(), and ParallelLi-
brary::mpirunFlag.
dummy constructor (used for dummy_lib) This constructor is used for creation of the global dummy_lib object,
which is used to satisfy initialization requirements when the real ParallelLibrary object is not available.
specify output streams and restart file(s) using command line inputs (normal mode) On the rank 0 processor,
get the -output, -error, -read_restart, and -write_restart filenames and the -stop_restart limit from the command
line. The command line handler's defaults for these filenames are NULL, except for write_restart, which defaults to dakota.rst; read_restart_evals defaults to 0 when there is no user specification. This information is broadcast (Bcast) from rank 0 to all iterator masters in manage_outputs_restart().
References ParallelLibrary::assign_streams(), ParallelLibrary::manage_run_modes(),
CommandLineHandler::read_restart_evals(), ParallelLibrary::readRestartFilename, GetLongOpt::retrieve(),
ParallelLibrary::stdErrorFilename, ParallelLibrary::stdOutputFilename, ParallelLibrary::stopRestartEvals,
ParallelLibrary::worldRank, and ParallelLibrary::writeRestartFilename.
Referenced by main(), and run_dakota().
specify output streams and restart file(s) using external inputs (library mode). Rather than extracting from the
command line, pass the std output, std error, read restart, and write restart filenames and the stop restart limit
directly. This function only needs to be invoked to specify non-default values [defaults for the filenames are
NULL (resulting in no output redirection, no restart read, and default restart write) and 0 for the stop restart limit
(resulting in no restart read limit)].
13.110.3.3 void manage_outputs_restart (const ParallelLevel & pl, bool results_output = false,
std::string results_filename = std::string())
manage output streams and restart file(s) (both modes) If the user has specified the use of files for DAKOTA
standard output and/or standard error, then bind these filenames to the Cout/Cerr macros. In addition, if concurrent
iterators are to be used, create and tag multiple output streams in order to prevent jumbled output. Manage restart
file(s) by processing any incoming evaluations from an old restart file and by setting up the binary output stream
for new evaluations. Only master iterator processor(s) read & write restart information. This function must follow
init_iterator_communicators so that restart can be managed properly for concurrent iterator strategies. In the case
of concurrent iterators, each iterator has its own restart file tagged with iterator number.
References Dakota::abort_handler(), ParallelLibrary::bcast(), ParallelLibrary::checkFlag, Dakota::dakota_cerr,
Dakota::dakota_cout, Dakota::data_pairs, ParallelLevel::dedicatedMasterFlag, ParallelLibrary::error_ofstream,
ParamResponsePair::eval_id(), ParallelLevel::hubServerCommSize, ParallelLevel::hubServerIntraComm,
ResultsManager::initialize(), Dakota::iterator_results_db, ParallelLevel::numServers, ParallelLibrary::output_-
ofstream, ParallelLibrary::postRunFlag, ParallelLibrary::postRunInput, ParallelLibrary::postRunOutput,
ParallelLibrary::preRunFlag, ParallelLibrary::preRunInput, ParallelLibrary::preRunOutput, ParallelLi-
brary::readRestartFilename, ParallelLibrary::runFlag, ParallelLibrary::runInput, ParallelLibrary::runOutput, Par-
allelLevel::serverCommRank, ParallelLevel::serverId, ParallelLevel::serverMasterFlag, MPIPackBuffer::size(),
ParallelLibrary::stdErrorFilename, ParallelLibrary::stdErrorToFile, ParallelLibrary::stdOutputFilename, Par-
allelLibrary::stdOutputToFile, ParallelLibrary::stopRestartEvals, ParallelLibrary::userModesFlag, ParallelLi-
brary::worldRank, Dakota::write_restart, and ParallelLibrary::writeRestartFilename.
Referenced by Strategy::init_iterator_parallelism().
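The per-iterator stream tagging mentioned above can be sketched with a hypothetical helper (tag_output_file is not a Dakota function; the ".id" suffix convention is an assumption for illustration):

```cpp
#include <cassert>
#include <string>

// When concurrent iterators each redirect their output, the base filename is
// tagged with the server id so the streams do not interleave, e.g.
// "dakota.out" -> "dakota.out.2". A single stream needs no tag.
std::string tag_output_file(const std::string& base, int num_servers,
                            int server_id)
{
  if (num_servers <= 1) return base;
  return base + "." + std::to_string(server_id);
}
```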
close streams, files, and any other services Close streams associated with manage_outputs and manage_restart
and terminate any additional services that may be active.
References Dakota::abort_handler(), ParallelLibrary::currPCIter, Dakota::dakota_cerr, Dakota::dakota_cout,
Dakota::dc_ptr_int, ParallelLibrary::error_ofstream, Dakota::mc_ptr_int, ParallelLibrary::output_ofstream,
ParallelLibrary::parallelLevels, ParallelLevel::serverMasterFlag, ParallelLibrary::stdErrorToFile, ParallelLi-
brary::stdOutputToFile, and Dakota::write_restart.
Referenced by ParallelLibrary::∼ParallelLibrary().
add a new node to parallelConfigurations and increment currPCIter Called from the ParallelLibrary ctor and from
Model::init_communicators(). An increment is performed for each Model initialization except the first (which
inherits the world and strategy-iterator parallel levels from the first partial configuration).
References ParallelLibrary::currPCIter, ParallelConfiguration::eaPLIter, ParallelConfiguration::iePLIter, Parallel-
Configuration::numParallelLevels, ParallelLibrary::parallelConfigurations, ParallelLibrary::parallelLevels, Paral-
convenience function for initializing DAKOTA’s top-level MPI communicators, based on dakotaMPIComm
shared function for initializing based on passed MPI_Comm
References Dakota::abort_handler(), ParallelLibrary::currPLIter, Dakota::Dak_pl, ParallelLi-
brary::dakotaMPIComm, ParallelLibrary::increment_parallel_configuration(), ParallelLibrary::mpirunFlag,
ParallelLibrary::parallelLevels, ParallelLevel::serverCommRank, ParallelLevel::serverCommSize, Paral-
lelLevel::serverIntraComm, Dakota::start_dakota_heartbeat(), ParallelLibrary::startMPITime, ParallelLi-
brary::startupMessage, ParallelLibrary::worldRank, and ParallelLibrary::worldSize.
Referenced by ParallelLibrary::ParallelLibrary().
13.110.3.7 void init_communicators (const ParallelLevel & parent_pl, const int & num_servers, const int
& procs_per_server, const int & max_concurrency, const int & asynch_local_concurrency,
const std::string & default_config, const std::string & scheduling_override) [private]
split a parent communicator into child server communicators Split parent communicator into concurrent child
server partitions as specified by the passed parameters. This constructs new child intra-communicators and parent-
child inter-communicators. This function is called from the Strategy constructor for the concurrent iterator level
and from ApplicationInterface::init_communicators() for the concurrent evaluation and concurrent analysis levels.
References ParallelLevel::commSplitFlag, ParallelLibrary::currPCIter, ParallelLibrary::currPLIter, ParallelLevel::dedicatedMasterFlag, ParallelLevel::numServers, ParallelLibrary::parallelLevels, ParallelLevel::procRemainder, ParallelLevel::procsPerServer, ParallelLibrary::resolve_inputs(), ParallelLevel::serverCommRank, ParallelLevel::serverCommSize, ParallelLibrary::split_communicator_dedicated_master(), and ParallelLibrary::split_communicator_peer_partition().
Referenced by ParallelLibrary::init_analysis_communicators(), ParallelLibrary::init_evaluation_communicators(), and ParallelLibrary::init_iterator_communicators().
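The partitioning described above can be illustrated with a small sketch. This is not Dakota's actual implementation; it is a hypothetical helper (the name server_color is illustrative) that computes the "color" an MPI_Comm_split-style call would use to carve a parent communicator into child server intra-communicators, for both a dedicated-master scheme and a peer partition:

```cpp
// Hypothetical sketch, not Dakota's actual implementation: compute the
// "server color" for each processor rank so that an MPI_Comm_split-style
// call could carve the parent communicator into child server partitions.
// In a dedicated-master scheme, rank 0 schedules while the remaining ranks
// are grouped into servers; in a peer partition, all ranks join servers.
inline int server_color(int rank, int procs_per_server, bool dedicated_master)
{
  if (dedicated_master) {
    if (rank == 0)
      return -1;                           // master keeps its own color
    return (rank - 1) / procs_per_server;  // workers grouped into servers
  }
  return rank / procs_per_server;          // peer partition: no master
}
```

Ranks sharing a color would then share a child server intra-communicator, with inter-communicators connecting each server back to its parent.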
13.110.3.8 bool resolve_inputs (int & num_servers, int & procs_per_server, const int & avail_procs, int
& proc_remainder, const int & max_concurrency, const int & capacity_multiplier, const
std::string & default_config, const std::string & scheduling_override, bool print_rank)
[private]
resolve user inputs into a sensible partitioning scheme. This function is responsible for the "auto-configure" intelligence of DAKOTA. It resolves a variety of inputs and overrides into a sensible partitioning configuration for a particular parallelism level. It also handles the general case in which a user's specification request does not divide evenly into the number of available processors for the level. If num_servers and procs_per_server are both nondefault, then the former takes precedence.
References Dakota::strbegins().
Referenced by ParallelLibrary::init_communicators().
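A minimal sketch of such auto-configuration, under stated assumptions (the real resolve_inputs() handles many more overrides and precedence rules; the struct and function names here are illustrative): when the user specifies neither num_servers nor procs_per_server, size the partition from the available processors and the schedulable concurrency, then record the remainder that does not divide evenly.

```cpp
#include <algorithm>

// Hypothetical auto-configure sketch: bound the server count by the
// schedulable concurrency, divide the available processors among servers,
// and keep the remainder (which the real code spreads over servers).
struct Partition { int num_servers; int procs_per_server; int remainder; };

inline Partition auto_configure(int avail_procs, int max_concurrency,
                                int capacity_multiplier)
{
  // at most one server per unit of schedulable concurrency
  int max_servers =
      (max_concurrency + capacity_multiplier - 1) / capacity_multiplier;
  Partition p;
  p.num_servers      = std::min(avail_procs, max_servers);
  p.procs_per_server = avail_procs / p.num_servers;
  p.remainder        = avail_procs % p.num_servers; // extra procs to spread
  return p;
}
```

For example, 10 processors serving a concurrency of 4 would yield 4 servers of 2 processors each with 2 processors left over.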
13.110.3.9 void split_filenames (const char ∗ filenames, std::string & input_filename, std::string &
output_filename) [private]
split a double-colon-separated pair of filenames (possibly empty) into input and output filename strings. Tokenizes the delimited input and output filenames, returning the strings unchanged if the tokens are not found.
Referenced by ParallelLibrary::manage_run_modes().
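The described behavior can be sketched as follows, assuming "::" as the delimiter (the function name is illustrative, not Dakota's): split "input::output" into its two halves, leaving both output strings unchanged when the delimiter is absent.

```cpp
#include <string>

// Minimal sketch of the described tokenizing: split on "::", and leave
// both output strings unchanged if the delimiter is not found.
inline void split_filenames_sketch(const char* filenames,
                                   std::string& input_fn,
                                   std::string& output_fn)
{
  if (!filenames) return;
  std::string s(filenames);
  std::string::size_type pos = s.find("::");
  if (pos == std::string::npos) return;   // token not found: leave unchanged
  input_fn  = s.substr(0, pos);
  output_fn = s.substr(pos + 2);
}
```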
The documentation for this class was generated from the following files:
• ParallelLibrary.hpp
• ParallelLibrary.cpp
• ParamResponsePair (const Variables &vars, const String &interface_id, const Response &response, bool
deep_copy=false)
alternate constructor for temporaries
• ParamResponsePair (const Variables &vars, const String &interface_id, const Response &response, const
int eval_id, bool deep_copy=true)
standard constructor for history uses
• ∼ParamResponsePair ()
destructor
Private Attributes
• Variables prPairParameters
the set of parameters for the function evaluation
• Response prPairResponse
the response set for the function evaluation
• IntStringPair evalInterfaceIds
the evalInterfaceIds aggregate
Friends
Container class for a variables object, a response object, and an evaluation id. ParamResponsePair provides a container class that associates the input for a particular function evaluation (a variables object) with the output from that function evaluation (a response object), along with an evaluation identifier. This container defines the basic unit used in the data_pairs cache, in restart file operations, and in a variety of scheduling algorithm queues. With the advent of STL, replacement of arrays of this class with map<> and pair<> template constructs may be possible (using map<pair<int,String>, pair<Variables,Response> >, for example), assuming that deep copies, I/O, alternate constructors, etc., can be adequately addressed. Boost tuple<> may also be a candidate.
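The STL replacement suggested above can be illustrated with std::string standing in for Dakota's Variables and Response classes (the typedef and helper names are illustrative): key on an (eval id, interface id) pair and store a (variables, response) pair, mirroring map<pair<int,String>, pair<Variables,Response> >.

```cpp
#include <map>
#include <string>
#include <utility>

// Illustration only: std::string stands in for Variables and Response.
typedef std::pair<int, std::string>         EvalKey;    // (eval id, interface id)
typedef std::pair<std::string, std::string> VarsResp;   // (variables, response)
typedef std::map<EvalKey, VarsResp>         PRPCacheSketch;

// duplicate detection via the composite key
inline bool cached(const PRPCacheSketch& cache, int eval_id,
                   const std::string& interface_id)
{
  return cache.count(std::make_pair(eval_id, interface_id)) > 0;
}
```

Because std::pair orders lexicographically, the map stays sorted by evaluation id first and interface id second.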
13.111.2.1 ParamResponsePair (const Variables & vars, const String & interface_id, const Response &
response, bool deep_copy = false) [inline]
alternate constructor for temporaries. Uses of this constructor often employ the standard Variables and Response copy constructors to share representations, since this constructor is commonly used for search_pairs (which are local instantiations that go out of scope prior to any changes to values; i.e., they are not used for history).
13.111.2.2 ParamResponsePair (const Variables & vars, const String & interface_id, const Response &
response, const int eval_id, bool deep_copy = true) [inline]
standard constructor for history uses. Uses of this constructor often do not share representations, since deep copies are used when history mechanisms (e.g., data_pairs and beforeSynchCorePRPQueue) are involved.
read a ParamResponsePair object from a packed MPI buffer. interfaceId is omitted since the master processor retains interface ids and communicates asv and response data only with slaves.
References ParamResponsePair::evalInterfaceIds, ParamResponsePair::prPairParameters, and ParamResponsePair::prPairResponse.
write a ParamResponsePair object to a packed MPI buffer. interfaceId is omitted since the master processor retains interface ids and communicates asv and response data only with slaves.
References ParamResponsePair::evalInterfaceIds, ParamResponsePair::prPairParameters, and ParamResponsePair::prPairResponse.
the evalInterfaceIds aggregate. The function evaluation identifier (assigned from Interface::evalIdCntr) is paired with the interface used to generate the response object. Used in PRPCache id_vars_set_compare to prevent duplicate detection on results from different interfaces. evalInterfaceIds belongs here rather than in Response since some Response objects involve consolidation of several fn evals (e.g., Model::synchronize_derivatives()) that are not, in total, generated by a single interface. The prPair, on the other hand, is used for storage of all low-level fn evals that get evaluated in ApplicationInterface::map().
Referenced by ParamResponsePair::eval_id(), ParamResponsePair::eval_interface_ids(), ParamResponsePair::interface_id(), ParamResponsePair::operator=(), Dakota::operator==(), ParamResponsePair::read(), ParamResponsePair::read_annotated(), ParamResponsePair::write(), ParamResponsePair::write_annotated(), and ParamResponsePair::write_tabular().
The documentation for this class was generated from the following files:
• ParamResponsePair.hpp
• ParamResponsePair.cpp
Iterator
Analyzer
PStudyDACE
ParamStudy
• ∼ParamStudy ()
destructor
• void pre_run ()
pre-run portion of run_iterator (optional); re-implemented by Iterators which can generate all Variables (parameter
sets) a priori
• void extract_trends ()
Redefines the run_iterator virtual function for the PStudy/DACE branch.
• void post_input ()
read tabular data for post-run mode
• void vector_loop ()
performs the parameter study by sampling along a vector, starting from an initial point followed by numSteps increments along continuous/discrete step vectors
• void centered_loop ()
performs a number of plus and minus offsets for each parameter centered about an initial point
• void multidim_loop ()
performs a full factorial combination for all intersections defined by a set of multidimensional partitions
• void final_point_to_step_vector ()
compute step vectors from finalPoint, initial points, and numSteps
• void distribute_partitions ()
compute step vectors from {cont,discInt,discReal}VarPartitions and global bounds
• bool check_finite_bounds ()
check for finite variable bounds within iteratedModel, as required for computing partitions of finite ranges
• bool check_ranges_sets (const IntVector &c_steps, const IntVector &di_steps, const IntVector &dr_steps)
sanity check for centered parameter study
• bool check_sets (const IntVector &c_steps, const IntVector &di_steps, const IntVector &dr_steps)
sanity check for increments along int/real set dimensions
• void dsi_step (size_t di_index, int increment, const IntSet &values, Variables &vars)
helper function for performing a discrete step in an integer set variable
• void dsr_step (size_t dr_index, int increment, const RealSet &values, Variables &vars)
helper function for performing a discrete step in a real set variable
• void centered_header (const String &type, size_t var_index, int step, size_t hdr_index)
store a centered parameter study header within allHeaders
Private Attributes
• short pStudyType
internal code for parameter study type: LIST, VECTOR_SV, VECTOR_FP, CENTERED, or MULTIDIM
• size_t numEvals
total number of parameter study evaluations computed from specification
• RealVectorArray listCVPoints
array of continuous evaluation points for the list_parameter_study
• IntVectorArray listDIVPoints
array of discrete int evaluation points for the list_parameter_study
• RealVectorArray listDRVPoints
array of discrete real evaluation points for the list_parameter_study
• RealVector initialCVPoint
the continuous starting point for vector and centered parameter studies
• IntVector initialDIVPoint
the discrete integer starting point for vector and centered parameter studies
• RealVector initialDRVPoint
the discrete real starting point for vector and centered parameter studies
• RealVector contStepVector
the n-dimensional continuous increment
• IntVector discIntStepVector
the n-dimensional discrete value or index increment
• IntVector discRealStepVector
the n-dimensional discrete real index increment
• RealVector finalPoint
the ending point for vector_parameter_study (a specification option)
• int numSteps
the number of times continuous/discrete step vectors are applied for vector_parameter_study (a specification option)
• IntVector contStepsPerVariable
number of offsets in the plus and the minus direction for each continuous variable in a centered_parameter_study
• IntVector discIntStepsPerVariable
number of offsets in the plus and the minus direction for each discrete integer variable in a centered_parameter_study
• IntVector discRealStepsPerVariable
number of offsets in the plus and the minus direction for each discrete real variable in a centered_parameter_study
• UShortArray contVarPartitions
number of partitions for each continuous variable in a multidim_parameter_study
• UShortArray discIntVarPartitions
number of partitions for each discrete integer variable in a multidim_parameter_study
• UShortArray discRealVarPartitions
number of partitions for each discrete real variable in a multidim_parameter_study
Class for vector, list, centered, and multidimensional parameter studies. The ParamStudy class contains several algorithms for performing parameter studies of different types. The vector parameter study steps along an n-dimensional vector from an arbitrary initial point to an arbitrary final point in a specified number of steps. The centered parameter study performs a number of plus and minus offsets in each coordinate direction around a center point. A multidimensional parameter study fills an n-dimensional hypercube based on bounds and a specified number of partitions for each dimension. Finally, the list parameter study allows the user to specify a list of points to evaluate, which permits general parameter investigations that do not fit the structure of the vector, centered, or multidim parameter studies.
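The vector parameter study described above can be sketched as follows (a minimal illustration, not Dakota's vector_loop(); names are illustrative): starting from an initial point, apply an n-dimensional step vector numSteps times, producing numSteps + 1 evaluation points.

```cpp
#include <vector>

// Sketch of a vector parameter study: generate the sequence of points
// initial, initial + step, ..., initial + num_steps * step.
inline std::vector<std::vector<double> >
vector_study_points(const std::vector<double>& initial,
                    const std::vector<double>& step, int num_steps)
{
  std::vector<std::vector<double> > points;
  std::vector<double> current(initial);
  points.push_back(current);                 // the initial point itself
  for (int s = 0; s < num_steps; ++s) {
    for (std::size_t i = 0; i < current.size(); ++i)
      current[i] += step[i];                 // one increment per dimension
    points.push_back(current);
  }
  return points;
}
```

In the real class, final_point_to_step_vector() derives the step vector from finalPoint, the initial point, and numSteps before a loop of this shape evaluates each point.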
pre-run portion of run_iterator (optional); re-implemented by Iterators which can generate all Variables (parameter sets) a priori. The pre-run phase may optionally be reimplemented by a derived iterator; when not present, pre-run is likely integrated into the derived run function. This is a virtual function; when re-implementing, a derived class must call its nearest parent's pre_run(), if implemented, typically _before_ performing its own implementation steps.
Reimplemented from Iterator.
References Dakota::abort_handler(), SharedVariablesData::active_components_totals(), Analyzer::allHeaders, Analyzer::allVariables, ParamStudy::centered_loop(), Variables::continuous_variables(), ParamStudy::contStepsPerVariable, ParamStudy::contStepVector, ParamStudy::contVarPartitions, Dakota::copy_data(), Model::current_variables(), ParamStudy::discIntStepsPerVariable, ParamStudy::discIntStepVector, ParamStudy::discIntVarPartitions, ParamStudy::discRealStepsPerVariable, ParamStudy::discRealStepVector, ParamStudy::discRealVarPartitions, Variables::discrete_int_variables(), Variables::discrete_real_variables(), ParamStudy::distribute_partitions(), ParamStudy::final_point_to_step_vector(), ParamStudy::finalPoint, ParamStudy::initialCVPoint, ParamStudy::initialDIVPoint, ParamStudy::initialDRVPoint, Iterator::iteratedModel, ParamStudy::multidim_loop(), ParamStudy::numEvals, ParamStudy::numSteps, Iterator::outputLevel, ParamStudy::pStudyType, ParamStudy::sample(), Variables::shared_data(), ParamStudy::vector_loop(), Dakota::write_data(), and Dakota::write_ordered().
post-run portion of run_iterator (optional); verbose to print results; re-implemented by Iterators that can read all Variables/Responses and perform a final analysis phase in a standalone way. The post-run phase may optionally be reimplemented by a derived iterator; when not present, post-run is likely integrated into run. This is a virtual function; when re-implementing, a derived class must call its nearest parent's post_run(), typically _after_ performing its own implementation steps.
Reimplemented from Iterator.
References Analyzer::allResponses, Analyzer::allVariables, SensAnalysisGlobal::compute_correlations(),
PStudyDACE::pStudyDACESensGlobal, ParamStudy::pStudyType, and Iterator::subIteratorFlag.
13.112.2.3 bool load_distribute_points (const String & points_filename, bool annotated) [private]
Load points from a file and distribute them; this function manages construction of the temporary array.
References ParamStudy::distribute_list_of_points(), Iterator::numContinuousVars, Iterator::numDiscreteIntVars,
Iterator::numDiscreteRealVars, and Dakota::read_data_tabular().
Referenced by ParamStudy::ParamStudy().
The documentation for this class was generated from the following files:
• ParamStudy.hpp
• ParamStudy.cpp
predicate for comparing ONLY the interfaceId and Vars attributes of PRPair
The documentation for this struct was generated from the following file:
• PRPMultiIndex.hpp
Approximation
PecosApproximation
• ∼PecosApproximation ()
destructor
• void increment_order ()
increment OrthogPolyApproximation::approxOrder uniformly
• void compute_component_effects ()
Performs global sensitivity analysis using Sobol’ Indices by computing component (main and interaction) effects.
• void compute_total_effects ()
Performs global sensitivity analysis using Sobol’ Indices by computing total effects.
• void allocate_arrays ()
invoke Pecos::PolynomialApproximation::allocate_arrays()
• Real mean ()
return the mean of the expansion, treating all variables as random
• const Pecos::RealVector & mean_gradient (const Pecos::RealVector &x, const Pecos::SizetArray &dvv)
return the gradient of the expansion mean for a given parameter vector and given DVV, treating a subset of the
variables as random
• Real variance ()
return the variance of the expansion, treating all variables as random
• const Pecos::RealVector & variance_gradient (const Pecos::RealVector &x, const Pecos::SizetArray &dvv)
return the gradient of the expansion variance for a given parameter vector and given DVV, treating a subset of the
variables as random
• Real delta_mean ()
return the change in mean between two response expansions, treating all variables as random
return the change in mean between two response expansions, treating a subset of variables as random
• Real delta_std_deviation ()
return the change in standard deviation between two response expansions, treating all variables as random
• void compute_moments ()
compute moments up to the order supported by the Pecos polynomial approximation
• void build ()
builds the approximation from scratch
• void rebuild ()
rebuilds the approximation incrementally
• void restore ()
restores state prior to previous append()
• bool restore_available ()
queries availability of restoration for trial set
• size_t restoration_index ()
return index of trial set within restorable bookkeeping sets
• void finalize ()
finalize approximation by applying all remaining trial sets
• void store ()
store current approximation for later combination
Private Attributes
• Pecos::BasisApproximation pecosBasisApprox
the Pecos basis approximation, encompassing OrthogPolyApproximation and InterpPolyApproximation
• Pecos::PolynomialApproximation ∗ polyApproxRep
convenience pointer to representation
Derived approximation class for global basis polynomials. The PecosApproximation class provides a global
approximation based on basis polynomials. This includes orthogonal polynomials used for polynomial chaos
expansions and interpolation polynomials used for stochastic collocation.
builds the approximation from scratch. This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
Reimplemented from Approximation.
References PecosApproximation::pecosBasisApprox.
rebuilds the approximation incrementally. This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
Reimplemented from Approximation.
References PecosApproximation::pecosBasisApprox.
removes entries from the end of SurrogateData::{vars,resp}Data (last points appended, or as specified in args). This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
Reimplemented from Approximation.
References PecosApproximation::pecosBasisApprox.
restores state prior to previous append(). This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
Reimplemented from Approximation.
References PecosApproximation::pecosBasisApprox.
finalize approximation by applying all remaining trial sets. This is the common base class portion of the virtual fn and is insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class contribution.
Reimplemented from Approximation.
References PecosApproximation::pecosBasisApprox.
The documentation for this class was generated from the following files:
• PecosApproximation.hpp
• PecosApproximation.cpp
ProblemDescDB
NIDRProblemDescDB
• ∼ProblemDescDB ()
destructor
• void manage_inputs (const char ∗dakota_input_file, const char ∗parser_options=NULL, bool echo_input=true, void(∗callback)(void ∗)=NULL, void ∗callback_data=NULL)
invokes parse_inputs() to populate the problem description database and execute any callback function, broadcast() to propagate DB data to all processors, and post_process() to construct default variables/response vectors. This is an alternate API used by the file parsing mode in library_mode.cpp.
• void parse_inputs (const char ∗dakota_input_file, const char ∗parser_options=NULL, bool echo_input=true, void(∗callback)(void ∗)=NULL, void ∗callback_data=NULL)
parses the input file and populates the problem description database. This function reads from the dakota input
filename passed in and allows subsequent modifications to be done by a callback function. This API is used by
the mixed mode option in library_mode.cpp since it allows broadcast() and post_process() to be deferred until all
inputs have been provided.
verifies that there is at least one of each of the required keywords in the dakota input file. Used by parse_inputs().
• void broadcast ()
invokes send_db_buffer() and receive_db_buffer() to broadcast DB data across the processor allocation. Used by
manage_inputs().
• void post_process ()
post-processes the (minimal) input specification to assign default variables/responses specification arrays. Used by
manage_inputs().
• void lock ()
Locks the database in order to prevent data access when the list nodes may not be set properly. Unlocked by a set
nodes operation.
• void unlock ()
Explicitly unlocks the database. Use with care.
• void resolve_top_method ()
For a (default) strategy lacking a method pointer, this function is used to determine which of several potential
method specifications corresponds to the top method and then sets the list nodes accordingly.
• size_t get_db_method_node ()
return the index of the active node in dataMethodList
• size_t get_db_model_node ()
Protected Attributes
• DataStrategy strategySpec
the strategy specification (only one allowed) resulting from a call to strategy_kwhandler() or insert_node()
• size_t strategyCntr
counter for strategy specifications used in check_input
• void send_db_buffer ()
MPI send of a large buffer containing strategySpec and all objects in dataMethodList, dataModelList, dataVariablesList, dataInterfaceList, and dataResponsesList. Used by manage_inputs().
• void receive_db_buffer ()
MPI receive of a large buffer containing strategySpec and all objects in dataMethodList, dataModelList, dataVariablesList, dataInterfaceList, and dataResponsesList. Used by manage_inputs().
Private Attributes
• ParallelLibrary & parallelLib
reference to the parallel_lib object passed from main
• IteratorList iteratorList
list of iterator objects, one for each method specification
• ModelList modelList
list of model objects, one for each model specification
• VariablesList variablesList
list of variables objects, one for each variables specification
• InterfaceList interfaceList
list of interface objects, one for each interface specification
• ResponseList responseList
list of response objects, one for each responses specification
• bool methodDBLocked
prevents use of get_<type> retrieval and set_<type> update functions prior to setting the list node for the active
method specification
• bool modelDBLocked
prevents use of get_<type> retrieval and set_<type> update functions prior to setting the list node for the active
model specification
• bool variablesDBLocked
prevents use of get_<type> retrieval and set_<type> update functions prior to setting the list node for the active
variables specification
• bool interfaceDBLocked
prevents use of get_<type> retrieval and set_<type> update functions prior to setting the list node for the active
interface specification
• bool responsesDBLocked
prevents use of get_<type> retrieval and set_<type> update functions prior to setting the list node for the active
responses specification
• ProblemDescDB ∗ dbRep
• int referenceCount
number of objects sharing dbRep
Friends
• class Model
Model requires access to get_variables() and get_response().
• class SingleModel
SingleModel requires access to get_interface().
• class HierarchSurrModel
HierarchSurrModel requires access to get_model().
• class DataFitSurrModel
DataFitSurrModel requires access to get_iterator() and get_model().
• class NestedModel
NestedModel requires access to get_interface(), get_response(), get_iterator(), and get_model().
• class Strategy
Strategy requires access to get_iterator().
• class SingleMethodStrategy
SingleMethodStrategy requires access to get_model().
• class HybridStrategy
HybridStrategy requires access to get_model().
• class SequentialHybridStrategy
SequentialHybridStrategy requires access to get_iterator().
• class ConcurrentStrategy
ConcurrentStrategy requires access to get_model().
• class SurrBasedLocalMinimizer
SurrBasedLocalMinimizer requires access to get_iterator().
• class SurrBasedGlobalMinimizer
SurrBasedGlobalMinimizer requires access to get_iterator().
The database containing information parsed from the DAKOTA input file. The ProblemDescDB class is a database for DAKOTA input file data that is populated by a parser defined in a derived class. When the parser reads a complete keyword, it populates a data class object (DataStrategy, DataMethod, DataVariables, DataInterface, or DataResponses) and, for all cases except strategy, appends the object to a linked list (dataMethodList, dataVariablesList, dataInterfaceList, or dataResponsesList). No strategy linked list is used since only one strategy specification is allowed.
13.116.2.1 ProblemDescDB ()
default constructor. The default constructor: dbRep is NULL in this case. This makes it necessary to check for NULL in the copy constructor, assignment operator, and destructor.
standard constructor. This is the envelope constructor which uses problem_db to build a fully populated db object. It only needs to extract enough data to properly execute get_db(problem_db), since the constructor overloaded with BaseConstructor builds the actual base class data inherited by the derived classes.
References Dakota::abort_handler(), ProblemDescDB::dbRep, and ProblemDescDB::get_db().
copy constructor. The copy constructor manages sharing of dbRep and incrementing of referenceCount.
References ProblemDescDB::dbRep, and ProblemDescDB::referenceCount.
13.116.2.4 ∼ProblemDescDB ()
destructor. The destructor decrements referenceCount and only deletes dbRep when referenceCount reaches zero.
References Dakota::Dak_pddb, ProblemDescDB::dbRep, and ProblemDescDB::referenceCount.
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion in the derived class constructors - Coplien, p. 139). This constructor is the one which must build the base class data for all derived classes. get_db() instantiates a derived class letter, and the derived constructor selects this base class constructor in its initialization list (to avoid the recursion of the base class constructor calling get_db() again). Since the letter IS the representation, its representation pointer is set to NULL (an uninitialized pointer causes problems in ∼ProblemDescDB).
assignment operator. The assignment operator decrements referenceCount for the old dbRep, assigns the new dbRep, and increments referenceCount for the new dbRep.
References ProblemDescDB::dbRep, and ProblemDescDB::referenceCount.
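The envelope-letter idiom with manual reference counting that these constructors, destructor, and assignment operator describe can be sketched minimally as follows (names are illustrative, not Dakota's; modern code would typically use std::shared_ptr instead):

```cpp
// Minimal envelope-letter sketch with manual reference counting.
struct Letter { int refCount; Letter() : refCount(1) {} };

class Envelope {
public:
  Envelope() : rep(0) {}                       // default: rep is NULL
  explicit Envelope(Letter* l) : rep(l) {}     // take ownership of a letter
  Envelope(const Envelope& e) : rep(e.rep) {   // copy: share rep
    if (rep) ++rep->refCount;
  }
  Envelope& operator=(const Envelope& e) {     // decrement old, share new
    if (rep != e.rep) {
      release();
      rep = e.rep;
      if (rep) ++rep->refCount;
    }
    return *this;
  }
  ~Envelope() { release(); }                   // delete rep at count zero
  int count() const { return rep ? rep->refCount : 0; }
private:
  void release() { if (rep && --rep->refCount == 0) delete rep; }
  Letter* rep;
};
```

The NULL checks mirror the documented behavior: a default-constructed envelope has no letter, so every operation that touches rep must guard against NULL.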
invokes manage_inputs(const char∗, ...) using the dakota input filename passed with the "-input" option on the DAKOTA command line. This is the normal API employed in main.cpp. Manages command line inputs using the CommandLineHandler class and parses the input file.
References ProblemDescDB::dbRep, ProblemDescDB::manage_inputs(), ProblemDescDB::parallelLib, GetLongOpt::retrieve(), and ParallelLibrary::world_rank().
Referenced by main(), ProblemDescDB::manage_inputs(), run_dakota(), and run_dakota_parse().
13.116.3.3 void manage_inputs (const char ∗ dakota_input_file, const char ∗ parser_options = NULL,
bool echo_input = true, void(∗)(void ∗) callback = NULL, void ∗ callback_data = NULL)
invokes parse_inputs() to populate the problem description database and execute any callback function, broad-
cast() to propagate DB data to all processors, and post_process() to construct default variables/response vectors.
This is an alternate API used by the file parsing mode in library_mode.cpp. Parse the input file, broadcast it to all
processors, and post-process the data on all processors.
References ProblemDescDB::broadcast(), ProblemDescDB::dbRep, ProblemDescDB::manage_inputs(),
ProblemDescDB::parse_inputs(), and ProblemDescDB::post_process().
13.116.3.4 void parse_inputs (const char ∗ dakota_input_file, const char ∗ parser_options = NULL, bool
echo_input = true, void(∗)(void ∗) callback = NULL, void ∗ callback_data = NULL)
parses the input file and populates the problem description database. This function reads from the dakota input
filename passed in and allows subsequent modifications to be done by a callback function. This API is used by
the mixed mode option in library_mode.cpp since it allows broadcast() and post_process() to be deferred until
all inputs have been provided. Parse the input file, execute the callback function (if present), and perform basic
checks on keyword counts.
References ProblemDescDB::check_input(), ProblemDescDB::dbRep, ProblemDescDB::derived_parse_inputs(),
ProblemDescDB::parallelLib, ProblemDescDB::parse_inputs(), and ParallelLibrary::world_rank().
Referenced by ProblemDescDB::manage_inputs(), ProblemDescDB::parse_inputs(), and run_dakota_mixed().
post-processes the (minimal) input specification to assign default variables/responses specification arrays. Used
by manage_inputs(). When using library mode in a parallel application, post_process() should be called on all
processors following broadcast() of a minimal problem specification.
Used by the envelope constructor to instantiate the correct letter class. Initializes dbRep to the appropriate derived
type. The standard derived class constructors are invoked.
References Dakota::Dak_pddb.
Referenced by ProblemDescDB::ProblemDescDB().
The documentation for this class was generated from the following files:
• ProblemDescDB.hpp
• ProblemDescDB.cpp
Interface
ApplicationInterface
ProcessApplicInterface
ProcessHandleApplicInterface SysCallApplicInterface
• ∼ProcessApplicInterface ()
destructor
• void write_parameters_files (const Variables &vars, const ActiveSet &set, const Response &response, const
int id)
write the parameters data and response request data to one or more parameters files (using one or more invocations
of write_parameters_file()) in either standard or aprepro format
• void read_results_files (Response &response, const int id, const String &eval_id_tag)
read the response object from one or more results files using full eval_id_tag passed
Protected Attributes
• bool fileTagFlag
flags tagging of parameter/results files
• bool fileSaveFlag
flags retention of parameter/results files
• bool commandLineArgs
flag indicating that filenames are passed as command line arguments to the analysis drivers and input/output filters
• bool apreproFlag
flag indicating use of the APREPRO (the Sandia "A PRE PROcessor" utility) format for parameter files
• bool multipleParamsFiles
flag indicating the need for separate parameters files for multiple analysis drivers
• std::string iFilterName
the name of the input filter (input_filter user specification)
• std::string oFilterName
the name of the output filter (output_filter user specification)
• std::string specifiedParamsFileName
the name of the parameters file from user specification
• std::string paramsFileName
the parameters file name actually used (modified with tagging or temp files)
• std::string specifiedResultsFileName
the name of the results file from user specification
• std::string resultsFileName
the results file name actually used (modified with tagging or temp files)
• std::string fullEvalId
complete evalIdTag, possibly including hierarchical tagging and final eval id, but not program numbers, for passing
to write_parameters_files
• bool allowExistingResults
by default analysis code interfaces delete results files if they exist; the user may override with this flag, in which case Dakota attempts to gather existing results and only forks if needed
• std::string curWorkdir
working directory when useWorkdir is true
• bool useWorkdir
whether to use a new or specified work_directory
• std::string workDir
the work directory name, if specified
• bool dirTag
whether to tag the working directory
• bool dirSave
whether dir_save was specified
• bool dirDel
whether to delete the directory when Dakota terminates
• bool haveWorkdir
for dirTag, whether we have workDir
• std::string templateDir
• StringArray templateFiles
template files (if specified)
• bool templateCopy
whether to force a copy (versus link) every time
• bool templateReplace
whether to replace existing files
• bool haveTemplateDir
state variable for template directory
• std::string dakDir
Dakota directory (if needed).
Private Attributes
• String2DArray analysisComponents
the set of optional analysis components used by the analysis drivers (from the analysis_components interface specification)
Derived application interface class that spawns a simulation code using a separate process and communicates with
it through files. ProcessApplicInterface is subclassed for process handles or file completion testing.
13.117.2.1 void synchronous_local_analyses (int start, int end, int step) [inline, protected]
execute analyses synchronously on the local processor. Executes analyses synchronously in succession on the local processor (start to end in step increments). Modeled after ApplicationInterface::synchronous_local_evaluations().
References ApplicationInterface::synchronous_local_analysis().
Referenced by ProcessHandleApplicInterface::create_evaluation_process().
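The start-to-end-in-step-increments traversal described above reduces to a simple loop; this sketch (run_order is an illustrative stand-in, with the blocking per-analysis call represented by recording the analysis id) shows the order in which analysis jobs would be executed:

```cpp
#include <vector>

// Sketch of the start/end/step loop: visit analysis ids start..end
// (inclusive) in step increments, one blocking call after another.
inline std::vector<int> run_order(int start, int end, int step)
{
  std::vector<int> order;
  for (int a = start; a <= end; a += step)
    order.push_back(a);   // stand-in for the blocking analysis call
  return order;
}
```

The step stride allows multiple processors to interleave: e.g., with two local workers, one runs analyses 1, 3, 5 while the other runs 2, 4, 6.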
The documentation for this class was generated from the following files:
• ProcessApplicInterface.hpp
• ProcessApplicInterface.cpp
Inheritance diagram for ProcessHandleApplicInterface::
Interface
ApplicationInterface
ProcessApplicInterface
ProcessHandleApplicInterface
ForkApplicInterface, SpawnApplicInterface
• ∼ProcessHandleApplicInterface ()
destructor
• void serve_analyses_asynch ()
serve the analysis scheduler and execute analysis jobs asynchronously
• void ifilter_argument_list ()
set argList for execution of the input filter
• void ofilter_argument_list ()
set argList for execution of the output filter
Protected Attributes
• std::map< pid_t, int > evalProcessIdMap
map of fork process id’s to function evaluation id’s for asynchronous evaluations
Derived application interface class that spawns a simulation code using a separate process, receives a process
identifier, and communicates with the spawned process through files. ProcessHandleApplicInterface is subclassed
for fork/execvp/waitpid (Unix) and spawnvp (Windows).
This code provides the derived function used by ApplicationInterface::serve_analyses_synch() as well as a convenience function for ProcessHandleApplicInterface::synchronous_local_analyses() below.
Reimplemented from ApplicationInterface.
References ProcessHandleApplicInterface::create_analysis_process(), and ProcessHandleApplicInterface::driver_argument_list().
No derived interface plug-ins, so perform construct-time checks. However, report process initialization issues as warnings, since some contexts (e.g., HierarchSurrModel) initialize more configurations than will be used.
Reimplemented from ApplicationInterface.
References ApplicationInterface::check_multiprocessor_analysis(), and ApplicationInterface::check_multiprocessor_asynchronous().
Manage the input filter, 1 or more analysis programs, and the output filter in blocking or nonblocking mode as
governed by block_flag. In the case of a single analysis and no filters, a single fork is performed, while in other
cases, an initial fork is reforked multiple times. Called from derived_map() with block_flag == BLOCK and
from derived_map_asynch() with block_flag == FALL_THROUGH. Uses create_analysis_process() to spawn
individual program components within the function evaluation.
Implements ProcessApplicInterface.
References Dakota::abort_handler(), ProcessHandleApplicInterface::analysis_process_group_id(),
ApplicationInterface::analysisServerId, ApplicationInterface::asynchLocalAnalysisConcurrency,
ApplicationInterface::asynchLocalAnalysisFlag, ProcessHandleApplicInterface::asynchronous_local_analyses(),
ParallelLibrary::barrier_e(), ProcessApplicInterface::commandLineArgs,
ProcessHandleApplicInterface::create_analysis_process(), ProcessHandleApplicInterface::driver_argument_list(),
ApplicationInterface::eaDedMasterFlag, ApplicationInterface::evalCommRank,
ApplicationInterface::evalCommSize, ProcessHandleApplicInterface::evalProcessIdMap,
ProcessHandleApplicInterface::evaluation_process_group_id(), ProcessHandleApplicInterface::ifilter_argument_list(),
ProcessApplicInterface::iFilterName, ProcessHandleApplicInterface::join_evaluation_process_group(),
ApplicationInterface::master_dynamic_schedule_analyses(), ProcessApplicInterface::multipleParamsFiles,
ApplicationInterface::numAnalysisDrivers, ApplicationInterface::numAnalysisServers,
ProcessHandleApplicInterface::ofilter_argument_list(), ProcessApplicInterface::oFilterName,
ApplicationInterface::parallelLib, ProcessApplicInterface::paramsFileName,
ProcessApplicInterface::programNames, ProcessApplicInterface::resultsFileName,
ProcessHandleApplicInterface::serve_analyses_asynch(), ApplicationInterface::serve_analyses_synch(),
ApplicationInterface::suppressOutput, and ProcessApplicInterface::synchronous_local_analyses().
check the exit status of a forked process and abort if an error code was returned. Check to see if the process
terminated abnormally (WIFEXITED(status)==0) or if either execvp or the application returned a status code
of -1 (WIFEXITED(status)!=0 && (signed char)WEXITSTATUS(status)==-1). If one of these conditions is
detected, output a failure message and abort. Note: the application code should not return a status code of -1
unless an immediate abort of Dakota is wanted. If, for instance, failure capturing is to be used, the application
code should write the word "FAIL" to the appropriate results file and return a status code of 0 through exit().
References Dakota::abort_handler().
Referenced by ForkApplicInterface::create_analysis_process(), SpawnApplicInterface::test_local_analyses_send(),
SpawnApplicInterface::test_local_evaluations(), ForkApplicInterface::wait(),
SpawnApplicInterface::wait_local_analyses(), and SpawnApplicInterface::wait_local_evaluations().
13.118.2.6 void asynchronous_local_analyses (int start, int end, int step) [protected]
execute analyses asynchronously on the local processor. Schedule analyses asynchronously on the local processor
using a dynamic scheduling approach (start to end in step increments). Concurrency is limited by
asynchLocalAnalysisConcurrency. Modeled after ApplicationInterface::asynchronous_local_evaluations(). NOTE:
This function should be elevated to ApplicationInterface if and when another derived interface class supports
asynchronous local analyses.
References Dakota::abort_handler(), ProcessHandleApplicInterface::analysisProcessIdMap,
ApplicationInterface::asynchLocalAnalysisConcurrency, ProcessHandleApplicInterface::create_analysis_process(),
ProcessHandleApplicInterface::driver_argument_list(), ApplicationInterface::numAnalysisDrivers, and
ProcessHandleApplicInterface::wait_local_analyses().
Referenced by ProcessHandleApplicInterface::create_evaluation_process().
serve the analysis scheduler and execute analysis jobs asynchronously. This code runs multiple asynch analyses
on each server. It is modeled after ApplicationInterface::serve_evaluations_asynch(). NOTE: This fn should be
elevated to ApplicationInterface if and when another derived interface class supports hybrid analysis parallelism.
References Dakota::abort_handler(), ProcessHandleApplicInterface::analysisProcessIdMap,
ApplicationInterface::asynchLocalAnalysisConcurrency, ProcessHandleApplicInterface::create_analysis_process(),
ProcessHandleApplicInterface::driver_argument_list(), ParallelLibrary::irecv_ea(),
ApplicationInterface::numAnalysisDrivers, ApplicationInterface::parallelLib, ParallelLibrary::recv_ea(),
ParallelLibrary::test(), and ProcessHandleApplicInterface::test_local_analyses_send().
Referenced by ProcessHandleApplicInterface::create_evaluation_process().
The documentation for this class was generated from the following files:
• ProcessHandleApplicInterface.hpp
• ProcessHandleApplicInterface.cpp
Inheritance diagram for PStudyDACE::
Iterator
Analyzer
PStudyDACE
• ∼PStudyDACE ()
destructor
• void run ()
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post
Protected Attributes
• SensAnalysisGlobal pStudyDACESensGlobal
initialize statistical post processing
• bool volQualityFlag
• bool varBasedDecompFlag
flag which specifies calculating variance based decomposition sensitivity analysis metrics
Private Attributes
• double chiMeas
quality measure
• double dMeas
quality measure
• double hMeas
quality measure
• double tauMeas
quality measure
Base class for managing common aspects of parameter studies and design of experiments methods. The
PStudyDACE base class manages common data and functions, such as those involving the best solutions located
during the parameter set evaluations or the printing of final results.
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post. Virtual run function for the iterator class hierarchy. All derived classes need to redefine it.
Reimplemented from Iterator.
References Analyzer::bestVarsRespMap, and PStudyDACE::extract_trends().
print the final iterator results. This virtual function provides additional iterator-specific final results outputs
beyond the function evaluation summary printed in finalize_run().
Reimplemented from Analyzer.
Calculation of volumetric quality measures developed by FSU.
References PStudyDACE::chiMeas, PStudyDACE::dMeas, PStudyDACE::hMeas, and PStudyDACE::tauMeas.
Referenced by FSUDesignCompExp::get_parameter_sets(), and DDACEDesignCompExp::get_parameter_sets().
The documentation for this class was generated from the following files:
• DakotaPStudyDACE.hpp
• DakotaPStudyDACE.cpp
Inheritance diagram for PSUADEDesignCompExp::
Iterator
Analyzer
PStudyDACE
PSUADEDesignCompExp
• ∼PSUADEDesignCompExp ()
destructor
• void pre_run ()
pre-run portion of run_iterator (optional); re-implemented by Iterators which can generate all Variables (parameter
sets) a priori
• void post_input ()
read tabular data for post-run mode
• void extract_trends ()
Redefines the run_iterator virtual function for the PStudy/DACE branch.
Private Attributes
• int samplesSpec
initial specification of number of samples
• int numSamples
current number of samples to be evaluated
• int numPartitions
number of partitions to pass to PSUADE (levels = partitions + 1)
• bool allDataFlag
flag which triggers the update of allVars/allResponses for use by Iterator::all_variables() and
Iterator::all_responses()
• size_t numDACERuns
counter for number of run() executions for this object
• bool varyPattern
flag for generating a sequence of seed values within multiple get_parameter_sets() calls so that the sample sets are
not repeated, but are still repeatable
• int randomSeed
current seed for the random number generator
Wrapper class for the PSUADE library. The PSUADEDesignCompExp class provides a wrapper for PSUADE,
a C++ design of experiments library from Lawrence Livermore National Laboratory. Currently this class only
includes the PSUADE Morris One-at-a-time (MOAT) method to uniformly sample the parameter space spanned
by the active bounds of the current Model. It returns all generated samples and their corresponding responses as
well as the best sample found.
primary constructor for building a standard DACE iterator. This constructor is called for a standard iterator built
with data from probDescDB.
References Dakota::abort_handler(), Iterator::maxConcurrency, Iterator::methodName, and
PSUADEDesignCompExp::numSamples.
pre-run portion of run_iterator (optional); re-implemented by Iterators which can generate all Variables (parameter
sets) a priori. Pre-run phase, which a derived iterator may optionally reimplement; when not present, pre-run is
likely integrated into the derived run function. This is a virtual function; when re-implementing, a derived class
must call its nearest parent's pre_run(), if implemented, typically _before_ performing its own implementation
steps.
Reimplemented from Iterator.
References PSUADEDesignCompExp::get_parameter_sets(), and Iterator::iteratedModel.
post-run portion of run_iterator (optional); verbose to print results; re-implemented by Iterators that can read all
Variables/Responses and perform final analysis phase in a standalone way. Post-run phase, which a derived
iterator may optionally reimplement; when not present, post-run is likely integrated into run. This is a virtual
function; when re-implementing, a derived class must call its nearest parent's post_run(), typically _after_
performing its own implementation steps.
Reimplemented from Iterator.
References Dakota::abort_handler(), Analyzer::allResponses, Analyzer::allSamples,
Model::continuous_lower_bounds(), Model::continuous_upper_bounds(), Iterator::iteratedModel,
Iterator::numContinuousVars, Iterator::numFunctions, and PSUADEDesignCompExp::numSamples.
get the current number of samples. Return the current number of evaluation points. Since the calculation of
samples, collocation points, etc. might be costly, provide a default implementation here that backs out from the
maxConcurrency. May be (is) overridden by derived classes.
Reimplemented from Iterator.
References PSUADEDesignCompExp::numSamples.
enforce sanity checks/modifications for the user input specification. Users may input a variety of quantities, but
this function must enforce any restrictions imposed by the sampling algorithms.
References Dakota::abort_handler(), Iterator::methodName, Iterator::numContinuousVars,
PSUADEDesignCompExp::numPartitions, PSUADEDesignCompExp::numSamples, and
PSUADEDesignCompExp::varPartitionsSpec.
Referenced by PSUADEDesignCompExp::get_parameter_sets().
The documentation for this class was generated from the following files:
• PSUADEDesignCompExp.hpp
• PSUADEDesignCompExp.cpp
Inheritance diagram for PythonInterface::
Interface
ApplicationInterface
DirectApplicInterface
PythonInterface
• ∼PythonInterface ()
destructor
• bool python_convert (const RealVector &c_src, const IntVector &di_src, const RealVector &dr_src, PyObject ∗∗dst)
convert RealVector + IntVector + RealVector to Python mixed list or numpy double array
convert labels
• bool python_convert (const StringMultiArray &c_src, const StringMultiArray &di_src, const StringMultiArray &dr_src, PyObject ∗∗dst)
convert all labels to single list
Protected Attributes
• bool userNumpyFlag
whether the user requested numpy data structures in the input file
Specialization of DirectApplicInterface to link to Python analysis drivers. Includes convenience functions to map
data to/from Python.
execute an analysis code portion of a direct evaluation invocation. Python specialization of derived analysis
components.
Reimplemented from DirectApplicInterface.
References ApplicationInterface::analysisServerId, and PythonInterface::python_run().
13.121.2.2 bool python_convert_int (const ArrayT & src, Size sz, PyObject ∗∗ dst) [inline,
protected]
convert arrays of integer types to Python list or numpy array. Converts all integer array types, including IntVector,
ShortArray, and SizetArray, to a Python list of ints or a numpy array of ints.
References PythonInterface::userNumpyFlag.
Referenced by PythonInterface::python_run().
The documentation for this class was generated from the following files:
• PythonInterface.hpp
• PythonInterface.cpp
Dummy struct for overloading constructors used in on-the-fly Model instantiations. RecastBaseConstructor is
used to overload the constructor used for on-the-fly Model instantiations. Putting this struct here avoids circular
dependencies.
The documentation for this struct was generated from the following file:
• dakota_global_defs.hpp
Inheritance diagram for RecastModel::
Model
RecastModel
• ∼RecastModel ()
destructor
• void transform_set (const Variables &recast_vars, const ActiveSet &recast_set, ActiveSet &sub_model_set)
transforms recast_set into sub_model_set for use with subModel.
• void transform_response (const Variables &recast_vars, const Variables &sub_model_vars, const Response
&sub_model_resp, Response &recast_resp)
perform transformation of Response (sub-model --> recast)
return subModel
• void build_approximation ()
builds the subModel approximation
• void update_approximation (const Variables &vars, const IntResponsePair &response_pr, bool rebuild_flag)
replaces data in the subModel approximation
• void append_approximation (const Variables &vars, const IntResponsePair &response_pr, bool rebuild_flag)
appends data to the subModel approximation
• void restore_approximation ()
restore a previous approximation data state within a surrogate
• bool restore_available ()
query for whether a trial increment is restorable within a surrogate
• void finalize_approximation ()
finalize an approximation by applying all previous trial increments
• void store_approximation ()
move the current approximation into storage for later combination
• String local_eval_synchronization ()
return subModel local synchronization setting
• int local_eval_concurrency ()
return subModel local evaluation concurrency
• void derived_init_serial ()
set up RecastModel for serial operations (request forwarded to subModel).
• void stop_servers ()
executed by the master to terminate subModel server operations when RecastModel iteration is complete.
• void set_evaluation_reference ()
set the evaluation counter reference points for the RecastModel (request forwarded to subModel)
• void fine_grained_evaluation_counters ()
request fine-grained evaluation reporting within subModel
• void update_from_sub_model ()
update current variables/labels/bounds/targets from subModel
Private Attributes
• Model subModel
the sub-model underlying the function pointers
• Sizet2DArray varsMapIndices
For each subModel variable, identifies the indices of the recast variables used to define it (maps RecastModel
variables to subModel variables; data is packed with only the variable indices employed rather than a sparsely
filled N_sm x N_r matrix).
• bool nonlinearVarsMapping
boolean set to true if the variables mapping involves a nonlinear transformation. Used in transform_set() to manage
the requirement for gradients within the Hessian transformations. This does not require a BoolDeque for each
individual variable, since response gradients and Hessians are managed per function, not per variable.
• bool respMapping
set to true if non-NULL primaryRespMapping or secondaryRespMapping are supplied
• Sizet2DArray primaryRespMapIndices
For each recast primary function, identifies the indices of the subModel functions used to define it (maps subModel
response to RecastModel Response).
• Sizet2DArray secondaryRespMapIndices
For each recast secondary function, identifies the indices of the subModel functions used to define it (maps subModel
response to RecastModel response).
• BoolDequeArray nonlinearRespMapping
array of BoolDeques, one for each recast response function. Each BoolDeque defines which subModel response
functions contribute to the recast function using a nonlinear mapping. Used in transform_set() to augment the
subModel function value/gradient requirements.
• IntActiveSetMap recastSetMap
map of recast active set passed to derived_asynch_compute_response(). Needed for currentResponse update in
synchronization routines.
• IntVariablesMap recastVarsMap
map of recast variables used by derived_asynch_compute_response(). Needed for primaryRespMapping() and
secondaryRespMapping() in synchronization routines.
• IntVariablesMap subModelVarsMap
map of subModel variables used by derived_asynch_compute_response(). Needed for primaryRespMapping() and
secondaryRespMapping() in synchronization routines.
• IntResponseMap recastResponseMap
map of recast responses used by RecastModel::derived_synchronize() and RecastModel::derived_synchronize_-
nowait()
• void(∗ setMapping )(const Variables &recast_vars, const ActiveSet &recast_set, ActiveSet &sub_model_set)
holds pointer for set mapping function passed in ctor/initialize
• void(∗ primaryRespMapping )(const Variables &sub_model_vars, const Variables &recast_vars, const Response &sub_model_response, Response &recast_response)
holds pointer for primary response mapping function passed in ctor/initialize
• void(∗ invPriRespMapping )(const Variables &recast_vars, const Variables &sub_model_vars, const Response &recast_resp, Response &sub_model_resp)
holds pointer for optional inverse primary response mapping function passed in inverse_mappings()
• void(∗ invSecRespMapping )(const Variables &recast_vars, const Variables &sub_model_vars, const Response &recast_resp, Response &sub_model_resp)
holds pointer for optional inverse secondary response mapping function passed in inverse_mappings()
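The gradient requirement recorded by nonlinearVarsMapping follows from the chain rule for second derivatives. As a sketch (notation introduced here, not Dakota's), let $x$ be the recast variables, $u = g(x)$ the sub-model variables, and $f(u)$ a sub-model response; then

$$\nabla_x^2 f \;=\; J_g^T \, \nabla_u^2 f \, J_g \;+\; \sum_k \frac{\partial f}{\partial u_k} \, \nabla_x^2 g_k ,$$

where $J_g$ is the Jacobian of the variables mapping. For a linear mapping the second term vanishes ($\nabla_x^2 g_k = 0$); for a nonlinear mapping, the sub-model gradients $\partial f / \partial u_k$ must also be requested in order to assemble recast Hessians, which is what transform_set() manages.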
Derived model class which provides a thin wrapper around a sub-model in order to recast the form of its inputs
and/or outputs. The RecastModel class uses function pointers to allow recasting of the subModel input/output
into new problem forms. This is currently used to recast SBO approximate subproblems, but can be used for
multiobjective recasting, input/output scaling, and other problem modifications in the future.
13.123.2.1 RecastModel (const Model & sub_model, const Sizet2DArray & vars_map_indices,
const SizetArray & vars_comps_totals, bool nonlinear_vars_mapping, void(∗)(const
Variables &recast_vars, Variables &sub_model_vars) variables_map, void(∗)(const
Variables &recast_vars, const ActiveSet &recast_set, ActiveSet &sub_model_set)
set_map, const Sizet2DArray & primary_resp_map_indices, const Sizet2DArray &
secondary_resp_map_indices, size_t recast_secondary_offset, const BoolDequeArray &
nonlinear_resp_mapping, void(∗)(const Variables &sub_model_vars, const Variables
&recast_vars, const Response &sub_model_response, Response &recast_response)
primary_resp_map, void(∗)(const Variables &sub_model_vars, const Variables &recast_vars,
const Response &sub_model_response, Response &recast_response) secondary_resp_map)
standard constructor. This default RecastModel constructor requires full definition of the transformation.
Parameter vars_comps_totals indicates the number of each type of variable {4 types} x {3 domains} in the recast
variable space. Note: recast_secondary_offset is the start index for equality constraints, typically the number of
nonlinear inequality constraints.
References Dakota::abort_handler(), Constraints::copy(), Response::copy(), Variables::copy(),
Model::current_response(), Model::current_variables(), Model::currentResponse, Model::currentVariables,
Variables::cv(), Response::function_gradients(), Response::function_hessians(),
RecastModel::initialize_data_from_submodel(), RecastModel::nonlinearRespMapping,
Response::num_functions(), Model::num_functions(), Constraints::num_linear_eq_constraints(),
Constraints::num_linear_ineq_constraints(), Constraints::num_nonlinear_eq_constraints(),
Constraints::num_nonlinear_ineq_constraints(), Model::numDerivVars, Model::numFns,
RecastModel::primaryRespMapIndices, RecastModel::primaryRespMapping, Constraints::reshape(),
Response::reshape(), RecastModel::respMapping, RecastModel::secondaryRespMapIndices,
RecastModel::secondaryRespMapping, RecastModel::subModel, Model::user_defined_constraints(),
Model::userDefinedConstraints, Variables::variables_components_totals(), RecastModel::variablesMapping,
and Variables::view().
13.123.2.2 RecastModel (const Model & sub_model, const SizetArray & vars_comps_totals, size_t
num_recast_primary_fns, size_t num_recast_secondary_fns, size_t recast_secondary_offset)
alternate constructor. This alternate constructor defers initialization of the function pointers until a separate call
to initialize(), and accepts the minimum information needed to construct currentVariables, currentResponse, and
userDefinedConstraints. The resulting model is sufficiently complete for passing to an Iterator. Parameter
vars_comps_totals indicates the number of each type of variable {4 types} x {3 domains} in the recast variable
space. Note: recast_secondary_offset is the start index for equality constraints, typically num nonlinear ineq
constraints.
References Constraints::copy(), Response::copy(), Variables::copy(), Model::current_response(),
Model::current_variables(), Model::currentResponse, Model::currentVariables, Variables::cv(),
Response::function_gradients(), Response::function_hessians(), and
RecastModel::initialize_data_from_submodel().
completes initialization of the RecastModel after alternate construction. This function is used for late
initialization of the recasting functions. It is used in concert with the alternate constructor.
References Dakota::abort_handler(), RecastModel::nonlinearRespMapping,
RecastModel::nonlinearVarsMapping, RecastModel::primaryRespMapIndices, RecastModel::primaryRespMapping,
RecastModel::respMapping, RecastModel::secondaryRespMapIndices, RecastModel::secondaryRespMapping,
RecastModel::setMapping, RecastModel::variablesMapping, and RecastModel::varsMapIndices.
Referenced by EffGlobalMinimizer::minimize_surrogates_on_model(), NonDLocalReliability::mpp_search(),
NonDGlobalReliability::optimize_gaussian_process(), NonDLocalInterval::quantify_uncertainty(),
NonDGlobalInterval::quantify_uncertainty(), and Minimizer::scale_model().
13.123.3.3 void eval_tag_prefix (const String & eval_id_str) [inline, protected, virtual]
set the hierarchical eval ID tag prefix. RecastModel just forwards any tags to its subModel.
Reimplemented from Model.
update current variables/labels/bounds/targets from subModel. Update inactive values and labels in
currentVariables and inactive bound constraints in userDefinedConstraints from variables and constraints data
within subModel.
References Model::aleatDistParams, Model::aleatory_distribution_parameters(),
Model::continuous_lower_bounds(), Constraints::continuous_lower_bounds(), Model::continuous_upper_bounds(),
Constraints::continuous_upper_bounds(), Model::continuous_variable_labels(),
Variables::continuous_variable_labels(), Model::continuous_variables(), Variables::continuous_variables(),
Model::currentResponse, Model::currentVariables, Model::discrete_design_set_int_values(),
Model::discrete_design_set_real_values(), Model::discrete_int_lower_bounds(),
Constraints::discrete_int_lower_bounds(), Model::discrete_int_upper_bounds(),
Constraints::discrete_int_upper_bounds(), Model::discrete_int_variable_labels(),
Variables::discrete_int_variable_labels(), Model::discrete_int_variables(), Variables::discrete_int_variables(),
Model::discrete_real_lower_bounds(), Constraints::discrete_real_lower_bounds(),
Model::discrete_real_upper_bounds(), Constraints::discrete_real_upper_bounds(),
Model::discrete_real_variable_labels(), Variables::discrete_real_variable_labels(),
Model::discrete_real_variables(), Variables::discrete_real_variables(), Model::discrete_state_set_int_values(),
Model::discrete_state_set_real_values(), Model::discreteDesignSetIntValues, Model::discreteDesignSetRealValues,
Model::discreteStateSetIntValues, Model::discreteStateSetRealValues, Model::epistDistParams,
Model::epistemic_distribution_parameters(), Response::function_label(),
Model::inactive_continuous_lower_bounds(), Constraints::inactive_continuous_lower_bounds(),
Model::inactive_continuous_upper_bounds(), Constraints::inactive_continuous_upper_bounds(),
Model::inactive_continuous_variable_labels(), Variables::inactive_continuous_variable_labels(),
Model::inactive_continuous_variables(), Variables::inactive_continuous_variables(),
Model::inactive_discrete_int_lower_bounds(), Constraints::inactive_discrete_int_lower_bounds(),
Model::inactive_discrete_int_upper_bounds(), Constraints::inactive_discrete_int_upper_bounds(),
Model::inactive_discrete_int_variable_labels(), Variables::inactive_discrete_int_variable_labels(),
Model::inactive_discrete_int_variables(), Variables::inactive_discrete_int_variables(),
Model::inactive_discrete_real_lower_bounds(), Constraints::inactive_discrete_real_lower_bounds(),
Model::inactive_discrete_real_upper_bounds(), Constraints::inactive_discrete_real_upper_bounds(),
Model::inactive_discrete_real_variable_labels(), Variables::inactive_discrete_real_variable_labels(),
Model::inactive_discrete_real_variables(), Variables::inactive_discrete_real_variables(),
Model::linear_eq_constraint_coeffs(), Constraints::linear_eq_constraint_coeffs(),
Model::linear_eq_constraint_targets(), Constraints::linear_eq_constraint_targets(),
Model::linear_ineq_constraint_coeffs(), Constraints::linear_ineq_constraint_coeffs(),
Model::linear_ineq_constraint_lower_bounds(), Constraints::linear_ineq_constraint_lower_bounds(),
Model::linear_ineq_constraint_upper_bounds(), Constraints::linear_ineq_constraint_upper_bounds(),
Model::nonlinear_eq_constraint_targets(), Constraints::nonlinear_eq_constraint_targets(),
Model::nonlinear_ineq_constraint_lower_bounds(), Constraints::nonlinear_ineq_constraint_lower_bounds(),
Model::nonlinear_ineq_constraint_upper_bounds(), Constraints::nonlinear_ineq_constraint_upper_bounds(),
Model::num_functions(), Model::num_linear_eq_constraints(), Model::num_linear_ineq_constraints(),
Model::num_nonlinear_eq_constraints(), Constraints::num_nonlinear_eq_constraints(),
Model::num_nonlinear_ineq_constraints(), Constraints::num_nonlinear_ineq_constraints(), Model::numFns,
Model::primary_response_fn_sense(), Model::primary_response_fn_weights(), Model::primaryRespFnSense,
Model::primaryRespFnWts, RecastModel::primaryRespMapping, Model::response_labels(),
RecastModel::secondaryRespMapping, RecastModel::subModel, Model::userDefinedConstraints, and
RecastModel::variablesMapping.
Referenced by RecastModel::update_from_subordinate_model().
The documentation for this class was generated from the following files:
• RecastModel.hpp
• RecastModel.cpp
Derived class within the Constraints hierarchy which employs relaxation of discrete variables. Inheritance dia-
gram for RelaxedVarConstraints::
Constraints
RelaxedVarConstraints
• ∼RelaxedVarConstraints ()
destructor
• void build_active_views ()
construct active views of all variables bounds arrays
• void build_inactive_views ()
construct inactive views of all variables bounds arrays
Derived class within the Constraints hierarchy which employs relaxation of discrete variables. Derived variable
constraints classes take different views of the design, uncertain, and state variable types and the continuous and
discrete domain types. The RelaxedVarConstraints derived class combines continuous and discrete domain types
through integer relaxation. The branch and bound method uses this approach (see
Variables::get_variables(problem_db) for variables type selection; variables type is passed to the Constraints
constructor in Model).
standard constructor. In this class, a relaxed data approach is used in which continuous and discrete arrays are
combined into a single continuous array (integrality is relaxed; the converse of truncating reals is not currently
supported but could be in the future if needed). Iterators which use this class include: BranchBndOptimizer.
References Constraints::allContinuousLowerBnds, Constraints::allContinuousUpperBnds,
Constraints::build_views(), SharedVariablesData::components_totals(), Dakota::copy_data_partial(),
ProblemDescDB::get_iv(), ProblemDescDB::get_rv(), Constraints::manage_linear_constraints(),
Dakota::merge_data_partial(), Constraints::sharedVarsData, and SharedVariablesData::vc_lookup().
reshape the lower/upper bound arrays within the Constraints hierarchy. Resizes the derived bounds arrays.
Reimplemented from Constraints.
References Constraints::allContinuousLowerBnds, Constraints::allContinuousUpperBnds, and
Constraints::build_views().
Referenced by RelaxedVarConstraints::RelaxedVarConstraints().
The documentation for this class was generated from the following files:
• RelaxedVarConstraints.hpp
• RelaxedVarConstraints.cpp
Inheritance diagram for RelaxedVariables::
Variables
RelaxedVariables
• ∼RelaxedVariables ()
destructor
• void build_active_views ()
construct active views of all variables arrays
• void build_inactive_views ()
Derived class within the Variables hierarchy which employs the relaxation of discrete variables. Derived variables
classes take different views of the design, uncertain, and state variable types and the continuous and discrete
domain types. The RelaxedVariables derived class combines continuous and discrete domain types but separates
design, uncertain, and state variable types. The branch and bound method uses this approach (see
Variables::get_variables(problem_db)).
13.125.2.1 RelaxedVariables (const ProblemDescDB & problem_db, const std::pair< short, short > &
view)
standard constructor. In this class, a relaxed data approach is used in which continuous and discrete arrays are
combined into a single continuous array (integrality is relaxed; the converse of truncating reals is not currently
supported but could be in the future if needed). Iterators/strategies which use this class include:
BranchBndOptimizer. Extract fundamental variable types and labels and merge continuous and discrete domains
to create aggregate arrays and views.
References Variables::allContinuousVars, Variables::build_views(), SharedVariablesData::components_totals(),
Dakota::copy_data_partial(), ProblemDescDB::get_iv(), ProblemDescDB::get_rv(), Dakota::merge_data_partial(),
Variables::sharedVarsData, SharedVariablesData::vc_lookup(), and SharedVariablesData::view().
The documentation for this class was generated from the following files:
• RelaxedVariables.hpp
• RelaxedVariables.cpp
• ∼Response ()
destructor
• RealVector function_values_view ()
return all function values as a view for updating in place
• RealMatrix function_gradients_view ()
• RealSymMatrixArray function_hessians_view ()
return all function Hessians as Teuchos::Views (shallow copies) for updating in place
• int data_size ()
handle class forward to corresponding body class member function
• void update (const RealVector &source_fn_vals, const RealMatrix &source_fn_grads, const RealSymMatrixArray &source_fn_hessians, const ActiveSet &source_set)
Overloaded form which allows update from components of a response object. Care is taken to allow different
derivative array sizing.
• void update_partial (size_t start_index_target, size_t num_items, const Response &response, size_t start_index_source)
partial update of this response object from another response object. The response objects may have different
numbers of response functions.
• void reshape (size_t num_fns, size_t num_params, bool grad_flag, bool hess_flag)
• void reset ()
handle class forward to corresponding body class member function
• void reset_inactive ()
handle class forward to corresponding body class member function
Private Attributes
• ResponseRep ∗ responseRep
pointer to the body (handle-body idiom)
Friends
• bool operator== (const Response &resp1, const Response &resp2)
equality operator
Container class for response functions and their derivatives. Response provides the handle class. The Response
class is a container class for an abstract set of functions (functionValues) and their first (functionGradients) and
second (functionHessians) derivatives. The functions may involve objective and constraint functions (optimization
data set), least squares terms (parameter estimation data set), or generic response functions (uncertainty quantifi-
cation data set). It is not currently part of a class hierarchy, since the abstraction has been sufficiently general and
has not required specialization. For memory efficiency, it employs the "handle-body idiom" approach to reference
counting and representation sharing (see Coplien "Advanced C++", p. 58), for which Response serves as the
handle and ResponseRep serves as the body.
13.126.2.1 Response ()
default constructor Need a populated problem description database to build a meaningful Response object, so responseRep is set to NULL in the default constructor for efficiency. This then requires a check for NULL in the copy constructor, assignment operator, and destructor.
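A minimal sketch of the reference-counted handle-body pattern described above, including the NULL-aware copy/assign/destroy logic (class names simplified; not Dakota's actual implementation):

```cpp
#include <cassert>

// Minimal handle-body sketch with manual reference counting, mirroring the
// Response/ResponseRep arrangement described above. Illustrative only.
class Rep {                      // the "body": owns the data
public:
    int referenceCount = 1;      // number of handles sharing this body
    double value = 0.0;
};

class Handle {                   // the "handle": manages sharing
public:
    Handle() : rep(nullptr) {}   // default ctor leaves the body NULL (cheap)
    explicit Handle(double v) : rep(new Rep) { rep->value = v; }
    Handle(const Handle& h) : rep(h.rep)        // shallow copy: share body
    { if (rep) ++rep->referenceCount; }         // NULL check required
    Handle& operator=(const Handle& h) {
        if (h.rep) ++h.rep->referenceCount;     // guard self-assignment
        release();
        rep = h.rep;
        return *this;
    }
    ~Handle() { release(); }
    int count() const { return rep ? rep->referenceCount : 0; }
private:
    void release() { if (rep && --rep->referenceCount == 0) delete rep; }
    Rep* rep;
};
```

A deep copy (for history mechanisms) would instead allocate a fresh Rep and copy the data into it.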
The documentation for this class was generated from the following files:
• DakotaResponse.hpp
• DakotaResponse.cpp
• ∼ResponseRep ()
destructor
• int data_size ()
return the number of doubles active in response. Used for sizing double∗ response_data arrays passed into read_data and write_data.
• void update (const RealVector &source_fn_vals, const RealMatrix &source_fn_grads, const RealSymMatrixArray &source_fn_hessians, const ActiveSet &source_set)
update this response object from components of another response object
• void reshape (size_t num_fns, size_t num_params, bool grad_flag, bool hess_flag)
reshapes response data arrays
• void reset ()
resets all response data to zero
• void reset_inactive ()
resets all inactive response data to zero
Private Attributes
• int referenceCount
number of handle objects sharing responseRep
• RealVector functionValues
abstract set of response functions
• RealMatrix functionGradients
first derivatives of the response functions
• RealSymMatrixArray functionHessians
second derivatives of the response functions
• ActiveSet responseActiveSet
copy of the ActiveSet used by the Model to generate a Response instance
• StringArray functionLabels
response function identifiers used to improve output readability
• String responsesId
response identifier string from the input file
Friends
• class Response
the handle class can access attributes of the body class directly
Container class for response functions and their derivatives. ResponseRep provides the body class. The ResponseRep class is the "representation" of the response container class. It is the "body" portion of the "handle-body idiom" (see Coplien "Advanced C++", p. 58). The handle class (Response) provides for memory efficiency in management of multiple response objects through reference counting and representation sharing. The body class (ResponseRep) actually contains the response data (functionValues, functionGradients, functionHessians, etc.). The representation is hidden in that an instance of ResponseRep may only be created by Response. Therefore, programmers create instances of the Response handle class, and only need to be aware of the handle/body mechanisms when it comes to managing shallow copies (shared representation) versus deep copies (separate representation used for history mechanisms).
13.127.2.1 ResponseRep (const Variables & vars, const ProblemDescDB & problem_db) [private]
standard constructor built from problem description database The standard constructor used by
Dakota::ModelRep.
alternate constructor using limited data Used for building a response object of the correct size on the fly (e.g., by slave analysis servers performing execute() on a local_response). functionLabels is not needed for this purpose since it is not passed in the MPI send/recv buffers. However, NPSOLOptimizer's user-defined functions option uses this constructor to build bestResponseArray.front(), which needs functionLabels for I/O, so construction of functionLabels has been added.
References Dakota::build_labels(), ResponseRep::functionGradients, ResponseRep::functionHessians, and ResponseRep::functionLabels.
read a responseRep object from an std::istream ASCII version of read needs capabilities for capturing data omissions or formatting errors (resulting from user error or asynch race condition) and analysis failures (resulting from nonconvergence, instability, etc.).
References ResponseRep::functionGradients, ResponseRep::functionHessians, ResponseRep::functionValues,
Dakota::re_match(), Dakota::read_col_vector_trans(), ResponseRep::read_data(), ActiveSet::request_vector(),
ResponseRep::reset(), and ResponseRep::responseActiveSet.
Referenced by Response::read().
read a responseRep object from an std::istream (annotated format) read_annotated() is used for neutral file translation of restart files. Since objects are built solely from this data, annotations are used. This version closely mirrors the BiStream version.
References ResponseRep::functionGradients, ResponseRep::functionHessians, ResponseRep::functionLabels,
ResponseRep::functionValues, Dakota::read_col_vector_trans(), Dakota::read_lower_triangle(),
write a responseRep object to an std::ostream (annotated format) write_annotated() is used for neutral file translation of restart files. Since objects need to be built solely from this data, annotations are used. This version closely mirrors the BoStream version, with the exception of the use of white space between fields.
References Dakota::array_write_annotated(), ActiveSet::derivative_vector(), ResponseRep::functionGradients,
ResponseRep::functionHessians, ResponseRep::functionLabels, ResponseRep::functionValues,
ActiveSet::request_vector(), ResponseRep::responseActiveSet, ActiveSet::write_annotated(), Dakota::write_col_vector_trans(), and Dakota::write_lower_triangle().
Referenced by Response::write_annotated().
read functionValues from an std::istream (tabular format) read_tabular is used to read functionValues in tabular
format. It is currently only used by ApproximationInterfaces in reading samples from a file. There is insufficient
data in a tabular file to build complete response objects; rather, the response object must be constructed a priori
and then its functionValues can be set.
References ResponseRep::functionValues.
Referenced by Response::read_tabular().
write functionValues to an std::ostream (tabular format) write_tabular is used for output of functionValues in a
tabular format for convenience in post-processing/plotting of DAKOTA results.
References ResponseRep::functionValues, ActiveSet::request_vector(), ResponseRep::responseActiveSet, and
Dakota::write_precision.
Referenced by Response::write_tabular().
read a responseRep object from a binary stream The binary version differs from the ASCII version in two primary ways: (1) it lacks formatting; (2) the Response has not been sized a priori. In reading data from the binary restart file, a ParamResponsePair was constructed with its default constructor, which called the Response default constructor. Therefore, we must first read sizing data and resize all of the arrays.
References ResponseRep::functionGradients, ResponseRep::functionHessians, ResponseRep::functionLabels,
ResponseRep::functionValues, Dakota::read_col_vector_trans(), Dakota::read_lower_triangle(),
ActiveSet::request_vector(), ResponseRep::reset(), ResponseRep::reshape(), and ResponseRep::responseActiveSet.
write a responseRep object to a binary stream The binary version differs from the ASCII version in two primary ways: (1) it lacks formatting; (2) in reading data from the binary restart file, ParamResponsePairs are constructed with their default constructor, which calls the Response default constructor. Therefore, we must first write sizing data so that ResponseRep::read(BoStream& s) can resize the arrays.
References ActiveSet::derivative_vector(), ResponseRep::functionGradients, ResponseRep::functionHessians,
ResponseRep::functionLabels, ResponseRep::functionValues, ActiveSet::request_vector(), ResponseRep::responseActiveSet, Dakota::write_col_vector_trans(), and Dakota::write_lower_triangle().
read a responseRep object from a packed MPI buffer The UnpackBuffer version differs from the BiStream version in the omission of functionLabels. The master processor retains labels and interface ids and communicates asv and response data only with slaves.
References ResponseRep::functionGradients, ResponseRep::functionHessians, ResponseRep::functionValues,
Dakota::read_col_vector_trans(), Dakota::read_lower_triangle(), ResponseRep::reset(), ResponseRep::reshape(),
and ResponseRep::responseActiveSet.
write a responseRep object to a packed MPI buffer MPIPackBuffer version differs from BoStream version only in
the omission of functionLabels. The master processor retains labels and ids and communicates asv and response
data only with slaves.
References ActiveSet::derivative_vector(), ResponseRep::functionGradients, ResponseRep::functionHessians,
ResponseRep::functionValues, ActiveSet::request_vector(), ResponseRep::responseActiveSet, Dakota::write_col_vector_trans(), and Dakota::write_lower_triangle().
13.127.3.11 void update (const RealVector & source_fn_vals, const RealMatrix & source_fn_grads,
const RealSymMatrixArray & source_fn_hessians, const ActiveSet & source_set)
[private]
update this response object from components of another response object Copy function values/gradients/Hessians
data _only_. Prevents unwanted overwriting of responseActiveSet, functionLabels, etc. Also, care is taken to
account for differences in derivative variable matrix sizing.
References Dakota::abort_handler(), ActiveSet::derivative_vector(), ResponseRep::functionGradients, ResponseRep::functionHessians, ResponseRep::functionValues, ActiveSet::request_vector(), ResponseRep::reset_inactive(), and ResponseRep::responseActiveSet.
Referenced by Response::update().
13.127.3.12 void update_partial (size_t start_index_target, size_t num_items, const RealVector &
source_fn_vals, const RealMatrix & source_fn_grads, const RealSymMatrixArray &
source_fn_hessians, const ActiveSet & source_set, size_t start_index_source) [private]
partially update this response object from partial components of another response object Copy function values/gradients/Hessians data _only_. Prevents unwanted overwriting of responseActiveSet, functionLabels, etc. Also, care is taken to account for differences in derivative variable matrix sizing.
References Dakota::abort_handler(), ActiveSet::derivative_vector(), ResponseRep::functionGradients, ResponseRep::functionHessians, ResponseRep::functionValues, ActiveSet::request_vector(), ResponseRep::reset_inactive(), and ResponseRep::responseActiveSet.
Referenced by Response::update_partial().
13.127.3.13 void reshape (size_t num_fns, size_t num_params, bool grad_flag, bool hess_flag)
[private]
reshapes response data arrays Reshape functionValues, functionGradients, and functionHessians according to num_fns, num_params, grad_flag, and hess_flag.
References Dakota::build_labels(), ActiveSet::derivative_vector(), ResponseRep::functionGradients, ResponseRep::functionHessians, ResponseRep::functionLabels, ResponseRep::functionValues, ActiveSet::request_vector(), ActiveSet::reshape(), and ResponseRep::responseActiveSet.
Referenced by ResponseRep::active_set_derivative_vector(), ResponseRep::read(), ResponseRep::read_annotated(), and Response::reshape().
resets all response data to zero Reset all numerical response data (not labels, ids, or active set) to zero.
References ResponseRep::functionGradients, ResponseRep::functionHessians, and ResponseRep::functionValues.
Referenced by ResponseRep::read(), ResponseRep::read_annotated(), and Response::reset().
resets all inactive response data to zero Used to clear out any inactive data left over from previous evaluations.
References ResponseRep::functionGradients, ResponseRep::functionHessians, ResponseRep::functionValues,
ActiveSet::request_vector(), and ResponseRep::responseActiveSet.
Referenced by Response::reset_inactive(), ResponseRep::update(), and ResponseRep::update_partial().
first derivatives of the response functions the gradient vectors (plural) are column vectors in the matrix (singular)
with (row, col) = (variable index, response fn index).
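The (row, col) = (variable index, response fn index) convention can be illustrated with a minimal stand-in for the Teuchos matrix (the struct below is illustrative, not Dakota's actual RealMatrix):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative layout of the gradient matrix described above: column j holds
// the gradient vector of response function j, so entry (i, j) is
// d f_j / d x_i. A plain row-major buffer stands in for the Teuchos matrix.
struct GradMatrix {
    std::size_t numVars, numFns;
    std::vector<double> data;                       // numVars x numFns
    GradMatrix(std::size_t nv, std::size_t nf)
        : numVars(nv), numFns(nf), data(nv * nf, 0.0) {}
    double& operator()(std::size_t var, std::size_t fn)  // (row, col) access
    { return data[var * numFns + fn]; }
};
```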
• DakotaResponse.hpp
• DakotaResponse.cpp
• void add_data (const StrStrSizet &iterator_id, const std::string &data_name, const boost::any &result, const
MetaDataType &metadata)
record addition with metadata map
• const ResultsValueType & lookup_data (const StrStrSizet &iterator_id, const std::string &data_name) const
• void output_data (const std::vector< std::vector< std::string > > &data, std::ostream &os)
output data to ostream
Private Attributes
Class ResultsDBAny: a map-based container to store DAKOTA Iterator results in underlying boost::anys, with optional metadata.
13.128.2.1 void array_insert (const StrStrSizet & iterator_id, const std::string & data_name, size_t
index, const StoredType & sent_data) [inline]
insert sent_data in specified position in previously allocated array insert requires previous allocation, and does not
allow metadata update
References Dakota::abort_handler(), ResultsDBAny::iteratorData, and Dakota::make_key().
Referenced by ResultsManager::array_insert().
13.128.2.2 void add_data (const StrStrSizet & iterator_id, const std::string & data_name, const
boost::any & result, const MetaDataType & metadata)
13.128.2.3 void extract_data (const boost::any & dataholder, std::ostream & os) [private]
determine the type of contained data and output it to ostream Extract the data from the held any and map to supported concrete types: int, double, RealVector (Teuchos::SerialDenseVector<int,double>), RealMatrix (Teuchos::SerialDenseMatrix<int,double>).
References ResultsDBAny::output_data().
Referenced by ResultsDBAny::dump_data(), and ResultsDBAny::print_data().
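The type dispatch described above can be sketched with std::any standing in for boost::any, handling only int and double for brevity (illustrative only):

```cpp
#include <any>
#include <cassert>
#include <sstream>
#include <string>

// Sketch of the any-to-concrete-type mapping performed by extract_data():
// inspect the held type, cast, and stream the value. Dakota additionally
// handles RealVector and RealMatrix; this sketch omits them.
std::string extract_data(const std::any& holder)
{
    std::ostringstream os;
    if (holder.type() == typeid(int))
        os << std::any_cast<int>(holder);
    else if (holder.type() == typeid(double))
        os << std::any_cast<double>(holder);
    else
        os << "<unsupported type>";
    return os.str();
}
```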
The documentation for this class was generated from the following files:
• ResultsDBAny.hpp
• ResultsDBAny.cpp
• ResultsEntry (const ResultsManager &results_mgr, const StrStrSizet &iterator_id, const std::string &data_name, size_t array_index)
Construct ResultsEntry to retrieve item array_index from array of StoredType.
Private Attributes
• bool coreActive
whether the ResultsManager has an active in-core database
• StoredType dbData
data retrieved from the file database
Class to manage in-core vs. file database lookups. ResultsEntry manages database lookups. If a core database is available, it will return a reference directly to the stored data; if disk, it will return a reference to a local copy contained in this class. Allows disk-stored data to persist for minimum time during lookup to support true out-of-core use cases.
The documentation for this class was generated from the following file:
• ResultsManager.hpp
• ∼ResultsID ()
Private destructor for ResultsID.
Private Attributes
• std::map< std::pair< std::string, std::string >, size_t > idMap
storage for the results IDs
Get a globally unique 1-based execution number for a given iterator name (combination of methodName and
methodID) for use in results DB. Each run_iterator call creates or increments this count for its string identifier.
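The create-or-increment behavior described above might look like the following sketch (a plain function over a file-scope map; ResultsID's actual member layout may differ):

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <utility>

// Sketch of the 1-based execution counter described above: each
// (methodName, methodID) pair gets its own count, created at 1 on first
// use and incremented on each subsequent run. Illustrative only.
std::map<std::pair<std::string, std::string>, std::size_t> idMap;

std::size_t increment_id(const std::string& method_name,
                         const std::string& method_id)
{
    auto key = std::make_pair(method_name, method_id);
    auto it  = idMap.find(key);
    if (it == idMap.end()) {
        idMap[key] = 1;          // first execution of this iterator
        return 1;
    }
    return ++it->second;         // subsequent executions increment
}
```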
The documentation for this class was generated from the following files:
• ResultsManager.hpp
• ResultsManager.cpp
• void write_databases ()
Write in-core databases to file.
• void insert (const StrStrSizet &iterator_id, const std::string &data_name, const StringArray &sa_labels,
const MetaDataType metadata=MetaDataType())
insert StringArray, e.g., response labels
• void insert (const StrStrSizet &iterator_id, const std::string &data_name, const RealMatrix &matrix, const
MetaDataType metadata=MetaDataType())
insert RealMatrix, e.g. correlations
Public Attributes
• ResultsNames results_names
Copy of valid results names for when manager is passed around.
Private Attributes
• bool coreDBActive
whether the in-core database is active
• std::string coreDBFilename
filename for the in-core database
• bool fileDBActive
whether the file database is active
• ResultsDBAny coreDB
In-core database, with option to flush to file at end.
Friends
• class ResultsEntry
ResultsEntry is a friend of ResultsManager.
Results manager for iterator final data. The results manager provides the API for posting and retrieving iterator
results data (and eventually run config/statistics). It can manage a set of underlying results databases, in or out of
core, depending on configuration.
The key for a results entry is documented in results_types.hpp, e.g., tuple<std::string, std::string, size_t,
std::string>
For now, concrete types are used for most insertion, since underlying databases like HDF5 might need concrete types; a template parameter is used for array allocation and retrieval. Concrete types will probably be needed for arrays too.
All insertions overwrite any previous data.
The documentation for this class was generated from the following files:
• ResultsManager.hpp
• ResultsManager.cpp
Public Attributes
• size_t namesVersion
• std::string best_cv
• std::string best_div
• std::string best_drv
• std::string best_fns
• std::string moments_std
• std::string moments_central
• std::string moments_std_num
• std::string moments_central_num
• std::string moments_std_exp
• std::string moments_central_exp
• std::string moment_cis
• std::string extreme_values
• std::string map_resp_prob
• std::string map_resp_rel
• std::string map_resp_genrel
• std::string map_prob_resp
• std::string map_rel_resp
• std::string map_genrel_resp
• std::string pdf_histograms
• std::string correl_simple_all
• std::string correl_simple_io
• std::string correl_partial_io
• std::string correl_simple_rank_all
• std::string correl_simple_rank_io
• std::string correl_partial_rank_io
• std::string pce_coeffs
• std::string pce_coeff_labels
• std::string cv_labels
• std::string div_labels
• std::string drv_labels
• std::string fn_labels
List of valid names for iterator results. All data in the ResultsNames class is public; it is basically just a struct.
The documentation for this class was generated from the following file:
• ResultsManager.hpp
Iterator
Analyzer
Verification
RichExtrapVerification
• ∼RichExtrapVerification ()
destructor
• void perform_verification ()
Redefines the run_iterator virtual function for the PStudy/DACE branch.
• void converge_order ()
iterate using extrapolation() until convOrder stabilizes
• void converge_qoi ()
iterate using extrapolation() until QOIs stabilize
predict the converged value based on the convergence rate and the value of Phi
Private Attributes
• short studyType
internal code for extrapolation study type: ESTIMATE_ORDER, CONVERGE_ORDER, or CONVERGE_QOI
• size_t numFactors
number of refinement factors defined from active state variables
• RealVector initialCVars
initial reference values for refinement factors
• size_t factorIndex
the index of the active factor
• Real refinementRate
rate of mesh refinement (default = 2.)
• RealMatrix convOrder
the orders of convergence of the QOIs (numFunctions by numFactors)
• RealMatrix extrapQOI
the extrapolated value of the QOI (numFunctions by numFactors)
• RealMatrix numErrorQOI
the numerical uncertainty associated with level of refinement (numFunctions by numFactors)
• RealVector refinementRefPt
This is a reference point reported for the converged extrapQOI and numErrorQOI. It currently corresponds to the
coarsest mesh in the final refinement triple.
Class for Richardson extrapolation for code and solution verification. The RichExtrapVerification class contains
several algorithms for performing Richardson extrapolation.
print the final iterator results This virtual function provides additional iterator-specific final results outputs beyond
the function evaluation summary printed in finalize_run().
perform a single estimation of convOrder using extrapolation() This algorithm executes a single refinement triple
and returns convergence order estimates.
References RichExtrapVerification::extrapolate_result(), RichExtrapVerification::extrapolation(), RichExtrapVerification::extrapQOI, RichExtrapVerification::factorIndex, RichExtrapVerification::initialCVars, RichExtrapVerification::numErrorQOI, RichExtrapVerification::numFactors, Iterator::numFunctions, RichExtrapVerification::refinementRate, and RichExtrapVerification::refinementRefPt.
Referenced by RichExtrapVerification::perform_verification().
iterate using extrapolation() until convOrder stabilizes This algorithm continues to refine until the convergence
order estimate converges.
References Iterator::convergenceTol, RichExtrapVerification::convOrder, Dakota::copy_data(), RichExtrapVerification::extrapolate_result(), RichExtrapVerification::extrapolation(), RichExtrapVerification::extrapQOI, RichExtrapVerification::factorIndex, RichExtrapVerification::initialCVars, Iterator::maxIterations, RichExtrapVerification::numErrorQOI, RichExtrapVerification::numFactors, Iterator::numFunctions, Iterator::outputLevel, RichExtrapVerification::refinementRate, RichExtrapVerification::refinementRefPt, and Dakota::write_data().
Referenced by RichExtrapVerification::perform_verification().
iterate using extrapolation() until QOIs stabilize This algorithm continues to refine until the discretization error
lies within a prescribed tolerance.
References Iterator::convergenceTol, RichExtrapVerification::extrapolate_result(), RichExtrapVerification::extrapolation(), RichExtrapVerification::extrapQOI, RichExtrapVerification::factorIndex, RichExtrapVerification::initialCVars, Iterator::maxIterations, RichExtrapVerification::numErrorQOI, RichExtrapVerification::numFactors, Iterator::numFunctions, Iterator::outputLevel, RichExtrapVerification::refinementRate, RichExtrapVerification::refinementRefPt, and Dakota::write_data().
Referenced by RichExtrapVerification::perform_verification().
The documentation for this class was generated from the following files:
• RichExtrapVerification.hpp
• RichExtrapVerification.cpp
Interface
ApplicationInterface
DirectApplicInterface
ScilabInterface
• ∼ScilabInterface ()
Destructor: close Scilab engine.
Protected Attributes
• int scilabEngine
identifier for the running Scilab engine
Specialization of DirectApplicInterface to link to Scilab analysis drivers. Includes convenience functions to map data to/from Scilab.
The documentation for this class was generated from the following files:
• ScilabInterface.hpp
• ScilabInterface.cpp
• ∼SensAnalysisGlobal ()
destructor
Private Attributes
• RealMatrix simpleCorr
matrix to hold simple raw correlations
• RealMatrix simpleRankCorr
matrix to hold simple rank correlations
• RealMatrix partialCorr
matrix to hold partial raw correlations
• RealMatrix partialRankCorr
matrix to hold partial rank correlations
• size_t numFns
number of responses
• size_t numVars
number of inputs
• bool numericalIssuesRaw
flag indicating numerical issues in partial raw correlation calculations
• bool numericalIssuesRank
flag indicating numerical issues in partial rank correlation calculations
• bool corrComputed
flag indicating whether correlations have been computed
Utility class containing correlation calculations and variance-based decomposition. This class provides code for several of the sampling methods both in the NonD branch and in the PStudyDACE branch. Currently, the utility functions provide global sensitivity analysis through correlation calculations (e.g., simple, partial, rank, raw) as well as variance-based decomposition.
The documentation for this class was generated from the following files:
• SensAnalysisGlobal.hpp
• SensAnalysisGlobal.cpp
Strategy
HybridStrategy
SequentialHybridStrategy
• ∼SequentialHybridStrategy ()
destructor
• void run_sequential_adaptive ()
run a sequential adaptive hybrid
• void partition_sets (size_t num_sets, int job_index, size_t &start_index, size_t &job_size)
convert num_sets and job_index into a start_index and job_size for extraction from parameterSets
Private Attributes
• String hybridType
sequential or sequential_adaptive
• size_t seqCount
hybrid sequence counter: 0 to numIterators-1
• Real progressMetric
the amount of progress made in a single iterator++ cycle within a sequential adaptive hybrid
• Real progressThreshold
when the progress metric falls below this threshold, the sequential adaptive hybrid switches to the next method
• PRP2DArray prpResults
2-D array of results corresponding to numIteratorJobs, one set of results per job (iterators may return multiple final
solutions)
• VariablesArray parameterSets
1-D array of variable starting points for the iterator jobs
Strategy for sequential hybrid minimization using multiple optimization and nonlinear least squares methods on
multiple models of varying fidelity. The sequential hybrid minimization strategy has two approaches: (1) the
non-adaptive sequential hybrid runs one method to completion, passes its best results as the starting point for a
subsequent method, and continues this succession until all methods have been executed (the stopping rules are
controlled internally by each minimizer), and (2) the adaptive sequential hybrid uses adaptive stopping rules for
the minimizers that are controlled externally by the strategy. Note that while the strategy is targeted at minimizers,
any iterator may be used so long as it defines the notion of a final solution which can be passed as the starting
point for subsequent iterators.
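The non-adaptive succession described above can be sketched as a simple fold over a list of methods (Point and Method here are hypothetical stand-ins for Dakota's Variables and Iterator types):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Sketch of the non-adaptive sequential hybrid: each method runs to
// completion (its own stopping rules apply internally) and its best point
// seeds the next method. A "method" is reduced to a function from a
// starting point to a final point; illustrative only, not Dakota's API.
using Point  = std::vector<double>;
using Method = std::function<Point(const Point&)>;

Point run_sequential(const std::vector<Method>& methods, Point start)
{
    for (const Method& m : methods)
        start = m(start);        // best result becomes next starting point
    return start;
}
```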
pack a send_buffer for assigning an iterator job to a server This virtual function redefinition is executed on the
dedicated master processor for self scheduling. It is not used for peer partitions.
Reimplemented from Strategy.
References SequentialHybridStrategy::extract_parameter_sets(), and SequentialHybridStrategy::seqCount.
Referenced by SequentialHybridStrategy::run_sequential().
unpack a recv_buffer for accepting an iterator job from the scheduler This virtual function redefinition is executed
on an iterator server for dedicated master self scheduling. It is not used for peer partitions.
Reimplemented from Strategy.
References SequentialHybridStrategy::initialize_iterator(), and SequentialHybridStrategy::seqCount.
pack a send_buffer for returning iterator results from a server This virtual function redefinition is executed either
on an iterator server for dedicated master self scheduling or on peers 2 through n for static scheduling.
Reimplemented from Strategy.
References SequentialHybridStrategy::prpResults.
unpack a recv_buffer for accepting iterator results from a server This virtual function redefinition is executed on a strategy master (either the dedicated master processor for self scheduling or peer 1 for static scheduling).
Reimplemented from Strategy.
References SequentialHybridStrategy::prpResults.
run a sequential hybrid In the sequential nonadaptive case, there is no interference with the iterators. Each runs until its own convergence criteria are satisfied. Status: fully operational.
References Iterator::accepts_multiple_points(), ParallelLibrary::bcast_i(), ParallelLibrary::bcast_si(), Response::function_values(), Strategy::graph2DFlag, Iterator::initialize_graphics(), Model::interface_id(), Response::is_null(), Variables::is_null(), Strategy::iteratorCommRank, Strategy::iteratorCommSize, Strategy::iteratorServerId, HybridStrategy::methodList, Iterator::num_final_solutions(), Strategy::numIteratorJobs, HybridStrategy::numIterators, Strategy::numIteratorServers, SequentialHybridStrategy::pack_parameters_buffer(), Strategy::parallelLib, SequentialHybridStrategy::parameterSets, Strategy::paramsMsgLen, SequentialHybridStrategy::prpResults, ParallelLibrary::recv_si(), Iterator::response_results(), Strategy::resultsMsgLen, Iterator::returns_multiple_points(), Strategy::schedule_iterators(), HybridStrategy::selectedIterators, ParallelLibrary::send_si(), SequentialHybridStrategy::seqCount, MPIPackBuffer::size(), Strategy::stratIterDedMaster, Strategy::stratIterMessagePass, Strategy::tabularDataFile, Strategy::tabularDataFlag, HybridStrategy::userDefinedModels, Iterator::variables_results(), Strategy::worldRank, and Dakota::write_data().
Referenced by SequentialHybridStrategy::run_strategy().
run a sequential adaptive hybrid In the sequential adaptive case, there is interference with the iterators through
the use of the ++ overloaded operator. iterator++ runs the iterator for one cycle, after which a progress_metric is
computed. This progress metric is used to dictate method switching instead of each iterator’s internal convergence
criteria. Status: incomplete.
References Strategy::graph2DFlag, HybridStrategy::methodList, HybridStrategy::numIterators, SequentialHybridStrategy::progressMetric, SequentialHybridStrategy::progressThreshold, Strategy::run_iterator(), HybridStrategy::selectedIterators, SequentialHybridStrategy::seqCount, Strategy::tabularDataFile, Strategy::tabularDataFlag, HybridStrategy::userDefinedModels, and Strategy::worldRank.
Referenced by SequentialHybridStrategy::run_strategy().
extract partial_param_sets from parameterSets based on job_index This convenience function is executed on an
iterator master (static scheduling) or a strategy master (self scheduling) at run initialization time and has access to
the full parameterSets array (this is All-Reduced for all peers at the completion of each cycle in run_sequential()).
References SequentialHybridStrategy::parameterSets, and SequentialHybridStrategy::partition_sets().
Referenced by SequentialHybridStrategy::initialize_iterator(), and SequentialHybridStrategy::pack_parameters_buffer().
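One plausible form of the start_index/job_size arithmetic behind partition_sets() is an even split with the remainder assigned to the leading jobs. The sketch below is illustrative only (including the hypothetical num_jobs parameter); Dakota's actual partitioning may differ:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of converting (num_sets, job_index) into a contiguous slice of
// parameterSets: num_sets entries are split as evenly as possible across
// num_jobs jobs, with the remainder spread over the leading jobs.
void partition_sets(std::size_t num_sets, std::size_t num_jobs,
                    std::size_t job_index,
                    std::size_t& start_index, std::size_t& job_size)
{
    std::size_t base = num_sets / num_jobs, rem = num_sets % num_jobs;
    job_size    = base + (job_index < rem ? 1 : 0);
    start_index = job_index * base + (job_index < rem ? job_index : rem);
}
```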
The documentation for this class was generated from the following files:
• SequentialHybridStrategy.hpp
• SequentialHybridStrategy.cpp
Sample derived interface class for testing serial simulator plug-ins using assign_rep(). Inheritance diagram for
SerialDirectApplicInterface::
Interface
ApplicationInterface
DirectApplicInterface
SerialDirectApplicInterface
• ∼SerialDirectApplicInterface ()
destructor
Sample derived interface class for testing serial simulator plug-ins using assign_rep(). The plug-in SerialDirec-
tApplicInterface resides in namespace SIM and uses a copy of rosenbrock() to perform serial parameter to re-
sponse mappings. It may be activated by specifying the --with-plugin configure option, which activates the
DAKOTA_PLUGIN macro in dakota_config.h used by main.cpp (which activates the plug-in code block within
that file) and activates the PLUGIN_S declaration defined in Makefile.include and used in Makefile.source (which
add this class to the build). Test input files should then use an analysis_driver of "plugin_rosenbrock".
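The assign_rep() mechanism that makes this plug-in work can be sketched with a toy handle class: the plug-in is a derived "rep" (body) that replaces the handle's current implementation at run time. The names below (InterfaceRep, PluginRosenbrock, map) are illustrative stand-ins, not Dakota's actual Interface API.

```cpp
#include <cassert>

// Toy analogue of a handle class supporting assign_rep(): the handle's
// behavior is supplied by a user-written derived body (the plug-in).
class InterfaceRep {
public:
  virtual ~InterfaceRep() = default;
  virtual double map(double x) = 0;   // parameter-to-response mapping
};

class Interface {
  InterfaceRep* rep = nullptr;
public:
  ~Interface() { delete rep; }
  // Replace the current body with a plug-in implementation;
  // the handle takes ownership of new_rep.
  void assign_rep(InterfaceRep* new_rep) { delete rep; rep = new_rep; }
  double map(double x) { return rep->map(x); }
};

// Plug-in body: a 1-D slice of Rosenbrock, f(x) = (1-x)^2, standing in
// for the plug-in's copy of rosenbrock().
class PluginRosenbrock : public InterfaceRep {
public:
  double map(double x) override { return (1.0 - x) * (1.0 - x); }
};
```

In the real code, main.cpp performs the analogous step under the DAKOTA_PLUGIN macro, assigning the user's derived interface into the interface handle before the strategy runs.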
• PluginSerialDirectApplicInterface.hpp
• PluginSerialDirectApplicInterface.cpp
• SharedVariablesData (const ProblemDescDB &problem_db, const std::pair< short, short > &view)
standard constructor
• SharedVariablesData (const std::pair< short, short > &view, const SizetArray &vars_comps_totals)
lightweight constructor
• ∼SharedVariablesData ()
destructor
• void size_all_discrete_int_labels ()
size labels for all of the discrete integer variables
• void initialize_all_discrete_int_types ()
initialize types for all of the discrete integer variables
• void size_all_discrete_real_labels ()
size labels for all of the discrete real variables
• void initialize_all_discrete_real_types ()
initialize types for all of the discrete real variables
• void initialize_active_components ()
initialize the active components totals given active variable counts
• void initialize_inactive_components ()
initialize the inactive components totals given inactive variable counts
get ids of discrete variables that have been relaxed into continuous variable arrays
• size_t cv () const
get number of active continuous vars
Private Attributes
• SharedVariablesDataRep ∗ svdRep
pointer to the body (handle-body idiom)
Container class encapsulating variables data that can be shared among a set of Variables instances. An array of
Variables objects (e.g., Analyzer::allVariables) contains repeated configuration data (id’s, labels, counts). Shared-
VariablesData employs a handle-body idiom to allow this shared data to be managed in a single object with many
references to it, one per Variables object in the array. This allows scaling to larger sample sets.
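The handle-body idiom described above can be sketched generically: copies of the handle share one reference-counted body, so configuration data exists once no matter how many handles point at it. The names below (SharedData, SharedDataRep, numVars) are illustrative, not Dakota's actual implementation.

```cpp
#include <cassert>
#include <cstddef>

// Body: holds the shared data plus a reference count.
class SharedDataRep {
  friend class SharedData;            // only the handle manipulates the rep
  int referenceCount = 1;             // number of handles sharing this rep
  std::size_t numVars = 0;            // example shared datum
};

// Handle: lightweight, copyable; copies share one body.
class SharedData {
  SharedDataRep* rep;
public:
  SharedData() : rep(new SharedDataRep) {}
  SharedData(const SharedData& other) : rep(other.rep) { ++rep->referenceCount; }
  SharedData& operator=(const SharedData& other) {
    if (rep != other.rep) {
      if (--rep->referenceCount == 0) delete rep;   // drop old body
      rep = other.rep;
      ++rep->referenceCount;                        // share new body
    }
    return *this;
  }
  ~SharedData() { if (--rep->referenceCount == 0) delete rep; }
  std::size_t cv() const { return rep->numVars; }
  void cv(std::size_t n) { rep->numVars = n; }
  int use_count() const { return rep->referenceCount; }
};
```

A mutation made through any one handle is visible through all handles sharing the same body, which is exactly the property that lets an array of Variables objects share one configuration record.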
The documentation for this class was generated from the following file:
• SharedVariablesData.hpp
• SharedVariablesDataRep (const std::pair< short, short > &view, const SizetArray &vars_comps_totals)
lightweight constructor
• ∼SharedVariablesDataRep ()
destructor
• void size_all_discrete_int_labels ()
size allDiscreteIntLabels
• void initialize_all_discrete_int_types ()
initialize allDiscreteIntTypes
• void size_all_discrete_real_labels ()
size allDiscreteRealLabels
• void initialize_all_discrete_real_types ()
initialize allDiscreteRealTypes
• void initialize_active_components ()
initialize activeVarsCompsTotals given {c,di,dr}vStart and num{C,DI,DR}V
• void initialize_inactive_components ()
initialize inactiveVarsCompsTotals given i{c,di,dr}vStart and numI{C,DI,DR}V
Private Attributes
• String variablesId
variables identifier string from the input file
• SizetArray variablesCompsTotals
totals for variable type counts for {continuous,discrete integer,discrete real} {design,aleatory uncertain,epistemic
uncertain,state}
• SizetArray activeVarsCompsTotals
totals for active variable type counts for {continuous,discrete integer,discrete real} {design,aleatory uncer-
tain,epistemic uncertain,state}
• SizetArray inactiveVarsCompsTotals
totals for inactive variable type counts for {continuous,discrete integer,discrete real} {design,aleatory uncer-
tain,epistemic uncertain,state}
• size_t cvStart
start index of active continuous variables within allContinuousVars
• size_t divStart
start index of active discrete integer variables within allDiscreteIntVars
• size_t drvStart
start index of active discrete real variables within allDiscreteRealVars
• size_t icvStart
start index of inactive continuous variables within allContinuousVars
• size_t idivStart
start index of inactive discrete integer variables w/i allDiscreteIntVars
• size_t idrvStart
start index of inactive discrete real variables within allDiscreteRealVars
• size_t numCV
number of active continuous variables
• size_t numDIV
number of active discrete integer variables
• size_t numDRV
number of active discrete real variables
• size_t numICV
number of inactive continuous variables
• size_t numIDIV
number of inactive discrete integer variables
• size_t numIDRV
number of inactive discrete real variables
• StringMultiArray allContinuousLabels
array of variable labels for all of the continuous variables
• StringMultiArray allDiscreteIntLabels
array of variable labels for all of the discrete integer variables
• StringMultiArray allDiscreteRealLabels
array of variable labels for all of the discrete real variables
• UShortMultiArray allContinuousTypes
array of variable types for all of the continuous variables
• UShortMultiArray allDiscreteIntTypes
array of variable types for all of the discrete integer variables
• UShortMultiArray allDiscreteRealTypes
array of variable types for all of the discrete real variables
• SizetMultiArray allContinuousIds
array of 1-based position identifiers for the all continuous variables array
• SizetArray relaxedDiscreteIds
array of discrete variable identifiers for which the discrete requirement is relaxed by merging them into a continuous
array
• int referenceCount
number of handle objects sharing svdRep
Friends
• class SharedVariablesData
The representation of a SharedVariablesData instance. This representation, or body, may be shared by multiple
SharedVariablesData handle instances. The SharedVariablesData/SharedVariablesDataRep pairs utilize a handle-
body idiom (Coplien, Advanced C++).
standard constructor. This constructor is the one which must build the base class data for all derived classes.
get_variables() instantiates a derived class letter and the derived constructor selects this base class constructor
in its initialization list (to avoid the recursion of the base class constructor calling get_variables() again). Since
the letter IS the representation, its representation pointer is set to NULL (an uninitialized pointer causes problems
in ∼Variables).
References SharedVariablesDataRep::allContinuousLabels, SharedVariablesDataRep::allDiscreteIntLabels,
SharedVariablesDataRep::allDiscreteRealLabels, Dakota::copy_data_partial(), ProblemDescDB::get_-
sa(), ProblemDescDB::get_sizet(), SharedVariablesDataRep::initialize_all_continuous_ids(),
SharedVariablesDataRep::initialize_all_continuous_types(), SharedVariablesDataRep::initialize_all_-
discrete_int_types(), SharedVariablesDataRep::initialize_all_discrete_real_types(), SharedVariables-
DataRep::variablesComponents, SharedVariablesDataRep::variablesCompsTotals, and SharedVariables-
DataRep::variablesView.
array of 1-based position identifiers for the all continuous variables array. These identifiers define positions of the
all continuous variables array within the total variable sequence.
Referenced by SharedVariablesDataRep::initialize_all_continuous_ids().
The documentation for this class was generated from the following files:
• SharedVariablesData.hpp
• SharedVariablesData.cpp
Strategy
SingleMethodStrategy
• ∼SingleMethodStrategy ()
destructor
• void run_strategy ()
Perform the strategy by executing selectedIterator on userDefinedModel.
Private Attributes
• Model userDefinedModel
the model to be iterated
• Iterator selectedIterator
the iterator
Simple fall-through strategy for running a single iterator on a single model. This strategy executes a single iterator
on a single model. Since it does not provide coordination for multiple iterators and models, it can be considered
a "fall-through" strategy in that it allows control to fall through immediately to the iterator.
The documentation for this class was generated from the following files:
• SingleMethodStrategy.hpp
• SingleMethodStrategy.cpp
Model
SingleModel
• ∼SingleModel ()
destructor
• String local_eval_synchronization ()
return userDefinedInterface synchronization setting
• int local_eval_concurrency ()
return userDefinedInterface asynchronous evaluation concurrency
• void derived_init_serial ()
set up SingleModel for serial operations (request forwarded to userDefinedInterface).
• void stop_servers ()
executed by the master to terminate userDefinedInterface server operations when SingleModel iteration is complete.
• void set_evaluation_reference ()
set the evaluation counter reference points for the SingleModel (request forwarded to userDefinedInterface)
• void fine_grained_evaluation_counters ()
request fine-grained evaluation reporting within the userDefinedInterface
Private Attributes
• Interface userDefinedInterface
the interface used for mapping variables to responses
Derived model class which utilizes a single interface to map variables into responses. The SingleModel class is
the simplest of the derived model classes. It provides the capabilities of the original Model class, prior to the
development of surrogate and nested model extensions. The derived response computation and synchronization
functions utilize a single interface to perform the function evaluations.
set the hierarchical eval ID tag prefix. SingleModel doesn't need to change the tagging, so it simply forwards to
Interface.
Reimplemented from Model.
References Interface::eval_tag_prefix(), and SingleModel::userDefinedInterface.
The documentation for this class was generated from the following files:
• SingleModel.hpp
• SingleModel.cpp
SNLLBase
SNLLLeastSq SNLLOptimizer
• ∼SNLLBase ()
destructor
• void copy_con_vals_optpp_to_dak (const RealVector &g, RealVector &local_fn_vals, const size_t &offset)
• void copy_con_grad (const RealMatrix &local_fn_grads, RealMatrix &grad_g, const size_t &offset)
convenience function for copying local_fn_grads to grad_g; used by constraint evaluator functions
• void snll_post_instantiate (const int &num_cv, bool vendor_num_grad_flag, const String &finite_diff_-
type, const Real &fdss, const int &max_iter, const int &max_fn_evals, const Real &conv_tol, const
Real &grad_tol, const Real &max_step, bool bound_constr_flag, const int &num_constr, short output_-
lev, OPTPP::OptimizeClass ∗the_optimizer, OPTPP::NLP0 ∗nlf_objective, OPTPP::FDNLF1 ∗fd_nlf1,
OPTPP::FDNLF1 ∗fd_nlf1_con)
convenience function for setting OPT++ options after the method instantiation
Protected Attributes
• String searchMethod
value_based_line_search, gradient_based_line_search, trust_region, or tr_pds
• OPTPP::SearchStrategy searchStrat
enum: LineSearch, TrustRegion, or TrustPDS
• OPTPP::MeritFcn meritFn
enum: NormFmu, ArgaezTapia, or VanShanno
• Real maxStep
value from max_step specification
• Real stepLenToBndry
value from steplength_to_boundary specification
• Real centeringParam
value from centering_parameter specification
• bool constantASVFlag
flags a user selection of active_set_vector == constant. By mapping this into mode override, reliance on duplicate
detection can be avoided.
Base class for OPT++ optimization and least squares methods. The SNLLBase class provides a common base
class for SNLLOptimizer and SNLLLeastSq, both of which are wrappers for OPT++, a C++ optimization library
from the Computational Sciences and Mathematics Research (CSMR) department at Sandia’s Livermore CA site.
The documentation for this class was generated from the following files:
• SNLLBase.hpp
• SNLLBase.cpp
Iterator
Minimizer
LeastSq SNLLBase
SNLLLeastSq
• ∼SNLLLeastSq ()
destructor
• void minimize_residuals ()
Performs the iterations to determine the least squares solution.
• void finalize_run ()
restores instances
objective function evaluator function which obtains values and gradients for least square terms and computes
objective function value, gradient, and Hessian using the Gauss-Newton approximation.
• static void constraint1_evaluator_gn (int mode, int n, const RealVector &x, RealVector &g, RealMatrix
&grad_g, int &result_mode)
constraint evaluator function which provides constraint values and gradients to OPT++ Gauss-Newton methods.
• static void constraint2_evaluator_gn (int mode, int n, const RealVector &x, RealVector &g, RealMatrix
&grad_g, OPTPP::OptppArray< RealSymMatrix > &hess_g, int &result_mode)
constraint evaluator function which provides constraint values, gradients, and Hessians to OPT++ Gauss-Newton
methods.
Private Attributes
• SNLLLeastSq ∗ prevSnllLSqInstance
pointer to the previously active object instance used for restoration in the case of iterator/model recursion
• OPTPP::NLP0 ∗ nlfObjective
objective NLF base class pointer
• OPTPP::NLP0 ∗ nlfConstraint
constraint NLF base class pointer
• OPTPP::NLP ∗ nlpConstraint
constraint NLP pointer
• OPTPP::NLF2 ∗ nlf2
pointer to objective NLF for full Newton optimizers
• OPTPP::NLF2 ∗ nlf2Con
pointer to constraint NLF for full Newton optimizers
• OPTPP::NLF1 ∗ nlf1Con
pointer to constraint NLF for quasi-Newton optimizers
• OPTPP::OptimizeClass ∗ theOptimizer
optimizer base class pointer
• OPTPP::OptNewton ∗ optnewton
Newton optimizer pointer.
• OPTPP::OptBCNewton ∗ optbcnewton
Bound constrained Newton optimizer ptr.
• OPTPP::OptDHNIPS ∗ optdhnips
Disaggregated Hessian NIPS optimizer ptr.
Wrapper class for the OPT++ optimization library. The SNLLLeastSq class provides a wrapper for OPT++, a C++
optimization library of nonlinear programming and pattern search techniques from the Computational Sciences
and Mathematics Research (CSMR) department at Sandia’s Livermore CA site. It uses a function pointer approach
for which passed functions must be either global functions or static member functions. Any attribute used within
static member functions must be either local to that function, a static member, or accessed by static pointer.
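The static-pointer constraint described above follows a standard C-callback pattern: because the evaluator signature has no `this`, per-instance state is reached through a static instance pointer that is set on entry and restored on exit (supporting iterator/model recursion, as prevSnllLSqInstance does). The class and member names below are illustrative, not the actual SNLLLeastSq code.

```cpp
#include <cassert>

// Sketch of the function-pointer callback pattern used with OPT++:
// evaluators must be static (or free) functions, so instance state is
// accessed through a static pointer to the currently active object.
class LeastSqSolver {
  double scale;                         // per-instance state
  static LeastSqSolver* instance;       // analogue of snllLSqInstance

  // Static evaluator: no 'this' in the signature, so state comes
  // from the static instance pointer.
  static void objective_evaluator(double x, double& f) {
    f = instance->scale * x * x;
  }

public:
  explicit LeastSqSolver(double s) : scale(s) {}

  double run(double x) {
    LeastSqSolver* prev = instance;     // save for solver recursion
    instance = this;
    double f;
    objective_evaluator(x, f);          // the library would invoke this
    instance = prev;                    // restore on completion
    return f;
  }
};
LeastSqSolver* LeastSqSolver::instance = nullptr;
```

The save/restore of the previous pointer is what allows a nested solve (iterator/model recursion) to run and then hand control back to the outer instance.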
The user input mappings are as follows: max_iterations, max_function_evaluations,
convergence_tolerance, max_step, gradient_tolerance, search_method, and search_-
scheme_size are set using OPT++’s setMaxIter(), setMaxFeval(), setFcnTol(), setMaxStep(), setGradTol(),
setSearchStrategy(), and setSSS() member functions, respectively; output verbosity is used to toggle OPT++’s
debug mode using the setDebug() member function. Internal to OPT++, there are 3 search strategies, while
the DAKOTA search_method specification supports 4 (value_based_line_search, gradient_-
based_line_search, trust_region, or tr_pds). The difference stems from the "is_expensive" flag in
OPT++. If the search strategy is LineSearch and "is_expensive" is turned on, then the value_based_line_-
search is used. Otherwise (the "is_expensive" default is off), the algorithm will use the gradient_based_-
line_search. Refer to [Meza, J.C., 1994] and to the OPT++ source in the Dakota/packages/OPTPP directory
for information on OPT++ class member functions.
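The four-to-three collapse of search_method values onto OPT++ strategies via "is_expensive" can be sketched as follows; the enum and function are illustrative stand-ins, not Dakota or OPT++ source.

```cpp
#include <cassert>
#include <string>

// OPT++'s three internal search strategies (illustrative enum).
enum SearchStrategy { LineSearch, TrustRegion, TrustPDS };

// Map Dakota's four search_method strings onto the three OPT++
// strategies; the two line-search variants differ only in the
// "is_expensive" flag.
SearchStrategy map_search_method(const std::string& method, bool& is_expensive) {
  is_expensive = false;                  // OPT++ default: off
  if (method == "value_based_line_search") {
    is_expensive = true;                 // LineSearch + is_expensive on
    return LineSearch;
  }
  if (method == "gradient_based_line_search")
    return LineSearch;                   // LineSearch, is_expensive off
  if (method == "tr_pds")
    return TrustPDS;
  return TrustRegion;                    // "trust_region"
}
```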
invokes snll_post_run, re-implements post_run (does not call parent), and performs other solution processing.
SNLLLeastSq requires a function database lookup, so it overrides LeastSq::post_run and directly invokes
Iterator::post_run when complete.
Reimplemented from LeastSq.
References Iterator::activeSet, Iterator::bestResponseArray, Iterator::bestVariablesArray, SNLLBase::copy_-
con_vals_optpp_to_dak(), Dakota::copy_data_partial(), Minimizer::cvScaleMultipliers, Min-
imizer::cvScaleOffsets, Minimizer::cvScaleTypes, Dakota::data_pairs, Minimizer::expData,
Response::function_values(), LeastSq::get_confidence_intervals(), Model::interface_id(), Itera-
tor::iteratedModel, Dakota::lookup_by_val(), Minimizer::modify_s2n(), Minimizer::need_resp_-
trans_byvars(), SNLLLeastSq::nlfObjective, Minimizer::numExperiments, Iterator::numFunctions,
LeastSq::numLeastSqTerms, Minimizer::numNonlinearConstraints, Minimizer::numReplicates, Mini-
mizer::numUserPrimaryFns, Minimizer::obsDataFlag, ActiveSet::request_values(), ActiveSet::request_vector(),
Minimizer::responseScaleMultipliers, Minimizer::responseScaleOffsets, Minimizer::responseScaleTypes,
Minimizer::secondaryRespScaleFlag, SNLLBase::snll_post_run(), SNLLLeastSq::theOptimizer, and Mini-
mizer::varsScaleFlag.
13.143.2.2 void nlf2_evaluator_gn (int mode, int n, const RealVector & x, double & f, RealVector &
grad_f, RealSymMatrix & hess_f, int & result_mode) [static, private]
objective function evaluator function which obtains values and gradients for least square terms and computes
objective function value, gradient, and Hessian using the Gauss-Newton approximation. This nlf2 evaluator
function is used for the Gauss-Newton method in order to exploit the special structure of the nonlinear least
squares problem. Here, f(x) = sum_i (T_i - Tbar_i)^2, and Response is made up of residual functions and their
gradients along with any nonlinear constraints. The objective function and its gradient vector and Hessian matrix
are computed directly from the residual functions and their derivatives (which are returned from the Response
object).
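The Gauss-Newton assembly just described can be sketched directly: with residuals r_i = T_i - Tbar_i and residual Jacobian J (J[i][j] = dr_i/dx_j), the objective is f = sum_i r_i^2, its gradient is 2 J^T r, and the Gauss-Newton Hessian drops the second-order residual terms, leaving 2 J^T J. This is an illustrative sketch of the math, not Dakota's nlf2_evaluator_gn.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Assemble f, grad f, and the Gauss-Newton Hessian from residual
// values r and residual gradients J (as returned by a Response-like
// object in the text above).
void gauss_newton(const std::vector<double>& r,
                  const std::vector<std::vector<double>>& J,
                  double& f, std::vector<double>& grad,
                  std::vector<std::vector<double>>& hess) {
  const std::size_t m = r.size(), n = J[0].size();
  f = 0.0;
  grad.assign(n, 0.0);
  hess.assign(n, std::vector<double>(n, 0.0));
  for (std::size_t i = 0; i < m; ++i) {
    f += r[i] * r[i];                          // f = sum_i r_i^2
    for (std::size_t j = 0; j < n; ++j) {
      grad[j] += 2.0 * J[i][j] * r[i];         // grad f = 2 J^T r
      for (std::size_t k = 0; k < n; ++k)      // H ~= 2 J^T J
        hess[j][k] += 2.0 * J[i][j] * J[i][k]; // (2nd-order terms dropped)
    }
  }
}
```

The payoff of the approximation is that only first derivatives of the residuals are needed to supply OPT++ with a full (approximate) Hessian.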
References Dakota::abort_handler(), Iterator::activeSet, Model::compute_response(), Model::continuous_-
variables(), Model::current_response(), Response::function_gradients(), Response::function_values(), Itera-
tor::iteratedModel, SNLLBase::lastEvalMode, SNLLBase::lastEvalVars, SNLLBase::lastFnEvalLocn, Itera-
tor::numFunctions, LeastSq::numLeastSqTerms, Minimizer::numNonlinearConstraints, Iterator::outputLevel,
ActiveSet::request_vector(), SNLLLeastSq::snllLSqInstance, Dakota::write_data(), and Dakota::write_precision.
Referenced by SNLLLeastSq::SNLLLeastSq().
13.143.2.3 void constraint1_evaluator_gn (int mode, int n, const RealVector & x, RealVector & g,
RealMatrix & grad_g, int & result_mode) [static, private]
constraint evaluator function which provides constraint values and gradients to OPT++ Gauss-Newton methods.
While it does not employ the Gauss-Newton approximation, it is distinct from constraint1_evaluator() due to its
need to anticipate the required modes for the least squares terms. This constraint evaluator function is used with
disaggregated Hessian NIPS and is currently active.
References Dakota::abort_handler(), Iterator::activeSet, Model::compute_response(), Model::continuous_-
variables(), SNLLBase::copy_con_grad(), SNLLBase::copy_con_vals_dak_to_optpp(), Model::current_-
response(), Response::function_gradients(), Response::function_values(), Iterator::iteratedModel, SNLL-
Base::lastEvalMode, SNLLBase::lastEvalVars, SNLLBase::lastFnEvalLocn, Iterator::numFunctions,
LeastSq::numLeastSqTerms, Iterator::outputLevel, ActiveSet::request_vector(), and SNLL-
LeastSq::snllLSqInstance.
Referenced by SNLLLeastSq::SNLLLeastSq().
13.143.2.4 void constraint2_evaluator_gn (int mode, int n, const RealVector & x, RealVector &
g, RealMatrix & grad_g, OPTPP::OptppArray< RealSymMatrix > & hess_g, int &
result_mode) [static, private]
constraint evaluator function which provides constraint values, gradients, and Hessians to OPT++ Gauss-Newton
methods. While it does not employ the Gauss-Newton approximation, it is distinct from constraint2_evaluator()
due to its need to anticipate the required modes for the least squares terms. This constraint evaluator function is
used with full Newton NIPS and is currently inactive.
References Dakota::abort_handler(), Iterator::activeSet, Model::compute_response(), Model::continuous_-
variables(), SNLLBase::copy_con_grad(), SNLLBase::copy_con_hess(), SNLLBase::copy_con_vals_-
dak_to_optpp(), Model::current_response(), Response::function_gradients(), Response::function_-
hessians(), Response::function_values(), Iterator::iteratedModel, SNLLBase::lastEvalMode, SNLL-
Base::lastEvalVars, SNLLBase::lastFnEvalLocn, SNLLBase::modeOverrideFlag, Iterator::numFunctions,
• SNLLLeastSq.hpp
• SNLLLeastSq.cpp
Iterator
Minimizer
Optimizer SNLLBase
SNLLOptimizer
• SNLLOptimizer (const RealVector &initial_pt, const RealVector &var_l_bnds, const RealVector &var_-
u_bnds, const RealMatrix &lin_ineq_coeffs, const RealVector &lin_ineq_l_bnds, const RealVector
&lin_ineq_u_bnds, const RealMatrix &lin_eq_coeffs, const RealVector &lin_eq_tgts, const RealVector
&nln_ineq_l_bnds, const RealVector &nln_ineq_u_bnds, const RealVector &nln_eq_tgts, void(∗user_-
obj_eval)(int mode, int n, const RealVector &x, double &f, RealVector &grad_f, int &result_mode),
void(∗user_con_eval)(int mode, int n, const RealVector &x, RealVector &g, RealMatrix &grad_g, int
&result_mode))
alternate constructor for instantiations "on the fly"
• ∼SNLLOptimizer ()
destructor
• void find_optimum ()
Performs the iterations to determine the optimal solution.
• void finalize_run ()
performs cleanup, restores instances and calls parent finalize
• static void nlf1_evaluator (int mode, int n, const RealVector &x, double &f, RealVector &grad_f, int
&result_mode)
objective function evaluator function which provides function values and gradients to OPT++ methods.
• static void nlf2_evaluator (int mode, int n, const RealVector &x, double &f, RealVector &grad_f, RealSym-
Matrix &hess_f, int &result_mode)
objective function evaluator function which provides function values, gradients, and Hessians to OPT++ methods.
• static void constraint0_evaluator (int n, const RealVector &x, RealVector &g, int &result_mode)
constraint evaluator function for OPT++ methods which require only constraint values.
• static void constraint1_evaluator (int mode, int n, const RealVector &x, RealVector &g, RealMatrix &grad_-
g, int &result_mode)
constraint evaluator function which provides constraint values and gradients to OPT++ methods.
• static void constraint2_evaluator (int mode, int n, const RealVector &x, RealVector &g, RealMatrix &grad_-
g, OPTPP::OptppArray< RealSymMatrix > &hess_g, int &result_mode)
constraint evaluator function which provides constraint values, gradients, and Hessians to OPT++ methods.
Private Attributes
• SNLLOptimizer ∗ prevSnllOptInstance
pointer to the previously active object instance used for restoration in the case of iterator/model recursion
• OPTPP::NLP0 ∗ nlfObjective
objective NLF base class pointer
• OPTPP::NLP0 ∗ nlfConstraint
constraint NLF base class pointer
• OPTPP::NLP ∗ nlpConstraint
constraint NLP pointer
• OPTPP::NLF0 ∗ nlf0
• OPTPP::NLF1 ∗ nlf1
pointer to objective NLF for (analytic) gradient-based optimizers
• OPTPP::NLF1 ∗ nlf1Con
pointer to constraint NLF for (analytic) gradient-based optimizers
• OPTPP::FDNLF1 ∗ fdnlf1
pointer to objective NLF for (finite diff) gradient-based optimizers
• OPTPP::FDNLF1 ∗ fdnlf1Con
pointer to constraint NLF for (finite diff) gradient-based optimizers
• OPTPP::NLF2 ∗ nlf2
pointer to objective NLF for full Newton optimizers
• OPTPP::NLF2 ∗ nlf2Con
pointer to constraint NLF for full Newton optimizers
• OPTPP::OptimizeClass ∗ theOptimizer
optimizer base class pointer
• OPTPP::OptPDS ∗ optpds
PDS optimizer pointer.
• OPTPP::OptCG ∗ optcg
CG optimizer pointer.
• OPTPP::OptLBFGS ∗ optlbfgs
L-BFGS optimizer pointer.
• OPTPP::OptNewton ∗ optnewton
Newton optimizer pointer.
• OPTPP::OptQNewton ∗ optqnewton
Quasi-Newton optimizer pointer.
• OPTPP::OptFDNewton ∗ optfdnewton
Finite Difference Newton opt pointer.
• OPTPP::OptBCNewton ∗ optbcnewton
Bound constrained Newton opt pointer.
• OPTPP::OptBCQNewton ∗ optbcqnewton
Bnd constrained Quasi-Newton opt ptr.
• OPTPP::OptBCFDNewton ∗ optbcfdnewton
Bnd constrained FD-Newton opt ptr.
• OPTPP::OptNIPS ∗ optnips
NIPS optimizer pointer.
• OPTPP::OptQNIPS ∗ optqnips
Quasi-Newton NIPS optimizer pointer.
• OPTPP::OptFDNIPS ∗ optfdnips
Finite Difference NIPS opt pointer.
• String setUpType
flag for iteration mode: "model" (normal usage) or "user_functions" (user-supplied functions mode for "on the fly"
instantiations). NonDReliability currently uses the user_functions mode.
• RealVector initialPoint
holds initial point passed in for "user_functions" mode.
• RealVector lowerBounds
holds variable lower bounds passed in for "user_functions" mode.
• RealVector upperBounds
holds variable upper bounds passed in for "user_functions" mode.
Wrapper class for the OPT++ optimization library. The SNLLOptimizer class provides a wrapper for OPT++,
a C++ optimization library of nonlinear programming and pattern search techniques from the Computational
Sciences and Mathematics Research (CSMR) department at Sandia’s Livermore CA site. It uses a function pointer
approach for which passed functions must be either global functions or static member functions. Any attribute
used within static member functions must be either local to that function, a static member, or accessed by static
pointer.
The user input mappings are as follows: max_iterations, max_function_evaluations,
convergence_tolerance, max_step, gradient_tolerance, search_method, and search_-
scheme_size are set using OPT++’s setMaxIter(), setMaxFeval(), setFcnTol(), setMaxStep(), setGradTol(),
setSearchStrategy(), and setSSS() member functions, respectively; output verbosity is used to toggle OPT++’s
debug mode using the setDebug() member function. Internal to OPT++, there are 3 search strategies, while
the DAKOTA search_method specification supports 4 (value_based_line_search, gradient_-
based_line_search, trust_region, or tr_pds). The difference stems from the "is_expensive" flag in
OPT++. If the search strategy is LineSearch and "is_expensive" is turned on, then the value_based_line_-
search is used. Otherwise (the "is_expensive" default is off), the algorithm will use the gradient_based_-
line_search. Refer to [Meza, J.C., 1994] and to the OPT++ source in the Dakota/packages/OPTPP directory
for information on OPT++ class member functions.
standard constructor. This constructor is used for normal instantiations using data from the ProblemDescDB.
References Dakota::abort_handler(), Minimizer::boundConstraintFlag, SNLLBase::centeringParam,
SNLLOptimizer::constraint0_evaluator(), SNLLOptimizer::constraint1_evaluator(),
SNLLOptimizer::constraint2_evaluator(), Iterator::convergenceTol, Iterator::fdGradStepSize, SNL-
LOptimizer::fdnlf1, SNLLOptimizer::fdnlf1Con, ProblemDescDB::get_int(), ProblemDescDB::get_-
real(), Model::init_communicators(), SNLLBase::init_fn(), Iterator::intervalType, Iterator::iteratedModel,
Dakota::LARGE_SCALE, Iterator::maxConcurrency, Iterator::maxFunctionEvals, Iterator::maxIterations,
SNLLBase::maxStep, SNLLBase::meritFn, Iterator::methodName, Minimizer::minimizerRecasts, SNL-
LOptimizer::nlf0, SNLLOptimizer::nlf0_evaluator(), SNLLOptimizer::nlf1, SNLLOptimizer::nlf1_-
evaluator(), SNLLOptimizer::nlf1Con, SNLLOptimizer::nlf2, SNLLOptimizer::nlf2_evaluator(), SNLLOpti-
mizer::nlf2Con, SNLLOptimizer::nlfConstraint, SNLLOptimizer::nlfObjective, SNLLOptimizer::nlpConstraint,
Minimizer::numConstraints, Iterator::numContinuousVars, Minimizer::numNonlinearConstraints, SNL-
LOptimizer::optbcfdnewton, SNLLOptimizer::optbcnewton, SNLLOptimizer::optbcqnewton, SNLLOpti-
mizer::optcg, SNLLOptimizer::optfdnewton, SNLLOptimizer::optfdnips, SNLLOptimizer::optlbfgs, SNLLOpti-
mizer::optnewton, SNLLOptimizer::optnips, SNLLOptimizer::optpds, SNLLOptimizer::optqnewton, SNLLOp-
timizer::optqnips, Iterator::outputLevel, Iterator::probDescDB, SNLLBase::searchStrat, SNLLBase::snll_post_-
instantiate(), SNLLBase::snll_pre_instantiate(), SNLLBase::stepLenToBndry, SNLLOptimizer::theOptimizer,
and Minimizer::vendorNumericalGradFlag.
alternate constructor for instantiations "on the fly". This is an alternate constructor for instantiations on the fly
using a Model but no ProblemDescDB.
References Minimizer::boundConstraintFlag, SNLLOptimizer::constraint1_evaluator(), Itera-
tor::convergenceTol, Iterator::fdGradStepSize, SNLLBase::init_fn(), Iterator::intervalType, Dakota::LARGE_-
SCALE, Iterator::maxFunctionEvals, Iterator::maxIterations, SNLLBase::meritFn, Iterator::methodName, SNL-
LOptimizer::nlf1, SNLLOptimizer::nlf1_evaluator(), SNLLOptimizer::nlf1Con, SNLLOptimizer::nlfConstraint,
SNLLOptimizer::nlfObjective, SNLLOptimizer::nlpConstraint, Minimizer::numConstraints, Itera-
tor::numContinuousVars, Minimizer::numNonlinearConstraints, SNLLOptimizer::optbcqnewton, SNL-
LOptimizer::optlbfgs, SNLLOptimizer::optqnewton, SNLLOptimizer::optqnips, Iterator::outputLevel,
SNLLBase::searchStrat, SNLLBase::snll_post_instantiate(), SNLLBase::snll_pre_instantiate(), SNLLOpti-
mizer::theOptimizer, and Minimizer::vendorNumericalGradFlag.
13.144.2.3 SNLLOptimizer (const RealVector & initial_pt, const RealVector & var_l_bnds, const
RealVector & var_u_bnds, const RealMatrix & lin_ineq_coeffs, const RealVector &
lin_ineq_l_bnds, const RealVector & lin_ineq_u_bnds, const RealMatrix & lin_eq_coeffs,
const RealVector & lin_eq_tgts, const RealVector & nln_ineq_l_bnds, const RealVector &
nln_ineq_u_bnds, const RealVector & nln_eq_tgts, void(∗)(int mode, int n, const RealVector
&x, double &f, RealVector &grad_f, int &result_mode) user_obj_eval, void(∗)(int mode, int n,
const RealVector &x, RealVector &g, RealMatrix &grad_g, int &result_mode) user_con_eval)
alternate constructor for instantiations "on the fly". This is an alternate constructor for performing an optimization
using the passed-in objective function and constraint function pointers.
References Minimizer::bigRealBoundSize, Minimizer::boundConstraintFlag, SNLLBase::init_fn(), SNLLOp-
timizer::initialPoint, Dakota::LARGE_SCALE, SNLLOptimizer::lowerBounds, SNLLBase::meritFn, SNL-
LOptimizer::nlf1, SNLLOptimizer::nlf1Con, SNLLOptimizer::nlfConstraint, SNLLOptimizer::nlfObjective,
SNLLOptimizer::nlpConstraint, Minimizer::numConstraints, Iterator::numContinuousVars, Mini-
mizer::numNonlinearConstraints, SNLLOptimizer::optbcqnewton, SNLLOptimizer::optlbfgs, SNL-
LOptimizer::optqnewton, SNLLOptimizer::optqnips, Iterator::outputLevel, SNLLBase::searchStrat,
SNLLBase::snll_initialize_run(), SNLLBase::snll_post_instantiate(), SNLLBase::snll_pre_instantiate(), SNL-
LOptimizer::theOptimizer, and SNLLOptimizer::upperBounds.
13.144.3.1 void nlf0_evaluator (int n, const RealVector & x, double & f, int & result_mode) [static,
private]
objective function evaluator function for OPT++ methods which require only function values. For use when
DAKOTA computes f and gradients are not directly available. This is used by nongradient-based optimizers such
as PDS and by gradient-based optimizers in vendor numerical gradient mode (opt++’s internal finite difference
routine is used).
References Model::compute_response(), Model::continuous_variables(), Model::current_response(),
Response::function_value(), Iterator::iteratedModel, SNLLBase::lastEvalVars, SNLLBase::lastFnEvalLocn,
Minimizer::numNonlinearConstraints, Iterator::outputLevel, Model::primary_response_fn_sense(), and SNL-
LOptimizer::snllOptInstance.
Referenced by SNLLOptimizer::SNLLOptimizer().
13.144.3.2 void nlf1_evaluator (int mode, int n, const RealVector & x, double & f, RealVector & grad_f,
int & result_mode) [static, private]
objective function evaluator function which provides function values and gradients to OPT++ methods. For use
when DAKOTA computes f and df/dX (regardless of gradientType). Vendor numerical gradient case is handled
by nlf0_evaluator.
References Iterator::activeSet, Model::compute_response(), Model::continuous_variables(),
Model::current_response(), Response::function_gradient_copy(), Response::function_value(), Itera-
tor::iteratedModel, SNLLBase::lastEvalMode, SNLLBase::lastEvalVars, SNLLBase::lastFnEvalLocn,
Minimizer::numNonlinearConstraints, Iterator::outputLevel, Model::primary_response_fn_sense(),
ActiveSet::request_values(), and SNLLOptimizer::snllOptInstance.
Referenced by SNLLOptimizer::SNLLOptimizer().
13.144.3.3 void nlf2_evaluator (int mode, int n, const RealVector & x, double & f, RealVector & grad_f,
RealSymMatrix & hess_f, int & result_mode) [static, private]
objective function evaluator function which provides function values, gradients, and Hessians to OPT++ methods.
For use when DAKOTA receives f, df/dX, and d^2f/dX^2 from the ApplicationInterface (analytic only). Finite
differencing does not make sense for a full Newton approach, since the lack of analytic gradients and Hessian should
dictate the use of quasi-Newton or FD-Newton. Thus, there is no fdnlf2_evaluator for use with full Newton ap-
proaches, since it is preferable to use quasi-Newton or FD-Newton with nlf1. Gauss-Newton does not fit this model;
it uses nlf2_evaluator_gn instead of nlf2_evaluator.
References Iterator::activeSet, Model::compute_response(), Model::continuous_variables(),
Model::current_response(), Response::function_gradient_copy(), Response::function_hessian(),
Response::function_value(), Iterator::iteratedModel, SNLLBase::lastEvalMode, SNLLBase::lastEvalVars,
SNLLBase::lastFnEvalLocn, Minimizer::numNonlinearConstraints, Iterator::outputLevel,
Model::primary_response_fn_sense(), ActiveSet::request_values(), and SNLLOptimizer::snllOptInstance.
Referenced by SNLLOptimizer::SNLLOptimizer().
13.144.3.4 void constraint0_evaluator (int n, const RealVector & x, RealVector & g, int & result_mode)
[static, private]
constraint evaluator function for OPT++ methods which require only constraint values. For use when DAKOTA
computes g and gradients are not directly available. This is used by nongradient-based optimizers and by
gradient-based optimizers in vendor numerical gradient mode (OPT++'s internal finite difference routine is used).
References Model::compute_response(), Model::continuous_variables(), SNLLBase::copy_con_vals_dak_to_optpp(),
Model::current_response(), Response::function_values(), Iterator::iteratedModel, SNLLBase::lastEvalVars,
SNLLBase::lastFnEvalLocn, Optimizer::numObjectiveFns, Iterator::outputLevel, and
SNLLOptimizer::snllOptInstance.
Referenced by SNLLOptimizer::SNLLOptimizer().
13.144.3.5 void constraint1_evaluator (int mode, int n, const RealVector & x, RealVector & g,
RealMatrix & grad_g, int & result_mode) [static, private]
constraint evaluator function which provides constraint values and gradients to OPT++ methods. For use when
DAKOTA computes g and dg/dX (regardless of gradientType). Vendor numerical gradient case is handled by
constraint0_evaluator.
References Iterator::activeSet, Model::compute_response(), Model::continuous_variables(),
SNLLBase::copy_con_grad(), SNLLBase::copy_con_vals_dak_to_optpp(), Model::current_response(),
Response::function_gradients(), Response::function_values(), Iterator::iteratedModel,
SNLLBase::lastEvalMode, SNLLBase::lastEvalVars, SNLLBase::lastFnEvalLocn, Optimizer::numObjectiveFns,
Iterator::outputLevel, ActiveSet::request_values(), and SNLLOptimizer::snllOptInstance.
Referenced by SNLLOptimizer::SNLLOptimizer().
13.144.3.6 void constraint2_evaluator (int mode, int n, const RealVector & x, RealVector & g,
RealMatrix & grad_g, OPTPP::OptppArray< RealSymMatrix > & hess_g, int &
result_mode) [static, private]
constraint evaluator function which provides constraint values, gradients, and Hessians to OPT++ methods. For
use when DAKOTA computes g, dg/dX, and d^2g/dx^2 (analytic only).
References Iterator::activeSet, Model::compute_response(), Model::continuous_variables(),
SNLLBase::copy_con_grad(), SNLLBase::copy_con_hess(), SNLLBase::copy_con_vals_dak_to_optpp(),
Model::current_response(), Response::function_gradients(), Response::function_hessians(),
Response::function_values(), Iterator::iteratedModel, SNLLBase::lastEvalMode, SNLLBase::lastEvalVars,
SNLLBase::lastFnEvalLocn, Optimizer::numObjectiveFns, Iterator::outputLevel, ActiveSet::request_values(),
and SNLLOptimizer::snllOptInstance.
Referenced by SNLLOptimizer::SNLLOptimizer().
The documentation for this class was generated from the following files:
• SNLLOptimizer.hpp
• SNLLOptimizer.cpp
SOLBase
NLSSOLLeastSq NPSOLOptimizer
• ∼SOLBase ()
destructor
• void deallocate_arrays ()
Deallocates memory previously allocated by allocate_arrays().
• void allocate_workspace (const int &num_cv, const int &num_nln_con, const int &num_lin_con, const int
&num_lsq)
Allocates real and integer workspaces for the SOL algorithms.
• void set_options (bool speculative_flag, bool vendor_num_grad_flag, short output_lev, const int
&verify_lev, const Real &fn_prec, const Real &linesrch_tol, const int &max_iter, const Real &constr_tol,
const Real &conv_tol, const std::string &grad_type, const Real &fdss)
Sets SOL method options using calls to npoptn2.
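allocate_workspace() must size the integer and real workspaces before the Fortran routines are called. A sketch of this sizing, following the workspace bounds stated in the NPSOL user's guide (treat the exact expressions as illustrative; Dakota's actual sizing may differ in detail):

```cpp
#include <cassert>

// Illustrative workspace sizing for the SOL (NPSOL/NLSSOL) algorithms, based
// on the leniw/lenw requirements in the NPSOL user's guide:
//   n     = number of continuous variables
//   nclin = number of linear constraints
//   ncnln = number of nonlinear constraints
struct WorkspaceSizes { int intSize; int realSize; };

WorkspaceSizes sol_workspace_sizes(int n, int nclin, int ncnln) {
  WorkspaceSizes ws;
  ws.intSize = 3 * n + nclin + 2 * ncnln;        // integer workspace bound
  if (ncnln == 0 && nclin == 0)
    ws.realSize = 20 * n;                        // unconstrained/bound case
  else if (ncnln == 0)
    ws.realSize = 2 * n * n + 20 * n + 11 * nclin;
  else
    ws.realSize = 2 * n * n + n * nclin + 2 * n * ncnln
                + 20 * n + 11 * nclin + 21 * ncnln;
  return ws;
}
```

In SOLBase the resulting sizes are stored in intWorkSpaceSize/realWorkSpaceSize and the corresponding intWorkSpace/realWorkSpace arrays are resized accordingly.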
Protected Attributes
• int realWorkSpaceSize
size of realWorkSpace
• int intWorkSpaceSize
size of intWorkSpace
• RealArray realWorkSpace
real work space for NPSOL/NLSSOL
• IntArray intWorkSpace
int work space for NPSOL/NLSSOL
• int nlnConstraintArraySize
used for non-zero array sizing (nonlinear constraints)
• int linConstraintArraySize
used for non-zero array sizing (linear constraints)
• RealArray cLambda
CLAMBDA from NPSOL manual: Lagrange multipliers.
• IntArray constraintState
ISTATE from NPSOL manual: constraint status.
• int informResult
INFORM from NPSOL manual: optimization status on exit.
• int numberIterations
ITER from NPSOL manual: number of (major) iterations performed.
• int boundsArraySize
length of augmented bounds arrays (variable bounds plus linear and nonlinear constraint bounds)
• double ∗ linConstraintMatrixF77
[A] matrix from NPSOL manual: linear constraint coefficients
• double ∗ upperFactorHessianF77
[R] matrix from NPSOL manual: upper Cholesky factor of the Hessian of the Lagrangian.
• double ∗ constraintJacMatrixF77
[CJAC] matrix from NPSOL manual: nonlinear constraint Jacobian
• int fnEvalCntr
counter for testing against maxFunctionEvals
• size_t constrOffset
used in constraint_eval() to bridge NLSSOLLeastSq::numLeastSqTerms and NPSOLOptimizer::numObjectiveFns
Base class for Stanford SOL software. The SOLBase class provides a common base class for NPSOLOptimizer
and NLSSOLLeastSq, both of which are Fortran 77 sequential quadratic programming algorithms from Stanford
University marketed by Stanford Business Associates.
The documentation for this class was generated from the following files:
• SOLBase.hpp
• SOLBase.cpp
Interface
ApplicationInterface
ProcessApplicInterface
ProcessHandleApplicInterface
SpawnApplicInterface
• ∼SpawnApplicInterface ()
destructor
• size_t wait_local_analyses ()
wait for asynchronous analyses on the local processor, completing at least one job
test for asynchronous analysis completions on the local processor and return results for any completions by sending
messages
Derived application interface class which spawns simulation codes using spawnvp. SpawnApplicInterface is used
on Windows systems and is a peer to ForkApplicInterface for Unix systems.
The documentation for this class was generated from the following files:
• SpawnApplicInterface.hpp
• SpawnApplicInterface.cpp
Base class for the strategy class hierarchy. Inheritance diagram for Strategy::
Strategy
• Strategy ()
default constructor
• virtual ∼Strategy ()
destructor
• void init_iterator_parallelism ()
convenience function for initializing iterator communicators, setting parallel configuration attributes, and
managing outputs and restart.
Protected Attributes
• ProblemDescDB & probDescDB
class member reference to the problem description database
• String strategyName
type of strategy: single_method, hybrid, multi_start, or pareto_set.
• bool stratIterMessagePass
flag for message passing at the strategy-iterator (si) level
• bool stratIterDedMaster
flag for dedicated master partitioning at the strategy-iterator (si) level
• int worldRank
processor rank in MPI_COMM_WORLD
• int worldSize
size of MPI_COMM_WORLD
• int iteratorCommRank
processor rank in iteratorComm
• int iteratorCommSize
number of processors in iteratorComm
• int numIteratorServers
number of concurrent iterator partitions
• int iteratorServerId
identifier for an iterator server
• bool graph2DFlag
flag for using 2D graphics plots
• bool tabularDataFlag
flag for file tabulation of graphics data
• String tabularDataFile
filename for tabulation of graphics data
• bool resultsOutputFlag
whether to output results data
• std::string resultsOutputFile
filename for results data
• int maxConcurrency
maximum iterator concurrency possible in Strategy
• int numIteratorJobs
number of iterator executions to schedule
• int paramsMsgLen
length of MPI buffer for parameter input instance(s)
• int resultsMsgLen
length of MPI buffer for results output instance(s)
Private Attributes
• Strategy ∗ strategyRep
pointer to the letter (initialized only for the envelope)
• int referenceCount
number of objects sharing strategyRep
Base class for the strategy class hierarchy. The Strategy class is the base class for the class hierarchy providing
the top level control in DAKOTA. The strategy is responsible for creating and managing iterators and models.
For memory efficiency and enhanced polymorphism, the strategy hierarchy employs the "letter/envelope idiom"
(see Coplien "Advanced C++", p. 133), for which the base class (Strategy) serves as the envelope and one of the
derived classes (selected in Strategy::get_strategy()) serves as the letter.
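The reference-counted letter/envelope idiom described above can be sketched as follows (a simplified stand-in, not Dakota's actual Strategy code): the envelope holds a pointer to a shared letter, and the letter's reference count governs its lifetime across copy construction, assignment, and destruction.

```cpp
#include <cassert>
#include <string>

// Minimal letter/envelope sketch with reference counting.  As in Strategy,
// one class serves both roles: an envelope has rep_ != nullptr, while a
// letter has rep_ == nullptr ("the letter IS the representation").
class Envelope {
public:
  Envelope() : rep_(nullptr), refCount_(1) {}            // empty envelope

  explicit Envelope(const std::string& name)             // envelope ctor:
    : rep_(new Envelope(Letter(), name)), refCount_(1) {} // builds the letter

  Envelope(const Envelope& other) : rep_(other.rep_), refCount_(1) {
    if (rep_) ++rep_->refCount_;                         // share the letter
  }

  Envelope& operator=(const Envelope& other) {
    if (rep_ != other.rep_) {
      if (rep_ && --rep_->refCount_ == 0) delete rep_;   // release old letter
      rep_ = other.rep_;
      if (rep_) ++rep_->refCount_;                       // adopt new letter
    }
    return *this;
  }

  ~Envelope() {
    if (rep_ && --rep_->refCount_ == 0) delete rep_;     // last envelope frees
  }

  int letter_use_count() const { return rep_ ? rep_->refCount_ : 0; }

private:
  struct Letter {};                                      // tag selecting the
  Envelope(Letter, const std::string& name)              // letter constructor
    : rep_(nullptr), refCount_(1), name_(name) {}        // rep_ stays NULL

  Envelope* rep_;      // pointer to the letter (null for letter objects)
  int refCount_;       // envelopes sharing this letter (meaningful on letters)
  std::string name_;   // stand-in for strategyName-driven letter selection
};
```

This is why the Strategy docs stress checking for NULL strategyRep: a default-constructed envelope and every letter carry a null representation pointer.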
13.147.2.1 Strategy ()
default constructor Default constructor. strategyRep is NULL in this case (a populated problem_db is needed
to build a meaningful Strategy object). This makes it necessary to check for NULL in the copy constructor,
assignment operator, and destructor.
envelope constructor Used in main.cpp instantiation to build the envelope. This constructor only needs to extract
enough data to properly execute get_strategy, since Strategy::Strategy(BaseConstructor, problem_db) builds the
actual base class data inherited by the derived strategies.
References Dakota::abort_handler(), Strategy::get_strategy(), and Strategy::strategyRep.
copy constructor Copy constructor manages sharing of strategyRep and incrementing of referenceCount.
References Strategy::referenceCount, and Strategy::strategyRep.
destructor Destructor decrements referenceCount and only deletes strategyRep when referenceCount reaches zero.
References ParallelLibrary::free_iterator_communicators(), ParallelLibrary::is_null(), Strategy::parallelLib,
Strategy::referenceCount, and Strategy::strategyRep.
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion
in the derived class constructors - Coplien, p. 139) This constructor is the one which must build the base class
data for all inherited strategies. get_strategy() instantiates a derived class letter and the derived constructor
selects this base class constructor in its initialization list (to avoid the recursion of the base class constructor
calling get_strategy() again). Since the letter IS the representation, its representation pointer is set to NULL
(an uninitialized pointer causes problems in ∼Strategy).
References ProblemDescDB::get_int(), Strategy::probDescDB, and Dakota::write_precision.
assignment operator Assignment operator decrements referenceCount for old strategyRep, assigns new
strategyRep, and increments referenceCount for new strategyRep.
References Strategy::referenceCount, and Strategy::strategyRep.
pack a send_buffer for assigning an iterator job to a server This virtual function redefinition is executed on the
dedicated master processor for self scheduling. It is not used for peer partitions.
Reimplemented in ConcurrentStrategy, and SequentialHybridStrategy.
References Strategy::pack_parameters_buffer(), and Strategy::strategyRep.
Referenced by Strategy::pack_parameters_buffer(), and Strategy::self_schedule_iterators().
unpack a recv_buffer for accepting an iterator job from the scheduler This virtual function redefinition is executed
on an iterator server for dedicated master self scheduling. It is not used for peer partitions.
Reimplemented in ConcurrentStrategy, and SequentialHybridStrategy.
References Strategy::strategyRep, and Strategy::unpack_parameters_buffer().
Referenced by Strategy::serve_iterators(), and Strategy::unpack_parameters_buffer().
pack a send_buffer for returning iterator results from a server This virtual function redefinition is executed either
on an iterator server for dedicated master self scheduling or on peers 2 through n for static scheduling.
Reimplemented in ConcurrentStrategy, and SequentialHybridStrategy.
References Strategy::pack_results_buffer(), and Strategy::strategyRep.
Referenced by Strategy::pack_results_buffer(), Strategy::serve_iterators(), and
Strategy::static_schedule_iterators().
unpack a recv_buffer for accepting iterator results from a server This virtual function redefinition is executed on
a strategy master (either the dedicated master processor for self scheduling or peer 1 for static scheduling).
convenience function for initializing iterator communicators, setting parallel configuration attributes, and
managing outputs and restart. This function is called from derived class constructors once maxConcurrency is
defined but prior to instantiating Iterators and Models.
References ParallelLevel::dedicated_master_flag(), ProblemDescDB::get_int(), ProblemDescDB::get_string(),
ParallelLibrary::init_iterator_communicators(), Strategy::iteratorCommRank, Strategy::iteratorCommSize,
Strategy::iteratorServerId, ParallelLibrary::manage_outputs_restart(), Strategy::maxConcurrency,
ParallelLevel::message_pass(), ParallelLevel::num_servers(), Strategy::numIteratorServers, Strategy::parallelLib,
Strategy::probDescDB, Strategy::resultsOutputFile, Strategy::resultsOutputFlag,
ParallelLevel::server_communicator_rank(), ParallelLevel::server_communicator_size(),
ParallelLevel::server_id(), Strategy::stratIterDedMaster, and Strategy::stratIterMessagePass.
Referenced by CollaborativeHybridStrategy::CollaborativeHybridStrategy(),
ConcurrentStrategy::ConcurrentStrategy(), EmbeddedHybridStrategy::EmbeddedHybridStrategy(),
SequentialHybridStrategy::SequentialHybridStrategy(), and SingleMethodStrategy::SingleMethodStrategy().
13.147.3.7 void init_iterator (Iterator & the_iterator, Model & the_model) [protected]
convenience function for allocating comms prior to running an iterator This is a convenience function for
encapsulating the allocation of communicators prior to running an iterator. It does not require a strategyRep
forward since it is only used by letter objects.
References ProblemDescDB::get_iterator(), Model::init_comms_bcast_flag(), Model::init_communicators(),
Strategy::iteratorCommRank, Strategy::iteratorCommSize, Iterator::maximum_concurrency(),
Strategy::probDescDB, Model::serve_configurations(), and Model::stop_configurations().
Referenced by HybridStrategy::allocate_methods(), ConcurrentStrategy::ConcurrentStrategy(), and
SingleMethodStrategy::SingleMethodStrategy().
13.147.3.8 void run_iterator (Iterator & the_iterator, Model & the_model) [protected]
Convenience function for invoking an iterator and managing parallelism. This version omits communicator
repartitioning. Function must be public due to use by MINLPNode. This is a convenience function for
encapsulating the parallel features (run/serve) of running an iterator. This function omits
allocation/deallocation of communicators to provide greater efficiency in those strategies which involve multiple
iterator executions but only require communicator allocation/deallocation to be performed once.
It does not require a strategyRep forward since it is only used by letter objects. While it is currently a public
function due to its use in MINLPNode, this usage still involves a strategy letter object.
References Strategy::iteratorCommRank, Iterator::maximum_concurrency(), Iterator::run_iterator(),
Model::serve(), and Model::stop_servers().
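The run/serve split this function performs can be sketched without MPI as follows (all names here are hypothetical stand-ins): rank 0 of the iterator partition drives the iterator and then releases the servers, while nonzero ranks block servicing evaluation requests.

```cpp
#include <cassert>
#include <functional>
#include <string>

// Illustrative sketch (no real MPI) of the run/serve partition in
// Strategy::run_iterator(): rank 0 runs the iterator, then tells the model's
// servers to stop; all other ranks serve evaluation requests until released.
std::string run_or_serve(int iteratorCommRank,
                         const std::function<void()>& run_iterator,
                         const std::function<void()>& stop_servers,
                         const std::function<void()>& serve) {
  if (iteratorCommRank == 0) {
    run_iterator();   // drive the optimization/UQ method
    stop_servers();   // release ranks blocked in serve()
    return "ran";
  }
  serve();            // block, servicing model evaluation requests
  return "served";
}
```

In Dakota the serve/stop calls correspond to Model::serve() and Model::stop_servers(), with the actual coordination handled by the parallel library.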
13.147.3.9 void free_iterator (Iterator & the_iterator, Model & the_model) [protected]
convenience function for deallocating comms after running an iterator This is a convenience function for
encapsulating the deallocation of communicators after running an iterator. It does not require a strategyRep
forward since it is only used by letter objects.
References Model::free_communicators(), and Iterator::maximum_concurrency().
Referenced by HybridStrategy::deallocate_methods(), ConcurrentStrategy::∼ConcurrentStrategy(), and
SingleMethodStrategy::∼SingleMethodStrategy().
13.147.3.10 void schedule_iterators (Iterator & the_iterator, Model & the_model) [protected]
short convenience function for distributing control among self_schedule_iterators(), serve_iterators(), and
static_schedule_iterators() This implementation supports the scheduling of multiple jobs using a single
iterator/model pair. Additional future (overloaded) implementations could involve independent iterator instances.
References Strategy::self_schedule_iterators(), Strategy::serve_iterators(), Strategy::static_schedule_iterators(),
Strategy::stratIterDedMaster, and Strategy::worldRank.
Referenced by SequentialHybridStrategy::run_sequential(), EmbeddedHybridStrategy::run_strategy(),
ConcurrentStrategy::run_strategy(), and CollaborativeHybridStrategy::run_strategy().
executed by the strategy master to self-schedule iterator jobs among slave iterator servers (called by derived
run_strategy()) This function is adapted from ApplicationInterface::self_schedule_evaluations(). It occurs on a
dedicated master scheduler, so output is directed to std::cout.
References ParallelLibrary::free(), ParallelLibrary::irecv_si(), ParallelLibrary::isend_si(),
Strategy::numIteratorJobs, Strategy::numIteratorServers, Strategy::pack_parameters_buffer(),
Strategy::parallelLib, ParallelLibrary::print_configuration(), MPIPackBuffer::reset(),
MPIUnpackBuffer::resize(), Strategy::resultsMsgLen, Strategy::unpack_results_buffer(),
ParallelLibrary::waitall(), and ParallelLibrary::waitsome().
Referenced by Strategy::schedule_iterators().
13.147.3.12 void serve_iterators (Iterator & the_iterator, Model & the_model) [protected]
executed on the slave iterator servers to perform iterator jobs assigned by the strategy master (called by derived
run_strategy()) This function is similar in structure to ApplicationInterface::serve_evaluations_synch().
References ParallelLibrary::bcast_i(), Strategy::iteratorCommRank, Strategy::iteratorCommSize,
Strategy::pack_results_buffer(), ParallelLibrary::parallel_time(), Strategy::parallelLib, Strategy::paramsMsgLen,
ParallelLibrary::recv_si(), Strategy::resultsMsgLen, Strategy::run_iterator(), ParallelLibrary::send_si(),
Strategy::unpack_parameters_buffer(), and Strategy::update_local_results().
Referenced by Strategy::schedule_iterators().
Used by the envelope to instantiate the correct letter class. Used only by the envelope constructor to initialize
strategyRep to the appropriate derived type, as given by the strategyName attribute.
References ProblemDescDB::get_string(), Strategy::probDescDB, Strategy::strategyName, and
Dakota::strbegins().
Referenced by Strategy::Strategy().
The documentation for this class was generated from the following files:
• DakotaStrategy.hpp
• DakotaStrategy.cpp
Approximation
SurfpackApproximation
• ∼SurfpackApproximation ()
destructor
• void build ()
SurfData object will be created from Dakota’s SurrogateData, and the appropriate Surfpack build method will be
invoked.
• bool diagnostics_available ()
check if the diagnostics are available (true for the Surfpack types)
• Real diagnostic (const String &metric_type, const SurfpackModel &model, const SurfData &data)
retrieve a single diagnostic metric for the diagnostic type specified on the given model and data
• void merge_variable_arrays (const RealVector &cv, const IntVector &div, const RealVector &drv,
RealArray &ra)
merge cv, div, and drv vectors into a single ra array
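A sketch of what merge_variable_arrays() presumably does (the concatenation order cv, div, drv is an assumption): Surfpack operates on a single real-valued point, so the discrete integer variables must be promoted to Real and appended alongside the continuous and discrete real variables.

```cpp
#include <cassert>
#include <vector>

using RealVector = std::vector<double>;
using IntVector  = std::vector<int>;
using RealArray  = std::vector<double>;

// Concatenate continuous (cv), discrete integer (div), and discrete real
// (drv) variables into one real array, promoting the integers to double.
void merge_variable_arrays(const RealVector& cv, const IntVector& div,
                           const RealVector& drv, RealArray& ra) {
  ra.clear();
  ra.reserve(cv.size() + div.size() + drv.size());
  ra.insert(ra.end(), cv.begin(), cv.end());
  for (int v : div) ra.push_back(static_cast<double>(v));   // promote to Real
  ra.insert(ra.end(), drv.begin(), drv.end());
}
```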
Private Attributes
• unsigned short approxOrder
order of polynomial approximation
• SurfpackModel ∗ model
The native Surfpack approximation.
• SurfpackModelFactory ∗ factory
factory for the SurfpackModel instance
• SurfData ∗ surfData
The data used to build the approximation, in Surfpack format.
• String exportModelName
A Surfpack model name for saving the surrogate model.
• StringArray diagnosticSet
set of diagnostic metrics
• bool crossValidateFlag
whether to perform cross validation
• unsigned numFolds
number of folds for CV
• Real percentFold
percentage of data for CV
• bool pressFlag
whether to perform PRESS
Derived approximation class for Surfpack approximation classes. Interface between Surfpack and Dakota. The
SurfpackApproximation class is the interface between Dakota and Surfpack. Based on the information in the
ProblemDescDB that is passed in through the constructor, SurfpackApproximation builds a Surfpack Surface
object that corresponds to one of the following data-fitting techniques: polynomial regression, kriging, artificial
neural networks, radial basis function networks, or multivariate adaptive regression splines (MARS).
13.148.2.1 SurfpackApproximation (const String & approx_type, const UShortArray & approx_order,
size_t num_vars, short data_order, short output_level)
alternate constructor On-the-fly constructor which uses mostly Surfpack model defaults.
References Dakota::abort_handler(), SurfpackApproximation::approxOrder, Approximation::approxType,
Approximation::buildDataOrder, SurfpackApproximation::factory, and Approximation::outputLevel.
SurfData object will be created from Dakota’s SurrogateData, and the appropriate Surfpack build method will be
invoked.
surfData will be deleted in dtor
Todo
Right now, we’re completely deleting the old data and then recopying the current data into a SurfData object.
This was just the easiest way to arrive at a solution that would build and run. This function is frequently called
from addPoint rebuild, however, and it’s not good to go through this whole process every time one more data
point is added.
13.148.3.2 const RealSymMatrix & hessian (const Variables & vars) [protected, virtual]
Todo
Make this acceptably efficient
copy from SurrogateData to SurfPoint/SurfData Copy the data stored in Dakota-style SurrogateData into
Surfpack-style SurfPoint and SurfData objects.
References SurfpackApproximation::add_anchor_to_surfdata(), SurfpackApproximation::add_sd_to_surfdata(),
Approximation::approxData, Approximation::buildDataOrder, SurfpackApproximation::factory, and
Approximation::outputLevel.
Referenced by SurfpackApproximation::build().
set the anchor point (including gradient and hessian if present) into surf_data If there is an anchor point, add an
equality constraint for its response value. Also add constraints for gradient and hessian, if applicable.
References Dakota::abort_handler(), Approximation::approxData, Dakota::copy_data(),
SurfpackApproximation::copy_matrix(), SurfpackApproximation::gradient(), SurfpackApproximation::hessian(),
Approximation::outputLevel, SurfpackApproximation::sdv_to_realarray(), and Dakota::write_data().
Referenced by SurfpackApproximation::surrogates_to_surf_data().
The documentation for this class was generated from the following files:
• SurfpackApproximation.hpp
• SurfpackApproximation.cpp
Iterator
Minimizer
SurrBasedMinimizer
SurrBasedGlobalMinimizer
• ∼SurrBasedGlobalMinimizer ()
destructor
Private Attributes
• bool replacePoints
flag for replacing the previous iteration’s point additions, rather than continuing to append, during construction of
the next surrogate
The global surrogate-based minimizer which sequentially minimizes and updates a global surrogate model without
trust region controls. This method uses a SurrogateModel to perform minimization (optimization or nonlinear
least squares) through a set of iterations. At each iteration, a surrogate is built, the surrogate is minimized, and the
optimal points from the surrogate are then evaluated with the "true" function, to generate new points upon which
the surrogate for the next iteration is built.
The documentation for this class was generated from the following files:
• SurrBasedGlobalMinimizer.hpp
• SurrBasedGlobalMinimizer.cpp
Iterator
Minimizer
SurrBasedMinimizer
SurrBasedLocalMinimizer
• ∼SurrBasedLocalMinimizer ()
destructor
• void find_center_approx ()
• void hard_convergence_check (const Response &response_truth, const RealVector &c_vars, const
RealVector &lower_bnds, const RealVector &upper_bnds)
check for hard convergence (norm of projected gradient of merit function near zero)
• void tr_ratio_check (const RealVector &c_vars_star, const RealVector &tr_lower_bounds, const RealVector
&tr_upper_bounds)
compute trust region ratio (for SBLM iterate acceptance and trust region resizing) and check for soft convergence
(diminishing returns)
• static void hom_objective_eval (int &mode, int &n, double ∗tau_and_x, double &f, double ∗grad_f, int
&)
static function used by NPSOL as the objective function in the homotopy constraint relaxation formulation.
• static void hom_constraint_eval (int &mode, int &ncnln, int &n, int &nrowj, int ∗needc, double
∗tau_and_x, double ∗c, double ∗cjac, int &nstate)
static function used by NPSOL as the constraint function in the homotopy constraint relaxation formulation.
Private Attributes
• Real origTrustRegionFactor
original user specification for trustRegionFactor
• Real trustRegionFactor
the trust region factor is used to compute the total size of the trust region; it is a percentage, e.g. for
trustRegionFactor = 0.1, the actual size of the trust region will be 10% of the global bounds (upper bound -
lower bound for each design variable).
• Real minTrustRegionFactor
a soft convergence control: stop SBLM when the trust region factor is reduced below the value of
minTrustRegionFactor
• Real trRatioContractValue
trust region ratio min value: contract tr if ratio below this value
• Real trRatioExpandValue
trust region ratio sufficient value: expand tr if ratio above this value
• Real gammaContract
trust region contraction factor
• Real gammaExpand
trust region expansion factor
• short approxSubProbObj
type of approximate subproblem objective: ORIGINAL_OBJ, LAGRANGIAN_OBJ, or
AUGMENTED_LAGRANGIAN_OBJ
• short approxSubProbCon
type of approximate subproblem constraints: NO_CON, LINEARIZED_CON, or ORIGINAL_CON
• Model approxSubProbModel
the approximate sub-problem formulation solved on each approximate minimization cycle: may be a shallow copy
of iteratedModel, or may involve a RecastModel recursion applied to iteratedModel
• bool recastSubProb
flag to indicate when approxSubProbModel involves a RecastModel recursion
• short trConstraintRelax
type of trust region constraint relaxation for infeasible starting points: NO_RELAX or HOMOTOPY
• short meritFnType
type of merit function used in trust region ratio logic: PENALTY_MERIT, ADAPTIVE_PENALTY_MERIT,
LAGRANGIAN_MERIT, or AUGMENTED_LAGRANGIAN_MERIT
• short acceptLogic
type of iterate acceptance test logic: FILTER or TR_RATIO
• int penaltyIterOffset
iteration offset used to update the scaling of the penalty parameter for adaptive_penalty merit functions
• short convergenceFlag
code indicating satisfaction of hard or soft convergence conditions
• short softConvCount
number of consecutive candidate point rejections. If the count reaches softConvLimit, stop SBLM.
• short softConvLimit
the limit on consecutive candidate point rejections. If reached by softConvCount, stop SBLM.
• bool truthGradientFlag
flags the use/availability of truth gradients within the SBLM process
• bool approxGradientFlag
flags the use/availability of surrogate gradients within the SBLM process
• bool truthHessianFlag
flags the use/availability of truth Hessians within the SBLM process
• bool approxHessianFlag
flags the use/availability of surrogate Hessians within the SBLM process
• short correctionType
flags the use of surrogate correction techniques at the center of each trust region
• bool globalApproxFlag
flags the use of a global data fit surrogate (rsm, ann, mars, kriging)
• bool multiptApproxFlag
flags the use of a multipoint data fit surrogate (TANA)
• bool localApproxFlag
flags the use of a local data fit surrogate (Taylor series)
• bool hierarchApproxFlag
flags the use of a model hierarchy/multifidelity surrogate
• bool newCenterFlag
flags the acceptance of a candidate point and the existence of a new trust region center
• bool daceCenterPtFlag
flags the availability of the center point in the DACE evaluations for global approximations (CCD, Box-Behnken)
• bool multiLayerBypassFlag
flags the simultaneous presence of two conditions: (1) additional layerings w/i actual_model (e.g., surrogateModel
= layered/nested/layered -> actual_model = nested/layered), and (2) a user-specification to bypass all layerings
within actual_model for the evaluation of truth data (responseCenterTruth and responseStarTruth).
• bool useDerivsFlag
flag for the "use_derivatives" specification for which derivatives are to be evaluated at each DACE point in global
surrogate builds.
• RealVector nonlinIneqLowerBndsSlack
individual violations of nonlinear inequality constraint lower bounds
• RealVector nonlinIneqUpperBndsSlack
individual violations of nonlinear inequality constraint upper bounds
• RealVector nonlinEqTargetsSlack
individual violations of nonlinear equality constraint targets
• Real tau
constraint relaxation parameter
• Real alpha
constraint relaxation parameter backoff parameter (multiplier)
• Variables varsCenter
variables at the trust region center
• Response responseCenterApprox
approx response at trust region center
• Response responseStarApprox
approx response at SBLM cycle minimum
• IntResponsePair responseCenterTruth
truth response at trust region center
• IntResponsePair responseStarTruth
truth response at SBLM cycle minimum
Class for provably-convergent local surrogate-based optimization and nonlinear least squares. This minimizer
uses a SurrogateModel to perform minimization based on local, global, or hierarchical surrogates. It achieves
provable convergence through the use of a sequence of trust regions and the application of surrogate corrections
at the trust region centers.
Performs local surrogate-based minimization by minimizing local, global, or hierarchical surrogates over a series
of trust regions. Trust region-based strategy to perform surrogate-based optimization in subregions (trust regions)
of the parameter space. The minimizer operates on approximations in lieu of the more expensive simulation-based
response functions. The size of the trust region is varied according to the goodness of the agreement between the
approximations and the true response functions.
Implements SurrBasedMinimizer.
References Dakota::abort_handler(), Iterator::active_set(), Response::active_set(), Model::active_variables(),
Graphics::add_datapoint(), DiscrepancyCorrection::apply(), SurrBasedLocalMinimizer::approxGradientFlag,
SurrBasedLocalMinimizer::approxHessianFlag, SurrBasedMinimizer::approxSubProbMinimizer,
SurrBasedLocalMinimizer::approxSubProbModel, Iterator::bestResponseArray, Iterator::bestVariablesArray,
Model::build_approximation(), Model::component_parallel_mode(), DiscrepancyCorrection::compute(),
Model::compute_response(), Model::continuous_lower_bounds(), Model::continuous_upper_bounds(),
Model::continuous_variables(), Variables::continuous_variables(), SurrBasedLocalMinimizer::convergenceFlag,
Variables::copy(), Dakota::copy_data(), SurrBasedLocalMinimizer::correctionType, Model::current_response(),
Model::current_variables(), SurrBasedLocalMinimizer::daceCenterPtFlag, Dakota::dakota_graphics,
Model::discrepancy_correction(), Model::evaluation_id(), SurrBasedLocalMinimizer::find_center_approx(),
SurrBasedLocalMinimizer::find_center_truth(), SurrBasedLocalMinimizer::globalApproxFlag,
SurrBasedLocalMinimizer::hard_convergence_check(), Iterator::is_null(), Iterator::iteratedModel,
SurrBasedLocalMinimizer::localApproxFlag, Iterator::maxIterations,
SurrBasedLocalMinimizer::minTrustRegionFactor, SurrBasedLocalMinimizer::multiLayerBypassFlag,
SurrBasedLocalMinimizer::multiptApproxFlag, SurrBasedLocalMinimizer::newCenterFlag,
Model::nonlinear_eq_constraint_targets(), Model::nonlinear_ineq_constraint_lower_bounds(),
Model::nonlinear_ineq_constraint_upper_bounds(), Iterator::numContinuousVars,
SurrBasedMinimizer::origNonlinEqTargets, SurrBasedMinimizer::origNonlinIneqLowerBnds,
SurrBasedMinimizer::origNonlinIneqUpperBnds, SurrBasedLocalMinimizer::recastSubProb,
SurrBasedLocalMinimizer::relax_constraints(), ActiveSet::request_values(), SurrBasedLocalMinimizer::reset(),
Iterator::response_results(), SurrBasedLocalMinimizer::responseCenterApprox,
SurrBasedLocalMinimizer::responseCenterTruth, SurrBasedLocalMinimizer::responseStarApprox,
SurrBasedLocalMinimizer::responseStarTruth, Iterator::run_iterator(), Iterator::sampling_scheme(),
SurrBasedMinimizer::sbIterNum, SurrBasedLocalMinimizer::sblmInstance,
SurrBasedLocalMinimizer::softConvCount, SurrBasedLocalMinimizer::softConvLimit,
Model::subordinate_iterator(), Model::surrogate_model(), Model::surrogate_response_mode(),
SurrBasedLocalMinimizer::tr_bounds(), SurrBasedLocalMinimizer::tr_ratio_check(),
SurrBasedLocalMinimizer::trConstraintRelax, SurrBasedLocalMinimizer::trustRegionFactor,
Model::truth_model(), SurrBasedLocalMinimizer::truthGradientFlag,
SurrBasedLocalMinimizer::truthHessianFlag, Response::update(), SurrBasedLocalMinimizer::useDerivsFlag,
Iterator::variables_results(), and SurrBasedLocalMinimizer::varsCenter.
13.150.2.2 void hard_convergence_check (const Response & response_truth, const RealVector & c_vars,
const RealVector & lower_bnds, const RealVector & upper_bnds) [private]
check for hard convergence (norm of projected gradient of merit function near zero). The hard convergence check
computes the gradient of the merit function at the trust region center, performs a projection for active bound
constraints (removing any gradient component directed into an active bound), and signals convergence if the
2-norm of this projected gradient is less than convergenceTol.
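As an illustrative sketch of this test (plain std::vector standing in for Dakota's RealVector; the function name and bound-activity tolerance are assumptions, not the actual member implementation):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hedged sketch of the projected-gradient hard convergence test described
// above. g is the merit function gradient at the trust region center.
bool projected_gradient_converged(const std::vector<double>& g,
                                  const std::vector<double>& x,
                                  const std::vector<double>& lower_bnds,
                                  const std::vector<double>& upper_bnds,
                                  double convergence_tol,
                                  double bound_tol = 1.0e-10)
{
  double norm_sq = 0.0;
  for (std::size_t i = 0; i < g.size(); ++i) {
    bool at_lower = (x[i] - lower_bnds[i] < bound_tol);
    bool at_upper = (upper_bnds[i] - x[i] < bound_tol);
    // drop any gradient component directed into an active bound: at a lower
    // bound a positive component (the descent step -g would exit the feasible
    // region), at an upper bound a negative component
    if ((at_lower && g[i] > 0.0) || (at_upper && g[i] < 0.0))
      continue;
    norm_sq += g[i] * g[i];
  }
  return std::sqrt(norm_sq) < convergence_tol;  // 2-norm test
}
```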
13.150.2.3 void tr_ratio_check (const RealVector & c_vars_star, const RealVector & tr_lower_bnds,
const RealVector & tr_upper_bnds) [private]
compute trust region ratio (for SBLM iterate acceptance and trust region resizing) and check for soft convergence
(diminishing returns). Assesses acceptance of the SBLM iterate (trust region ratio or filter) and computes soft
convergence metrics (number of consecutive failures, minimum trust region size, etc.) to determine whether the
convergence rate has decreased to the point of diminishing returns, at which the process should be terminated.
References SurrBasedLocalMinimizer::acceptLogic, SurrBasedLocalMinimizer::approxSubProbObj,
SurrBasedMinimizer::augmented_lagrangian_merit(), SurrBasedMinimizer::constraint_violation(), Mini-
mizer::constraintTol, Iterator::convergenceTol, SurrBasedMinimizer::etaSequence, Response::function_values(),
SurrBasedLocalMinimizer::gammaContract, SurrBasedLocalMinimizer::gammaExpand, SurrBasedLocalMin-
imizer::globalApproxFlag, Iterator::iteratedModel, SurrBasedMinimizer::lagrangian_merit(), SurrBased-
LocalMinimizer::meritFnType, SurrBasedLocalMinimizer::newCenterFlag, Iterator::numContinuousVars,
SurrBasedMinimizer::origNonlinEqTargets, SurrBasedMinimizer::origNonlinIneqLowerBnds, SurrBased-
Minimizer::origNonlinIneqUpperBnds, SurrBasedMinimizer::penalty_merit(), Model::primary_response_-
fn_sense(), Model::primary_response_fn_weights(), SurrBasedLocalMinimizer::responseCenterApprox,
SurrBasedLocalMinimizer::responseCenterTruth, SurrBasedLocalMinimizer::responseStarApprox, Sur-
rBasedLocalMinimizer::responseStarTruth, SurrBasedLocalMinimizer::softConvCount, SurrBased-
LocalMinimizer::trRatioContractValue, SurrBasedLocalMinimizer::trRatioExpandValue, SurrBased-
LocalMinimizer::trustRegionFactor, SurrBasedMinimizer::update_augmented_lagrange_multipliers(),
SurrBasedMinimizer::update_filter(), and SurrBasedLocalMinimizer::update_penalty().
Referenced by SurrBasedLocalMinimizer::minimize_surrogates().
13.150.2.4 void update_penalty (const RealVector & fns_center_truth, const RealVector &
fns_star_truth) [private]
initialize and update the penaltyParameter. Scaling of the penalty value is important to avoid rejecting SBLM
iterates which must increase the objective to achieve a reduction in constraint violation. In the basic penalty case,
the penalty is ramped exponentially based on the iteration counter. In the adaptive case, the ratio of relative change
between center and star points for the objective and constraint violation values is used to rescale penalty values.
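A sketch of the basic (non-adaptive) exponential ramp; the growth rate and the exact use of penaltyIterOffset are assumptions for illustration, with only the qualitative behavior (monotone exponential growth with the iteration counter) taken from the description above:

```cpp
#include <cmath>

// Hedged sketch: the penalty grows exponentially with the surrogate-based
// iteration counter sbIterNum, shifted by penaltyIterOffset.
double ramped_penalty(int sb_iter_num, int penalty_iter_offset,
                      double growth_rate = 0.1)
{
  return std::exp(growth_rate * (sb_iter_num + penalty_iter_offset));
}
```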
References SurrBasedMinimizer::alphaEta, SurrBasedLocalMinimizer::approxSubProbObj,
SurrBasedMinimizer::constraint_violation(), Minimizer::constraintTol, SurrBasedMinimizer::eta, Sur-
rBasedMinimizer::etaSequence, Iterator::iteratedModel, SurrBasedLocalMinimizer::meritFnType, Mini-
mizer::objective(), SurrBasedLocalMinimizer::penaltyIterOffset, SurrBasedMinimizer::penaltyParameter,
13.150.2.5 void approx_subprob_objective_eval (const Variables & surrogate_vars, const Variables &
recast_vars, const Response & surrogate_response, Response & recast_response) [static,
private]
static function used to define the approximate subproblem objective. Objective functions evaluator for solution of
approximate subproblem using a RecastModel.
References Response::active_set_request_vector(), SurrBasedLocalMinimizer::approxSubProbCon,
SurrBasedLocalMinimizer::approxSubProbModel, SurrBasedLocalMinimizer::approxSubProbObj,
SurrBasedMinimizer::augmented_lagrangian_gradient(), SurrBasedMinimizer::augmented_lagrangian_-
merit(), Response::function_gradient(), Response::function_gradient_view(), Response::function_-
gradients(), Response::function_value(), Response::function_values(), Iterator::iteratedModel,
SurrBasedMinimizer::lagrangian_gradient(), SurrBasedMinimizer::lagrangian_merit(), Model::nonlinear_-
eq_constraint_targets(), Model::nonlinear_ineq_constraint_lower_bounds(), Model::nonlinear_ineq_constraint_-
upper_bounds(), Minimizer::numUserPrimaryFns, Minimizer::objective(), Minimizer::objective_gradient(),
SurrBasedMinimizer::origNonlinEqTargets, SurrBasedMinimizer::origNonlinIneqLowerBnds, SurrBased-
Minimizer::origNonlinIneqUpperBnds, Model::primary_response_fn_sense(), Model::primary_response_fn_-
weights(), and SurrBasedLocalMinimizer::sblmInstance.
Referenced by SurrBasedLocalMinimizer::SurrBasedLocalMinimizer().
13.150.2.6 void approx_subprob_constraint_eval (const Variables & surrogate_vars, const Variables &
recast_vars, const Response & surrogate_response, Response & recast_response) [static,
private]
static function used to define the approximate subproblem constraints. Constraint functions evaluator for solution
of approximate subproblem using a RecastModel.
References Response::active_set_derivative_vector(), Response::active_set_request_vector(), SurrBased-
LocalMinimizer::approxSubProbCon, SurrBasedLocalMinimizer::approxSubProbObj, Variables::continuous_-
variables(), Response::function_gradient(), Response::function_gradient_view(), Response::function_gradients(),
Response::function_value(), Response::function_values(), Minimizer::numUserPrimaryFns, SurrBasedLo-
calMinimizer::responseCenterApprox, SurrBasedLocalMinimizer::sblmInstance, and SurrBasedLocalMini-
mizer::varsCenter.
Referenced by SurrBasedLocalMinimizer::SurrBasedLocalMinimizer().
13.150.2.7 void hom_objective_eval (int & mode, int & n, double ∗ tau_and_x, double & f, double ∗
grad_f, int &) [static, private]
static function used by NPSOL as the objective function in the homotopy constraint relaxation formulation.
NPSOL objective functions evaluator for solution of homotopy constraint relaxation parameter optimization. This
constrained optimization problem performs the update of the tau parameter in the homotopy heuristic approach
used to relax the constraints in the original problem.
Referenced by SurrBasedLocalMinimizer::relax_constraints().
13.150.2.8 void hom_constraint_eval (int & mode, int & ncnln, int & n, int & nrowj, int ∗ needc,
double ∗ tau_and_x, double ∗ c, double ∗ cjac, int & nstate) [static, private]
static function used by NPSOL as the constraint function in the homotopy constraint relaxation formulation.
NPSOL constraint functions evaluator for solution of homotopy constraint relaxation parameter optimization.
This constrained optimization problem performs the update of the tau parameter in the homotopy heuristic ap-
proach used to relax the constraints in the original problem.
References Response::active_set(), SurrBasedLocalMinimizer::approxSubProbModel, Model::compute_-
response(), Model::continuous_variables(), Model::current_response(), Response::function_gradients(),
Response::function_values(), SurrBasedLocalMinimizer::nonlinEqTargetsSlack, SurrBasedLocalMini-
mizer::nonlinIneqLowerBndsSlack, SurrBasedLocalMinimizer::nonlinIneqUpperBndsSlack, Model::num_-
functions(), Minimizer::numNonlinearEqConstraints, Minimizer::numNonlinearIneqConstraints,
ActiveSet::request_vector(), SurrBasedLocalMinimizer::sblmInstance, and SurrBasedLocalMinimizer::tau.
Referenced by SurrBasedLocalMinimizer::relax_constraints().
The documentation for this class was generated from the following files:
• SurrBasedLocalMinimizer.hpp
• SurrBasedLocalMinimizer.cpp
Inheritance diagram for SurrBasedMinimizer: Iterator → Minimizer → SurrBasedMinimizer
• ∼SurrBasedMinimizer ()
destructor
• void run ()
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post
• Real lagrangian_merit (const RealVector &fn_vals, const BoolDeque &sense, const RealVector &primary_-
wts, const RealVector &nln_ineq_l_bnds, const RealVector &nln_ineq_u_bnds, const RealVector &nln_-
eq_tgts)
compute a Lagrangian function from a set of function values
• void lagrangian_gradient (const RealVector &fn_vals, const RealMatrix &fn_grads, const BoolDeque
&sense, const RealVector &primary_wts, const RealVector &nln_ineq_l_bnds, const RealVector &nln_-
ineq_u_bnds, const RealVector &nln_eq_tgts, RealVector &lag_grad)
compute the gradient of the Lagrangian function
• Real augmented_lagrangian_merit (const RealVector &fn_vals, const BoolDeque &sense, const RealVector
&primary_wts, const RealVector &nln_ineq_l_bnds, const RealVector &nln_ineq_u_bnds, const RealVec-
tor &nln_eq_tgts)
compute an augmented Lagrangian function from a set of function values
• Real penalty_merit (const RealVector &fn_vals, const BoolDeque &sense, const RealVector &primary_-
wts)
compute a penalty function from a set of function values
• void penalty_gradient (const RealVector &fn_vals, const RealMatrix &fn_grads, const BoolDeque &sense,
const RealVector &primary_wts, RealVector &pen_grad)
compute the gradient of the penalty function
Protected Attributes
• Iterator approxSubProbMinimizer
the minimizer used on the surrogate model to solve the approximate subproblem on each surrogate-based iteration
• int sbIterNum
surrogate-based minimization iteration number
• RealVectorList sbFilter
Set of response function vectors defining a filter (objective vs. constraint violation) for iterate selection/rejection.
• RealVector lagrangeMult
Lagrange multipliers for basic Lagrangian calculations.
• RealVector augLagrangeMult
Lagrange multipliers for augmented Lagrangian calculations.
• Real penaltyParameter
the penalization factor for violated constraints used in quadratic penalty calculations; increased in update_-
penalty()
• RealVector origNonlinIneqLowerBnds
original nonlinear inequality constraint lower bounds (no relaxation)
• RealVector origNonlinIneqUpperBnds
original nonlinear inequality constraint upper bounds (no relaxation)
• RealVector origNonlinEqTargets
original nonlinear equality constraint targets (no relaxation)
• Real eta
constant used in etaSequence updates
• Real alphaEta
power for etaSequence updates when updating penalty
• Real betaEta
power for etaSequence updates when updating multipliers
• Real etaSequence
decreasing sequence of allowable constraint violation used in augmented Lagrangian updates (refer to Conn,
Gould, and Toint, section 14.4)
Base class for local/global surrogate-based optimization/least squares. These minimizers use a SurrogateModel
to perform optimization based either on local trust region methods or global updating methods.
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post. Virtual run function for the iterator class hierarchy. All derived classes need to redefine it.
Reimplemented from Iterator.
References SurrBasedMinimizer::minimize_surrogates().
Redefines default iterator results printing to include optimization results (objective functions and constraints).
Reimplemented from Iterator.
References Dakota::abort_handler(), Iterator::activeSet, Minimizer::archive_allocate_best(),
Minimizer::archive_best(), Iterator::bestResponseArray, Iterator::bestVariablesArray, Dakota::data_pairs,
Model::interface_id(), Iterator::iteratedModel, Dakota::lookup_by_val(), Iterator::methodName, Itera-
tor::numFunctions, Minimizer::numUserPrimaryFns, Minimizer::optimizationFlag, ActiveSet::request_values(),
Dakota::strbegins(), Model::truth_model(), and Dakota::write_data_partial().
13.151.2.3 void update_lagrange_multipliers (const RealVector & fn_vals, const RealMatrix & fn_grads)
[protected]
initialize and update Lagrange multipliers for basic Lagrangian. For the Rockafellar augmented Lagrangian, simple
Lagrange multiplier updates are available which do not require the active constraint gradients. For the basic
Lagrangian, Lagrange multipliers are estimated through solution of a nonnegative linear least squares problem.
References Dakota::abort_handler(), Minimizer::bigRealBoundSize, Minimizer::constraintTol,
Iterator::iteratedModel, SurrBasedMinimizer::lagrangeMult, Iterator::numContinuousVars,
Minimizer::numNonlinearEqConstraints, Minimizer::numNonlinearIneqConstraints, Mini-
mizer::numUserPrimaryFns, Minimizer::objective_gradient(), SurrBasedMinimizer::origNonlinIneqLowerBnds,
SurrBasedMinimizer::origNonlinIneqUpperBnds, Model::primary_response_fn_sense(), and Model::primary_-
response_fn_weights().
Referenced by SurrBasedLocalMinimizer::hard_convergence_check().
initialize and update the Lagrange multipliers for augmented Lagrangian. For the Rockafellar augmented
Lagrangian, simple Lagrange multiplier updates are available which do not require the active constraint gradients.
For the basic Lagrangian, Lagrange multipliers are estimated through solution of a nonnegative linear least squares
problem.
References SurrBasedMinimizer::augLagrangeMult, SurrBasedMinimizer::betaEta, Mini-
mizer::bigRealBoundSize, SurrBasedMinimizer::etaSequence, Minimizer::numNonlinearEqConstraints,
Minimizer::numNonlinearIneqConstraints, Minimizer::numUserPrimaryFns, SurrBasedMini-
mizer::origNonlinEqTargets, SurrBasedMinimizer::origNonlinIneqLowerBnds, SurrBasedMini-
mizer::origNonlinIneqUpperBnds, and SurrBasedMinimizer::penaltyParameter.
Referenced by SurrBasedLocalMinimizer::hard_convergence_check(), EffGlobalMinimizer::minimize_-
surrogates_on_model(), and SurrBasedLocalMinimizer::tr_ratio_check().
update a filter from a set of function values. Update the sbFilter with fn_vals if the new iterate is non-dominated.
References SurrBasedMinimizer::constraint_violation(), Iterator::iteratedModel, Mini-
mizer::numNonlinearConstraints, Minimizer::objective(), Model::primary_response_fn_sense(),
Model::primary_response_fn_weights(), and SurrBasedMinimizer::sbFilter.
13.151.2.6 Real lagrangian_merit (const RealVector & fn_vals, const BoolDeque & sense, const
RealVector & primary_wts, const RealVector & nln_ineq_l_bnds, const RealVector &
nln_ineq_u_bnds, const RealVector & nln_eq_tgts) [protected]
compute a Lagrangian function from a set of function values. The Lagrangian function computation sums the
objective function and the Lagrange multiplier terms for inequality/equality constraints. This implementation
follows the convention in Vanderplaats with g<=0 and h=0. The bounds/targets passed in may reflect the original
constraints or the relaxed constraints.
References Minimizer::bigRealBoundSize, Minimizer::constraintTol, SurrBasedMinimizer::lagrangeMult,
Minimizer::numNonlinearEqConstraints, Minimizer::numNonlinearIneqConstraints, Mini-
mizer::numUserPrimaryFns, and Minimizer::objective().
Referenced by SurrBasedLocalMinimizer::approx_subprob_objective_eval(), and
SurrBasedLocalMinimizer::tr_ratio_check().
13.151.2.7 Real augmented_lagrangian_merit (const RealVector & fn_vals, const BoolDeque & sense,
const RealVector & primary_wts, const RealVector & nln_ineq_l_bnds, const RealVector &
nln_ineq_u_bnds, const RealVector & nln_eq_tgts) [protected]
compute an augmented Lagrangian function from a set of function values. The Rockafellar augmented Lagrangian
function sums the objective function, Lagrange multiplier terms for inequality/equality constraints, and quadratic
penalty terms for inequality/equality constraints. This implementation follows the convention in Vanderplaats
with g<=0 and h=0. The bounds/targets passed in may reflect the original constraints or the relaxed constraints.
References SurrBasedMinimizer::augLagrangeMult, Minimizer::bigRealBoundSize, Min-
imizer::numNonlinearEqConstraints, Minimizer::numNonlinearIneqConstraints, Mini-
mizer::numUserPrimaryFns, Minimizer::objective(), and SurrBasedMinimizer::penaltyParameter.
Referenced by SurrBasedLocalMinimizer::approx_subprob_objective_eval(), EffGlobalMinimizer::get_best_-
sample(), EffGlobalMinimizer::minimize_surrogates_on_model(), and SurrBasedLocalMinimizer::tr_ratio_-
check().
13.151.2.8 Real penalty_merit (const RealVector & fn_vals, const BoolDeque & sense, const RealVector
& primary_wts) [protected]
compute a penalty function from a set of function values. The penalty function computation applies a quadratic
penalty to any constraint violations and adds this to the objective function(s): p = f + r_p cv.
References SurrBasedMinimizer::constraint_violation(), Minimizer::constraintTol, Minimizer::objective(), and
SurrBasedMinimizer::penaltyParameter.
Referenced by SurrBasedLocalMinimizer::tr_ratio_check().
13.151.2.9 Real constraint_violation (const RealVector & fn_vals, const Real & constraint_tol)
[protected]
compute the constraint violation from a set of function values. Compute the quadratic constraint violation defined
as cv = g+^T g+ + h+^T h+. This implementation supports equality constraints and 2-sided inequalities. The
constraint_tol allows for a small constraint infeasibility (used for penalty methods, but not Lagrangian methods).
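A sketch of this measure, where g+ and h+ are the amounts by which each constraint exceeds its (tolerance-relaxed) bounds; variable names are illustrative, not Dakota's:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Hedged sketch of cv = g+^T g+ + h+^T h+ for 2-sided inequalities
// (g_lower <= g <= g_upper) and equality targets (h = h_target).
double constraint_violation_sketch(const std::vector<double>& g,
                                   const std::vector<double>& g_lower,
                                   const std::vector<double>& g_upper,
                                   const std::vector<double>& h,
                                   const std::vector<double>& h_target,
                                   double constraint_tol = 0.0)
{
  double cv = 0.0;
  for (std::size_t i = 0; i < g.size(); ++i) {   // 2-sided inequalities
    double viol = std::max(g_lower[i] - g[i], g[i] - g_upper[i]);
    viol = std::max(viol - constraint_tol, 0.0); // small infeasibility allowed
    cv += viol * viol;                           // g+^T g+
  }
  for (std::size_t j = 0; j < h.size(); ++j) {   // equalities
    double viol = std::max(std::fabs(h[j] - h_target[j]) - constraint_tol, 0.0);
    cv += viol * viol;                           // h+^T h+
  }
  return cv;
}
```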
References Minimizer::bigRealBoundSize, Minimizer::numNonlinearEqConstraints, Min-
imizer::numNonlinearIneqConstraints, Minimizer::numUserPrimaryFns, SurrBasedMini-
mizer::origNonlinEqTargets, SurrBasedMinimizer::origNonlinIneqLowerBnds, and SurrBasedMini-
mizer::origNonlinIneqUpperBnds.
Referenced by SurrBasedLocalMinimizer::hard_convergence_check(), EffGlobalMinimizer::minimize_-
surrogates_on_model(), SurrBasedMinimizer::penalty_merit(), SurrBasedLocalMinimizer::relax_-
constraints(), SurrBasedLocalMinimizer::tr_ratio_check(), SurrBasedMinimizer::update_filter(), and
SurrBasedLocalMinimizer::update_penalty().
The documentation for this class was generated from the following files:
• SurrBasedMinimizer.hpp
• SurrBasedMinimizer.cpp
Inheritance diagram for SurrogateModel: Model → SurrogateModel → {DataFitSurrModel, HierarchSurrModel}
• ∼SurrogateModel ()
destructor
• bool force_rebuild ()
evaluate whether a rebuild of the approximation should be forced based on changes in the inactive data
• void asv_mapping (const ShortArray &orig_asv, ShortArray &actual_asv, ShortArray &approx_asv, bool
build_flag)
distributes the incoming orig_asv among actual_asv and approx_asv
Protected Attributes
• IntSet surrogateFnIndices
for mixed response sets, this array specifies the response function subset that is approximated
• IntResponseMap surrResponseMap
map of surrogate responses used in derived_synchronize() and derived_synchronize_nowait() functions
• IntVariablesMap rawVarsMap
map of raw continuous variables used by apply_correction(). Model::varsList cannot be used for this purpose since
it does not contain lower level variables sets from finite differencing.
• IntIntMap truthIdMap
map from actualModel/highFidelityModel evaluation ids to DataFitSurrModel/HierarchSurrModel ids
• IntIntMap surrIdMap
map from approxInterface/lowFidelityModel evaluation ids to DataFitSurrModel/HierarchSurrModel ids
• IntResponseMap cachedApproxRespMap
map of approximate responses retrieved in derived_synchronize_nowait() that could not be returned since corre-
sponding truth model response portions were still pending.
• short responseMode
an enumeration that controls the response calculation mode in {DataFit,Hierarch}SurrModel approximate response
computations
• size_t approxBuilds
number of calls to build_approximation()
• RealVector referenceCLBnds
stores a reference copy of active continuous lower bounds when the approximation is built; used to detect when a
rebuild is required.
• RealVector referenceCUBnds
stores a reference copy of active continuous upper bounds when the approximation is built; used to detect when a
rebuild is required.
• IntVector referenceDILBnds
stores a reference copy of active discrete int lower bounds when the approximation is built; used to detect when a
rebuild is required.
• IntVector referenceDIUBnds
stores a reference copy of active discrete int upper bounds when the approximation is built; used to detect when a
rebuild is required.
• RealVector referenceDRLBnds
stores a reference copy of active discrete real lower bounds when the approximation is built; used to detect when a
rebuild is required.
• RealVector referenceDRUBnds
stores a reference copy of active discrete real upper bounds when the approximation is built; used to detect when a
rebuild is required.
• RealVector referenceICVars
stores a reference copy of the inactive continuous variables when the approximation is built using a Distinct view;
used to detect when a rebuild is required.
• IntVector referenceIDIVars
stores a reference copy of the inactive discrete int variables when the approximation is built using a Distinct view;
used to detect when a rebuild is required.
• RealVector referenceIDRVars
stores a reference copy of the inactive discrete real variables when the approximation is built using a Distinct view;
used to detect when a rebuild is required.
• DiscrepancyCorrection deltaCorr
manages construction and application of correction functions that are applied to a surrogate model (DataFitSurr
or HierarchSurr) in order to reproduce high fidelity data.
Private Attributes
• Variables truthModelVars
copy of the truth model variables object used to simplify conversion among differing variable views in force_-
rebuild()
• Constraints truthModelCons
copy of the truth model constraints object used to simplify conversion among differing variable views in force_-
rebuild()
Base class for surrogate models (DataFitSurrModel and HierarchSurrModel). The SurrogateModel class provides
common functions to derived classes for computing and applying corrections to approximations.
evaluate whether a rebuild of the approximation should be forced based on changes in the inactive data. This
function forces a rebuild of the approximation according to the sub-model variables view, the approximation type,
and whether the active approximation bounds or inactive variable values have changed since the last approximation
build.
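In schematic form, with plain vectors standing in for the referenceCLBnds/referenceCUBnds/referenceICVars attributes listed earlier, the staleness test reduces to comparing the current active bounds and inactive values against the reference copies stored at the last build:

```cpp
#include <vector>

// Hedged sketch: the approximation is stale if any active bound or inactive
// variable value differs from the reference copy stored at the last build.
bool force_rebuild_sketch(const std::vector<double>& ref_lower,
                          const std::vector<double>& ref_upper,
                          const std::vector<double>& ref_inactive,
                          const std::vector<double>& cur_lower,
                          const std::vector<double>& cur_upper,
                          const std::vector<double>& cur_inactive)
{
  return ref_lower != cur_lower || ref_upper != cur_upper
      || ref_inactive != cur_inactive;  // any change forces a rebuild
}
```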
Reimplemented from Model.
References Constraints::all_continuous_lower_bounds(), Constraints::all_continuous_upper_bounds(),
Variables::all_continuous_variables(), Constraints::all_discrete_int_lower_bounds(), Constraints::all_-
discrete_int_upper_bounds(), Variables::all_discrete_int_variables(), Constraints::all_discrete_real_-
lower_bounds(), Constraints::all_discrete_real_upper_bounds(), Variables::all_discrete_real_variables(),
Model::continuous_lower_bounds(), Constraints::continuous_lower_bounds(), Model::continuous_-
upper_bounds(), Constraints::continuous_upper_bounds(), Variables::continuous_variables(), Con-
straints::copy(), Variables::copy(), Model::current_variables(), Model::currentVariables, Model::discrete_-
int_lower_bounds(), Constraints::discrete_int_lower_bounds(), Model::discrete_int_upper_bounds(),
Constraints::discrete_int_upper_bounds(), Variables::discrete_int_variables(), Model::discrete_real_-
lower_bounds(), Constraints::discrete_real_lower_bounds(), Model::discrete_real_upper_bounds(),
Constraints::discrete_real_upper_bounds(), Variables::discrete_real_variables(), Variables::inactive_-
continuous_variables(), Variables::inactive_discrete_int_variables(), Variables::inactive_discrete_real_-
variables(), Constraints::is_null(), Variables::is_null(), Model::is_null(), Model::model_type(), Surrogate-
Model::referenceCLBnds, SurrogateModel::referenceCUBnds, SurrogateModel::referenceDILBnds, Surro-
gateModel::referenceDIUBnds, SurrogateModel::referenceDRLBnds, SurrogateModel::referenceDRUBnds,
SurrogateModel::referenceICVars, SurrogateModel::referenceIDIVars, SurrogateModel::referenceIDRVars,
Dakota::strbegins(), Model::subordinate_model(), Model::surrogateType, Model::truth_model(), Sur-
rogateModel::truthModelCons, SurrogateModel::truthModelVars, Model::user_defined_constraints(),
Model::userDefinedConstraints, and Variables::view().
Referenced by HierarchSurrModel::derived_asynch_compute_response(), DataFitSurrModel::derived_asynch_-
compute_response(), HierarchSurrModel::derived_compute_response(), and DataFitSurrModel::derived_-
compute_response().
an enumeration that controls the response calculation mode in {DataFit,Hierarch}SurrModel approximate
response computations. SurrBasedLocalMinimizer toggles this mode since compute_correction() does not back out
old corrections.
Referenced by HierarchSurrModel::component_parallel_mode(), HierarchSurrModel::derived_asynch_-
compute_response(), DataFitSurrModel::derived_asynch_compute_response(), HierarchSurrModel::derived_-
compute_response(), DataFitSurrModel::derived_compute_response(), HierarchSurrModel::derived_set_-
communicators(), HierarchSurrModel::derived_synchronize(), DataFitSurrModel::derived_synchronize(),
DataFitSurrModel::derived_synchronize_approx(), HierarchSurrModel::derived_synchronize_nowait(),
DataFitSurrModel::derived_synchronize_nowait(), HierarchSurrModel::serve(), SurrogateModel::surrogate_-
response_mode(), HierarchSurrModel::surrogate_response_mode(), and DataFitSurrModel::surrogate_-
response_mode().
number of calls to build_approximation(); used as a flag to automatically build the approximation if one of the
derived compute_response functions is called prior to build_approximation().
Referenced by DataFitSurrModel::append_approximation(), DataFitSurrModel::approximation_-
coefficients(), HierarchSurrModel::build_approximation(), DataFitSurrModel::build_approximation(),
HierarchSurrModel::derived_asynch_compute_response(), DataFitSurrModel::derived_asynch_compute_-
response(), HierarchSurrModel::derived_compute_response(), DataFitSurrModel::derived_compute_-
response(), DataFitSurrModel::pop_approximation(), DataFitSurrModel::update_actual_model(),
DataFitSurrModel::update_approximation(), DataFitSurrModel::update_from_actual_model(), and
HierarchSurrModel::update_model().
The documentation for this class was generated from the following files:
• SurrogateModel.hpp
• SurrogateModel.cpp
Inheritance diagram for SysCallApplicInterface: Interface → ApplicationInterface → ProcessApplicInterface → SysCallApplicInterface → GridApplicInterface
• ∼SysCallApplicInterface ()
destructor
detect completion of a function evaluation through existence of the necessary results file(s)
Private Attributes
• IntSet sysCallSet
set of function evaluation id’s for active asynchronous system call evaluations
• IntShortMap failCountMap
map linking function evaluation id’s to number of response read failures
Derived application interface class which spawns simulation codes using system calls. system() is part of the C
API and can be used on both Windows and Unix systems.
Check for completion of active asynch jobs (tracked with sysCallSet). Wait for at least one completion and
complete all jobs that have returned. This satisfies a "fairness" principle, in the sense that a completed job will
_always_ be processed (whereas accepting only a single completion could always accept the same completion -
the case of very inexpensive fn. evals. - and starve some servers).
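A schematic of this fairness policy, with a caller-supplied predicate standing in for the results-file existence check (system_call_file_test in the real interface); a real implementation would sleep between sweeps rather than spin:

```cpp
#include <set>
#include <vector>

// Hedged sketch: block until at least one job has finished, then sweep the
// active set so that every job that has already returned is harvested in the
// same pass (no completed job is ever starved).
template <typename TestDoneFn>
std::vector<int> wait_fairly(std::set<int>& active_ids, TestDoneFn test_done)
{
  std::vector<int> completed;
  while (completed.empty()) {          // wait for at least one completion
    for (auto it = active_ids.begin(); it != active_ids.end(); ) {
      if (test_done(*it)) {            // e.g., necessary results file exists
        completed.push_back(*it);
        it = active_ids.erase(it);
      }
      else ++it;
    }
  }
  return completed;                    // every returned job is processed
}
```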
Reimplemented from ApplicationInterface.
Reimplemented in GridApplicInterface.
References ApplicationInterface::completionSet, and SysCallApplicInterface::test_local_evaluations().
Check for completion of active asynch jobs (tracked with sysCallSet). Make one pass through sysCallSet &
complete all jobs that have returned.
Reimplemented from ApplicationInterface.
Reimplemented in GridApplicInterface.
References Dakota::abort_handler(), Response::active_set(), ApplicationInterface::completionSet,
SysCallApplicInterface::failCountMap, ProcessApplicInterface::fileNameMap, Interface::final_eval_id_-
tag(), Dakota::lookup_by_eval_id(), ApplicationInterface::manage_failure(), ParamResponsePair::prp_-
parameters(), ParamResponsePair::prp_response(), ProcessApplicInterface::read_results_files(), SysCallAp-
plicInterface::sysCallSet, and SysCallApplicInterface::system_call_file_test().
Referenced by SysCallApplicInterface::wait_local_evaluations().
No derived interface plug-ins, so perform construct-time checks. However, process init issues as warnings since
some contexts (e.g., HierarchSurrModel) initialize more configurations than will be used.
Reimplemented from ApplicationInterface.
References ApplicationInterface::check_multiprocessor_analysis().
spawn a complete function evaluation. Put the SysCallApplicInterface to the shell. This function is used when all
portions of the function evaluation (i.e., all analysis drivers) are executed on the local processor.
References CommandShell::asynch_flag(), ProcessApplicInterface::commandLineArgs, ProcessAp-
plicInterface::curWorkdir, Dakota::flush(), ProcessApplicInterface::iFilterName, ProcessApplicInter-
spawn the input filter portion of a function evaluation. Put the input filter to the shell. This function is used when
multiple analysis drivers are spread between processors. No need to check for a Null input filter, as this is checked
externally. Use of nonblocking shells is supported in this fn, although its use is currently prevented externally.
References CommandShell::asynch_flag(), ProcessApplicInterface::commandLineArgs, ProcessAp-
plicInterface::curWorkdir, Dakota::flush(), ProcessApplicInterface::iFilterName, ProcessApplicInter-
face::paramsFileName, ProcessApplicInterface::resultsFileName, CommandShell::suppress_output_flag(),
ApplicationInterface::suppressOutput, and ProcessApplicInterface::useWorkdir.
Referenced by SysCallApplicInterface::create_evaluation_process().
spawn a single analysis as part of a function evaluation. Put a single analysis to the shell. This function is used
when multiple analysis drivers are spread between processors. Use of nonblocking shells is supported in this fn,
although its use is currently prevented externally.
References CommandShell::asynch_flag(), ProcessApplicInterface::commandLineArgs, ProcessApplicIn-
terface::curWorkdir, Dakota::flush(), ProcessApplicInterface::multipleParamsFiles, ProcessApplicInter-
face::paramsFileName, ProcessApplicInterface::programNames, ProcessApplicInterface::resultsFileName,
CommandShell::suppress_output_flag(), ApplicationInterface::suppressOutput, and ProcessApplicInter-
face::useWorkdir.
Referenced by SysCallApplicInterface::create_evaluation_process(), SysCallApplicInterface::synchronous_-
local_analysis(), and GridApplicInterface::synchronous_local_analysis().
spawn the output filter portion of a function evaluation. Put the output filter to the shell. This function is used
when multiple analysis drivers are spread between processors. No need to check for a Null output filter, as this
is checked externally. Use of nonblocking shells is supported in this fn, although its use is currently prevented
externally.
References CommandShell::asynch_flag(), ProcessApplicInterface::commandLineArgs, ProcessAp-
plicInterface::curWorkdir, Dakota::flush(), ProcessApplicInterface::oFilterName, ProcessApplicInter-
face::paramsFileName, ProcessApplicInterface::resultsFileName, CommandShell::suppress_output_flag(),
ApplicationInterface::suppressOutput, and ProcessApplicInterface::useWorkdir.
Referenced by SysCallApplicInterface::create_evaluation_process().
The documentation for this class was generated from the following files:
• SysCallApplicInterface.hpp
• SysCallApplicInterface.cpp
Inheritance diagram for TANA3Approximation: Approximation → TANA3Approximation
• ∼TANA3Approximation ()
destructor
• void build ()
builds the approximation from scratch
• void clear_current ()
Private Attributes
• RealVector pExp
vector of exponent values
• RealVector minX
vector of minimum parameter values used in scaling
• RealVector scX1
vector of scaled x1 values
• RealVector scX2
vector of scaled x2 values
• Real H
the scalar Hessian value in the TANA-3 approximation
Derived approximation class for TANA-3 two-point exponential approximation (a multipoint approximation). The
TANA3Approximation class provides a multipoint approximation based on matching value and gradient data from
two points (typically the current and previous iterates) in parameter space. It forms an exponential approximation
in terms of intervening variables.
builds the approximation from scratch. This is the common base class portion of the virtual function and is
insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class
contribution.
Reimplemented from Approximation.
References Dakota::abort_handler(), Approximation::approxData, Approximation::buildDataOrder,
TANA3Approximation::find_scaled_coefficients(), TANA3Approximation::minX, Approximation::numVars,
and TANA3Approximation::pExp.
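The exponent-matching step of a two-point exponential approximation of this kind can be sketched in isolation. This is a minimal illustration, not the TANA3Approximation implementation: the function name tana3_exponents, the guard conditions, and the clamping range are all assumptions.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch: choose exponents p_i for intervening variables
// y_i = x_i^{p_i} so the approximation matches gradients at two points
// x1 (previous iterate) and x2 (current iterate).
std::vector<double> tana3_exponents(const std::vector<double>& x1,
                                    const std::vector<double>& x2,
                                    const std::vector<double>& grad1,
                                    const std::vector<double>& grad2)
{
  std::vector<double> p(x1.size());
  for (size_t i = 0; i < x1.size(); ++i) {
    // p_i = 1 + ln(g1_i/g2_i) / ln(x1_i/x2_i), guarded against degenerate
    // ratios; clamped to a modest range, as multipoint codes typically do
    // (the [-5, 5] range here is an assumption).
    double ratio_g = grad1[i] / grad2[i], ratio_x = x1[i] / x2[i];
    double pi = (ratio_g > 0.0 && ratio_x > 0.0 && ratio_x != 1.0)
                  ? 1.0 + std::log(ratio_g) / std::log(ratio_x)
                  : 1.0;
    p[i] = std::min(std::max(pi, -5.0), 5.0);
  }
  return p;
}
```

For a monomial f(x) = x^3 with gradient 3x^2, matching at x1 = 1 and x2 = 2 recovers the exact exponent p = 3, which is the behavior this construction is designed for.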
• TANA3Approximation.hpp
• TANA3Approximation.cpp
Approximation
TaylorApproximation
• ∼TaylorApproximation ()
destructor
• void build ()
builds the approximation from scratch
Derived approximation class for first- or second-order Taylor series (a local approximation). The TaylorApprox-
imation class provides a local approximation based on data from a single point in parameter space. It uses a
zeroth-, first-, or second-order Taylor series expansion: f(x) = f(x_c) for zeroth order, plus grad(x_c)' (x - x_c) for
first and second order, plus (x - x_c)' Hess(x_c) (x - x_c) / 2 for second order.
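The expansion above can be sketched as a standalone evaluation routine. This is illustrative only: taylor_eval is not a Dakota function, and the real class pulls the center-point data from its approxData rather than taking it as arguments.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Evaluate a second-order Taylor series about a center x_c:
//   f(x) ~= f(x_c) + grad(x_c)'(x - x_c)
//           + 0.5 (x - x_c)' Hess(x_c) (x - x_c)
double taylor_eval(double f_c,
                   const std::vector<double>& grad,               // gradient at x_c
                   const std::vector<std::vector<double>>& hess,  // Hessian at x_c
                   const std::vector<double>& x_c,
                   const std::vector<double>& x)
{
  double val = f_c;                                   // zeroth-order term
  for (size_t i = 0; i < x.size(); ++i) {
    double dxi = x[i] - x_c[i];
    val += grad[i] * dxi;                             // first-order term
    for (size_t j = 0; j < x.size(); ++j)
      val += 0.5 * hess[i][j] * dxi * (x[j] - x_c[j]); // second-order term
  }
  return val;
}
```

For a quadratic such as f(x) = x^2 the second-order expansion is exact everywhere, which is a convenient sanity check on the term-by-term assembly.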
builds the approximation from scratch. This is the common base class portion of the virtual function and is
insufficient on its own; derived implementations should explicitly invoke (or reimplement) this base class
contribution.
Reimplemented from Approximation.
References Dakota::abort_handler(), Approximation::approxData, Approximation::buildDataOrder, and Approx-
imation::numVars.
The documentation for this class was generated from the following files:
• TaylorApproximation.hpp
• TaylorApproximation.cpp
Interface
ApplicationInterface
DirectApplicInterface
TestDriverInterface
• ∼TestDriverInterface ()
destructor
• int mod_cantilever ()
unscaled cantilever test function for UQ
• int cyl_head ()
the cylinder head constrained optimization test fn
• int multimodal ()
multimodal UQ test function
• int log_ratio ()
• int short_column ()
the short_column UQ/OUU test function
• int lf_short_column ()
a low fidelity short_column test function
• int mf_short_column ()
alternate short_column formulations for multifidelity or model form studies
• int side_impact_cost ()
the side_impact_cost UQ/OUU test function
• int side_impact_perf ()
the side_impact_perf UQ/OUU test function
• int rosenbrock ()
the Rosenbrock optimization and least squares test fn
• int generalized_rosenbrock ()
n-dimensional Rosenbrock (Schittkowski)
• int extended_rosenbrock ()
n-dimensional Rosenbrock (Nocedal/Wright)
• int lf_rosenbrock ()
a low fidelity version of the Rosenbrock function
• int mf_rosenbrock ()
alternate Rosenbrock formulations for multifidelity or model form studies
• int gerstner ()
the isotropic/anisotropic Gerstner test function family
• int scalable_gerstner ()
scalable versions of the Gerstner test family
• int steel_column_cost ()
the steel_column_cost UQ/OUU test function
• int steel_column_perf ()
the steel_column_perf UQ/OUU test function
• int sobol_rational ()
Sobol SA rational test function.
• int sobol_g_function ()
Sobol SA discontinuous test function.
• int sobol_ishigami ()
Sobol SA transcendental test function.
• int text_book ()
the text_book constrained optimization test function
• int text_book1 ()
portion of text_book() evaluating the objective fn
• int text_book2 ()
portion of text_book() evaluating constraint 1
• int text_book3 ()
portion of text_book() evaluating constraint 2
• int text_book_ouu ()
the text_book_ouu OUU test function
• int scalable_text_book ()
scalable version of the text_book test function
• int scalable_monomials ()
simple monomials for UQ exactness testing
• void herbie1D (size_t der_mode, Real xc_loc, std::vector< Real > &w_and_ders)
1D components of herbie function
• void smooth_herbie1D (size_t der_mode, Real xc_loc, std::vector< Real > &w_and_ders)
1D components of smooth_herbie function
• void shubert1D (size_t der_mode, Real xc_loc, std::vector< Real > &w_and_ders)
1D components of shubert function
• int herbie ()
returns the N-D herbie function
• int smooth_herbie ()
returns the N-D smooth herbie function
• int shubert ()
returns the N-D shubert function
• void separable_combine (Real mult_scale_factor, std::vector< Real > &w, std::vector< Real > &d1w,
std::vector< Real > &d2w)
utility to combine components of separable fns
• int salinas ()
direct interface to the SALINAS structural dynamics code
• int mc_api_run ()
direct interface to ModelCenter via API, HKIM 4/3/03
execute an analysis code portion of a direct evaluation invocation. Derived map to evaluate a particular built-in
test analysis function.
Reimplemented from DirectApplicInterface.
References Dakota::abort_handler(), ApplicationInterface::analysisServerId, TestDriverIn-
terface::cantilever(), TestDriverInterface::cyl_head(), DirectApplicInterface::driverTypeMap,
TestDriverInterface::extended_rosenbrock(), TestDriverInterface::generalized_rosenbrock(), Test-
DriverInterface::gerstner(), TestDriverInterface::herbie(), TestDriverInterface::lf_rosenbrock(),
TestDriverInterface::lf_short_column(), TestDriverInterface::log_ratio(), TestDriverInterface::mc_api_run(),
TestDriverInterface::mf_rosenbrock(), TestDriverInterface::mf_short_column(), TestDriverInterface::mod_-
cantilever(), TestDriverInterface::multimodal(), TestDriverInterface::rosenbrock(), TestDriverInter-
face::salinas(), TestDriverInterface::scalable_gerstner(), TestDriverInterface::scalable_monomials(),
TestDriverInterface::scalable_text_book(), TestDriverInterface::short_column(), TestDriverInter-
face::shubert(), TestDriverInterface::side_impact_cost(), TestDriverInterface::side_impact_perf(),
TestDriverInterface::smooth_herbie(), TestDriverInterface::sobol_g_function(), TestDriverInterface::sobol_-
ishigami(), TestDriverInterface::sobol_rational(), TestDriverInterface::steel_column_cost(),
TestDriverInterface::steel_column_perf(), TestDriverInterface::text_book(), TestDriverInterface::text_book1(),
TestDriverInterface::text_book2(), TestDriverInterface::text_book3(), and TestDriverInterface::text_book_ouu().
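As an illustration of what one of these drivers computes, here is a hedged, self-contained analogue of the rosenbrock() driver. The name rosenbrock_eval and the raw-array interface are simplifications of the DirectApplicInterface conventions (which use member data such as directFnASV, fnVals, and fnGrads); only the mathematics of the 2-D Rosenbrock function is taken as given.

```cpp
#include <cassert>
#include <cmath>

// Fill function value and gradient for f = 100(x2 - x1^2)^2 + (1 - x1)^2,
// mimicking the active-set-vector fill pattern: request bit 1 = value,
// bit 2 = gradient.
int rosenbrock_eval(const double x[2], short asv,
                    double& fn_val, double grad[2])
{
  double f1 = x[1] - x[0]*x[0], f2 = 1.0 - x[0];
  if (asv & 1)
    fn_val = 100.0*f1*f1 + f2*f2;
  if (asv & 2) {
    grad[0] = -400.0*f1*x[0] - 2.0*f2;
    grad[1] =  200.0*f1;
  }
  return 0; // 0 = success, following the int return convention above
}
```

The minimum sits at (1, 1) with value 0 and a zero gradient, which makes this function a standard smoke test for the optimization methods.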
13.156.2.2 void herbie1D (size_t der_mode, Real xc_loc, std::vector< Real > & w_and_ders)
[private]
1D components of herbie function. 1D Herbie function and its derivatives (apart from a multiplicative factor).
Referenced by TestDriverInterface::herbie().
13.156.2.3 void smooth_herbie1D (size_t der_mode, Real xc_loc, std::vector< Real > & w_and_ders)
[private]
1D components of smooth_herbie function. 1D Smoothed Herbie = the 1D Herbie function minus its high-frequency
sine term, and its derivatives (apart from a multiplicative factor).
Referenced by TestDriverInterface::smooth_herbie().
13.156.2.4 void shubert1D (size_t der_mode, Real xc_loc, std::vector< Real > & w_and_ders)
[private]
1D components of shubert function. 1D Shubert function and its derivatives (apart from a multiplicative factor).
Referenced by TestDriverInterface::shubert().
returns the N-D herbie function. N-D Herbie function and its derivatives.
References DirectApplicInterface::directFnASV, DirectApplicInterface::directFnDVV, TestDriver-
Interface::herbie1D(), DirectApplicInterface::numDerivVars, DirectApplicInterface::numVars,
TestDriverInterface::separable_combine(), and DirectApplicInterface::xC.
Referenced by TestDriverInterface::derived_map_ac().
returns the N-D smooth herbie function. N-D Smoothed Herbie function and its derivatives.
References DirectApplicInterface::directFnASV, DirectApplicInterface::directFnDVV, DirectApplicIn-
terface::numDerivVars, DirectApplicInterface::numVars, TestDriverInterface::separable_combine(),
TestDriverInterface::smooth_herbie1D(), and DirectApplicInterface::xC.
Referenced by TestDriverInterface::derived_map_ac().
13.156.2.7 void separable_combine (Real mult_scale_factor, std::vector< Real > & w, std::vector<
Real > & d1w, std::vector< Real > & d2w) [private]
utility to combine components of separable fns. This function combines N 1D functions and their derivatives to
compute an N-D separable function and its derivatives; the logic is general enough to support different 1D functions
in different dimensions (they can be mixed and matched).
References DirectApplicInterface::directFnASV, DirectApplicInterface::directFnDVV, DirectApplicIn-
terface::fnGrads, DirectApplicInterface::fnHessians, DirectApplicInterface::fnVals, DirectApplicInter-
face::numDerivVars, and DirectApplicInterface::numVars.
Referenced by TestDriverInterface::herbie(), TestDriverInterface::shubert(), and TestDriverInterface::smooth_-
herbie().
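The value and gradient portion of this combination can be sketched as follows, assuming a product-form separable function f(x) = s * prod_i w_i(x_i) (the Herbie and Shubert families have this form). The name separable_combine_sketch is hypothetical; the real routine also fills Hessians and honors the active set vector.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Given 1D component values w_i and first derivatives d1w_i (each already
// evaluated at its own coordinate x_i), form the N-D product function and
// its gradient.
void separable_combine_sketch(double scale,
                              const std::vector<double>& w,
                              const std::vector<double>& d1w,
                              double& fn_val,
                              std::vector<double>& grad)
{
  size_t n = w.size();
  fn_val = scale;
  for (size_t i = 0; i < n; ++i)
    fn_val *= w[i];                      // f = s * prod_i w_i
  grad.assign(n, 0.0);
  for (size_t k = 0; k < n; ++k) {
    // d f / d x_k = s * d1w_k * prod_{i != k} w_i
    double g = scale * d1w[k];
    for (size_t i = 0; i < n; ++i)
      if (i != k) g *= w[i];
    grad[k] = g;
  }
}
```

Because only 1D values and derivatives enter, the same combiner works for any mix of 1D component functions across dimensions, which is the generality the description above claims.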
direct interface to ModelCenter via API, HKIM 4/3/03. The ModelCenter interface doesn't have any specific
construct- vs. run-time functions. For now, it is managed along with the integrated test drivers.
References Dakota::abort_handler(), DirectApplicInterface::analysisComponents, DirectApplicInter-
face::analysisDriverIndex, Dakota::dc_ptr_int, DirectApplicInterface::directFnASV, Interface::fnLabels,
DirectApplicInterface::fnVals, Dakota::mc_ptr_int, ApplicationInterface::multiProcAnalysisFlag, DirectAp-
plicInterface::numACV, DirectApplicInterface::numADIV, DirectApplicInterface::numADRV, DirectApplicIn-
terface::numFns, DirectApplicInterface::xC, DirectApplicInterface::xCLabels, DirectApplicInterface::xDI,
DirectApplicInterface::xDILabels, DirectApplicInterface::xDR, and DirectApplicInterface::xDRLabels.
Referenced by TestDriverInterface::derived_map_ac().
The documentation for this class was generated from the following files:
• TestDriverInterface.hpp
• TestDriverInterface.cpp
• ∼TrackerHTTP ()
destructor to free handles
• void post_start ()
post the start of an analysis and archive start time
• void url_add_field (std::string &url, const char ∗keyword, const std::string &value, bool delimit=true) const
append keyword/value pair to url in GET style (with &keyword=value); set delimit = false to omit the &
• void build_default_data (std::string &url, std::time_t &rawtime, const std::string &mode) const
construct URL with shared information for start/finish
Private Attributes
• CURL ∗ curlPtr
pointer to the curl handler instance
• FILE ∗ devNull
pointer to /dev/null
• std::string trackerLocation
base URL for the tracker
• std::string proxyLocation
if empty, proxy may still be specified via environment variables (unlike default CURL behavior)
• long timeoutSeconds
seconds until the request will timeout (may have issues with signals)
• std::string methodList
list of active methods
• std::string dakotaVersion
DAKOTA version.
• std::time_t startTime
cached starting time in raw seconds
• short outputLevel
verbosity control
TrackerHTTP: a usage tracking module that uses HTTP/HTTPS via the curl library.
transmit data to the web server using GET. Takes the whole URL, including location and fields.
References TrackerHTTP::curlPtr, and TrackerHTTP::outputLevel.
transmit data to the web server using POST. Takes a separate location and query, e.g.
datatopost="name=daniel&project=curl".
References TrackerHTTP::curlPtr, TrackerHTTP::outputLevel, and TrackerHTTP::trackerLocation.
Referenced by TrackerHTTP::post_finish(), and TrackerHTTP::post_start().
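The GET-style field appending described for url_add_field() can be sketched in a few lines. The name url_add_field_sketch and the example URL are illustrative, not the TrackerHTTP API.

```cpp
#include <cassert>
#include <string>

// Append "&keyword=value" to url; delimit = false omits the leading '&'
// (useful for the first field after '?').
void url_add_field_sketch(std::string& url, const char* keyword,
                          const std::string& value, bool delimit = true)
{
  if (delimit) url += '&';
  url += keyword;
  url += '=';
  url += value;
}
```

Chaining calls builds the full query string that build_default_data() would hand to the GET or POST transmission above.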
The documentation for this class was generated from the following files:
• TrackerHTTP.hpp
• TrackerHTTP.cpp
Variables
MixedVariables RelaxedVariables
• virtual ∼Variables ()
destructor
• size_t tv () const
total number of vars
• size_t cv () const
number of active continuous vars
• void build_views ()
construct active/inactive views of all variables arrays
Protected Attributes
• SharedVariablesData sharedVarsData
reference-counted instance of shared variables data: id’s, labels, counts
• RealVector allContinuousVars
array combining all of the continuous variables (design, uncertain, state)
• IntVector allDiscreteIntVars
array combining all of the discrete integer variables (design, state)
• RealVector allDiscreteRealVars
array combining all of the discrete real variables (design, state)
• RealVector continuousVars
the active continuous variables array view
• IntVector discreteIntVars
the active discrete integer variables array view
• RealVector discreteRealVars
the active discrete real variables array view
• RealVector inactiveContinuousVars
the inactive continuous variables array view
• IntVector inactiveDiscreteIntVars
the inactive discrete integer variables array view
• RealVector inactiveDiscreteRealVars
the inactive discrete real variables array view
• void check_view_compatibility ()
perform sanity checks on view.first and view.second after update
Private Attributes
• Variables ∗ variablesRep
pointer to the letter (initialized only for the envelope)
• int referenceCount
number of objects sharing variablesRep
Friends
• bool operator== (const Variables &vars1, const Variables &vars2)
equality operator
Base class for the variables class hierarchy. The Variables class is the base class for the class hierarchy providing
design, uncertain, and state variables for continuous and discrete domains within a Model. Using the fundamental
arrays from the input specification, different derived classes define different views of the data. For memory
efficiency and enhanced polymorphism, the variables hierarchy employs the "letter/envelope idiom" (see Coplien
"Advanced C++", p. 133), for which the base class (Variables) serves as the envelope and one of the derived
classes (selected in Variables::get_variables()) serves as the letter.
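The reference-counted letter/envelope mechanics described above can be sketched in isolation. The class and member names here are illustrative, not the Variables API; the point is the copy/assign/destroy discipline around the shared representation pointer.

```cpp
#include <cassert>

// Minimal letter/envelope sketch: the envelope shares a heap-allocated
// "letter" (rep) among copies and deletes it only when the last
// reference goes away.
class Envelope {
public:
  Envelope() : rep(new Rep) {}                        // letter allocated here
  Envelope(const Envelope& e) : rep(e.rep) { ++rep->refCount; }
  Envelope& operator=(const Envelope& e) {
    if (rep != e.rep) {
      if (--rep->refCount == 0) delete rep;           // release old letter
      rep = e.rep;
      ++rep->refCount;                                // share new letter
    }
    return *this;
  }
  ~Envelope() { if (--rep->refCount == 0) delete rep; }
  int ref_count() const { return rep->refCount; }
private:
  struct Rep { int refCount = 1; };                   // the "letter"
  Rep* rep;
};
```

This is why the Dakota copy constructor, assignment operator, and destructor below all reference Variables::referenceCount and Variables::variablesRep: copies are shallow and cheap, and deep copies are reserved for the explicit copy() member.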
13.158.2.1 Variables ()
default constructor. In this case variablesRep is NULL (a populated problem_db is needed to build a meaningful
Variables object). This makes it necessary to check for NULL in the copy constructor, assignment operator,
and destructor.
standard constructor This is the primary envelope constructor which uses problem_db to build a fully populated
variables object. It only needs to extract enough data to properly execute get_variables(problem_db), since the
constructor overloaded with BaseConstructor builds the actual base class data inherited by the derived classes.
References Dakota::abort_handler(), Variables::get_variables(), and Variables::variablesRep.
alternate constructor for instantiations on the fly This is the alternate envelope constructor for instantiations on the
fly. This constructor executes get_variables(view), which invokes the default derived/base constructors, followed
by a resize() based on vars_comps.
References Dakota::abort_handler(), Variables::get_variables(), and Variables::variablesRep.
copy constructor Copy constructor manages sharing of variablesRep and incrementing of referenceCount.
References Variables::referenceCount, and Variables::variablesRep.
destructor Destructor decrements referenceCount and only deletes variablesRep when referenceCount reaches
zero.
References Variables::referenceCount, and Variables::variablesRep.
13.158.2.6 Variables (BaseConstructor, const ProblemDescDB & problem_db, const std::pair< short,
short > & view) [protected]
constructor initializes the base class part of letter classes (BaseConstructor overloading avoids infinite recursion
in the derived class constructors - Coplien, p. 139) This constructor is the one which must build the base class
data for all derived classes. get_variables() instantiates a derived class letter and the derived constructor selects
this base class constructor in its initialization list (to avoid the recursion of the base class constructor calling get_-
variables() again). Since the letter IS the representation, its representation pointer is set to NULL (an uninitialized
pointer causes problems in ∼Variables).
assignment operator Assignment operator decrements referenceCount for old variablesRep, assigns new vari-
ablesRep, and increments referenceCount for new variablesRep.
References Variables::referenceCount, and Variables::variablesRep.
for use when a deep copy is needed (the representation is _not_ shared). Deep copies are used for history mecha-
nisms such as bestVariablesArray and data_pairs, since these must catalogue copies (and should not change as the
representation within currentVariables changes).
References Variables::allContinuousVars, Variables::allDiscreteIntVars, Variables::allDiscreteRealVars,
Variables::build_views(), Variables::get_variables(), Variables::sharedVarsData, and Variables::variablesRep.
Referenced by Model::asynch_compute_response(), ApplicationInterface::continuation(),
RecastModel::derived_asynch_compute_response(), HierarchSurrModel::derived_asynch_compute_response(),
DataFitSurrModel::derived_asynch_compute_response(), EffGlobalMinimizer::EffGlobalMinimizer(),
SurrogateModel::force_rebuild(), DiscrepancyCorrection::initialize_corrections(), LeastSq::LeastSq(),
SurrBasedLocalMinimizer::minimize_surrogates(), Optimizer::Optimizer(), NonDLHSEvidence::post_process_-
samples(), COLINOptimizer::post_run(), Analyzer::read_variables_responses(), RecastModel::RecastModel(),
Minimizer::resize_best_vars_array(), SurrBasedGlobalMinimizer::SurrBasedGlobalMinimizer(), SurrBasedLo-
calMinimizer::SurrBasedLocalMinimizer(), Analyzer::update_best(), and NonDLocalReliability::update_mpp_-
search_data().
Used by the standard envelope constructor to instantiate the correct letter class. Initializes variablesRep to the
appropriate derived type, as given by problem_db attributes. The standard derived class constructors are invoked.
References Variables::get_view(), and Variables::view().
Referenced by Variables::copy(), Variables::read(), Variables::read_annotated(), and Variables::Variables().
Used by the alternate envelope constructors, by read functions, and by copy() to instantiate a new letter class.
Initializes variablesRep to the appropriate derived type, as given by view. The default derived class constructors
are invoked.
References SharedVariablesData::view().
infer domain from method selection. Aggregates view and domain settings.
References Dakota::abort_handler().
Referenced by Variables::get_view().
The documentation for this class was generated from the following files:
• DakotaVariables.hpp
• DakotaVariables.cpp
Base class for managing common aspects of verification studies. Inheritance diagram for Verification::
Iterator
Analyzer
Verification
RichExtrapVerification
• ∼Verification ()
destructor
• void run ()
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post
Base class for managing common aspects of verification studies. The Verification base class manages common
data and functions, such as those involving ...
run portion of run_iterator; implemented by all derived classes and may include pre/post steps in lieu of separate
pre/post. Virtual run function for the iterator class hierarchy; all derived classes need to redefine it.
Reimplemented from Iterator.
References Verification::perform_verification().
print the final iterator results. This virtual function provides additional iterator-specific final results output beyond
the function evaluation summary printed in finalize_run().
Reimplemented from Analyzer.
Reimplemented in RichExtrapVerification.
The documentation for this class was generated from the following files:
• DakotaVerification.hpp
• DakotaVerification.cpp
File Documentation
Namespaces
• namespace Dakota
The primary namespace for DAKOTA.
Functions
• void DAKOTA_DLL_FN dakota_create (int ∗dakota_ptr_int, char ∗logname)
create and configure a new DakotaRunner, adding it to list of instances
• int get_mc_ptr_int ()
get the DAKOTA pointer to ModelCenter
• int get_dc_ptr_int ()
get the DAKOTA pointer to ModelCenter current design point
Functions
• void DAKOTA_DLL_FN dakota_create (int ∗dakota_ptr_int, char ∗logname)
create and configure a new DakotaRunner, adding it to list of instances
Namespaces
• namespace Dakota
The primary namespace for DAKOTA.
Functions
• void open_file (std::ifstream &data_file, const std::string &input_filename, const std::string &context_-
message)
open the file specified by name for reading, using passed input stream, presenting context-specific error on failure
• void open_file (std::ofstream &data_file, const std::string &output_filename, const std::string &context_-
message)
open the file specified by name for writing, using passed output stream, presenting context-specific error on failure
• void write_header_tabular (std::ostream &tabular_ostream, const Variables &vars, const Response &re-
sponse, const std::string &counter_label, bool active_only=false, bool response_labels=true)
output the header row (labels) for a tabular data file used by Analyzer and Graphics
• void write_data_tabular (std::ostream &tabular_ostream, const Variables &vars, const Response &response,
size_t counter=_NPOS, bool active_only=false, bool write_responses=true)
output a row of tabular data from variables and response object used by graphics to append to tabular file during
iteration
Utility functions for reading and writing tabular data files. Emerging utilities for tabular file I/O; for now, just
extraction of capability from separate contexts to facilitate rework. These augment (and leverage) those in data_-
util.h. Design/capability goals:
• ability to read/write data with row/column headers or in free-form
• detect premature end of file; report if extra data
• more consistent and reliable checks for file open errors
• require the right number of columns in header mode; only total data checking in free-form (likely)
• allow a comment character for header rows, or even in data?
• variables vs. variables/responses for both read and write
• should CSV be supported? delimiter = ','; other?
• verify treatment of trailing newline without reading a zero
• allow reading into the transpose of the data structure
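The error-reporting pattern described for open_file() can be sketched as follows. This is a hedged illustration, not the Dakota implementation: the name open_file_sketch and the choice to throw std::runtime_error (rather than call Dakota's abort handler) are assumptions.

```cpp
#include <cassert>
#include <fstream>
#include <iostream>
#include <stdexcept>
#include <string>

// Open the named file for reading on the passed stream, presenting a
// context-specific error message on failure.
void open_file_sketch(std::ifstream& data_file, const std::string& filename,
                      const std::string& context_message)
{
  data_file.open(filename.c_str());
  if (!data_file.good()) {
    std::cerr << "\nError (" << context_message << "): could not open file "
              << filename << "." << std::endl;
    throw std::runtime_error(context_message);        // abort in real code
  }
}
```

Threading the context message through from the caller is what makes the "context-specific error on failure" goal above workable: the same utility serves tabular reads, restart reads, and so on with distinct diagnostics.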
Classes
• class Evaluator
An evaluator specialization that knows how to interact with Dakota.
• class EvaluatorCreator
A specialization of the JEGA::FrontEnd::EvaluatorCreator that creates a new instance of a Evaluator.
• class Driver
A subclass of the JEGA front end driver that exposes the individual protected methods to execute the algorithm.
Namespaces
• namespace Dakota
The primary namespace for DAKOTA.
Functions
• template<typename T >
string asstring (const T &val)
Creates a string from the argument val using an ostringstream.
Classes
• class JEGAOptimizer
A version of Dakota::Optimizer for instantiation of John Eddy’s Genetic Algorithms (JEGA).
Namespaces
• namespace Dakota
The primary namespace for DAKOTA.
Namespaces
• namespace Dakota
The primary namespace for DAKOTA.
Functions
• void nidr_set_input_string (const char ∗)
Set input to NIDR via string argument instead of input file.
• void run_dakota_data ()
Function to encapsulate the DAKOTA object instantiations for mode 2: direct Data class instantiation.
file containing a mock simulator main for testing DAKOTA in library mode
Function to encapsulate the DAKOTA object instantiations for mode 1: parsing an input file. This function parses
from an input file to define the ProblemDescDB data.
References ProblemDescDB::lock(), ProblemDescDB::manage_inputs(), model_interface_plugins(),
Strategy::run_strategy(), and ParallelLibrary::world_rank().
Referenced by main().
Function to encapsulate the DAKOTA object instantiations for mode 3: mixed parsing and direct updating. This
function showcases multiple features. For parsing, either an input file (dakota_input_file != NULL) or a default
input string (dakota_input_file == NULL) is shown. This parsed input is then mixed with input from three
sources: (1) input from a user-supplied callback function, (2) updates to the DB prior to Strategy instantiation, and
(3) updates directly to Iterators/Models following Strategy instantiation.
References ProblemDescDB::broadcast(), ProblemDescDB::get_sa(), ProblemDescDB::lock(), model_-
interface_plugins(), ProblemDescDB::model_list(), ParallelLibrary::mpirun_flag(), my_callback_-
function(), nidr_set_input_string(), ProblemDescDB::parse_inputs(), ProblemDescDB::post_process(),
ProblemDescDB::resolve_top_method(), Strategy::run_strategy(), ProblemDescDB::set(), and
ParallelLibrary::world_rank().
Referenced by main().
Iterate over models and plugin appropriate interface: serial rosenbrock or parallel textbook.
References Dakota::abort_handler(), Interface::analysis_drivers(), Interface::assign_rep(), Dakota::contains(),
Interface::interface_type(), ProblemDescDB::model_list(), ParallelLevel::server_intra_communicator(), and
ProblemDescDB::set_db_model_nodes().
Referenced by Dakota::run_dakota_data(), run_dakota_mixed(), and run_dakota_parse().
A mock simulator main for testing DAKOTA in library mode. Uses the alternative instantiation syntax described
in the library mode documentation within the Developers Manual. Tests several problem specification modes:
(1) run_dakota_parse: reads all problem specification data from an input file; (2) run_dakota_data: creates all
problem specification data from direct Data instance instantiations; (3) run_dakota_mixed: a mixture of input
parsing (by file or default string) and direct data updates, where the data updates occur (a) via the DB prior to
Strategy instantiation and (b) via Iterators/Models following Strategy instantiation. Usage: dakota_library_mode
[-m] [dakota.in]
References ParallelLibrary::detect_parallel_launch(), Dakota::run_dakota_data(), run_dakota_mixed(), and run_-
dakota_parse().
Example of user-provided callback function to override input specified and managed by NIDR, e.g., from an input
deck.
References Dakota::contains(), ProblemDescDB::get_sa(), ProblemDescDB::get_string(),
ProblemDescDB::resolve_top_method(), and ProblemDescDB::set().
Referenced by run_dakota_mixed().
Functions
• void manage_mpi (MPI_Comm &my_comm, int &color)
Split MPI_COMM_WORLD, returning the comm and color.
• void run_dakota (const MPI_Comm &comm, const std::string &input, const int &color)
Launch DAKOTA on passed communicator, tagging output/error with color.
• void collect_results ()
Wait for and collect results from DAKOTA runs.
file containing a mock simulator main for testing DAKOTA in library mode on a split communicator
Functions
• void fpinit_ASL ()
• int nidr_save_exedir (const char ∗, int)
• int main (int argc, char ∗argv[ ])
The main DAKOTA program.
Floating-point initialization from AMPL: switch to 53-bit rounding if appropriate, to eliminate some cross-
platform differences.
Referenced by main().
The main DAKOTA program. Manage command line inputs, input files, restart file(s), output streams, and top
level parallel iterator communicators. Instantiate the Strategy and invoke its run_strategy() virtual function.
References Dakota::abort_handler(), fpinit_ASL(), CommandLineHandler::instantiate_flag(), ProblemDe-
scDB::lock(), ProblemDescDB::manage_inputs(), ParallelLibrary::output_helper(), GetLongOpt::retrieve(),
Strategy::run_strategy(), ParallelLibrary::specify_outputs_restart(), and ParallelLibrary::world_rank().
Namespaces
• namespace Dakota
The primary namespace for DAKOTA.
Functions
• void print_restart (int argc, char ∗∗argv, String print_dest)
print a restart file
The main program for the DAKOTA restart utility. Parse command line inputs and invoke the appropriate utility
function (print_restart(), print_restart_tabular(), read_neutral(), repair_restart(), or concatenate_restart()).
References Dakota::concatenate_restart(), Dakota::print_restart(), Dakota::print_restart_tabular(), Dakota::read_-
neutral(), and Dakota::repair_restart().
∼Approximation add_data
Dakota::Approximation, 269 Dakota::ResultsDBAny, 870
∼BiStream add_datapoint
Dakota::BiStream, 290 Dakota::Graphics, 469
∼Constraints allContinuousIds
Dakota::Constraints, 330 Dakota::SharedVariablesDataRep, 902
∼DataFitSurrModel anisotropic_order_to_dimension_preference
Dakota::DataFitSurrModel, 340 Dakota::NonDIntegration, 683
∼EffGlobalMinimizer append_approximation
Dakota::EffGlobalMinimizer, 428 Dakota::ApproximationInterface, 277
∼Interface Dakota::DataFitSurrModel, 343
Dakota::Interface, 489 approx_subprob_constraint_eval
∼Iterator Dakota::SurrBasedLocalMinimizer, 952
Dakota::Iterator, 500 approx_subprob_objective_eval
∼Model Dakota::SurrBasedLocalMinimizer, 952
Dakota::Model, 565 approxBuilds
∼NonDAdaptiveSampling Dakota::SurrogateModel, 963
Dakota::NonDAdaptiveSampling, 637 Approximation
∼NonDGPImpSampling Dakota::Approximation, 268, 269
Dakota::NonDGPImpSampling, 672 APPSEvalMgr
∼ProblemDescDB
Dakota::APPSEvalMgr, 282
Dakota::ProblemDescDB, 815
APPSOptimizer
∼Strategy
Dakota::APPSOptimizer, 284
Dakota::Strategy, 933
array_insert
∼Variables
Dakota::ResultsDBAny, 870
Dakota::Variables, 994
assess_reconstruction
_initPts
Dakota::EfficientSubspaceMethod, 432
Dakota::JEGAOptimizer, 514
_model assign_rep
Dakota::JEGAOptimizer::Evaluator, 444 Dakota::Interface, 490
Dakota::Iterator, 504
A Dakota::Model, 571
Dakota::CONMINOptimizer, 320 assign_streams
abort_handler_t Dakota::CommandLineHandler, 306
Dakota, 132 asstring
accepts_multiple_points Dakota, 133
Dakota::JEGAOptimizer, 513 asynchronous_local_analyses
actualModel Dakota::ProcessHandleApplicInterface, 826
Dakota::DataFitSurrModel, 347 asynchronous_local_evaluations
add_anchor_to_surfdata Dakota::ApplicationInterface, 258
Dakota::SurfpackApproximation, 942 asynchronous_local_evaluations_nowait
INDEX 1019
get_variables
    Dakota::Variables, 996
GetBestMOSolutions
    Dakota::JEGAOptimizer, 512
GetBestSolutions
    Dakota::JEGAOptimizer, 512
GetBestSOSolutions
    Dakota::JEGAOptimizer, 512
GetDescription
    Dakota::JEGAOptimizer::Evaluator, 443
getdist
    Dakota, 132
GetLongOpt
    Dakota::GetLongOpt, 465
GetName
    Dakota::JEGAOptimizer::Evaluator, 443
GetNumberLinearConstraints
    Dakota::JEGAOptimizer::Evaluator, 442
GetNumberNonLinearConstraints
    Dakota::JEGAOptimizer::Evaluator, 441
getRmax
    Dakota, 133
gnewton_set_recast
    Dakota::Minimizer, 531
GPmodel_apply
    Dakota::GaussProcApproximation, 461

hard_convergence_check
    Dakota::SurrBasedLocalMinimizer, 950
herbie
    Dakota::TestDriverInterface, 979
herbie1D
    Dakota::TestDriverInterface, 978
hessian
    Dakota::SurfpackApproximation, 941
hom_constraint_eval
    Dakota::SurrBasedLocalMinimizer, 952
hom_objective_eval
    Dakota::SurrBasedLocalMinimizer, 952

IC
    Dakota::CONMINOptimizer, 320
id_vars_exact_compare
    Dakota, 134
import_points
    Dakota::DataFitSurrModel, 344
increment_grid_preference
    Dakota::NonDCubature, 645
increment_order
    Dakota::NonDPolynomialChaos, 722
increment_parallel_configuration
    Dakota::ParallelLibrary, 783
increment_reference
    Dakota::NonDCubature, 645
increment_specification_sequence
    Dakota::NonDExpansion, 656
    Dakota::NonDPolynomialChaos, 722
init_communicators
    Dakota::Model, 570
    Dakota::ParallelLibrary, 784
init_communicators_checks
    Dakota::ApplicationInterface, 256
    Dakota::DirectApplicInterface, 412
    Dakota::ProcessHandleApplicInterface, 825
    Dakota::SysCallApplicInterface, 967
init_iterator
    Dakota::Strategy, 935
init_iterator_parallelism
    Dakota::Strategy, 935
init_mpi_comm
    Dakota::ParallelLibrary, 784
init_serial
    Dakota::ApplicationInterface, 254
    Dakota::Model, 570
initial_points
    Dakota::JEGAOptimizer, 513, 514
initial_taylor_series
    Dakota::NonDLocalReliability, 709
initialize
    Dakota::NonDAdaptImpSampling, 631
    Dakota::RecastModel, 847
initialize_class_data
    Dakota::NonDLocalReliability, 710
initialize_final_statistics
    Dakota::NonD, 625
initialize_graphics
    Dakota::Iterator, 502
initialize_grid
    Dakota::NonDQuadrature, 727
initialize_h
    Dakota::Model, 572
initialize_level_data
    Dakota::NonDLocalReliability, 710
initialize_mpp_search_data
    Dakota::NonDLocalReliability, 710
initialize_random_variable_parameters
    Dakota::NonD, 626
initialize_random_variable_types
    Dakota::NonD, 625
initialize_random_variables
    Dakota::NonD, 624
initialize_run
    Dakota::Iterator, 501
    Dakota::LeastSq, 517
    Dakota::Minimizer, 529
    Dakota::NonD, 624
    Dakota::Optimizer, 760
initialize_scaling
    Dakota::Minimizer, 534
initialize_variables_and_constraints
    Dakota::APPSOptimizer, 285
instantiate_flag
    Dakota::CommandLineHandler, 306
intCntlParmArray
    Dakota::DOTOptimizer, 422
Interface
    Dakota::Interface, 489
interface
    Dakota::Model, 568
interface_id
    Dakota::Model, 569
ISC
    Dakota::CONMINOptimizer, 320
isReadyForWork
    Dakota::APPSEvalMgr, 282
Iterator
    Dakota::Iterator, 499–501

JEGAOptimizer
    Dakota::JEGAOptimizer, 509
JEGAOptimizer.cpp, 1005
JEGAOptimizer.hpp, 1006

kw_1
    Dakota, 137
kw_10
    Dakota, 140
kw_100
    Dakota, 164
kw_101
    Dakota, 164
kw_102
    Dakota, 164
kw_103
    Dakota, 164
kw_104
    Dakota, 165
kw_105
    Dakota, 165
kw_106
    Dakota, 165
kw_107
    Dakota, 165
kw_108
    Dakota, 165
kw_109
    Dakota, 165
kw_11
    Dakota, 140
kw_110
    Dakota, 166
kw_111
    Dakota, 166
kw_112
    Dakota, 166
kw_113
    Dakota, 166
kw_114
    Dakota, 167
kw_115
    Dakota, 167
kw_116
    Dakota, 167
kw_117
    Dakota, 167
kw_118
    Dakota, 168
kw_119
    Dakota, 168
kw_12
    Dakota, 140
kw_120
    Dakota, 168
kw_121
    Dakota, 168
kw_122
    Dakota, 168
kw_123
    Dakota, 169
kw_124
    Dakota, 169
kw_125
    Dakota, 169
kw_126
    Dakota, 169
kw_127
    Dakota, 169
kw_128
resolve_inputs
    Dakota::ParallelLibrary, 784
resolve_samples_symbols
    Dakota::DDACEDesignCompExp, 406
Response
    Dakota::Response, 858
response_mapping
    Dakota::Interface, 490
    Dakota::NestedModel, 593
response_modify_n2s
    Dakota::Minimizer, 535
response_modify_s2n
    Dakota::Minimizer, 531
responseMode
    Dakota::SurrogateModel, 963
ResponseRep
    Dakota::ResponseRep, 862, 863
restart_util.cpp, 1011
    main, 1011
restore
    Dakota::Approximation, 270
    Dakota::PecosApproximation, 805
restore_approximation
    Dakota::ApproximationInterface, 278
restore_available
    Dakota::ApproximationInterface, 278
retrieve
    Dakota::GetLongOpt, 465
returns_multiple_points
    Dakota::COLINOptimizer, 301
    Dakota::JEGAOptimizer, 513
RIA_constraint_eval
    Dakota::NonDLocalReliability, 708
RIA_objective_eval
    Dakota::NonDLocalReliability, 708
run
    Dakota::Iterator, 502
    Dakota::LeastSq, 517
    Dakota::NonD, 625
    Dakota::Optimizer, 760
    Dakota::PStudyDACE, 829
    Dakota::SurrBasedMinimizer, 956
    Dakota::Verification, 998
run_dakota_data
    Dakota, 133
run_dakota_mixed
    library_mode.cpp, 1007
run_dakota_parse
    library_mode.cpp, 1007
run_iterator
    Dakota::Iterator, 503
    Dakota::Strategy, 935
run_sequential
    Dakota::SequentialHybridStrategy, 890
run_sequential_adaptive
    Dakota::SequentialHybridStrategy, 890

S
    Dakota::CONMINOptimizer, 319
sample_likelihood
    Dakota::NonDDREAMBayesCalibration, 649
sampling_reset
    Dakota::NonDCubature, 645
    Dakota::NonDQuadrature, 728
    Dakota::NonDSampling, 739
    Dakota::NonDSparseGrid, 745
SCAL
    Dakota::CONMINOptimizer, 320
scale_model
    Dakota::Minimizer, 530
schedule_iterators
    Dakota::Strategy, 936
SCI_FIELD_NAMES
    Dakota, 233
SCI_NUMBER_OF_FIELDS
    Dakota, 233
secondary_resp_copier
    Dakota::Minimizer, 531
secondary_resp_scaler
    Dakota::Minimizer, 534
self_schedule_iterators
    Dakota::Strategy, 936
send_data_using_get
    Dakota::TrackerHTTP, 983
send_data_using_post
    Dakota::TrackerHTTP, 983
separable_combine
    Dakota::TestDriverInterface, 979
SeparateVariables
    Dakota::JEGAOptimizer::Evaluator, 441
serve_analyses_asynch
    Dakota::ProcessHandleApplicInterface, 827
serve_analyses_synch
    Dakota::ApplicationInterface, 256
serve_evaluations
    Dakota::ApplicationInterface, 255
serve_evaluations_asynch
    Dakota::ApplicationInterface, 261
serve_evaluations_asynch_peer
test_local_evaluations
    Dakota::SysCallApplicInterface, 966
    SIM::ParallelDirectApplicInterface, 766
    SIM::SerialDirectApplicInterface, 893
ToDoubleMatrix
    Dakota::JEGAOptimizer, 512
tr_ratio_check
    Dakota::SurrBasedLocalMinimizer, 951
trendOrder
    Dakota::GaussProcApproximation, 461
truth_model
    Dakota::Model, 567

uncertain_vars_to_subspace
    Dakota::EfficientSubspaceMethod, 432
unpack_parameters_buffer
    Dakota::ConcurrentStrategy, 312
    Dakota::SequentialHybridStrategy, 889
    Dakota::Strategy, 934
unpack_results_buffer
    Dakota::ConcurrentStrategy, 312
    Dakota::SequentialHybridStrategy, 890
    Dakota::Strategy, 934
update
    Dakota::ResponseRep, 865
update_actual_model
    Dakota::DataFitSurrModel, 345
update_approximation
    Dakota::ApproximationInterface, 276
    Dakota::DataFitSurrModel, 342
update_augmented_lagrange_multipliers
    Dakota::SurrBasedMinimizer, 957
update_filter
    Dakota::SurrBasedMinimizer, 957
update_from_actual_model
    Dakota::DataFitSurrModel, 346
update_from_sub_model
    Dakota::RecastModel, 848
update_from_subordinate_model
    Dakota::Model, 567
update_lagrange_multipliers
    Dakota::SurrBasedMinimizer, 957
update_level_data
    Dakota::NonDLocalReliability, 711
update_mpp_search_data
    Dakota::NonDLocalReliability, 711
update_partial
    Dakota::ResponseRep, 865
update_penalty
    Dakota::SurrBasedLocalMinimizer, 951
update_quasi_hessians
    Dakota::Model, 574
update_response
    Dakota::Model, 573
usage
    Dakota::GetLongOpt, 466
useDerivs
    Dakota::NonDExpansion, 658

Valueless
    Dakota::GetLongOpt, 464
var_mp_cbound
    Dakota, 233
var_mp_check_cau
    Dakota, 231
var_mp_check_ceu
    Dakota, 232
var_mp_check_cv
    Dakota, 231
var_mp_check_daui
    Dakota, 232
var_mp_check_daur
    Dakota, 232
var_mp_check_deui
    Dakota, 232
var_mp_check_deur
    Dakota, 232
var_mp_check_dset
    Dakota, 231
var_mp_drange
    Dakota, 233
Variables
    Dakota::Variables, 994, 995
variables_scaler
    Dakota::Minimizer, 534
variance_based_decomp
    Dakota::Analyzer, 243
vars_u_to_x_mapping
    Dakota::NonD, 626
vars_x_to_u_mapping
    Dakota::NonD, 627
view_aleatory_uncertain_counts
    Dakota::NonDSampling, 740
view_design_counts
    Dakota::NonDSampling, 740
view_epistemic_uncertain_counts
    Dakota::NonDSampling, 740
view_uncertain_counts
    Dakota::NonDSampling, 741
Vlch
    Dakota, 230
VLI
    Dakota, 231
VLR
    Dakota, 230
volumetric_quality
    Dakota::PStudyDACE, 830

wait_local_evaluations
    Dakota::GridApplicInterface, 472
    Dakota::SysCallApplicInterface, 966
weight_model
    Dakota::LeastSq, 518
write
    Dakota::ParamResponsePair, 788
    Dakota::ResponseRep, 863–865
write_annotated
    Dakota::ResponseRep, 864
write_tabular
    Dakota::ResponseRep, 864