DesignXplorer User's Guide
ANSYS, ANSYS Workbench, AUTODYN, CFX, FLUENT and any and all ANSYS, Inc. brand, product, service and feature
names, logos and slogans are registered trademarks or trademarks of ANSYS, Inc. or its subsidiaries located in the
United States or other countries. ICEM CFD is a trademark used by ANSYS, Inc. under license. CFX is a trademark
of Sony Corporation in Japan. All other brand, product, service and feature names or trademarks are the property
of their respective owners. FLEXlm and FLEXnet are trademarks of Flexera Software LLC.
Disclaimer Notice
THIS ANSYS SOFTWARE PRODUCT AND PROGRAM DOCUMENTATION INCLUDE TRADE SECRETS AND ARE CONFID-
ENTIAL AND PROPRIETARY PRODUCTS OF ANSYS, INC., ITS SUBSIDIARIES, OR LICENSORS. The software products
and documentation are furnished by ANSYS, Inc., its subsidiaries, or affiliates under a software license agreement
that contains provisions concerning non-disclosure, copying, length and nature of use, compliance with exporting
laws, warranties, disclaimers, limitations of liability, and remedies, and other provisions. The software products
and documentation may be used, disclosed, transferred, or copied only in accordance with the terms and conditions
of that software license agreement.
ANSYS, Inc. and ANSYS Europe, Ltd. are UL registered ISO 9001: 2015 companies.
For U.S. Government users, except as specifically granted by the ANSYS, Inc. software license agreement, the use,
duplication, or disclosure by the United States Government is subject to restrictions stated in the ANSYS, Inc.
software license agreement and FAR 12.212 (for non-DOD licenses).
Third-Party Software
See the legal information in the product help files for the complete Legal Notice for ANSYS proprietary software
and third-party software. If you are unable to access the Legal Notice, contact ANSYS, Inc.
Release 2020 R1 - © ANSYS, Inc. All rights reserved. - Contains proprietary and confidential information of ANSYS, Inc. and its subsidiaries and affiliates.
ANSYS DesignXplorer Overview
The following links provide quick access to information about ANSYS DesignXplorer and its use:
• Limitations (p. 9)
The ANSYS Product Improvement Program
How to Participate
The program is voluntary. To participate, select Yes when the Product Improvement Program dialog
appears. Only then will collection of data for this product begin.
Data We Collect
The data we collect under the ANSYS Product Improvement Program are limited. The types and amounts
of collected data vary from product to product. Typically, the data fall into the categories listed here:
Hardware: Information about the hardware on which the product is running, such as the:
System: Configuration information about the system the product is running on, such as the:
• country code
• time zone
• language used
• time duration
Session Actions: Counts of certain user actions during a session, such as the number of:
• project saves
• restarts
• toolbar selections
• number and types of entities used, such as nodes, elements, cells, surfaces, primitives, etc.
• time and frequency domains (static, steady-state, transient, modal, harmonic, etc.)
• the solution controls used, such as convergence criteria, precision settings, and tuning options
• solver statistics such as the number of equations, number of load steps, number of design points, etc.
• geometry- or design-specific inputs, such as coordinate values or locations, thicknesses, or other dimen-
sional values
• actual values of material properties, loadings, or any other real-valued user-supplied data
In addition to collecting only anonymous data, we make no record of where we collect data from. We
therefore cannot associate collected data with any specific customer, company, or location.
No, your participation is voluntary. We encourage you to participate, however, as it helps us create
products that will better meet your future needs.
No. You are not enrolled unless you explicitly agree to participate.
3. Does participating in this program put my intellectual property at risk of being collected or discovered by ANSYS?
Yes, you can stop participating at any time. To do so, select ANSYS Product Improvement Program
from the Help menu. A dialog appears and asks if you want to continue participating in the program.
Select No and then click OK. Data will no longer be collected or sent.
No, the data collection does not affect the product performance in any significant way. The amount
of data collected is very small.
The data is collected during each use session of the product. The collected data is sent to a secure
server once per session, when you exit the product.
Not at this time, although we are adding it to more of our products at each release. The program
is available in a product only if this ANSYS Product Improvement Program description appears in the
product documentation, as it does here for this product.
8. If I enroll in the program for this product, am I automatically enrolled in the program for the other ANSYS products
I use on the same machine?
Yes. Your enrollment choice applies to all ANSYS products you use on the same machine. Similarly,
if you end your enrollment in the program for one product, you end your enrollment for all ANSYS
products on that machine.
9. How is enrollment in the Product Improvement Program determined if I use ANSYS products in a cluster?
In a cluster configuration, the Product Improvement Program enrollment is determined by the host
machine setting.
10. Can I easily opt out of the Product Improvement Program for all clients in my network installation?
c. Change the value from "on" to "off" and save the file.
Introduction to ANSYS DesignXplorer
Design exploration describes the relationship between the design variables and the performance of the
product using Design of Experiments (DOEs) and response surfaces. DOEs and response surfaces provide
all of the information required to achieve simulation-driven product development. Once the variation
of product performance with respect to design variables is known, it becomes easy to understand and
identify the changes required to meet requirements for the product. After response surfaces are created,
you can analyze and share results using curves, surfaces, and sensitivities that are easily understood.
You can use these results at any time during the development of the product without requiring addi-
tional simulations to test a new configuration.
Available Tools
DesignXplorer offers a powerful suite of DOE types:
Central Composite Design (CCD) provides a traditional DOE sampling set, while the objective of Op-
timal Space-Filling (OSF) is to gain the maximum insight with the fewest number of points. OSF is
very useful when you have limited computation time.
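For readers who want a concrete picture of what a space-filling sample looks like, the following sketch generates one outside DesignXplorer with the open-source SciPy library. The two parameters, their ranges, and the sample size are illustrative assumptions, not values from this guide.

# Illustrative sketch only; DesignXplorer builds its DOE internally.
import numpy as np
from scipy.stats import qmc

# Hypothetical inputs: a length (10-20 mm) and a pressure (1-5 MPa).
sampler = qmc.LatinHypercube(d=2, seed=0)
unit_sample = sampler.random(n=15)                        # 15 points in [0, 1]^2
design_points = qmc.scale(unit_sample, [10.0, 1.0], [20.0, 5.0])
print(np.round(design_points, 3))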
After sampling, design exploration provides several different response surface types to represent the
simulation's responses:
These response surface types can accurately represent highly nonlinear responses, such as those
found in high frequency electromagnetics.
Once the simulation's responses are characterized, DesignXplorer supplies the following optimization
algorithms:
You can also use extensions to integrate external optimizers into the DesignXplorer workflow. For
more information, see Performing an Optimization with an External Optimizer (p. 183).
DesignXplorer provides several graphical tools for investigating a design. These tools include sensit-
ivity plots, correlation matrices, curves, surfaces, trade-off plots and parallel charts with Pareto front
display, and spider charts.
DesignXplorer also provides correlation matrix techniques to help you identify the key parameters of
a design before you create response surfaces.
Additionally, from a series of 2D or 3D simulations, you can create a ROM (reduced order model).
ROMs are stand-alone digital objects that offer a mathematical representation for computationally
inexpensive, near real-time analysis. For more information, see Using ROMs (p. 219).
• Product performance includes maximum stress, mass, fluid flow, and velocities.
Based on exploration results, you can identify the key parameters of the design and how they affect
product performance. You can then use this knowledge to influence the design so that it meets the
product's requirements.
DesignXplorer provides tools to analyze a parametric design with a reasonable number of parameters.
Supported response surface methods are suitable for problems using 10 to 15 input parameters.
In addition to performing the standard simulation, you must define the parameters to investigate.
The input parameters, also called design variables, can include CAD parameters, loading conditions,
material properties, and more.
You choose the output parameters, also called performance indicators, from the simulation results.
Output parameters can include maximum stresses, fluid pressure, velocities, temperatures, masses,
and custom-defined results. For example, product cost could be a custom-defined result based on
masses and manufacturing constraints that you use as an output parameter.
CAD parameters can be defined in a CAD package or in ANSYS DesignModeler. In Workbench, mater-
ial properties are defined in the Engineering Data cell of an analysis system that you've inserted in
the Project Schematic. Other parameters originate in the simulation model itself. Output
parameters are defined in the various simulation environments (Mechanical, CFD, and so on). Custom
parameters are defined in the Parameter Set bar.
When you update the DOE, DesignXplorer creates a response surface for each output parameter. A
response surface is an approximation of the response of the system. Its accuracy depends on several
factors, including complexity of the variations of the output parameters, number of points in the
original DOE, and choice of the response surface type.
DesignXplorer provides a variety of response surface types. The default type, Genetic Aggregation,
automates the process of selecting, configuring, and generating the response surface best suited to
each output parameter in your problem. Several other response surface types are available for selection.
For instance, Standard Response Surface - Full 2nd Order Polynomials, which is based on a modified
quadratic formulation, provides satisfactory results when the variations of the output parameters are
slight. However, Kriging is more effective for problems with a broad range of variation.
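The difference between a polynomial fit and a Kriging-style fit can be illustrated outside DesignXplorer with the open-source scikit-learn library. This is a conceptual sketch only; the sample data and model choices are assumptions, and the Gaussian-process model merely stands in for DesignXplorer's own Kriging implementation.

# Illustrative sketch only; not DesignXplorer's response surface solvers.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor

X = np.random.default_rng(0).uniform(0.0, 1.0, size=(15, 2))   # DOE input values
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2                       # stand-in solved outputs

quadratic = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
kriging_like = GaussianProcessRegressor(normalize_y=True).fit(X, y)   # Kriging-style interpolator

x_new = np.array([[0.4, 0.7]])
print(quadratic.predict(x_new), kriging_like.predict(x_new))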
After response surfaces are created, you can thoroughly investigate the design using a variety of
graphical and numerical tools. Additionally, you can use optimization techniques to identify valid
design points.
Usually, the investigation starts with viewing sensitivity graphs because they graphically display the
relative influence of the input parameters. These bar or pie charts indicate how much output para-
meters are locally influenced by the input parameters around a given response point. Varying the
location of the response point can provide a totally different graph. Explanations of these graphs
typically use a hill and valley analogy. If the point is at the top of a steep hill, the influence of the
parameters is large. If the response point is in a flat valley, the influence of the input parameters is
small.
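The hill-and-valley behavior can be pictured as local slopes of the response surface around the response point. The sketch below approximates such local sensitivities with finite differences on a stand-in analytic surface; it illustrates the idea only and is not the chart's actual algorithm.

# Illustrative sketch only; a stand-in function replaces the real response surface.
import numpy as np

def response_surface(x):
    return np.sin(3.0 * x[0]) + x[1] ** 2        # hypothetical fitted output

response_point = np.array([0.4, 0.7])            # location being examined
step = 1.0e-3
for i in range(response_point.size):
    shifted = response_point.copy()
    shifted[i] += step
    slope = (response_surface(shifted) - response_surface(response_point)) / step
    print(f"local slope with respect to input {i + 1}: {slope:+.3f}")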
The response surfaces provide curves or surfaces that show the variation of one output parameter
with respect to one or two input parameters at a time. These curves or surfaces also are dependent
on the response point.
Both sensitivity charts and response surfaces are key tools for answering such what-if questions as
"What parameter should we change if we want to reduce cost?".
DesignXplorer provides additional tools for identifying design candidates. In addition to thoroughly
investigating response surface curves to determine design candidates, you can use optimization
techniques to find design candidates from a Response Surface cell or any cell containing design
points. DesignXplorer provides two types of goal-driven optimization (GDO) systems: Response Surface
Optimization and Direct Optimization.
You can drag a GDO system from the Toolbox and drop it in the Project Schematic. If you drop a
Response Surface Optimization system on an existing Response Surface system, these two systems
share this portion of the data. For a Direct Optimization system, you can create data transfer links
between the Optimization cell and any other cell containing design points. You can insert several
GDO systems in the project, which is useful if you want to analyze several hypotheses.
Once a GDO system is inserted, you must define the optimization study. This includes choosing the
optimization method, setting the objectives and constraints, and specifying the domain. You then
solve the optimization problem. In many cases, there is not a unique solution, and several design
candidates are identified.
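As a rough analogy for how objectives, constraints, and a parameter domain define an optimization study, the sketch below solves a small constrained problem with SciPy. The mass and stress formulas, bounds, and constraint limit are hypothetical, and SciPy simply stands in for a DesignXplorer optimizer.

# Illustrative sketch only; SciPy stands in for a DesignXplorer optimizer.
import numpy as np
from scipy.optimize import minimize

def mass(x):                               # objective: minimize mass (stand-in formula)
    return x[0] * x[1]

def max_stress(x):                         # constrained output (stand-in formula)
    return 100.0 / (x[0] * x[1] ** 2)

result = minimize(
    mass,
    x0=np.array([2.0, 2.0]),
    bounds=[(1.0, 5.0), (1.0, 5.0)],       # the parameter domain
    constraints=[{"type": "ineq", "fun": lambda x: 50.0 - max_stress(x)}],  # max_stress <= 50
)
print(result.x, mass(result.x))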
The results of the optimization are also very likely to provide design candidates that cannot be
manufactured. For example, a radius of 3.14523 mm is likely difficult to achieve. However, because
all information about the variability of the output parameters is provided by the source of design
point data, whether a Response Surface cell or another DesignXplorer cell, you can easily find an
acceptable design candidate close to the one indicated by the optimization.
You should check the accuracy of the response surface for the design candidates. To verify a candidate,
you update the design point, which checks the validity of the output parameters.
Probabilistic characterization provides a probability of success or failure rather than a simple yes or
no evaluation. For instance, a probabilistic analysis could determine that one part in 1 million is likely
to fail. Probabilistic analysis can also predict the probability of a product surviving its expected useful
life.
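The idea behind such a probability can be pictured with a simple Monte Carlo estimate: sample an uncertain input from its distribution, evaluate an inexpensive stand-in response, and count failures. The distribution, limit value, and response formula below are hypothetical assumptions, not results from any analysis.

# Illustrative sketch only; the distribution, response, and limit are assumptions.
import numpy as np

rng = np.random.default_rng(0)
load = rng.normal(loc=100.0, scale=10.0, size=1_000_000)   # uncertain input
stress = 1.1 * load                                        # stand-in response surface
allowable = 150.0                                          # hypothetical failure limit
probability_of_failure = np.mean(stress > allowable)
print(probability_of_failure)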
To perform a robustness analysis, you insert a Six Sigma Analysis system in the Project Schematic.
The process here is very similar to the process for response surfaces. The main difference is in the
definition of the parameters. Instead of specifying a minimum and maximum value to define the
range for each input parameter, you describe an input parameter in terms of a statistical curve and
the curve's associated parameters. Once parameters are defined, you compute a new DOE and corres-
ponding response surfaces and sensitivity charts. The results for the robustness analysis are in the
form of probabilities and statistical distributions of the output parameters.
The number of simulations depends on the number of parameters as well as the convergence criteria
for the means and standard deviations of the parameters. While you can provide a hard limit for the
number of points to compute, the accuracy of the correlation matrix can be affected if not enough
points are computed. Based on results, you reduce the number of parameters to the 10 to 15 with
the highest correlations.
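A rank-correlation screening of inputs against an output, similar in spirit to such a study, can be sketched with SciPy's Spearman coefficient. The sampled data and the cutoff for keeping a parameter are illustrative assumptions.

# Illustrative sketch only; the data and cutoff are assumptions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
inputs = rng.uniform(size=(200, 4))                        # 4 sampled input parameters
output = 3.0 * inputs[:, 0] + 0.1 * inputs[:, 2] + rng.normal(scale=0.05, size=200)

for i in range(inputs.shape[1]):
    rho, _ = spearmanr(inputs[:, i], output)
    keep = "keep" if abs(rho) > 0.3 else "consider disabling"
    print(f"P{i + 1}: Spearman rho = {rho:+.2f} -> {keep}")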
The first step to solving this problem is to set Response Surface Type to Kriging. This response
surface type determines the accuracy of the response surface as well as the number of points that
are required to increase the accuracy. With this method, you can set a manual or automatic refinement
type. Manual refinement allows you to control the number of points to compute.
Sparse Grid is yet another of the response surface types available. This adaptive response surface is
driven by the accuracy that you request. It automatically refines the matrix of design points where
the gradient of the output parameters is higher to increase the accuracy of the response surface. To
use this feature, you set Response Surface Type to Sparse Grid and Design of Experiments Type
to Sparse Grid Initialization.
While no manual refinement is available when Sparse Grid is selected, manual refinement is available
for other response surface types. With manual refinement, you can enter specific points into the set
of existing design points used to calculate the response surface.
Limitations
• Suppressed properties are not available for a Design of Experiments system.
• When you use an external optimizer for a goal-driven optimization system, unsupported properties and
functionality are not displayed in the DesignXplorer interface.
To insert a design exploration system, you must have a DesignXplorer license. This license must be
available when you preview or update a DesignXplorer system or cell and also when results need to
be generated from a response surface.
You must also have licenses available for the systems that are to solve the design points that
DesignXplorer generates. The licenses for the solvers must be available when you update a system that
generates design points.
The DesignXplorer license is released when all DesignXplorer tabs are closed.
Note:
• If you do not have a DesignXplorer license, you can successfully resume an existing design ex-
ploration project and review the already generated results, with some interaction possible on
charts. However, to update a DesignXplorer system or evaluate a response surface, a license is
required. If you do not have a license, an error displays.
• To update design points simultaneously, you must have one solver license for each simultaneous
solve. Be aware that the number of design points that can be solved simultaneously is limited
by hardware, your RSM configuration, and available solver licenses.
• You do not need to reserve licenses for DesignXplorer components because DesignXplorer does
not check licenses out of the reserve pool. However, if some licenses are reserved for design
point updates, any update of a DesignXplorer component will require an extra license. This license
can come from a bundle.
• Inserting a 3D ROM system for producing a ROM (p. 219) requires a ROM Builder license. A ROM
Builder license also enables existing DesignXplorer capabilities.
User Interface
The Workbench user interface allows you to easily build your project in a workspace called the Project
Schematic.
Project Schematic
From the Project tab, you add design exploration systems to the Project Schematic to perform different
types of parametric analyses. From the Toolbox, you drag a system from under Design Exploration
and drop it under the Parameter Set bar. Alternatively, you can double-click the system in the Toolbox.
Each system added to the Project Schematic has a blue system header and one or more cells for ana-
lysis components. You generally interact with a system at the cell level. Right-clicking a system header
or cell displays context menu options. Double-clicking a cell performs the default option, which appears
in bold in the context menu. Because the default option is typically Edit, double-clicking a cell generally
opens its component tab. Component tabs are described in more detail later.
Toolbox
When you are viewing the Project Schematic, the Toolbox displays a Design Exploration category
with DesignXplorer systems. To perform a particular type of design exploration, you drag the system
from the Toolbox and drop it in the Project Schematic below the Parameter Set bar.
When you are viewing the component tab for a cell, the Toolbox displays the charts that can be added
to the object currently selected in the Outline pane. For example, assume that you double-clicked a
Response Surface cell to open its component tab. If you select a response point in the Outline pane,
the Toolbox displays the charts that can be inserted for the response point.
Double-clicking a chart in the Toolbox adds it to the object selected in the Outline pane. You can also
add a chart by dragging it from the Toolbox and dropping it on an object in the Outline pane. Addi-
tionally, you can right-click a node in the Outline pane and select the chart to add from the context
menu.
Component Tabs
Once a DesignXplorer system is added to the Project Schematic, double-clicking a cell typically opens
its component tab. For a cell where Edit is not the default menu option, you can right-click the cell and
select Edit. A component tab displays a window configuration with multiple panes in which to set
analysis options, run the analysis, and view results. For example, double-clicking a Design of Experiments
cell displays the component tab for the DOE.
A component tab has four panes: Outline, Table, Properties, and either Chart or Results. In general,
you select an object in the Outline pane and either set up its properties or view the table or chart as-
sociated with it.
• Outline: Provides a hierarchy of the main objects that make up the cell that you are editing.
The state icon on the root node tells you if the data for the cell is up-to-date or if it must be updated.
It also helps you to figure out the effect of your changes on the cell and its parameter properties.
Quick help is associated with various states. If you see the information icon to the right of the root
node, click it to see what immediate actions must be taken. If links appear in the quick help, they
take you to more information in the ANSYS product help.
On nodes for result objects (such as response points, charts, and Min-Max search), state icons indicate
if the objects are up-to-date or if they need to be updated. State icons help you to quickly assess the
current status of your result objects. For example, if you change a DOE setting, the state icon of the
corresponding chart is updated, given the pending changes. When the state icon indicates that the
update on a result object has failed, you can try to update the object by right-clicking it and se-
lecting the appropriate menu option.
• Table: A tabular view of the data associated with the object selected in the Outline pane. The title bar
contains a description of the table. You can right-click in the table to export the data to a CSV (Comma-
Separated Values) file. For more information, see Exporting Design Point Parameter Values to a Comma-
Separated Values File in the Workbench User's Guide.
• Properties: Lists the properties that can be set for the object selected in the Outline pane. For example,
when a parameter for a DOE is selected in the Outline pane, you set bounds for this parameter in the
Properties pane. When the root node for a Response Surface cell is selected, you set the response surface
type and other options in the Properties pane. When a chart is selected, you set plotting and display options
in the Properties pane.
• Chart or Results: Displays various charts or results for the object selected in the Outline pane. In the Charts
pane, you can right-click a chart to export the data to a CSV (Comma-Separated Values) file. The Results
pane is shown only on the component tab for a goal-driven optimization system.
You can insert and duplicate charts (or a response point with charts for a Response Surface cell)
even if the system is out-of-date. When the system is out-of-date, the charts in the Chart pane are
updated when the system is updated. For any DesignXplorer cell where a chart is inserted before the
system is updated, all types of charts supported by the cell are inserted by default at the end of the
update. If a cell already contains a chart, no new chart is inserted by default. For a Response Surface
cell, if there is no response point, a response point is inserted by default with all charts. For more
information, see Using DesignXplorer Charts (p. 20).
Note:
• In the Table and Properties panes, input parameter values and output parameter values obtained
from a simulation are displayed in black text. Output parameters based on a response surface
are displayed in the color specified on the Response Surface tab in the Options dialog box. For
more information, see Response Surface Options (p. 25).
• In the Properties pane, the color convention for output parameter values is not applied to the
Calculated Minimum and Calculated Maximum values. These values always display in black
text.
Context Menu
Right-clicking a DesignXplorer system header or cell in the Project Schematic displays a context menu.
Likewise, right-clicking a node in the Outline pane of a component tab displays a context menu. The
options available depend on the state of the system or cell. The options typically available are Update,
Preview, Clear Generated Data, and Refresh. The option selected is performed only on the selected
system or cell.
Parameters
DesignXplorer makes use of two types of parameters:
• Input parameters
• Output parameters
For information about performing what-if studies to investigate design alternatives, see Working with
Parameters and Design Points in the Workbench User's Guide. For information about grouping parameters
by application, see Tree Usage in Parameters Tab and Parameter Set Tabs (p. 15).
Input Parameters
Input parameters define the values to analyze for the model under investigation. Input parameters include
CAD parameters, analysis parameters, DesignModeler parameters, and mesh parameters.
• Examples of CAD and DesignModeler input parameters are length and radius.
• Examples of analysis input parameters are pressure, material properties, materials, and sheet thickness.
• Examples of mesh parameters are relevance, number of prism layers, and mesh size on an entity.
Input parameters can be discrete or continuous. Each of these parameter types has a specific form.
Discrete parameters physically represent different configurations or states of the model. An example is
the number of holes in a geometry. Discrete parameters allow you to analyze different design variations
in the same parametric study without having to create multiple models for parallel parametric analysis.
For more information, see Defining Discrete Input Parameters (p. 278).
Continuous parameters physically vary in a continuous manner between some lower bound and upper
bound. Examples are a CAD dimension and load magnitude. Continuous parameters allow you to analyze
a continuous value within a defined range, with each parameter representing a direction in the design
and treated as a continuous function in Design of Experiments and Response Surface systems. For
a continuous parameter, you can impose additional limitations on the values within this range. For
more information, see Defining Continuous Input Parameters (p. 280).
If you disable an input parameter, its initial value, which becomes editable, is used for the design ex-
ploration study. If you change the initial value of a disabled input parameter during the study, all
dependent results are invalidated. A disabled input parameter can have a different initial value in each
DesignXplorer system. For more information, see Changing Input Parameters (p. 284).
Output Parameters
Output parameters either result from the geometry or are response outputs from the analysis. Examples
include volume, mass, frequency, stress, velocity, pressure, force, heat flux, and so on. Output parameters
can include derived parameters, which are calculated from output and input parameters using equations
that you provide. Derived parameters are created in the system and then passed into DesignXplorer as
output parameters.
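As a simple illustration of what such a user-supplied equation might look like, the sketch below computes a hypothetical cost from a mass output. The names, values, and formula are assumptions and do not represent DesignXplorer expression syntax.

# Illustrative sketch only; the names, values, and formula are hypothetical.
mass_kg = 2.35                 # output parameter returned by the simulation
cost_per_kg = 4.0              # user-supplied material cost
machining_surcharge = 12.0     # user-supplied manufacturing constant
derived_cost = mass_kg * cost_per_kg + machining_surcharge
print(derived_cost)            # derived output parameter value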
When editing the Parameter Set tab, all parameters for the project are listed in the Outline pane, under
Input Parameters and Output Parameters, depending on their nature.
The parameters are also grouped by system name to reflect the origin of the parameters and the
structure of the Project Schematic when working in parametric environments. Because parameters can
be manipulated from the component tabs for a Parameters cell and the Parameter Set bar, and also
in DesignXplorer tabs, the same tree structure is always used.
Tip:
You can edit the system name by right-clicking the system name in the Project
Schematic.
Design Points
A design point is defined by a snapshot of parameter values where output parameter values are calculated
directly by a project update. Design points are created by design exploration. For instance, they are
created when processing a DOE or correlation matrix or when refining a response surface.
It is also possible to insert a design point at the project level from an optimization candidate design to
perform a validation update. Output parameter values are not copied to the created design point because
they were calculated by design exploration and are, by definition, approximated. Actual output para-
meters are calculated from the design point input parameters when a project is updated.
You can also edit process settings for design point updates, including the order in which points are
updated and the location where the update occurs. When submitting design points for update, you
can specify whether the update is to run locally on your machine or be sent via RSM for remote processing.
For more information, see Working with Design Points (p. 286).
Response Points
A response point is defined by a snapshot of parameter values where output parameter values are
calculated in DesignXplorer from a response surface. As such, the parameter values are approximate
and calculated from response surfaces. You should verify the most promising designs by a solve in the
system using the same parameter values.
When editing a Response Surface cell, you can create new response points from either the Outline or
Table pane. You can also insert response points and design points from the Table pane or Chart pane
by right-clicking a table row or point on the chart and selecting an appropriate option from the context
menu. For instance, you can right-click a point in a Response chart and select the option for inserting
a new response point in this location.
You can duplicate a response point by right-clicking it in the Outline pane and selecting Duplicate.
You can also duplicate a response point using the drag-and-drop operation. An update of the response
point is attempted so that the duplication of an existing up-to-date response point results in a new up-
to-date response point.
Note:
• When you use the context menu to duplicate a chart that is a child of a response point, a
new chart is inserted under the same response point. However, when you use the drag-and-
drop operation, the duplicate is inserted under the other response point.
For more information, see Working with Response Surfaces (p. 110).
Workflow
To run a parametric analysis in DesignXplorer, you must:
Features in the geometry that are important to the analysis should be exposed as parameters.
These parameters can then be passed to DesignXplorer.
2. Drag a system analysis from the Toolbox and drop it in the Project Schematic, connecting it to the
DesignModeler or CAD file.
3. Double-click the Parameter Set bar and do the following for each input parameter:
b. In the Properties pane, set the limits for the input parameter.
4. In the Project Schematic, drag the DesignXplorer system that you want to insert and drop it below the
Parameter Set bar.
• For any DesignXplorer system, right-click the system header and select Duplicate. A new system of this
type is added to the Project Schematic under the Parameter Set bar. No data is shared with the original
system.
• For a DesignXplorer system with a Design of Experiments cell, click this cell and select Duplicate. A new
system of this type is added to the Project Schematic under the Parameter Set bar. No data is shared
with the original system.
• For a DesignXplorer system with a Response Surface cell, click this cell and select Duplicate. A new system
of this type is added to the Project Schematic under the Parameter Set bar. The DOE data is shared
with the original system.
• For a Direct Optimization, Response Surface Optimization, ROM Builder, or Six Sigma Analysis system,
click the system header and select Duplicate. A new system of this type is added to the Project Schem-
atic under the Parameter Set bar. The DOE and response surface data is shared with the original system.
Any cell in the duplicated DesignXplorer system that contains data that is not shared with the original
system is marked as Update Required.
When you duplicate a DesignXplorer system, definitions of your data (such as charts, response points,
and metric objects) are also duplicated. An update is required to calculate the results for the duplicated
data.
1. Specify design point update options in the Properties pane of the Parameter Set tab. These options
can vary from the global settings specified on the Solution Process tab in the Options dialog box.
2. For each cell in the design exploration system, double-click it to open the component tab and set up
any analysis options that are needed. Options can include parameter limits, optimization objectives or
constraints, optimization type, parameter distributions for Six Sigma Analysis, and more.
• From the component tab for the cell, right-click the root node in the Outline pane and select Update.
4. Make sure that you set up and solve each cell in a DesignXplorer system to complete the analysis for this
system.
Tip:
To update all systems in the entire project, either click Update Project on the toolbar
or right-click in any empty area of the Project Schematic and select Update Project.
5. View the results for each DesignXplorer system from the component tabs for its cells. Results are in the
form of tables, statistics, charts, and so on.
In the Project Schematic, cells display icons to indicate their states. If a cell is out-of-date and must
be updated, right-click the cell and select Update.
Progress Pane
You can open the Progress pane from the View menu or click Show Progress in the lower right
corner of the Workbench window. During execution of an update, the Status cell displays the com-
ponent currently being updated. The Details cell displays additional information about updating this
component. The Progress cell displays a progress bar.
Throughout the execution of the update, this pane continuously reports the progress. To stop the
update, you click the red stop button to the right of the progress bar. To restart a stopped
update at a later time, you use any of the methods for starting a new update.
Messages Pane
If solution errors exist and the Messages pane is not open, in the lower right of the window, the
button for showing messages flashes orange. Clicking this button, which indicates the number of
messages generated, opens the Messages pane so that you can see the solution errors. You can also
open the Messages pane from the View menu.
Using DesignXplorer Charts
All of the charts available for a cell are listed in the Toolbox for the component tab. When you update
a cell for the first time, one of each chart available for the cell is automatically inserted in the Outline
pane. For a Response Surface cell, a new response point is also automatically inserted. If a cell already
contains charts, the charts are replaced with the next update.
Note:
Charts are available for selection in the Outline pane. Most charts are created under Charts.
However, charts for a Response Surface cell are an exception. The Predicted vs. Observed
chart is inserted under Quality → Goodness of Fit. Other charts are inserted under respective
response points under Response Point.
• Drag a chart from the Toolbox and drop it on the Outline pane.
• In the Outline pane, right-click the cell under which to add the chart and select the option for inserting the
desired chart type.
• To create a second instance of a chart with default settings or to create a new Response Surface chart under
a different response point, drag the desired chart from the Toolbox and drop it on the parent cell.
• To create an exact copy of an existing chart, right-click the chart and select Duplicate.
For Response Surface charts, the Duplicate option on the context menu creates an exact copy of the
existing chart under the same response point. To create a fresh instance of a chart type under a different
response point, drag the existing chart and drop it on the new response point.
Chart duplication triggers a chart update. If the update succeeds, both the original chart and the duplicate
are up-to-date.
In the Chart pane, you can drag the mouse over various chart elements to view coordinates and other
element details.
You can change chart properties in the Properties pane. You can also right-click directly on the
chart to use context-menu options for performing various chart-related operations. The options available
depend on the chart type and state of the chart.
• To edit general chart properties, right-click a chart or chart element and select Edit Properties. For more
information, see Setting Chart Properties in the Workbench User's Guide.
• To add new points to your design, right-click a chart point. Depending on the chart type, you can select
from the following context menu options: Explore Response Surface at Point, Insert as Design Point,
Insert as Refinement Point, Insert as Verification Point, and Insert as Custom Candidate Point.
– Right-click a chart parameter and, depending on the parameter, select Disable <parameterID>, Disable
all Input Parameters but <parameterID>, or Disable all Output Parameters but <parameterID>.
– If at least one parameter is already disabled, you can right-click anywhere on the chart and select Reverse
Enable/Disable to enable all disabled parameters or vice versa.
For general information about working with charts, see Working with the Chart Pane in the Workbench
User's Guide.
Exporting ROMs
A 3D ROM system is based on a Design of Experiments (DOE) and its design points, which automate the
production of solution snapshots and the ROM itself. Once the ROM is produced, you can export it to a
ROMZ file, which can be consumed by anyone who has access to the ROM Viewer or Workbench. You can
also export the ROM to an FMU file, which can be consumed by anyone who has access to ANSYS Twin
Builder. For more information, see ROM Consumption (p. 222).
Design Exploration Options
2. In the tree, expand Design Exploration to see its three child tabs:
• Design of Experiments
• Response Surface
As you click the four DesignXplorer tabs, you might need to scroll to see all options. A grayed-out
section becomes available only if a previous option is changed to some specific value that enables
the section.
For descriptions of options on other tabs, see Workbench User Preferences in the Workbench User's
Guide.
3. To change the default value for an option, click directly in the field for the option. Some fields require you
to directly enter text while others require you to make selections from dropdown menus or select or clear
check boxes.
• Preserve Design Points After DX Run: If selected, design points created for a design exploration cell are
saved to the project's design points table once the solution completes. When this check box is selected, the
Retain Data for Each Preserved Design Point option is enabled. If that check box is also selected, data is retained
for each design point saved to the project's design points table. For more information, see Retaining Data
for Generated Design Points (p. 288) in the Workbench User's Guide.
• Retry All Failed Design Points: If selected, additional attempts are made to solve design points that failed
to update during the first run. When this check box is selected, the following options are enabled:
– Number of Retries: Number of times to try to update failed design points. The default is 5.
– Retry Delay (seconds): Number of seconds to elapse between tries. The default is 120.
Under Graph, the Chart Resolution option specifies the number of points for a continuous input
parameter axis in a 2D or 3D Response Surface chart. The range is from 2 to 100. The default is 25. In-
creasing the number of points enhances the chart resolution.
Under Sensitivity:
• Significance Level: Relative importance or significance to assume for input variables. The allowable range
is from 0.0 to 1.0, where 0 means that all input variables are assumed to be insignificant and 1.0 means that
all input variables are assumed to be significant. The default is 0.025.
• Correlation Coefficient Calculation Type: Calculation method for determining sensitivity correlation
coefficients. Choices are:
– Rank Order (Spearman): Evaluates correlation coefficients based on the rank of samples (default).
• Display Parameter Full Name: If selected, full parameter names display rather than short parameter names.
• Parameter Naming Convention: Naming style for input parameters within design exploration. Choices are:
– Taguchi Style: Names for parameters are continuous variables and noise variables.
– Uncertainty Management Style: Names for parameters are design variables and uncertainty variables
(default).
– Reliability Based Optimization Style: Names for parameters are design variables and random variables.
Under Messages, the Confirm if Min-Max Search can take a long time option specifies whether to
display an alert before performing a Min-Max search operation when there are discrete input parameters.
You might want to display such alerts because Min-Max searches can be time-consuming.
• Box-Behnken Design
For more information, see DOE Types (p. 72) and Working with Design Points (p. 286).
When Central Composite Design is selected, options under Central Composite Design Options are
enabled:
• Design Type: Method for improving the response surface fit for the DOE. Choices are:
– Face-Centered
– Rotatable
– VIF-Optimality
– G-Optimality
– Auto-Defined
For more information, see Central Composite Design (CCD) (p. 72) and Using a Central Composite
Design DOE (p. 82).
Note:
If you change the setting for Design Type here in the Options dialog box, new
design points are generated for a DOE that has not yet been solved.
• Enhanced Template: Specifies whether to use the enhanced template. This check box is enabled only
for Rotatable and Face-Centered design types.
When either Optimal Space-Filling Design or Latin Hypercube Sampling Design is the algorithm
selected, options under Latin Hypercube Sampling or Optimal Space-Filling are enabled:
• Design Type: Method for improving the response surface fit for the DOE. Choices are:
– Centered L2
– Maximum Entropy
• Max Number of Cycles: Maximum number of iterations that the base DOE is to undergo for the final
sample locations to conform to the chosen DOE type.
• Sample Type: Method for determining the number of samples. Choices are:
– CCD Samples: Number of samples is the same as that of a corresponding CCD design (default).
– Linear Model Samples: Number of samples is the same as that of a design of linear resolution.
– Pure Quadratic Model Samples: Number of samples is the same as that of a design of pure quad-
ratic resolution, which uses constant and quadratic terms.
– Full Quadratic Model Samples: Number of samples is the same as that of a design of full quadratic
resolution, which uses all constant, quadratic and linear terms.
– User Defined Samples: Number of DOE samples that you want to have generated.
For more information, see Optimal Space-Filling Design (OSF) (p. 73) and Latin Hypercube Sampling
Design (p. 76).
– Kriging
– Non-Parametric Regression
– Neural Network
For more information, see Response Surface Types (p. 89) and Using Response Surfaces (p. 89).
The algorithm that you select determines if subsequent categories are enabled.
• Color for Response Surface Based Output Values: Color in which to display output values that are
calculated from a response surface. While simulation output values that are calculated from a design
point update always display in black text, DesignXplorer applies the color that is selected here to response
surface-based output values in the Properties and Table panes for all cells and in the Results pane
for the Optimization component (specifically for the Candidate Points chart and Samples chart).
– Design points, derived parameters with no output parameter dependency, verified candidate points,
and all output parameters calculated in a Direct Optimization system are simulation-based. Con-
sequently, these output values display in black text.
– Response points, Min-Max search results, and candidate points in a Response Surface Optimization
system are based on response surfaces. Consequently, these output values display in the color specified
by this option.
In a Direct Optimization system, derived and direct output parameters are all calculated from
a simulation and so display in black text. In a Response Surface Optimization system, the
color used for derived values depends on the definition (expression) of the derived parameter.
If the expression of the parameter depends on at least one output parameter, either directly
or indirectly, the derived values are considered to be based on a response surface and so
display in the color specified by this option.
Under Kriging Options, the Kernel Variation Type option specifies the mode for correlation parameter
selection. This option is available only when Kriging is the algorithm selected. Choices are:
• Variable Kernel Variation: Radial basis function mode that uses one correlation parameter for each
design variable (default).
• Constant Kernel Variation: Pure Kriging mode that uses a single correlation parameter.
Under Neural Network Options, the Number of Cells option specifies the number of cells that the
neural network uses to control the quality of the response surface. This option is available only when
Neural Network is the algorithm selected. A higher value allows the neural network to better capture
parameter interactions. The recommended range is from 1 to 10. The default is 3.
Once a response surface is solved, it is possible to switch to another response surface type or change
the options for the current response surface in the Properties pane for the Response Surface cell.
Anytime that you change options in the Properties pane, you must update the response surface to
obtain the new fitting.
Under Weighted Latin Hypercube, Sampling Magnification specifies the number of times to reduce
regular Latin Hypercube samples while achieving a certain probability of failure (Pf). For example, the
lowest probability of failure for 1000 Latin Hypercube samples is approximately 1/1000. The default is
5. A magnification of 5 is meant to use 200 weighted/biased Latin Hypercube samples to approach the
lowest probability of 1/1000. You should not use a magnification greater than 5 because a significant
Pf error can occur due to highly biased samples.
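The arithmetic behind the magnification can be written out directly; the sketch below simply restates the 1000-sample example from the preceding paragraph and is not a DesignXplorer calculation.

# Restates the example above; not a DesignXplorer calculation.
regular_samples = 1000
lowest_pf = 1.0 / regular_samples              # ~1/1000 resolvable with regular LHS
magnification = 5                              # default Sampling Magnification
weighted_samples = regular_samples // magnification
print(weighted_samples, lowest_pf)             # 200 weighted samples targeting Pf ~ 0.001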
Under Optimization:
• Method Selection: Specifies whether Auto or Manual is the default for optimization method selection in
a newly inserted optimization system. To make performing optimizations easy for non-experts, Auto is the
default setting. For more information, see Using Goal-Driven Optimizations (p. 159).
• Constraint Handling: A constraint satisfaction filter on samples generated from a Screening, NLPQL, MOGA,
or Adaptive Single-Objective optimization that determines what candidates to display in the candidates
table. This option can be used for any optimization application and is especially useful for Screening samples
to detect the edges of solution feasibility for highly constrained nonlinear optimization problems. Choices
are:
– Relaxed: Treats the upper, lower, and equality constraints as objectives. A candidate point that violates
an objective is still considered feasible and so is shown in the table.
– Strict (default): Treats the upper, lower, and equality constraints as hard constraints. If any constraint is
violated, the candidate is not shown in the table. Depending on the extent of constraint violations, it is
possible that no candidate points are shown in the table.
• Tolerance Settings: For Direct Optimization and Response Surface Optimization systems, indicates
whether to display options in the Optimization cell for entering tolerance values for objectives and con-
straints. When this check box is selected (default) and the Solution Process Update property for the Para-
meter Set bar is set to Submit to Design Point Service (DPS), options also display in the Optimization
cell for entering initial values for objectives, which are sent to DPS when design points are updated. For
more information, see Tolerance Settings (p. 191).
For more information, see DesignXplorer Feature Creation and DesignXplorer APIs in the ACT Cus-
tomization Guide for DesignXplorer.
For more information, see Simulation Wizards in the ANSYS ACT Developer's Guide.
DesignXplorer Systems and Components
The topics in this section provide an introduction to using specific DesignXplorer systems and their in-
dividual components for your design exploration projects. A component is generally referred to as a
cell.
What is Design Exploration?
DesignXplorer Systems
DesignXplorer Components
After setting up your analysis, you can pick one of the systems under Design Exploration in the Toolbox
and then do any of the following:
• Parametrize your solution and view an interpolated response surface for the parameter ranges
• View the parameters associated with the minimum and maximum values of your outputs
• Create a correlation matrix that shows you the sensitivity of outputs to changes in your input parameters
• Set output objectives and see what input parameters meet these objectives
• Produce and consume ROMs for computationally inexpensive, near real-time analysis
DesignXplorer Systems
The following DesignXplorer systems are available if you have installed ANSYS DesignXplorer and have
an appropriate license:
Parameters Correlation System
Response Surface System
Goal-Driven Optimization Systems
3D ROM System
Six Sigma Analysis System
When a project has many input parameters (more than 10), building an accurate response surface
becomes an expensive process. By using a Parameters Correlation system, you can identify the most
significant input parameters and then disable those that are less significant when building the response
surface. With fewer input parameters, the response surface is more accurate and less expensive to
build.
The Parameters Correlation system contains a single cell: Parameters Correlation (p. 34)
For the deterministic method, response surfaces for all output parameters are generated in two steps:
• Solving the output parameters for all design points as defined by a DOE
• Fitting the output parameters as a function of the input parameters using regression analysis techniques
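For illustration only, the following Python sketch mimics these two steps outside of DesignXplorer. The solve_design_point function is a hypothetical stand-in for the real solver, and an ordinary least-squares quadratic fit stands in for the regression techniques that DesignXplorer actually uses.

import numpy as np

def solve_design_point(x):                 # hypothetical stand-in for the real solver
    return 3.0 * x[0] + x[1] ** 2 + 0.5 * x[0] * x[1]

# Step 1: solve the output parameter for every design point defined by the DOE.
doe_points = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0],
                       [0.0, 1.0], [0.0, 2.0], [1.0, 1.0]])
outputs = np.array([solve_design_point(p) for p in doe_points])

# Step 2: fit the outputs as a function of the inputs (full quadratic, least squares).
x1, x2 = doe_points[:, 0], doe_points[:, 1]
basis = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coeffs, *_ = np.linalg.lstsq(basis, outputs, rcond=None)

def response_surface(x):                   # cheap surrogate of the expensive solver
    return coeffs @ np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])

print(response_surface([0.5, 1.5]))        # interpolated output at a new point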
DesignXplorer offers two types of GDO systems: Response Surface Optimization and Direct Optim-
ization.
The Direct Optimization system contains a single cell: Optimization (p. 41)
3D ROM System
A 3D ROM (p. 47) system is used to produce a ROM (p. 219) from a series of simulations. As a stand-
alone digital object, a ROM offers a mathematical representation for computationally inexpensive,
near real-time analysis.
Using a Six Sigma Analysis system, you can determine the extent to which uncertainties in the
model affect the results of the analysis. An uncertainty (random quantity) is a parameter whose value
is impossible to determine at a given point in time (if it is time-dependent) or at a given location (if
it is location-dependent). An ambient temperature is an example. You cannot know precisely what
the temperature will be one week from now in a given city.
DesignXplorer Components
Design Exploration systems are made up of one or more components or cells. Double-clicking a cell
in the Project Schematic opens its component tab. Virtually all component tabs contain these four
panes: Outline, Properties, Table, and Chart.
The content in a pane depends on the object selected in the Outline pane. The following topics sum-
marize the component tabs for the various cells in Design Exploration systems:
Design of Experiments Component Reference
Parameters Correlation Component Reference
Response Surface Component Reference
Optimization Component Reference
3D ROM Component Reference
Six Sigma Analysis Component Reference
Note:
The DOE for a 3D ROM system or a Six Sigma Analysis system can share data only
with the DOE for another DesignXplorer system of the same type.
The Design of Experiments tab allows you to preview or generate design points. The Preview oper-
ation generates design points but does not solve them. The Update operation both generates and
solves design points. On the Design of Experiments tab, you can set input parameter limits and
properties for the DOE and view the design points table and several parameter charts. For more in-
formation, see:
The following panes in the Design of Experiments tab allow you to customize your DOE and view
the updated results.
Outline:
• Select output parameters and view their minimum and maximum values.
• Select charts to view available charts and change chart types and data properties. You can use the Toolbox
or context menu to insert as many charts as you want.
Properties:
• Preserve Design Points After DX Run: Specifies whether to retain design points at the project level each
time that the DOE is updated. If you select this check box, Retain Data for Each Preserved Design Point
is shown. If you also select this check box, in addition to saving the design points to the project's design
points table, data for each design point is saved. For more information, see Retaining Data for Generated
Design Points (p. 288) in the Workbench User's Guide.
• Number of Retries: Specify the number of times DesignXplorer is to try to update failed design points.
If the Retry All Failed Design Points check box is not selected on the Design Exploration tab in the
Options dialog box, the default is 0. However, you can specify the default number of retries for this spe-
cific project here. When the Number of Retries property is not set to 0, Retry Delay (seconds) specifies
how much time is to elapse between tries.
• Design of Experiments Type: Specifies the DOE type to use. Choices follow. For descriptions and specific
properties, see DOE Types (p. 72).
– Box-Behnken Design
– Custom
– Custom + Sampling
– External sampling methods as defined by the DOE extensions loaded to the project.
Table:
Displays the design points and input parameter data when previewing. On updating, the table also
displays output parameter data. You can add data points manually if Design of Experiments Type
is set to Custom.
Chart:
• Display Parameter Full Name: Indicate whether to show the full parameter name or the short
parameter name.
• Use the Enabled check boxes for the input parameters to enable or disable the display of parameter
axes on the chart.
• Click a line on the chart to display input and output values for this line in the Input Parameters
and Output Parameters sections of the Properties pane.
The chart displays only updated design points. If the DOE does not yet contain any updated design
points, output parameters are automatically disabled from the chart. The axes that are visible cor-
respond to the input parameters for the design points.
The Parameters Parallel chart supports interactive exploration of the DOE. When you place the
mouse cursor on the graph, sliders appear at the upper and lower bounds of each axis. You can
use the sliders to easily filter for each parameter. Design points that fall outside of the bounds
defined by the sliders are dynamically hidden.
You can also look at the data in this chart as a Spider chart. Right-click in an empty area of the
chart and select Edit Properties. Then, in the Properties pane for the chart, change Chart Type
to Spider. The Spider chart shows all input and output parameters arranged in a set of radial axes
spaced equally. Each design point is represented by a corresponding envelope defined in the radial
axes.
• Display Parameter Full Name: Indicate whether to show the full parameter name or the short
parameter name.
• X-Axis (Bottom), X-Axis (Top), Y-Axis (Left), Y-Axis (Right): Design points can be plotted for
either X-Axis (Bottom) or X-Axis (Top) against input and output parameters on any of the other
axes.
The following panes in the Parameters Correlation tab allow you to customize your search and view
the results.
Outline:
• Select Parameters Correlation to change properties and view the number of samples generated for
this correlation.
• Select output parameters and view their minimum and maximum values.
• Select charts to view available charts and change chart types and data properties.
Properties:
• Preserve Design Points After DX Run: Specifies whether to retain design points at the project level from
this parameters correlation. If this check box is selected, Retain Data for Each Preserved Design Point
is shown. If you also select this check box, in addition to saving the design points to the project's design
points table, data for each design point is saved. For more information, see Retaining Data for Generated
Design Points (p. 288) in the Workbench User's Guide.
• Number of Retries: Number of times DesignXplorer is to try to update failed design points. If the Retry
All Failed Design Points check box is not selected on the Design Exploration tab in the Options dialog
box, the default is 0. However, for a correlation that is not linked to a response surface, you can specify
the default number of retries for this specific project here. When Number of Retries is not set to 0, Retry
Delay (seconds) specifies how much time is to elapse between tries.
• Reuse the samples already generated: Specifies whether to reuse the samples generated in a previous
correlation.
• Correlation Type: Algorithm to use for the parameter correlation. Choices are:
– Spearman
– Pearson
• Number Of Samples: Maximum number of samples to generate for this correlation. This value must be
greater than the number of enabled input parameters.
• Auto Stop Type: Choices are Execute All Simulations and Enable Auto Stop. When Enable Auto Stop
is selected, set the additional options that are shown:
– Mean Value Accuracy: Desired accuracy for the mean value of the sample set.
– Standard Deviation Accuracy: Desired accuracy for the standard deviation of the sample set.
– Convergence Check Frequency: Number of simulations to execute before checking for convergence.
This value must be greater than the number of enabled input parameters.
– Size of Generated Sample Set: Read-only value indicating the number of samples generated for the
correlation solution.
• Correlation Filtering: Specifies whether to filter major input parameters based on correlation values.
• R2 Contribution Filtering: Specifies whether to filter major input parameters based on R2 contributions.
• Maximum Number of Major Inputs: Maximum number of input parameters selected as major input
parameters. By default, this number is the minimum of the current number of input parameters and 20
(where 20 is the recommended maximum number of input parameters for a response surface).
Table:
Displays both a correlation matrix and a determination matrix for the input and output parameters.
Chart:
You can create a Correlation Scatter chart for a given parameter combination by right-clicking the
associated cell in the Correlation Matrix chart and selecting Insert <x-axis> vs <y-axis> Correlation
Scatter.
To view the Correlation Scatter chart, in the Outline pane under Charts, select Correlation Scatter.
Use the Properties pane as follows:
• Enable or disable the display of the quadratic and linear trend lines.
• View the coefficient of determination and the equations for the quadratic and linear trend lines.
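As a point of reference only, the following NumPy sketch shows how linear and quadratic trend lines and their coefficients of determination can be computed for one parameter pair. The sample values are invented, and this is not the DesignXplorer implementation.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.2, 3.9, 9.1, 15.8, 25.3, 35.7])   # roughly quadratic sample data

def r_squared(coeffs, x, y):
    residual = y - np.polyval(coeffs, x)
    return 1.0 - residual.var() / y.var()

linear = np.polyfit(x, y, 1)
quadratic = np.polyfit(x, y, 2)
print("linear:    y = %.2f*x + %.2f, R2 = %.3f"
      % (linear[0], linear[1], r_squared(linear, x, y)))
print("quadratic: y = %.2f*x^2 + %.2f*x + %.2f, R2 = %.3f"
      % (quadratic[0], quadratic[1], quadratic[2], r_squared(quadratic, x, y)))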
To view the Correlation Matrix chart, in the Outline pane under Charts, select Correlation Matrix.
Use the Properties pane as follows:
To view the Determination Matrix chart, in the Outline pane under Charts, select Determination
Matrix. Use the Properties pane as follows:
Sensitivities Chart
The Sensitivities chart allows you to graphically view the global sensitivities of each output para-
meter with respect to the input parameters.
To view the Sensitivities chart, in the Outline pane under Charts, select Sensitivities. Use the
Properties pane as follows:
The full model R2 represents the variability of the output parameter that can be explained by a
linear (or quadratic) correlation between the input parameters and the output parameter.
The value of the bars corresponds to the linear (or quadratic) determination coefficient of each input
associated to the selected output.
To view the Determination Histogram chart, in the Outline pane under Charts, select Determination
Histogram. Use the Properties pane as follows:
• Threshold R2: Enables you to filter input parameters by hiding those with a determination coefficient
lower than the given threshold.
The following panes in the Response Surface tab allow you to customize your response surface and
view the results:
Outline:
• Select Response Surface to specify the Response Surface Type, change response surface properties,
and view the tolerances table (Genetic Aggregation only) or the response points table (all other response
surface types).
• Under Refinement, view the tolerances table (Genetic Aggregation only), Convergence Curves chart
(Genetic Aggregation only), and refinement points table.
• Under Quality, view the goodness of fit results, the verification points table, and the Predicted vs Observed
chart.
• Under Response Points, view the response points table and view the properties and available charts for
individual response points.
Properties:
• Preserve Design Points After DX Run: Select this check box if you want to retain at the project level
design points that are created when refinements are run for this response surface. If this property is set,
Retain Data for Each Preserved Design Point is available. If this check box is selected, in addition to
saving the design points to the project's design points table, data for each design point is saved. For more
information, see Retaining Data for Generated Design Points (p. 288).
Note:
Selecting this check box does not preserve any design points unless you run either a
manual refinement or one of the Kriging refinements because the response surface uses
the design points generated by the DOE. If the DOE of the response surface does not
preserve design points, when you perform a refinement, only the refinement points are
preserved at the project level. If the DOE is set to preserve design points and the response
surface is also set to preserve design points, when you perform a refinement, the project
contains the DOE design points and the refinement points.
• Number of Retries: Number of times DesignXplorer is to try to update the failed design points. If the
Retry All Failed Design Points option is not selected on the Design Exploration tab in the Options
dialog box, the default is 0. However, you can specify the default number of retries for this specific project
here. When Number of Retries is not set to 0, Retry Delay (seconds) specifies how much time is to elapse
between tries.
• Response Surface Type: The type of response surface. Choices follow. For descriptions, see Response
Surface Types (p. 89).
– Genetic Aggregation
– Kriging
– Non-Parametric Regression
– Neural Network
– Sparse Grid
• Refinement Type: Where applicable, select Manual to enter refinement points manually or Auto-Refine-
ment to automate the refinement process.
• Generate Verification Points: Specify the number of verification points to be generated. The default
value is 1. The results are included in the verification points table and the Goodness of Fit chart.
• Settings for selected response surface type, as applicable. For more information, see Response Surface
Types (p. 89).
Table: Depending on your selection in the Outline pane, displays one of the following tables:
• Min-Max Search
• Refinement Points
• Goodness of Fit
• Verification Points
• Response Points
Chart:
Displays the available charts for the response point selected in the Outline pane:
Response Chart
Local Sensitivity Charts
Spider Chart
Response Chart
The Response chart allows you to graphically view the effect that changing each input parameter
has on the displayed output parameter.
You can add response points to the response points table by right-clicking the Response chart and
selecting Explore Response Surface at Point, Insert as Design Point, Insert as Refinement Point
or Insert as Verification Point.
• Display Parameter Full Name: Specifies whether to display the full parameter name or the para-
meter ID on the chart.
• Chart Resolution Along X: Sets the number of points to display on the X axis response curve. The
default is 25.
• Chart Resolution Along Y: Sets the number of points to display on the Y axis response curve when
Mode is set to 3D. The default is 25.
• Number of Slices: Sets the number of slices when Mode is set to 2D Slices and there are continuous
input parameters.
• Show Design Points: Specifies whether to display design points on the chart.
• Choose the input parameters to display in either the first axis option or the first and second axis
options, depending on the chart mode.
• Use the sliders or drop-down menus to change the values of the input parameters that are not
displayed to see how they affect the values of displayed parameters. You can enter specific values
in the boxes above the sliders.
• View the interpolated output parameter values for the selected set of input parameter values.
• Display Parameter Full Name: Specifies whether to display the full parameter name or the para-
meter ID on the chart.
• Chart Mode: Set to Bar or Pie. This option is available only for the Local Sensitivity chart.
• Axes Range: Set to Use Min Max of the Output Parameter or Use Chart Data.
• Chart Resolution: Set the number of points per curve. The default is 25.
• Use the sliders to change the values of the input parameters to see how the sensitivity changes
for each output.
• View the interpolated output parameter values for the selected set of input parameter values.
Spider Chart
The Spider chart allows you to visualize the effect that changing the input parameters has on all
of the output parameters simultaneously. Use the Properties pane as follows:
• Display Parameter Full Name: Specifies whether to display the full parameter name or the para-
meter ID on the chart.
• Use the sliders to change the values of the input parameters to see how they affect the output
parameters.
• View the interpolated output parameter values for the selected set of input parameter values.
The following panes in the Optimization tab allow you to customize your GDO and view the results:
Outline: Allows you to select the following nodes and perform related actions in the tab:
• Optimization:
– Change optimization properties and view the size of the generated sample set.
– View an optimization summary with details on the study, method, and returned candidate points.
– View the Convergence Criteria chart for the optimization. For more information, see Using the Conver-
gence Criteria Chart (p. 199).
– Select an objective or constraint and view its properties, the calculated minimum and maximum values
of each of the outputs, and History chart. For more information, see History Chart (p. 45).
• Domain:
– Select an input parameter or parameter relationship to view and edit its properties or to see its History
chart. For more information, see History Chart (p. 45).
• Raw Optimization Data: For Direct Optimization systems, when an optimization update is finished,
DesignXplorer saves the design point data calculated during the optimization. You can access this data
by selecting Raw Optimization Data in the Outline pane.
Note:
The design point data is displayed without analysis or optimization results. The data
does not show feasibility, ratings, Pareto fronts, and so on.
• Convergence Criteria: View the Convergence Criteria chart and specify the criteria to display. For more
information, see Using the Convergence Criteria Chart (p. 199).
• Results:
Select one of the result types available to view results in the Charts pane and, in some cases, in
the Table pane. When a result is selected, you can change the data properties of its related chart
(X axis and Y axis parameters, parameters to display on the bar chart, and so on), and edit its table
data.
Properties: When Optimization is selected in the Outline pane, the Properties pane allows you to
specify:
• Method Name: Choices for methods follow. If optimization extensions are loaded to the project, you can
also choose an external optimizer.
– MOGA
– NLPQL
– MISQP
– Screening
– Adaptive Single-Objective
– Adaptive Multiple-Objective
• Relevant settings for the selected Method Name. Depending on the method of optimization, these can
include specifications for samples, sample sets, number of iterations, and allowable convergence or Pareto
percentages.
Table: Before the update, specify input parameter domain settings and objective and constraint set-
tings:
• Optimization Domain
– Set the Upper Bound and Lower Bound for each input parameter. For NLPQL and MISQP optimizations,
also set the Starting Value.
– Set Left Expression, Right Expression, and Operator for each parameter relationship.
For more information, see Defining the Optimization Domain (p. 184).
– For each parameter, you can define an objective, constraint, or both. Options vary according to para-
meter type.
– For a parameter with Objective Type set to Seek Target, you specify a target.
– For a parameter with Constraint Type set to Lower Bound <= Values <= Upper Bound, you use Lower
Bound and Upper Bound to specify the target range.
– For a parameter with an Objective or Constraint defined, you specify the relative Objective Importance
or Constraint Importance of that parameter in regard to the other objectives.
– For a parameter with a Constraint defined (such as output parameters, discrete parameters, or continu-
ous parameters with manufacturable values), you specify the Constraint Handling for that parameter.
For more information, see Defining Optimization Objectives and Constraints (p. 188).
During an update of a Direct Optimization system, if you select an objective, constraint, or input
parameter in the Outline pane, the Table pane shows all of the design points being calculated by
the optimization. For iterative optimization methods, the display is refreshed dynamically after each
iteration, allowing you to track the progress of the optimization by simultaneously viewing design
points in the Table pane, History charts in the Charts pane, and History chart sparklines in the Outline
pane. For the Screening optimization method, these objects are updated only after the optimization
has completed.
After the update, when you select Candidate Points under Results in the Outline pane, the Table
pane displays up to the maximum number of requested candidates generated by the optimization.
The number of gold stars or red crosses displayed next to each objective-driven parameter indicates
how well the parameter meets the stated objective, from three red crosses for the worst to three
gold stars for the best. The Table pane also allows you to add and edit your own candidate points,
view values of candidate point expressions, and see the percentage of variation for each
parameter for which an objective has been defined. For more information, see Working with Candidate
Points (p. 194).
Note:
Goal-driven parameter values with inequality constraints receive either three stars to
indicate that the constraint is met or three red crosses to indicate that the constraint
is not met.
You can verify predicted output values for each candidate. For more information, see Verifying Can-
didates by Design Point Update (p. 197).
Results:
The Convergence Criteria chart is the default optimization chart, so it displays in the Chart pane
unless you select another chart type. When Convergence Criteria is selected in the Outline pane,
the Properties pane displays the convergence criteria relevant to the selected optimization method
in read-only mode. Various generic chart properties can be changed for the Convergence Criteria
chart.
The chart remains available when the optimization update is complete. The legend shows the color-
coding for the convergence criteria.
• Using the Convergence Criteria Chart for Multiple-Objective Optimization (p. 199)
• Using the Convergence Criteria Chart for Single-Objective Optimization (p. 200)
History Chart
The History chart allows you to view the history of a single enabled objective, constraint, input
parameter, or parameter relationship during the update process. For iterative optimization methods,
the History chart is updated after each iteration. For the Screening optimization method, it is updated
only when the optimization is complete.
In the Outline pane, select an object under Objectives and Constraints or an input parameter or
parameter relationship under Domain. The Properties pane displays various properties for the se-
lected object. Various generic chart properties can be changed for both types of History chart.
In the Chart pane, the color-coded legend allows you to interpret the chart. In the Outline pane,
a sparkline graphic of the History chart is displayed next to each objective, constraint, and input
parameter object.
• Working with the History Chart in the Chart Pane (p. 202)
Properties: The following options are applied to results in both the Table and Chart panes.
• Display Parameter Relationships: Select to display parameter relationships in the candidate points
table.
• Display Parameter Full Name: Specifies whether to display the full parameter name or short parameter
name.
• Coloring Method: Specifies whether to color the results according to candidate type or source type.
• Show Starting Point: Select to show the starting point on the chart (NLPQL and MISQP only).
• Show Verified Candidates: Select to show verified candidates in the results (Response Surface Optim-
ization system only).
• Change various generic chart properties for the results in the Chart pane.
Tradeoff Chart
The Tradeoff chart allows you to view the Pareto fronts created from the samples generated in the
goal-driven optimization. In the Outline pane under Charts, select Tradeoff to display this chart
in the Chart pane. Use the Properties pane as follows:
• Number of Pareto Fronts to Show: Set the number of Pareto fronts that are displayed on the
chart.
• Show infeasible points: Enable or disable the display of infeasible points. This option is available
when constraints are defined.
• Click a point on the chart to display a Parameters section that shows the values of the input and
output parameters for this point.
• Change various generic chart properties for the results in the Chart pane.
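A Pareto front is the subset of samples that no other sample improves on in every objective at once. The following sketch illustrates the concept for two objectives that are both minimized; it is a simplified illustration, not the algorithm DesignXplorer uses.

def pareto_front(samples):
    # Keep a sample if no other sample is at least as good in all objectives
    # and strictly better in at least one (both objectives minimized).
    front = []
    for i, a in enumerate(samples):
        dominated = any(all(b[k] <= a[k] for k in range(len(a))) and
                        any(b[k] < a[k] for k in range(len(a)))
                        for j, b in enumerate(samples) if j != i)
        if not dominated:
            front.append(a)
    return front

samples = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 3.0), (5.0, 4.0)]
print(pareto_front(samples))   # [(1.0, 9.0), (2.0, 7.0), (4.0, 3.0)]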
Samples Chart
The Samples chart allows you to visually explore a sample set given defined objectives. In the
Outline pane under Charts, select Samples to display this chart in the Chart pane. Use the Prop-
erties pane as follows:
• Chart Mode: Set to Candidates or Pareto Fronts. When Pareto Fronts is selected, the following
options can be set:
– Number of Pareto Fronts to Show: Either enter the value or use the slider to select the number
of Pareto fronts to display.
• Show infeasible points: Enable or disable the display of infeasible points. This option is available
when constraints are defined.
• Click a line on the chart to display the values of the input and output parameters for this line in
the Parameters section. Use the Enabled check box to enable or disable the display of parameter
axes on the chart.
• Change various generic chart properties for the results in the Chart pane.
Sensitivities Chart
The Sensitivities chart allows you to graphically view the global sensitivities of each output para-
meter with respect to the input parameters. In the Outline pane under Charts, select Sensitivities
to display this chart in the Chart pane. Use the Properties pane as follows:
• Change various generic chart properties for the results in the Chart pane.
In the Outline pane for the Design of Experiments (3D ROM) cell, all input parameters are selected
for use by default. In the Properties pane for each input variable, you set lower and upper bounds.
Accepting the default values for all other properties is generally recommended.
When you perform an update, the design points for building the ROM are inserted in the Table
pane and their results are calculated. As each design point is updated, its results are saved to a
ROM snapshot file (ROMSNP).
ROM Builder
The ROM Builder cell provides for setting up the solver system for ROM production. While ROM
setup is specific to the ANSYS product, the workflow for producing a ROM is generic. When you
perform an update, the ROM is built. Once the update finishes, you can open the ROM in the ROM
Viewer and export the ROM.
Note:
Currently, you can set up and build a ROM only for a Fluent system. ROM production
examples (p. 222) are provided.
Before solving the DOE, you must set up input parameter options. The following panes for the
Design of Experiments cell contain objects that are unique to Six Sigma Analysis:
Outline:
• View the Skewness and Kurtosis properties for each input parameter distribution.
• View the calculated mean and standard deviation for all distribution types except Normal and Truncated
Normal where you can set those values.
• View the lower bound and upper bound for each parameter, which are used to generate the DOE.
Properties: When an input parameter is selected in the Outline pane, the Properties pane allows
you to set its properties:
• Distribution Type: Type of distribution associated with the input parameter. Choices follow. For more
information, see Distribution Functions (p. 373).
– Uniform
– Triangular
– Normal
– Truncated Normal
– Lognormal
– Exponential
– Beta
– Weibull
• Distribution Upper Bound (Uniform, Triangular, Truncated Normal, and Beta only)
Table: When an output parameter or chart is selected in the Outline pane, the Table pane displays
the design points table, which populates automatically during the solving of the points. When an
input parameter is selected in the Outline pane, the Table pane displays data for each of the
samples in the set:
• Quantile: Input parameter value point for the given PDF and CDF values.
• PDF: Probability Density Function of the input parameter along the X axis.
• CDF: Cumulative Distribution Function is the integration of PDF along the X axis.
Chart: When an input parameter is selected in the Outline pane, the Chart pane displays the
Probability Density Function and Cumulative Distribution Function for the distribution type chosen
for the input parameter. The Parameters Parallel chart and Design Points vs Parameter chart are
also available, just as in a standard DOE (p. 32).
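The relationship between the Quantile, PDF, and CDF columns can be illustrated with SciPy for a Normal distribution. The mean and standard deviation used here are arbitrary example values, not DesignXplorer defaults.

from scipy import stats

dist = stats.norm(loc=10.0, scale=2.0)   # example Normal input parameter
quantile = 12.0
print(dist.pdf(quantile))                # probability density at the quantile
print(dist.cdf(quantile))                # probability that the parameter <= quantile
print(dist.ppf(dist.cdf(quantile)))      # inverse CDF recovers the quantile: 12.0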
The following panes for the Six Sigma Analysis cell allow you to customize your analysis and view
the results:
Outline:
• Select each input parameter and view its distribution properties, statistics, upper and lower bounds,
initial value, and distribution chart.
• Select each output parameter and view its calculated maximum and minimum values, statistics
and distribution chart.
• Set the table display format for each parameter to Quantile-Percentile or Percentile-Quantile.
Properties:
• Sampling Type: Type of sampling for the Six Sigma Analysis. For more information, see Sample Gener-
ation (p. 376).
– WLHS: When selected, the Weighted Latin Hypercube Sampling technique is used.
For parameters, Probability Table specifies how to display analysis information in the Table pane:
• Quantile-Percentile
• Percentile-Quantile
• Probability: Probability that the parameter is less than or equal to the specified value.
• Sigma Level: Approximate number of standard deviations away from the sample mean for the
given sample value.
If Probability Table is set to Quantile-Percentile in the Properties pane, you can edit the para-
meter value and see the corresponding Probability and Sigma Level values. If Probability Table
is set to Percentile-Quantile, the columns are reversed. You can then enter a Probability or Sigma
Level value and see the corresponding changes in the other columns.
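Assuming the sigma level is read on a standard normal scale (an assumption made for this sketch only), the conversion between Probability and Sigma Level values can be illustrated with SciPy as follows.

from scipy.stats import norm

probability = 0.99865
sigma_level = norm.ppf(probability)   # roughly 3 standard deviations above the mean
print(sigma_level)
print(norm.cdf(3.0))                  # back to a probability of about 0.99865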
Chart: When a parameter is selected in the Outline pane, the Chart pane displays the Probability
Density Function and Cumulative Distribution Function. A global Sensitivities chart is available in
the Outline pane.
• Change various generic chart properties for the results in the Chart pane.
For more information, see Working with Sensitivities (p. 298) and Statistical Sensitivities in a
SSA (p. 379).
Using Parameters Correlations
The application of goal-driven optimization and Six Sigma Analysis (SSA) in a finite element-based
framework is always a challenge in terms of solving time, especially when the finite element model is
large. For example, hundreds or thousands of finite element simulation runs in SSA are not uncommon.
If one simulation run takes hours to complete, it is almost impractical to perform SSA at all with thousands
or even hundreds of simulations.
In a DOE, sampling points increase dramatically as the number of input parameters increases. For example,
a total of 149 sampling points (finite element evaluations) are needed for 10 input variables using
Central Composite Design with fractional factorial design. As the number of input variables increases,
the analysis becomes more and more intractable. In this case, one would like to exclude unimportant
input parameters from the DOE sampling to reduce unnecessary sampling points. A correlation matrix
is a tool to help identify which input parameters are unimportant and can therefore be treated as
deterministic parameters in SSA.
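As a rough illustration of this growth, the following sketch reproduces the 149-point figure quoted above for a Central Composite Design whose factorial portion is reduced to a 2^(k-3) fraction for k = 10 inputs. The exact fraction chosen per number of inputs is an assumption of this sketch, not a statement about DesignXplorer's internal rules.

# CCD point count: fractional factorial corners + 2k axis points + 1 center point.
def ccd_points(k, fraction=0):
    return 2 ** (k - fraction) + 2 * k + 1

print(ccd_points(10, fraction=3))   # 128 + 20 + 1 = 149 sampling points
print(ccd_points(5))                # full factorial CCD for 5 inputs: 43 points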
• Determine which input parameters have the most (and the least) effect on your design.
A Parameters Correlation system also provides a variety of charts to assist in your assessment of
parametric effects. For more information, see Working with Parameters Correlation Charts (p. 63).
1. In the Project Schematic, drag a Parameters Correlation system from under Design Exploration in the
Toolbox and drop it under the Parameter Set bar.
4. If you want to review the samples to be calculated before generating them, in the Outline pane, right-click
Parameters Correlation and select Preview.
5. To generate the samples, in the Outline pane, right-click Parameters Correlation and select Update.
6. When the update finishes, use the filtered results in the Table pane to find the most relevant inputs for a
selected output. For more information, see Reviewing Filtered Correlation Data (p. 58).
7. Use the various charts in the Outline pane to examine the results. For more information, see Using
DesignXplorer Charts (p. 20).
Correlation Type
Select the Spearman or Pearson correlation type. The default value is determined by the Correlation
Coefficient Calculation Type option in Tools → Options.
• Pearson: Select this option to correlate linear relationships. This correlation method uses actual data
to evaluate the correlation and bases correlation coefficients on sample values.
For more information on these correlation types, see Sample Generation (p. 57).
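The practical difference between the two correlation types can be illustrated with SciPy on a monotonic but nonlinear relationship. This is a generic statistics example, not DesignXplorer code.

import numpy as np
from scipy import stats

x = np.linspace(-2.0, 2.0, 50)
y = x ** 3                          # monotonic, but not linear

print(stats.pearsonr(x, y)[0])      # below 1.0: the relationship is not linear
print(stats.spearmanr(x, y)[0])     # 1.0: the relationship is perfectly monotonic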
Number Of Samples
Specify the maximum number of samples to be generated in the correlation sample set. The default is
100. This value must be greater than the number of enabled input parameters.
• Execute All Simulations: DesignXplorer updates the number of design points specified by the
Number of Samples property.
• Enable Auto Stop: The number of samples required to calculate the correlation is determined according
to the convergence of the mean and standard deviation of the output parameters. At each iteration,
the mean and standard deviation convergences are checked against the level of accuracy specified
by the Mean Value Accuracy and Standard Deviation Accuracy properties. For more information,
see Correlation Convergence Process (p. 58).
By default, all output parameters are taken into account for the filtering process. You can change
these filtering options before updating the correlation study. You can also change the filtering options
after the update has been completed. When you change options, a new design point update is not
necessary. An Update operation updates only the display to show the results sorted according to
your selected criteria.
To indicate that an output parameter is not to be considered in the filtering process, you can use
either of these methods to select the Ignore for Filtering check box:
• In the Outline pane, select the output. Then, in the Properties pane, select Ignore for Filtering.
• In the Outline pane, select one or more outputs, right-click, and select Ignore for Filtering.
To indicate that an output parameter is to be considered in the filtering process, in the Outline
pane, select one or more outputs, right-click, and select Use for Filtering.
In the Outline pane, your filtering outputs are indicated by a funnel icon. When Parameters Cor-
relation is selected in the Outline pane, your filtering parameters are also listed in the summary
report in the Table pane.
To specify your filtering criteria, select Parameters Correlation in the Outline pane. Then, in the
Properties pane under Filtering Method, set properties:
• Relevance Threshold: Determines how strictly inputs are filtered for inclusion in the major input category.
To estimate the relevance of parameter relationships, a metric is computed for any input-output pair.
If one of the metrics for a given input parameter exceeds the value set for Relevance Threshold, that
parameter is categorized as a major input. Otherwise, it is categorized as a minor input.
The default value is 0.5, with possible values ranging from 0 to 1. A value of 1 applies the
strictest filter, and a value of 0 the most relaxed. For example, if there are 10 inputs and Relevance
Threshold is set to 0, all 10 of the inputs are categorized as major inputs. If Relevance
Threshold is changed to 1, a number of the inputs are filtered out and categorized instead as
minor inputs.
For more information on how Relevance Threshold filters inputs, see Parameters Correlation
Filtering Theory (p. 309).
• Correlation Filtering: This is based on Pearson's, Spearman's, and quadratic correlation. For each input-
output pair, both correlation complexity and the number of samples are used to determine relevance.
The greater the number of samples, the greater the probability that the correlation detected is valid. If
the relevance is greater than the Relevance Threshold value, the input is categorized as a major input.
This property is enabled by default.
• R2 Contribution Filtering: Computes a reduced model and evaluates the R2 contribution of each input
parameter on the filtering output. The R2 contribution is then used to calculate relevance. If the relevance
is greater than the Relevance Threshold value, the input is categorized as a major input. This property
is enabled by default.
• Maximum Number of Major Inputs: Maximum number of input parameters that can be categorized
as major inputs.
The maximum value is equal to the number of enabled input parameters. The minimum value
is 1. The default is 20 if the number of input parameters is greater than or equal to 20; otherwise,
the default is the number of input parameters. Twenty is the recommended maximum number of
inputs for a response surface.
Once you have set your filtering criteria, update the Parameters Correlation cell to filter the results.
If design points have already been updated, they are not updated again; the update only refreshes
the results according to the new filter. This allows you to change your filter settings and review multiple
scenarios without having to run a new design point update each time.
Note:
When you select only one filtering method (Correlation Filtering or R2 Contribution
Filtering), the relevance threshold can change the list of major input parameters but
not their best relevance values.
When you select both filtering methods, Correlation Filtering is applied first, which
retains a list of major input parameters based on the Relevance Threshold value. The
R2 Contribution Filtering method is then applied, based on the list of major input
parameters retained by the Correlation Filtering method. This can cause the best relev-
ance values displayed in the tables of major and minor input parameters to change
slightly when you change the Relevance Threshold value with both filtering methods
enabled.
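The following Python sketch summarizes the filtering logic described above, using invented relevance metrics for five inputs. It is a simplified illustration of the Relevance Threshold and Maximum Number of Major Inputs behavior, not the DesignXplorer algorithm.

# Hypothetical best relevance metric per input over all filtering outputs.
relevance = {"P1": 0.92, "P2": 0.35, "P3": 0.71, "P4": 0.08, "P5": 0.55}
relevance_threshold = 0.5
max_major_inputs = 20

major = sorted((name for name, r in relevance.items() if r > relevance_threshold),
               key=lambda name: relevance[name], reverse=True)[:max_major_inputs]
minor = [name for name in relevance if name not in major]
print(major)   # ['P1', 'P3', 'P5']
print(minor)   # ['P2', 'P4']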
Sample Generation
Two correlation methods are available: Pearson's and Spearman's:
• Spearman: Recognizes monotonic relationships, which are less restrictive than linear ones. In a
monotonic relationship, one of the following occurs:
– As the value of one variable increases, the value of the other variable also increases.
– As the value of one variable increases, the value of the other variable decreases.
During the update of a Parameters Correlation cell, you can monitor progress in the Progress pane.
The table of samples refreshes automatically as results are returned to DesignXplorer.
By clicking the red stop button to the right of the progress bar, you can interrupt the update. If
enough samples are calculated, partial correlation results are generated. You can see the results in
the Table pane of the component tab by selecting a chart object in the Outline pane.
To restart an interrupted update, you either right-click Parameters Correlation in the Outline pane
and select Update or click Update on the toolbar. The update restarts where it was interrupted.
1. The convergence status is checked each time the number of points specified for Convergence Check
Frequency has been updated.
• The mean and the standard deviation are calculated based on all up-to-date design points available
at this step.
• The mean is compared with the mean at the previous step. It is considered to be stable if the difference
is smaller than 1% by default (Mean Value Accuracy = 0.01).
• The standard deviation is compared with the standard deviation at the previous step. It is considered
to be stable if the difference is smaller than 2% by default (Standard Deviation Accuracy
= 0.02).
3. If the mean and standard deviation are stable for all output parameters, the correlation is converged.
The convergence status is indicated by the Converged property. When the process is converged,
Converged is set to Yes and any remaining unsolved samples are automatically removed.
If the process has stopped because the value for Number of Samples is reached before convergence,
Converged is set to No.
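The following sketch illustrates the auto-stop idea with the default accuracies (1% for the mean, 2% for the standard deviation). The batch size of 30 stands in for Convergence Check Frequency, and the randomly generated output values are purely illustrative; this is not the DesignXplorer implementation.

import numpy as np

def is_converged(prev_stats, values, mean_acc=0.01, std_acc=0.02):
    mean, std = float(np.mean(values)), float(np.std(values))
    if prev_stats is None:
        return False, (mean, std)
    prev_mean, prev_std = prev_stats
    mean_ok = abs(mean - prev_mean) <= mean_acc * abs(prev_mean)
    std_ok = abs(std - prev_std) <= std_acc * abs(prev_std)
    return mean_ok and std_ok, (mean, std)

rng = np.random.default_rng(0)
samples, prev, converged = [], None, False
for check in range(20):                        # at most 20 convergence checks
    samples.extend(rng.normal(5.0, 1.0, 30))   # 30 = assumed check frequency
    converged, prev = is_converged(prev, samples)
    if converged:
        break
print("Converged:", converged, "after", len(samples), "samples")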
Under Filtering Method, you see the relevance threshold, configuration, and filtering output para-
meters used to filter the correlation data.
Under Major Input Parameters and Minor Input Parameters, you see inputs sorted in descending
order according to their relevance to the output indicated. This output, shown in the Output Para-
meter column, is the output for which the input has the most relevance.
• The R2 Contribution and Correlation Value columns reference the relationship for that input-output
pair.
• The R2 Contribution is the difference between two R2 coefficients of quadratic regressions (with and
without this input). For more information, see R2 Contribution (p. 311).
• Correlation Value corresponds to the most relevant correlation among Pearson, Spearman, or
Quadratic correlations. For Pearson or Spearman, the value can have a (+/-) sign. For Quadratic, the
value is positive. For more information, see Relevance of the Correlation Value (p. 309).
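The idea behind an R2 contribution can be sketched as follows. For brevity, the sketch uses a linear rather than quadratic regression and synthetic data, so it only approximates the quantity that DesignXplorer reports.

import numpy as np

def r2_of_fit(X, y):
    basis = np.column_stack([np.ones(len(y)), X])
    coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
    residual = y - basis @ coeffs
    return 1.0 - residual.var() / y.var()

rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 3))                       # three input parameters
y = 4.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.05, 40)

full = r2_of_fit(X, y)
without_first = r2_of_fit(np.delete(X, 0, axis=1), y)
print("R2 contribution of the first input:", full - without_first)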
Note:
Unlike the Correlation Matrix chart, the Determination Matrix chart is not symmetric. The Determination
Matrix chart displays the R2 for each parameter pair. To view this chart, select it in the Outline pane.
Quadratic determination data is shown in both the Table and Chart panes.
In addition, quadratic information is shown in the Correlation Scatter chart and general parameters
correlation table. The Correlation Scatter chart displays both the quadratic trend line and the linear
trend line equation for the selected parameter pair. The general parameters correlation table shows
quadratic data, linear data, and correlation design points.
If you select Sensitivities in the Outline pane for the Parameters Correlation cell, you can review the
sensitivities derived from the samples generated for the correlation. The correlation sensitivities are
global sensitivities. In the Properties pane for the Sensitivities chart, you can choose the output
parameters for which you want to review sensitivities and the input parameters that you would like
to evaluate for the output parameters.
On the Design Exploration tab of the Options dialog box, the default setting for Significance Level
is 0.025. Parameters whose sensitivity does not pass the significance test at this level are shown with a
flat line on the Sensitivities chart. The value displayed for these parameters when you place the mouse cursor over
them on the chart is 0.
To view the actual correlation value of the insignificant parameter pair, you select Correlation Matrix
in the Outline pane and then place the mouse cursor over the square for that pair in the matrix. Or,
you can set Significance Level to 1 in the Options dialog box, which bypasses the significance test
and displays all input parameters on the Sensitivities chart with their actual correlation values.
In addition, linear information is shown in the Correlation Scatter chart and the general parameters
correlation table. The Correlation Scatter chart displays both the quadratic trend line and the linear
trend line equation for the selected parameter pair. The general parameters correlation table shows
quadratic data, linear data, and correlation design points.
If you select one or more design points and right-click, the context menu offers many of the same
options available for the project's design points table for the Parameter Set bar.
• In the Outline pane, click the locked icon to the right of the Design Points node.
• In the Properties pane, set Sampling Type from Auto (default) to Custom.
• Changes the icon to the right of the Design Points node from a locked icon to an unlocked icon.
If you place the mouse cursor over the unlocked icon, the tooltip indicates that the sample set
is customized.
• Displays a warning icon in the Message column for the top Parameters Correlation node and
the Design Points node. If you click the icon, the message indicates that the sampling is custom and
that results might be inaccurate because the distribution of the design points might not be optimal.
• Displays an information icon to the left of the Parameters Correlation node if there are not
enough samples to update the correlation. If you click the icon, the message indicates that you must
add more design points in the table or set Sampling Type back to Auto.
• Sets Auto Stop Type to Execute All Simulations and makes this property read-only.
• Hides Number of Samples because the read-only property Size of Generated Sample Set already
displays the size of the sample set.
• Makes input parameter values editable. As you enter or edit a value, DesignXplorer validates the
value against the ranges for the parameter.
• Allows you to set all output parameter values as editable if the custom correlation is not linked to a
Response Surface cell. For more information, see Editable Output Parameter Values (p. 300). For a
response surface-based correlation, values for the output parameters are calculated by evaluating
the response surface.
• Allows you to import design points from a CSV file, just like you do for a custom DOE. For more in-
formation, see Importing Design Points from a CSV File (p. 86).
– For a standalone Parameters Correlation system, DesignXplorer imports values for input paramet-
ers. During parsing and validation (p. 86) of the design point data, DesignXplorer might ask
whether you want to adjust parameter ranges. If values for output parameters exist in the CSV file,
DesignXplorer also imports them.
– For a response surface-based Parameters Correlation system, DesignXplorer imports values for
input parameters if the design point is within the parametric space. DesignXplorer never imports
values for output parameters.
• Allows you to copy all or selected design points from the Parameter Set into the custom correlation,
just like you do for a custom DOE. For more information, see Copying Design Points (p. 86).
– For a standalone Parameters Correlation system, DesignXplorer copies input parameter values.
During parsing and validation (p. 86) of the design point data, DesignXplorer might ask whether
you want to adjust parameter ranges. If the design points are up-to-date, DesignXplorer also copies
the values for output parameters.
– For a response surface-based Parameters Correlation system, DesignXplorer copies input para-
meter values if the design point is within the parametric space. DesignXplorer never copies values
for output parameters.
Note:
• If you preview design points before unlocking the design points table, DesignXplorer
keeps the design points from the preview. You can edit or delete these design points
and add new ones.
• If you edit values for output parameters, you can clear the edited values. DesignXplorer
then marks these design points as out-of-date.
Working with Parameters Correlation Charts
When a Parameters Correlation system is updated, one instance of each chart is added to the project.
In the Outline pane for the Parameters Correlation cell, you can select each object under Charts to
view that chart in the Charts pane.
To add a new instance of a chart, double-click it in the Toolbox. The chart is added as the last entry
under Charts.
To add a Correlation Scatter chart for a particular parameter combination, right-click the associated cell
in the Correlation Matrix chart and select Insert <x-axis> vs <y-axis> Correlation Scatter.
Color-coding of the cells indicates the strength of the correlation. The correlation value is displayed
when you place the mouse cursor over a cell. The closer the absolute correlation value is to 1, the
stronger the relationship. A value of 1 indicates a positive correlation, which means that when the
first parameter increases, the second parameter increases as well. A value of −1 indicates a negative
correlation, which means that when the first parameter increases, the second parameter decreases.
When you run a Pearson's correlation, the square of the correlation value corresponds to the R2 of
the linear fitting between the pair of parameters. When you run a Spearman's correlation, the square of
the correlation value corresponds to the R2 of the linear fitting between the rank values of the pair of
parameters.
For more information on these correlation types, see Sample Generation (p. 57).
In the following parameters correlation table, input parameter P13-PIPE_Thickness is a major input
with a strong effect on the design.
On the other hand, input parameter P12-Hoop Dist is not important to the study because it has
little effect on the outputs. In this case, you might want to disable P12-Hoop Dist by clearing the
Enabled check box in the Properties pane. When the input is disabled, the chart changes accordingly.
To disable parameters, you can also right-click a cell corresponding to that parameter and select the
desired option from the context menu. You can disable the selected input, disable the selected output,
disable all other inputs, or disable all other outputs.
To generate a Correlation Scatter chart for a given parameter combination, right-click the corresponding
cell in the correlation matrix chart and select Insert <x-axis> vs <y-axis> Correlation Scatter.
If desired, you can export the correlation matrix data to a CSV file by selecting the Export Chart Data
as CSV context option. For more information, see Exporting Design Point Parameter Values to a
Comma-Separated Values File in the Workbench User's Guide.
The Correlation Scatter chart shows the degree of linear and quadratic correlation between a pair of
parameters via a graphical presentation of the linear and quadratic trend lines.
Note:
You can create a Correlation Scatter chart for a given parameter combination by right-clicking the
corresponding cell in the correlation matrix chart and selecting Insert <x-axis> vs <y-axis> Correlation
Scatter.
In this example, under Trend Lines in the Properties pane, the Linear and Quadratic properties display the trend line equations and their R2 values.
Because both the Linear and Quadratic properties are enabled in this example:
• The equations for the linear and quadratic trend lines are shown in the chart legend.
• The linear and quadratic trend lines are each represented by a separate line on the chart. The closer
the samples lie to the curve, the closer the coefficient of determination is to the optimum value of
1.
When you export the Correlation Scatter chart data to a CSV file or generate a report, the trend line
equations are included in the export and are shown in the CSV file or Workbench project report. For
more information, see Exporting Design Point Parameter Values to a Comma-Separated Values File
in the Workbench User's Guide.
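As a hedged illustration of the trend lines and the coefficient of determination described above, the following Python sketch fits linear and quadratic trend lines to made-up scatter samples with NumPy and computes the R2 value for each (the data and helper names are illustrative, not part of DesignXplorer):

```python
import numpy as np

# Hypothetical Correlation Scatter samples for a parameter pair
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 30)
y = 0.5 * x**2 - 2.0 * x + rng.normal(0.0, 2.0, x.size)

def r_squared(y_true, y_pred):
    """Coefficient of determination R2 of a fitted trend line."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

linear = np.polyfit(x, y, 1)       # linear trend line coefficients
quadratic = np.polyfit(x, y, 2)    # quadratic trend line coefficients

print("linear    R2 =", r_squared(y, np.polyval(linear, x)))
print("quadratic R2 =", r_squared(y, np.polyval(quadratic, x)))
```

The closer the samples lie to a trend line, the closer its R2 value is to 1.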
Determination Matrix Chart
Color-coding of the cells indicates the strength of the correlation (R2). The R2 value is displayed when
you place the mouse cursor over a cell. The closer the R2 value is to 1, the stronger the relationship.
In the following determination matrix, input parameter P5–Tensile Yield Strength is a major input
because it drives all the outputs.
You can disable inputs that have little effect on the outputs. To disable a parameter in the chart:
• In the Properties pane, clear the Enabled check box for the parameter.
• Right-click a cell corresponding to the parameter and select an option from the context menu. You can
disable the selected input, disable the selected output, disable all other inputs, or disable all other outputs.
You can also select the Export Chart Data as CSV context option to export the correlation matrix
data to a CSV file. For more information, see Exporting Design Point Parameter Values to a Comma-
Separated Values File in the Workbench User's Guide.
Determination Histogram Chart
When you view a Determination Histogram chart, you should also check the Full Model R2 (%) value
to see how well output variations are explained by input variations. This value represents the variab-
ility of the output parameter that can be explained by a linear or quadratic correlation between the
input parameters and the output parameter. The closer this value is to 100%, the more certain it is
that output variations result from the inputs. The lower the value, the more likely it is that other factors, such as noise, mesh error, or an insufficient number of points, are causing the output variations.
In the following figure, you can see that input parameters P3–LENGTH, P2–HEIGHT, and P4–FORCE
all affect output P8–DISPLACEMENT. You can also see that of the three inputs, P3–LENGTH has by
far the greatest effect. The value for the linear determination is 96.2%.
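The Full Model R2 (%) value can be thought of as the R2 of a regression of the output on the inputs, expressed as a percentage. The following Python sketch, using made-up samples for three inputs and one output (the names and data are illustrative only, not taken from DesignXplorer), computes such a value for a linear full model:

```python
import numpy as np

# Hypothetical DOE samples: three inputs, one output, plus a little noise
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, (50, 3))
y = 4.0 * X[:, 0] + 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0.0, 0.3, 50)

# Linear full model: y ~ b0 + b1*x1 + b2*x2 + b3*x3
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
full_model_r2 = 100.0 * (1.0 - ss_res / ss_tot)
print(f"Full Model R2 (%) ~ {full_model_r2:.1f}")  # close to 100% => inputs explain the output
```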
To view the chart for a quadratic determination, in the Properties pane, set Determination Type to Quadratic. With a quadratic determination type, input P5–YOUNG could also have a slight effect on P8–DISPLACEMENT. For example, Full Model R2 (%) could improve slightly, perhaps to 97.436%.
You can filter your inputs to keep only the most important parameters by selecting and clearing their
check boxes in the Outline pane.
Sensitivities Chart
Generally, the effect of an input parameter on an output parameter is driven by the following two
things:
• The amount by which the output parameter varies across the variation range of an input parameter.
• The variation range of an input parameter. Typically, the wider the variation range, the larger the effect
of the input parameter.
The statistical sensitivities are based on the Spearman rank order correlation coefficients, which sim-
ultaneously take both aspects into account.
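As a minimal illustration of Spearman rank order sensitivities, assuming made-up samples in which one input has both a larger effect and a wider variation range, the following Python sketch computes the Spearman coefficient of each input with the output:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical samples: wide-range input x1 dominates, narrow-range x2 barely matters
x1 = rng.uniform(0.0, 100.0, 200)
x2 = rng.uniform(0.0, 1.0, 200)
y = 0.05 * x1 + 0.5 * x2 + rng.normal(0.0, 0.2, 200)

for name, x in (("x1", x1), ("x2", x2)):
    rho, _ = stats.spearmanr(x, y)
    print(f"Spearman sensitivity of {name}: {rho:+.2f}")
# x1 shows the larger coefficient: both its effect per unit change and its
# wider variation range contribute to the statistical sensitivity.
```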
Using Design of Experiments
Design of Experiments (DOE) is a technique used to scientifically determine the location of sampling
points. It is included as part of response surface, goal-driven optimization, and other analysis systems.
A wide range of DOE algorithms and methods is available in the engineering literature. However, they all share common characteristics: they try to locate sampling points such that the space of random input parameters is explored in the most efficient way, or to obtain the required information with a minimum number of sampling points.
Sample points in efficient locations not only reduce the required number of sampling points but also
increase the accuracy of the response surface that is derived from the results of the sampling points.
By default, the deterministic method uses Central Composite Design, which combines one center point,
points along the axis of the input parameters, and the points determined by a fractional factorial design.
For more information, see DOE Types (p. 72).
Once you set up your input parameters, you update the DOE, which submits the generated design
points to the analysis system to determine a solution. Design points are solved simultaneously if the
analysis system is set up to do so. Otherwise, design points are solved sequentially. After the solution
is complete, you update the Response Surface cell, which generates response surfaces for each output
parameter based on the data in the generated design points.
Note:
If you change the DOE type after doing an initial analysis and preview the design points table, any
design points generated for the new algorithm that are the same as design points solved for a previous
algorithm appear as up-to-date. Only the design points that are different from any previously submitted
design points need to be solved.
You should set properties for your DOE before generating your design points table. The following topics
describe setting up and solving a DOE:
Setting Up the DOE
DOE Types
Number of Input Parameters for DOE Types
Comparison of LHS and OSF DOE Types
Using a Central Composite Design DOE
Upper and Lower Locations of DOE Points
DOE Matrix Generation
Exporting and Importing Design Points
Copying Design Points
Setting Up the DOE
3. In the Properties pane under Design of Experiments, make a selection for Design of Experiments Type.
For descriptions, see DOE Types (p. 72).
4. Specify additional properties for the DOE. The selection for Design of Experiments Type determines what
properties are available.
DOE Types
In the Properties pane for the Design of Experiments cell, Design of Experiments Type specifies the
algorithm or method for locating sampling points. DesignXplorer supports several DOE types:
Central Composite Design (CCD)
Optimal Space-Filling Design (OSF)
Box-Behnken Design
Custom
Custom + Sampling
Sparse Grid Initialization
Latin Hypercube Sampling Design
External Design of Experiments
Central Composite Design (CCD)
• Design Type: Design type to use for CCD to improve the response surface fit for the DOE. For each design type, the alpha value is defined as the location of the sampling point that accounts for all quadratic main effects. The following CCD design types are available:
– Face-Centered: A three-level design with no rotatability. The alpha value equals 1.0. A Template Type
setting automatically appears.
– Rotatable: A five-level design that includes rotatability. The alpha value is calculated based on the
number of input variables and a fraction of the factorial part. A design with rotatability has the same
variance of the fitted value regardless of the direction from the center point. A Template Type setting
automatically appears.
– VIF-Optimality: A five-level design in which the alpha value is calculated by minimizing a measure of
non-orthogonality known as the Variance Inflation Factor (VIF). The more highly correlated the input
variable with one or more terms in a regression model, the higher the VIF.
– G-Optimality: Minimizes a measure of the expected error in a prediction and minimizes the largest
expected variance of prediction over the region of interest.
– Auto-Defined: Design exploration automatically selects the design type based on the number of input variables. This option is recommended for most cases because it automatically uses G-Optimality when the number of input variables is five and VIF-Optimality otherwise. However, you can select Rotatable as the design type if the default option does not provide good values for the goodness of fit from the response surface plots.
For more information, see Using a Central Composite Design DOE (p. 82).
• Template Type: Enabled when Design Type is set to either Face-Centered or Rotatable. Choices are
Standard (default) and Enhanced. Choose Enhanced for a possible better fit for the response surfaces.
For more information, see Using a Central Composite Design DOE (p. 82).
Optimal Space-Filling Design (OSF)
To offset the noise associated with physical experimentation, classic DOE types such as CCD focus on
parameter settings near the perimeter of the design region. Because computer simulation is not quite
as subject to noise, OSF is able to distribute the design parameters equally throughout the design
space with the objective of gaining the maximum insight into the design with the fewest number of
points. This advantage makes it appropriate when a more complex modeling technique such as Kriging,
non-parametric regression, or neural networks is used.
OSF has some of the same disadvantages as LHS, though to a lesser degree. Possible disadvantages
follow.
• When Samples Type is set to CCD Samples, a maximum of 20 input parameters is supported. For more
information, see Number of Input Parameters for DOE Types (p. 77).
• Extremes, such as the corners of the design space, are not necessarily covered.
• The selection of too few design points can result in a lower quality of response prediction.
• Design Type: Determines how the OSF samples are distributed across the design space. The following choices are available:
– Max-Min Distance: Maximizes the minimum distance between any two points (default). This strategy ensures that no two points are too close to each other. For a small size of sampling (N), the Max-Min Distance design generally lies on the exterior of the design space and fills in the interior as N becomes larger. Generally, this is the faster algorithm.
– Centered L2: Minimizes the centered L2-discrepancy measure. The discrepancy measure corresponds
to the difference between the empirical distribution of the sampling points and the uniform distribution.
This means that the centered L2 yields a uniform sampling. This design type is computationally faster
than the Maximum Entropy type.
– Maximum Entropy: Maximizes the determinant of the covariance matrix of the sampling points to
minimize uncertainty in unobserved locations. This option often provides better results for highly cor-
related design spaces. However, its cost increases non-linearly with the number of input parameters
and the number of samples to be generated. Thus, it is recommended only for small parametric problems.
• Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs, which
in turn determines the discrepancy of the DOE. The optimization is essentially combinatorial, so a large
number of cycles slows down the process. However, this makes the discrepancy of the DOE smaller. For
practical purposes, 10 cycles is generally good for up to 20 variables. The value must be greater than 0.
The default is 10.
• Samples Type: Determines the number of DOE points the algorithm should generate. This option is
suggested if you have some advanced knowledge about the nature of the model. Choices are:
– CCD Samples: Supports a maximum of 20 inputs (default). Generates the same number of samples a
CCD DOE would generate for the same number of inputs. You can use this to generate a space filling
design that has the same cost as a corresponding CCD design.
– Linear Model Samples: Generates the number of samples as needed for a linear model.
– Pure Quadratic Model Samples: Generates the number of samples as needed for a pure quadratic
model (no cross terms).
– Full Quadratic Samples: Generates the number of samples needed to generate a full quadratic model.
• Random Generator Seed: Enabled when LHS is used. Set the value used to initialize the random number
generator invoked internally by LHS. Although the generation of a starting point is random, the seed value
consistently results in a specific LHS. This property allows you to generate different samplings by changing
the value or regenerate the same sampling by keeping the same value. The default is 0.
• Number of Samples: Enabled when Samples Type is set to User-Defined Samples. Specifies the default
number of samples. The default is 10.
Box-Behnken Design
When Design of Experiments Type is set to Box-Behnken Design, a three-level quadratic design is generated. This design does not contain a fractional factorial design. The sample combinations are located at the midpoints of the edges formed by any two factors. The design is rotatable (or, in some cases, nearly rotatable).
One advantage of Box-Behnken Design is that it requires fewer design points than a full factorial CCD
and generally requires fewer design points than a fractional factorial CCD. Additionally, Box-Behnken
Design avoids extremes, allowing you to work around extreme factor combinations. Consider using
Box-Behnken Design if your project has parametric extremes (for example, has extreme parameter
values in corners that are difficult to build). Because a DOE based on Box-Behnken Design doesn't
have corners and does not combine parametric extremes, it can reduce the risk of update failures.
For more information, see the Box-Behnken Design (p. 316) theory section.
Possible disadvantages of Box-Behnken Design follow:
• Prediction at the corners of the design space is poor, and there are only three levels per parameter.
• A maximum of 12 input parameters is supported. For more information, see Number of Input Para-
meters for DOE Types (p. 77).
Custom
When Design of Experiments Type is set to Custom, you can add points directly in the design points
table by manually entering input parameters and optionally output parameter values. You can also
import and export (p. 85) design points into the design points table of the custom DOE from the
Parameter Set bar.
You can change the mode of the design points table so that output parameter values are editable.
You can also copy and paste data and import data from a CSV file by right-clicking and selecting
Import Design Points. For more information, see Working with Tables (p. 299).
Note:
• If you generate design points using a DOE type other than Custom and then later switch to
Custom, all points existing in the design points table from the initial DOE type are retained.
• If you set the DOE type to Custom and add points directly to the design points table, these
manually added design points are cleared if you later switch to another DOE type.
• The table can contain derived parameters. Derived parameters are always calculated by the
system, even if the table mode is All Output Values Editable.
• Editing output values for a row changes the state of the Design of Experiments cell to Update
Required. The DOE must be updated, even though no calculations are done.
• DOE charts do not reflect the design points added manually using the Custom DOE type until
the DOE is updated.
• The Custom DOE type is intended for entering DOEs that were built externally. If you use this feature to manually enter all design points, make sure to enter enough points so that a good fitting can be created for the response surface. This is an advanced feature that should be used with caution. Always verify your results with a direct solve.
Custom + Sampling
When Design of Experiments Type is set to Custom + Sampling, you have the same capabilities as when it is set to Custom (p. 75). In addition, the design points table can be completed automatically to fill the design space efficiently. For example, you can initialize the design points table with design points imported from a previous study, or your initial DOE (Central Composite Design, Optimal Space-Filling Design, or Custom type) can be completed with new points. The generation of these new design points takes into account the coordinates of the existing design points.
When Custom + Sampling is set, Total Number of Samples specifies the number of samples that
you want, including the number of existing design points. You must enter a positive number. If the
total number of samples is less than the number of existing points, no new points are added. If there
are discrete input parameters, the total number of samples corresponds to the number of points that
should be reached for each combination of discrete parameters.
Sparse Grid Initialization
One advantage of Sparse Grid Initialization is that it refines only in the directions necessary, so that
fewer design points are needed for the same quality response surface. Another is that it is effective
at handling discontinuities. Although you must use this DOE type to build a Sparse Grid response
surface, you can also use it for other types of response surfaces.
Latin Hypercube Sampling Design
LHS is an advanced form of the Monte Carlo sampling method in which no point shares a row or column of the design space with any other point (see Comparison of LHS and OSF DOE Types (p. 81)). Possible disadvantages follow:
• When Samples Type is set to CCD Samples, a maximum of 20 input parameters is supported. For more information, see Number of Input Parameters for DOE Types (p. 77).
• Extremes, such as the corners of the design space, are not necessarily covered. Additionally, the se-
lection of too few design points can result in a lower quality of response prediction.
Note:
The Optimal Space-Filling Design (OSF) DOE type is an LHS design that is extended with
post-processing. For more information, see Comparison of LHS and OSF DOE Types (p. 81).
• Samples Type: Determines the number of DOE points the algorithm should generate. This option is
suggested if you have some advanced knowledge about the nature of the model. The following choices
are available:
– CCD Samples: Supports a maximum of 20 inputs (default). Generates the same number of samples a
CCD DOE would generate for the same number of inputs. You can use this to generate an LHS design
that has the same cost as a corresponding CCD design.
– Linear Model Samples: Generates the number of samples as needed for a linear model.
– Pure Quadratic Model Samples: Generates the number of samples as needed for a pure quadratic
model (no cross terms).
– Full Quadratic Samples: Generates the number of samples needed to generate a full quadratic model.
• Random Generator Seed: Enabled when LHS is used. Set the value used to initialize the random number
generator invoked internally by LHS. Although the generation of a starting point is random, the seed value
consistently results in a specific LHS. This property allows you to generate different samplings by changing
the value or regenerate the same sampling by keeping the same value. The default is 0.
• Number of Samples: Enabled when Samples Type is set to User-Defined Samples. Specifies the default
number of samples. The default is 10.
Number of Input Parameters for DOE Types
DesignXplorer filters the Design of Experiments Type list for applicability to the current project,
displaying only those sampling methods that you can use to generate the DOE as it is currently
defined. For example, assume that a given sampling allows a maximum of 10 input parameters. As
soon as more than 10 inputs are defined, that sampling method is removed from the list.
Having fewer enabled inputs makes it easier to determine the relationships between input and output parameters and how the inputs affect the outputs.
As such, the recommendation for most DOE types is to have as few enabled inputs as possible. Fewer than 20 inputs is ideal. Some DOE types have a limit on the number of inputs:
• LHS and OSF have a limit of 20 inputs when Samples Type is set to CCD Samples.
The number of inputs should be taken into account when selecting a DOE type for your study—or
when defining inputs if you know ahead of time which DOE type you intend to use.
If you are using a DOE that does not limit the number of inputs and more than 20 are enabled, in the
Outline pane for the following cells, DesignXplorer shows an alert icon in the Message column for the
root node:
• Design of Experiments
• Response Surface
The warning icon is displayed only when required component edits are completed. The number next
to the icon indicates the number of active warnings. You can click the icon to review the warning
messages. To remove a warning about having too many inputs defined, disable inputs by clearing check
boxes in the Enabled column until only the permitted number of inputs is selected. If you are unsure
of which parameters to disable, you can use a Parameters Correlation system to determine the inputs
that are least correlated with your results. For more information, see Using Parameters Correlations (p. 53).
Factors other than the number of enabled inputs affect response surface generation:
• Response surface types using automatic refinement add additional design points to improve the resol-
ution of each output. The more outputs and the more complicated the relationship between the inputs
and outputs, the more design points that are required.
• Increasing the number of output parameters increases the number of response surfaces that are required.
• Discrete input parameters can be expensive because a response surface is generated for each discrete
combination, as well as for each output parameter.
• A non-linear or non-polynomial relationship between input and output parameters requires more design points to build an accurate response surface, even with a small number of enabled inputs.
Such factors can offset the importance of using a small number of enabled input parameters. If you
expect that a response surface can be generated with relative ease, it might be worthwhile to exceed
the recommended number of inputs. For example, assume that the project has polynomial relationships
between the inputs and outputs, only continuous inputs, and a small number of outputs. In this case,
you might ignore the warning and proceed with the update.
If the DOE-response surface combination is not supported, quick help explains the underlying problem.
The following example explains that the number of design points is less than the number of enabled
input parameters. When you enable discrete input parameters, the number of required design points
is affected. For each enabled discrete input parameter combination, the number of design points must
be equal to or greater than the number of enabled continuous input parameters.
Two examples show how the minimum number of design points is determined when the number of enabled discrete parameters is first 0 and then 2.
Example 1
With no enabled discrete parameters, the number of design points must be equal to or greater than the number of enabled continuous input parameters.
Example 2
With two enabled discrete parameters, where the first has 3 levels (for example, 10; 15; 20) and the second has 2 levels (for example, 5; 6), there are six combinations of discrete values:
{10; 5}, {15; 5}, {20; 5}, {10; 6}, {15; 6}, {20; 6}
For each of these combinations, the number of design points must be equal to or greater than the number of enabled continuous input parameters.
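A minimal Python sketch of this counting rule, using the Example 2 values above and assuming, for illustration only, three enabled continuous inputs:

```python
from itertools import product

# Example 2: two discrete inputs with levels {10, 15, 20} and {5, 6},
# plus (an assumption for illustration) three enabled continuous inputs
discrete_levels = [[10, 15, 20], [5, 6]]
n_continuous = 3

combinations = list(product(*discrete_levels))
print(len(combinations), "combinations of discrete values")   # 6

# Per the rule above, each discrete combination needs at least as many
# design points as there are enabled continuous inputs.
min_points = len(combinations) * n_continuous
print("minimum number of design points:", min_points)          # 6 * 3 = 18
```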
Comparison of LHS and OSF DOE Types
• LHS is an advanced form of the Monte Carlo sampling method. In LHS, no point shares a row or column of
the design space with any other point.
• OSF is essentially an LHS design that is optimized through several iterations, maximizing the distance
between points to achieve a more uniform distribution across the design space. Because it aims to gain the
maximum insight into the design by using the fewest number of points, it is an effective DOE type for
complex modeling techniques that use relatively large numbers of design points.
Because OSF incorporates LHS, both DOE types aim to conserve optimization resources by avoiding the
creation of duplicate points. Given an adequate number of design points to work with, both methods
result in a high quality of response prediction. OSF, however, offers the added benefit of fuller coverage
of the design space.
For example, with a two-dimensional problem that has only two input parameters and uses only six
design points, it can be difficult to build an adequate response surface. This is especially true in the
case of LHS because of its nonuniform distribution of design points over the design space.
When the number of design points for the same scenario is increased to twenty, the quality of the
resulting response surface is improved. LHS, however, can result in close, uneven groupings of design
points and so can skip parts of the design space. OSF, with its maximization of the distance between
points and more uniform distribution of points, addresses extremes more effectively and provides far
better coverage of the design space. For this reason, OSF is the recommended method.
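The difference between a plain LHS and a space-filling variant can be sketched with SciPy's Latin hypercube sampler, which is not DesignXplorer's implementation. Picking, over several seeds, the sampling with the largest minimum point-to-point distance is only a crude stand-in for the Max-Min Distance post-processing that OSF applies:

```python
import numpy as np
from scipy.stats import qmc

def min_pairwise_distance(points):
    """Smallest distance between any two sample points."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return d[np.triu_indices(len(points), k=1)].min()

n_points, n_inputs = 20, 2
plain = qmc.LatinHypercube(d=n_inputs, seed=0).random(n_points)

# Crude space-filling post-processing: among several seeded LHS designs,
# keep the one with the largest minimum distance (max-min criterion)
best = max((qmc.LatinHypercube(d=n_inputs, seed=s).random(n_points) for s in range(50)),
           key=min_pairwise_distance)

print("plain LHS     min distance:", round(min_pairwise_distance(plain), 3))
print("space-filled  min distance:", round(min_pairwise_distance(best), 3))
```

Recent SciPy versions also accept an optimization="random-cd" argument for LatinHypercube, which reduces the centered discrepancy of the sampling and is loosely analogous to the Centered L2 design type described earlier.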
Using a Central Composite Design DOE
The following CCD design types are available:
• Face-Centered
• Rotatable
• VIF-Optimality
• G-Optimality
• Auto Defined
A Rotatable (spherical) design is preferred because the prediction variance is the same for any two
locations that are the same distance from the design center. However, there are other criteria to consider
for an optimal design setup. The following two criteria are commonly considered in setting up an op-
timal design using the design matrix.
• The degree of non-orthogonality of regression terms can inflate the variance of model coefficients.
• The position of a sample point in the design, relative to the other sample points, can make it disproportionately influential on the fitted model.
An optimal CCD design should minimize both the degree of non-orthogonality of term coefficients and the number of sample points having abnormal influence. To minimize the degree of non-orthogonality, the Variance Inflation Factor (VIF) of the regression terms is used. For a VIF-Optimality design, the maximum VIF of the regression terms is minimized; the minimum possible value is 1.0. To minimize the influence of individual sample points, the leverage value of each sample point is used. Leverages are the diagonal elements of the Hat matrix, which is a function of the design matrix. For a G-Optimality design, the maximum leverage value of the sample points is minimized.
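The VIF and leverage criteria can be computed directly from a design matrix. The following Python sketch uses a hypothetical rotatable-style CCD for two inputs and a full quadratic model; it illustrates the definitions above and is not DesignXplorer's implementation:

```python
import numpy as np

# Hypothetical CCD-style design for two inputs x1, x2 with a full quadratic
# model: 1, x1, x2, x1*x2, x1^2, x2^2
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1.41, 0], [1.41, 0], [0, -1.41], [0, 1.41], [0, 0]])
x1, x2 = pts[:, 0], pts[:, 1]
X = np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2, x1**2, x2**2])

# Leverages: diagonal of the Hat matrix H = X (X'X)^-1 X'
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverages = np.diag(H)
print("max leverage:", leverages.max().round(3))   # G-Optimality would minimize this

# VIF of each non-constant regression term: diagonal of the inverse
# correlation matrix of the term columns
terms = X[:, 1:]
corr = np.corrcoef(terms, rowvar=False)
vif = np.diag(np.linalg.inv(corr))
print("max VIF:", vif.max().round(3))              # VIF-Optimality would minimize this
```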
For a VIF-Optimality design, the alpha value or level is selected such that the maximum VIF is minimized.
Likewise, for a G-Optimality design, the alpha value or level is selected such that the maximum leverage
is minimized. The rotatable design is found to be a poor design in terms of VIF- and G-efficiencies.
For an optimal CCD, the alpha value or level is selected such that both the maximum VIF and the
maximum leverage are the minimum possible. For an Auto Defined design, the alpha value is selected
from either the VIF-Optimality or G-Optimality design that meets the criteria. Because it is a multi-ob-
jective optimization problem, in many cases, there is no unique alpha value such that both criteria reach
their minimum. However, the alpha value is evaluated such that one criterion reaches minimum while
the other approaches minimum.
With the current Auto Defined setup, a problem with five input variables uses a G-Optimality design; all other multi-variable problems use VIF-Optimality. In some cases, even though Auto Defined provides an optimal alpha value that meets the criteria, this design might not give as good a response surface as anticipated due to the nature of the physical data used for fitting in the regression process. In this case, try other design types that might give a better response surface approximation.
Note:
You can set any design type for CCD as the default in the Options dialog box. On the Design
of Experiments tab, set Design Type to the type you want to use as the default. For more
information, see Design Exploration Options (p. 22).
It is good practice to always verify some selected points on the response surface with an actual simulation evaluation to confirm that the response surface is valid for further analyses. In some cases, a good response surface does not mean a good representation of the underlying physics problem. The response surface is generated from the predetermined sampling points in the design space, which can miss an unexpected change in some regions of the design space. In this case, try using an enhanced design, that is, a Rotatable or Face-Centered design type with Template Type set to Enhanced.
For an enhanced DOE, a mini CCD is appended to a standard CCD design, where a second alpha value
is added and set to half the alpha value of the standard CCD. The mini CCD is set up so that the rotat-
ability and symmetry of the CCD design are maintained. The purposes of the appended mini CCD are
to capture any drastic changes in the design space and to provide a better response surface fit.
Note:
Alternatively, you can try to enrich the DOE by changing the selection for Design of Exper-
iments Type from Central Composite Design to Custom + Sampling and then specifying
a value for Total Number of Samples. For more information, see Custom + Sampling (p. 75).
The location of the generated design points for the deterministic method is based on a central composite design. If N is the number of input parameters, then a central composite design consists of:
• One center point.
• 2*N axis points located at the -α and +α position on each axis of the selected input parameters.
• 2^(N-f) factorial points located at the -1 and +1 positions along the diagonals of the input parameter space.
The fraction f of the factorial design and the resulting number of design points are given in the following
table:
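As a hedged illustration of how these counts combine, the following Python sketch computes the total number of CCD points for a given number of inputs N and fraction f (the f values themselves come from the table referenced above; the values used below are for illustration only):

```python
def ccd_point_count(n_inputs, f):
    """Number of CCD design points: 1 center point + 2*N axis points
    + 2**(N - f) factorial points. The fraction f depends on N."""
    return 1 + 2 * n_inputs + 2 ** (n_inputs - f)

# Illustrative values only; f = 0 means a full factorial part
print(ccd_point_count(3, 0))    # 15 points for 3 inputs with a full factorial part
print(ccd_point_count(10, 3))   # 149 points if a fraction f = 3 were used for 10 inputs
```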
Upper and Lower Locations of DOE Points
For example, for goal-driven optimization, the DOE points should be located close to where the optimum
design is determined to be. For a Six Sigma Analysis, the DOE points should be close to the area where
failure is most likely to occur. In both cases, the location of the DOE points depends on the outcome
of the analysis. Not having that knowledge at the start of the analysis, you can determine the location
of the points as follows:
• For a design variable, the upper and lower levels of the DOE range coincide with the bounds specified for
the input parameter. It often happens in optimization that the optimum point is at one end of the range
specified for one or more input parameters.
• For an uncertainty variable, the upper and lower levels of the DOE range are the quantile values corresponding to probabilities of 0.1% and 99.9%, respectively (see the sketch after this list). This is the standard procedure whether the input parameter follows a bounded distribution (such as uniform) or an unbounded distribution (such as normal). The reason is that the probability of the input variable exactly coinciding with the upper or lower bound of a bounded distribution is exactly zero, so failure can never occur exactly at a bound. Failure typically occurs in the tails of a distribution, so the DOE points should be located there, but not at the very end of the distribution.
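For example, the 0.1% and 99.9% quantiles of an assumed normal uncertainty variable can be obtained with SciPy (the distribution parameters below are made up):

```python
from scipy import stats

# Hypothetical uncertainty variable: normally distributed thickness,
# mean 10 mm, standard deviation 0.5 mm
thickness = stats.norm(loc=10.0, scale=0.5)

lower = thickness.ppf(0.001)   # 0.1% quantile -> lower DOE level
upper = thickness.ppf(0.999)   # 99.9% quantile -> upper DOE level
print(f"DOE range: [{lower:.3f}, {upper:.3f}] mm")
```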
Note:
The design points are solved simultaneously if the analysis system is configured to perform
simultaneous solutions. Otherwise, they are solved sequentially.
To clear the design points generated for the DOE matrix, return to the Project Schematic, right-click
the Design of Experiments cell, and select Clear Generated Data. You can clear data from any design
exploration cell in the Project Schematic in this way and regenerate your solution for the cell with
changes to the parameters if desired.
Note:
The Clear Generated Data operation does not clear the design point cache. To clear this
cache, right-click in an empty area of the Project Schematic and select Clear Design Points
Cache for All Design Exploration Systems.
Exporting and Importing Design Points
To export design points to an ASCII file, use either of the following options:
• Right-click a cell in the Table pane and select Export Table Data as CSV.
• Right-click a chart in the Chart pane and select Export Chart Data as CSV.
The parameter values for each design point in the table or chart are exported to a CSV file. The values are always exported in the units defined in Workbench. That is, they are exported as if Display Values as Defined were selected from the Units menu.
You can also import an external CSV file to create either design points in a custom Design of Exper-
iments cell or refinement and verification points in a Response Surface cell. Right-click a cell in the
Table pane for a Design of Experiments or Response Surface cell and select the appropriate import
option. For more information, see Working with Tables (p. 299) and Exporting Table Data (p. 301).
When a DOE cell is set to a custom type, Import Design Points from CSV is available on the context
menu when you right-click any of the following:
• A Design of Experiments or Design of Experiments (3D ROM) cell in a system on the Project Schematic
• The Design of Experiments node in the Outline pane for a Design of Experiments cell
• The Design of Experiments (3D ROM) node in the Outline pane for a Design of Experiments (3D ROM)
cell
• In the design points table for a Design of Experiments or Design of Experiments (3D ROM) cell
For more information, see Importing Data from a CSV File (p. 301).
Note:
If the range of imported or copied values is more than 10 percent smaller than the
DOE settings, the shrink option is available to reduce the DOE range to fit. If the values
exceed the range defined in the DOE settings, the expand option is available to extend
the range.
If you do not select the check boxes available for adjusting parameter ranges, when you click Apply, only the design points whose parameter values all fall within the predefined parameter ranges are imported or copied into the DOE; all out-of-bounds design points are ignored. If you click Cancel, the import or copy operation is terminated.
Copying Design Points
When a DOE cell is set to a custom DOE type, you can copy all or selected design points from the Parameter Set bar into the DOE cell. Any output parameter values for the design points copied to the DOE cell are read-only because they have already been calculated by a solver.
Note:
Design point data is always parsed and validated before being either copied or imported
into a DOE. The dialog boxes that you might see during this process are described in
the previous topic.
Copying All Design Points from the Parameter Set Bar into a DOE Cell
When a DOE cell is set to a custom DOE type, you can copy all design points from the Parameter Set
bar into the DOE cell:
1. In the Project Schematic, double-click the DOE cell to which you want to copy design points from
the Parameter Set bar.
3. Either right-click the parent node in the Outline pane or right-click in the Table pane and select Copy
all Design Points from the Parameter Set.
All design points from the Parameter Set bar are copied into the DOE cell.
Copying Selected Design Points from the Parameter Set Bar into a DOE Cell
When a DOE cell is set to a custom DOE type, you can copy selected design points from the Parameter
Set bar into the DOE cell:
3. Right-click and select Copy Design Points to and then select a DOE cell on the submenu.
While the submenu lists all Design of Experiments and Parameters Correlation cells defined in the project, only custom DOEs and custom correlations are available for selection.
All selected design points in the Parameter Set bar are copied into the DOE cell.
Note:
If an unsolved design point was previously copied to a custom DOE, and subsequently this
design point was solved in the Parameter Set bar, you can copy it to the custom DOE again
to push the output values for this design point to the custom DOE.
Using Response Surfaces
Response surfaces are functions of various types in which the output parameters are described in terms of the input parameters. Built from the DOE, they quickly provide approximated values of the output parameters throughout the design space without having to perform a complete solution. DesignXplorer provides tools to estimate and improve the quality of a response surface.
Once a response surface is generated, you can create and manage response points and charts. These
postprocessing tools help you to understand how each output parameter is driven by input parameters
and how you can modify your design to improve its performance.
Once your response surface is solved, you can export it as an independent reduced-order model (DX-
ROM) to be reused in other environments.
Response Surface Types
Genetic Aggregation
Genetic Aggregation is the default algorithm for generating response surfaces. It automates the process
of selecting, configuring, and generating the type of response surface best suited to each output
parameter in your problem. From the different types of response surface available (Full 2nd-Order
Polynomials, Non-Parametric Regression, Kriging, and Moving Least Squares), Genetic Aggregation
automatically builds the response surface type that is the most appropriate approach for each output.
Auto-refinement is available when you select at least one output parameter for refinement in the
Tolerances table and specify a tolerance value for this parameter. Once begun, auto-refinement
handles design point failures and continues until one of the stopping criteria is met.
Auto-refinement takes into account failed design points by avoiding the areas close to the failed
points when generating the next refinement points. The Crowding Distance Separation Percentage
property specifies the minimum allowable distance between new refinement points, providing a ra-
dius around failed design points that serves as a constraint for refinement points.
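A minimal sketch of this constraint, assuming a made-up helper that rejects candidate refinement points lying within the crowding distance of any failed design point (the function name, the normalization by parameter range, and the data are illustrative assumptions, not DesignXplorer's implementation):

```python
import numpy as np

def accept_candidates(candidates, failed_points, crowding_pct, ranges):
    """Keep only candidate refinement points farther from every failed design
    point than the crowding distance (a percentage of the parameter ranges)."""
    cand = candidates / ranges        # normalize so the percentage applies equally
    failed = failed_points / ranges
    min_dist = crowding_pct / 100.0
    d = np.linalg.norm(cand[:, None, :] - failed[None, :, :], axis=-1)
    return candidates[(d > min_dist).all(axis=1)]

ranges = np.array([10.0, 2.0])                       # variation range of each input
failed = np.array([[5.0, 1.0]])                      # one failed design point
candidates = np.array([[5.2, 1.05], [8.0, 0.4]])     # proposed refinement points
print(accept_candidates(candidates, failed, crowding_pct=5.0, ranges=ranges))
# Only the second candidate survives; the first is too close to the failed point.
```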
Genetic Aggregation takes more time than classical response surfaces such as Full 2nd order Polyno-
mial, Non-Parametric Regression, or Kriging because of multiple solves of response surfaces and the
cross-validation process. In general, Genetic Aggregation is more reliable than the classical response
surface models.
The Genetic Aggregation response surface can be a single response surface or a combination of
several different response surfaces (obtained by a crossover operation during the genetic algorithm).
For more information, see Genetic Aggregation (p. 317) in the DesignXplorer theory section.
Because a Genetic Aggregation response surface can take longer to generate than other response
surfaces, you can monitor the generation process via the progress bar and messages. You also have
the ability to stop the update. However, if an update is stopped, any data generated up to that point
is discarded.
Once the response surface has been generated, a link to the log file is available in the Properties
pane. The log file contains information that can help you to assess the quality of your response surface.
Note:
For advanced options to be visible, the Show Advanced Options check box must
be selected on the Design Exploration tab in the Options window. For more in-
formation, see Design Exploration Options (p. 22).
Meta Model
• Response Surface Type: Determines the type of response surface. This section assumes Genetic
Aggregation is selected and advanced options are shown.
• Random Generator Seed: Advanced option allowing you to specify the value used to initialize the
random number generator. By changing this value, you start the Genetic Aggregation from a different
population of response surfaces. The default is 0.
• Maximum Number of Generations: Advanced option determining the maximum allowable number
of iterations of the genetic algorithm. The default is 12, with a minimum of 1 and a maximum of
20.
Log File
• Display Level: Advanced option determining the content of the ResponseSurface.log file.
You can select one of the following values:
– Final: Information on the best response surface generated by the last generation.
– Final With Details: Information on the full population of response surfaces generated by the last
generation.
– Iterative With Details: Information on the full population of response surfaces generated by
each generation.
If you change the Display Level value after generating the response surface, you must update
the response surface again to regenerate the log file with the new content.
• Log: Advanced option that displays after a response surface and its log file are generated. This
property provides a link to the ResponseSurface.log file, which is stored in the project files
in the subdirectory dpall\global\DX.
Refinement
• Maximum Number of Refinement Points: Determines the maximum number of refinement points
that can be generated for use with Genetic Aggregation.
• Number of Refinement Points: Read-only property indicating the number of existing refinement
points.
• Output Variable Combinations: Determines how output variables are considered when searching for new refinement points. The following choices are available:
– Maximum Output: Only the output with the largest ratio between the maximum predicted error and the tolerance is considered. Only one refinement point is generated in each iteration.
– All Outputs: All outputs are considered. Multiple refinement points can be generated. If two re-
finement points are too close (with a distance less than specified for Crowding Distance Separ-
ation Percentage), only one is inserted.
• Crowding Distance Separation Percentage: Advanced option determining the minimum allowable
distance between new refinement points, implemented as a constraint in the search for refinement
points.
• Maximum Number of Refinement Points per Iteration: Specifies the maximum number of refine-
ment points that can be simultaneously updated at each iteration using your HPC resources. This
property is available only when Output Variable Combinations is set to Maximum Output. The
default for Maximum Number of Refinement Points per Iteration is 1. However, to improve effi-
ciency, you can increase this value. For simultaneous design point updates to occur, in the properties
for the Parameter Set bar, you must set Update Option to Submit to Remote Solve Manager,
specify an available RSM Queue, and set Job Submission to One Job for Each Design Point. For
more information about using Remote Solve Manager, see Working with Parameters and Design
Points in the Workbench User's Guide. For theoretical information about how design points are updated
simultaneously, see Genetic Aggregation with Multiple Refinement Points (p. 321).
• Convergence State: Read-only value indicating the state of the convergence. Possible values are
Converged and Not Converged. If the value is Not Converged, the reason for convergence failure
is appended.
Verification Points
Generate Verification Points: Specifies whether verification points (p. 126) are to be generated. When
this check box is selected, Number of Verification Points becomes visible. The default value is 1.
However, you can enter a different value if desired.
2. Select at least one output parameter for refinement and specify its tolerance value. If no output para-
meters are selected for refinement, manual refinement is used.
To view the tolerances table, select an output parameter or one of the following in the Outline
pane:
• Response Surface
• Output Parameters
• Refinement
• Tolerances
Calculated Minimum
Minimum value, which is calculated from design point values and Min-Max search results.
Calculated Maximum
Maximum value, which is calculated from design point values and Min-Max search results.
Refinement
Determines whether an output parameter and its tolerance are taken into account for the Genetic
Aggregation refinement process. When this check box is selected for an output, a tolerance value
must be specified.
Tolerance
Enabled and required when an output has been selected for refinement. The value must be greater
than 0.
Tolerance values have units and are refreshed according to whether Display Values as
Defined or Display Values in Project Units is selected from the Workbench Units menu.
The tolerances table, the Properties pane for the output parameter, and the Convergence chart
are fully synchronized. Consequently, changes in one are reflected in the others.
Note:
Discrete output parameters are excluded from the refinement. Property values are approximations based on the resulting response surface.
To use Genetic Aggregation auto-refinement, you must select at least one output parameter for
refinement and assign it a tolerance value.
1. Use one of the following methods to select at least one output parameter for refinement:
• In the tolerances table, select the Refinement check box for the output.
• In the Outline pane, right-click the output parameter and select Use for Refinement.
All output parameters selected for refinement are included in the refinement process. Their toler-
ance values are taken into account when identifying new refinement points.
You can remove an output parameter from consideration for Genetic Aggregation auto-refinement in either of the following ways:
• In the tolerances table, clear the Refinement check box for the output.
• In the Outline pane, right-click the output parameter and select Ignore for Refinement.
If at least one output parameter is still selected for refinement, the auto-refinement process
is used. If the check boxes for all output parameters have been cleared, the refinement reverts
to a manual process.
Cleared and derived output parameters are disabled. They are excluded from the refinement
process and their tolerance values are ignored.
The following tolerances table shows output parameters with different auto-refinement settings.
Because at least one output is selected for refinement, you know that auto-refinement is to be
used.
• P7: Selected for refinement with tolerance value defined (is included in refinement).
• P8: Selected for refinement with tolerance value pending (prevents refinement until tolerance is
defined).
• P9: Not selected for refinement with the previously defined tolerance value disabled (is not included
in refinement).
The chart is automatically generated and dynamically updated as the Genetic Aggregation refinement
runs. To view the chart, select one of the following in the Outline pane:
• Refinement
• Tolerances
• Refinement Points
The X axis displays the number of refinement points (whether successfully updated or not) used
to refine the response surface, while the Y axis displays the ratio between the maximum predicted
error and the tolerance for each output parameter.
Each output parameter marked for auto-refinement is represented by a separate curve corresponding
to this ratio, which is calculated for each output parameter to refine and at each iteration. Anything
below the convergence threshold curve is in the convergence region, indicated by a shaded blue
area on the chart.
Example
Before the first refinement point is run, if output P3 has a tolerance of 0.2 [g] and a maximum predicted error of 0.3 [g], it has a ratio of 0.3/0.2 = 1.5. That is, the maximum predicted error is 1.5 times larger than the tolerance.
The auto-refinement process can generate one point or several points per iteration. The objective
is to reach a convergence threshold of less than or equal to 1 for all outputs used in the auto-re-
finement process, at which point all the convergence curves are in the area below the convergence
threshold. The refinement process stops when either the maximum number of refinement points
has been reached or the convergence threshold objective has been met.
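A minimal Python sketch of this convergence check, using the P3 values from the example above plus a made-up second output:

```python
# Ratio of maximum predicted error to tolerance for each output selected
# for refinement (P3 follows the example above; P7 is made up)
max_predicted_error = {"P3": 0.3, "P7": 0.12}
tolerance = {"P3": 0.2, "P7": 0.15}

ratios = {p: max_predicted_error[p] / tolerance[p] for p in tolerance}
print(ratios)                                     # {'P3': 1.5, 'P7': 0.8}

# Auto-refinement stops when every ratio is <= 1 (or the maximum number
# of refinement points has been reached)
converged = all(r <= 1.0 for r in ratios.values())
print("converged:", converged)                    # False: P3 still exceeds its tolerance
```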
You can enable or disable a convergence curve by selecting or clearing the check box for the output
parameter in the Properties pane of the Convergence Curves chart.
Once the Convergence Curves chart has been generated, you can export it as a CSV file by right-
clicking the chart and selecting Export Chart Data as CSV.
A second-order polynomial is typically preferred for the regression model. This model is generally an
approximation of the true input-to-output relationship, and only in special cases does it yield a true
and exact relationship. Once this relationship is determined, the resulting approximation of the output
parameter as a function of the input variables is called the response surface.
• Calculation of polynomial coefficients based on these modified input values. Some polynomial terms
can be filtered by using the F-Test Filtering and Significance Level properties.
Consequently, the response surface that is generated can fit more complex responses than simple
parabolic curvatures.
If the goodness of fit of the response surface is not as good as expected for an output parameter, you can select a different transformation type in its properties. The Yeo-Johnson transformation is more numerically stable in its back-transformation. The Box-Cox transformation is less numerically stable in its back-transformation, but in some cases it gives a better fit. If, in the Properties pane for an output parameter, you set Transformation Type to None, the full 2nd-order polynomials response surface is computed without any transformation of the data for this output parameter.
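The following Python sketch illustrates the general idea of combining a power transformation of the output with a full 2nd-order polynomial fit. It uses SciPy's Box-Cox transform because SciPy provides a ready-made inverse for it (scipy.special.inv_boxcox); the Yeo-Johnson transform is available as scipy.stats.yeojohnson. This is an illustration with made-up data only, not the algorithm DesignXplorer implements:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# Hypothetical DOE results: two inputs, one strictly positive output
rng = np.random.default_rng(4)
X = rng.uniform(1.0, 2.0, (30, 2))
y = np.exp(1.5 * X[:, 0] + 0.5 * X[:, 0] * X[:, 1]) + rng.normal(0.0, 0.05, 30)

# Power transformation of the output
y_t, lmbda = stats.boxcox(y)

# Full 2nd-order polynomial regression in the transformed space:
# 1, x1, x2, x1*x2, x1^2, x2^2
x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones(len(y)), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y_t, rcond=None)

# Evaluate the response surface at a new point and back-transform
pt = np.array([1.2, 1.8])
a = np.array([1.0, pt[0], pt[1], pt[0] * pt[1], pt[0]**2, pt[1]**2])
print("predicted output:", inv_boxcox(a @ coef, lmbda))
```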
Note:
For advanced options to be visible, the Show Advanced Options check box must be
selected on the Design Exploration tab in the Options window. For more information,
see Design Exploration Options (p. 22).
• Response Surface Type: Determines the type of response surface. This section assumes Standard
Response Surface - Full 2nd-Order Polynomials is selected and advanced options are shown.
• Inputs Transformation Type: Advanced option determining the type of power transformation to
apply to all continuous input parameters, with and without manufacturable values, before solving
the response surface. Choices are Yeo-Johnson (default) and None. When None is selected, no
transformations are applied to continuous input parameters.
• Inputs Scaling: Advanced option determining whether to scale the data for all continuous input
parameters, with and without manufacturable values, before solving the response surface. This check
box is selected by default. If you clear this check box, no scaling of input parameter data occurs.
• Significance Level: Advanced option indicating the threshold to use during model-building to filter
significant terms of the polynomial regression. The range for possible values is from 0 to 1. The default
is 0.05.
Note:
As indicated earlier, for advanced options to be visible, the Show Advanced Options
check box must be selected on the Design Exploration tab in the Options window.
• Transformation Type: Determines the type of power transformation to apply to the output parameter.
Choices are Yeo-Johnson (default), Box-Cox, and None. When None is selected, no transformation is
applied to the output parameter. Transformations are not applied to derived output parameters.
• Scaling: Advanced option determining whether to scale the data for the output parameter. This check
box is selected by default. If you clear this check box, no scaling of the output parameter data occurs.
Kriging
Kriging is a meta-modeling algorithm that provides an improved response quality and fits higher order
variations of the output parameter. It is an accurate multidimensional interpolation that combines a polynomial model similar to that of the standard response surface (providing a "global" model of the design space) with local deviations so that the Kriging model interpolates the DOE points. Kriging provides refinement capabilities for continuous input parameters, including those with
manufacturable values. It does not support discrete parameters. The effectiveness of Kriging is based
on the ability of its internal error estimator to improve response surface quality by generating refine-
ment points and adding them to the areas of the response surface most in need of improvement.
In addition to manual refinement capabilities, Kriging offers an auto-refinement option that automat-
ically and iteratively updates the refinement points during the update of the response surface. At
each iteration of the refinement, Kriging evaluates a predicted relative error in the full parameter
space. DesignXplorer uses the predicted relative error instead of the predicted error because this allows
the same values to be used for all output parameters, even when the parameters have different ranges
of variation.
At this step in the process, the predicted relative error for one output parameter is the predicted error of the output parameter normalized by the known maximum variation of the output parameter:

Predicted Relative Error = Predicted Error / (Omax - Omin)

where Omax and Omin are the maximum and minimum known values (on design points) of the output parameter.
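Expressed as a small illustrative helper (the function and variable names are hypothetical, not a DesignXplorer API):

# Illustrative sketch of the normalization described above (not DesignXplorer code).
def predicted_relative_error(predicted_error, o_max, o_min):
    """Normalize a predicted error by the known output variation (Omax - Omin)."""
    variation = o_max - o_min
    if variation == 0.0:
        return 0.0  # flat output: no variation to normalize against
    return predicted_error / variation

# Example: a predicted error of 2.0 on an output that varies between 10 and 60
print(predicted_relative_error(2.0, o_max=60.0, o_min=10.0))  # 0.04, that is, 4%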
For guidelines on when to use Kriging, see Changing the Response Surface (p. 111).
The prediction of error is a continuous and differentiable function. To find the best candidate refine-
ment point, the refinement process determines the maximum of the prediction function by running
a gradient-based optimization procedure. If the prediction of the accuracy for the new candidate re-
finement point exceeds the required accuracy, the point is then promoted as a new refinement point.
The auto-refinement process continues iteratively, locating and adding new refinement points until
either the refinement has converged or the maximum allowable number of refinement points has
been generated. The refinement converges when the response surface is accurate enough for direct
output parameters.
For more information, see Kriging (p. 324) in the theory section.
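The following Python sketch outlines this kind of refinement loop under simplifying assumptions. The error predictor is a hypothetical placeholder standing in for the metamodel's internal error estimator, and a real implementation would solve each promoted point and rebuild the response surface before the next iteration.

# Conceptual sketch of the auto-refinement loop described above; the error
# predictor below is a hypothetical placeholder, not DesignXplorer code.
import numpy as np
from scipy.optimize import minimize

def predict_relative_error(x, refined):
    # Placeholder error field over a 2-parameter space: large far from points
    # that have already been refined, small near them (mimics metamodel improvement).
    base = 0.30 * np.exp(-5.0 * ((x[0] - 0.8) ** 2 + (x[1] - 0.2) ** 2)) + 0.02
    if refined:
        d = min(np.linalg.norm(np.asarray(x) - p) for p in refined)
        base *= 1.0 - np.exp(-50.0 * d ** 2)
    return float(base)

def auto_refine(bounds, max_points=20, max_error=0.05, starts=5, seed=0):
    rng = np.random.default_rng(seed)
    refined = []
    lo, hi = [b[0] for b in bounds], [b[1] for b in bounds]
    for _ in range(max_points):
        # Gradient-based, multi-start search for the maximum of the predicted error.
        best_x, best_err = None, -np.inf
        for x0 in rng.uniform(lo, hi, (starts, len(bounds))):
            res = minimize(lambda x: -predict_relative_error(x, refined), x0, bounds=bounds)
            if -res.fun > best_err:
                best_x, best_err = res.x, -res.fun
        if best_err <= max_error:
            return refined, True            # converged: predicted error acceptable everywhere
        refined.append(best_x)              # promote the candidate to a refinement point
        # A real implementation would solve this design point and rebuild the metamodel here.
    return refined, False                   # stopped at the maximum number of refinement points

points, converged = auto_refine(bounds=[(0.0, 1.0), (0.0, 1.0)])
print(f"{len(points)} refinement points generated, converged: {converged}")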
1. In the Outline pane for the response surface, select the Response Surface cell.
2. Under Meta Model in the Properties pane, set Response Surface Type to Kriging.
• For Maximum Number of Refinement Points, enter the maximum number of refinement points
that can be generated.
• For Maximum Predicted Relative Error (%), enter the maximum predicted relative error that is
acceptable for all parameters.
Note:
For advanced options to be visible, the Show Advanced Options check box
must be selected on the Design Exploration tab in the Options window. For
more information, see Design Exploration Options (p. 22).
• For Output Variable Combinations, select a value for determining how output variables are con-
sidered in terms of predicted relative error. This value controls the number of refinement points that
are created per iteration.
• For Crowding Distance Separation Percentage, enter a value for determining the minimum allow-
able distance between new refinement points.
During the refinement process, if one or more design points fail, auto-refinement uses the
value specified for this property to avoid areas close to failed design points when generating
the next refinement points. This property specifies the minimum allowable distance between
new refinement points, providing a radius around failed design points that serves as a con-
straint for refinement points.
• If you want to generate verification points (p. 126), select the Generate Verification Points check
box. Number of Verification Points becomes visible. The default value is 1.
• If you want a different number of verification points to be generated, enter the desired value.
2. Under Refinement in the Properties pane, select or clear the Inherit From Model Settings check
box. This determines whether the maximum predicted relative error defined at the model level is ap-
plicable to the parameter.
If the Inherit From Model Settings check box is cleared, Maximum Predicted Relative Error
becomes available so that you can enter the maximum predicted relative error that you find
acceptable. This can be different than the maximum predicted relative error defined at the
model level.
The generated points for the refinement appear in the refinement points table, which displays in
the Table pane when either Refinement or a refinement point is selected in the Outline pane. As
the refinement points are updated, the Convergence Curves chart updates dynamically, allowing
you to monitor the progress of the Kriging auto-refinement. For more information, see Kriging
Convergence Curves Chart (p. 102).
The auto-refinement process continues until either the maximum number of refinement points is
reached or the response surface is accurate enough for direct output parameters. If all output
parameters have a predicted relative error that is less than the Maximum Predicted Relative Error
values defined for them, the refinement is converged.
As with Sparse Grid, Kriging is an interpolation. Goodness of fit is not a reliable measure for Kriging
because the response surface passes through all of the design points, making the goodness of fit
appear to be perfect. As such, the generation of verification points is essential for assessing the
quality of the response surface and understanding the actual goodness of fit.
If the error at the verification points is larger than the predicted relative error given by Kriging,
you can insert the verification points as refinement points and then run a new auto-refinement so
that the new points are included in the generation of the response surface. Verification points can
only be inserted as refinement points in manual refinement mode. For more information, see Veri-
fication Points (p. 126).
Kriging Properties
For Kriging, the following properties are available in the Properties pane for the response surface.
Note:
For advanced options to be visible, the Show Advanced Options check box must be
selected on the Design Exploration tab in the Options window. For more information,
see Design Exploration Options (p. 22).
Meta Model
• Response Surface Type: Determines the type of response surface. This section assumes Kriging is
selected and advanced options are shown.
• Kernel Variation Type: Determines the type of kernel variation. Choices are:
– Variable: Sets the kernel variation to pure Kriging mode, which uses a correlation parameter for
each design variable.
– Constant: Sets the kernel variation to radial basis function mode, which uses a single correlation
parameter for all design variables.
Refinement
• Refinement Type: Determines whether refinement criteria are to be specified manually (points of
your choice in addition to the DOE points) or selected automatically by the response surface.
• Maximum Number of Refinement Points: Determines the maximum number of refinement points
that can be generated for use with the Kriging algorithm.
• Number of Refinement Points: Read-only property indicating the number of existing refinement
points.
• Maximum Predicted Relative Error (%): Determines the maximum predicted relative error that is
acceptable for all parameters.
• Output Variable Combinations: Determines how output variables are considered in terms of predicted relative error, which controls the number of refinement points created per iteration. Choices are:

– Maximum Output: Only the output with the largest predicted relative error is considered. Only one refinement point is generated in each iteration.

– All Outputs: All outputs are considered. Multiple refinement points are generated in each iteration.

– Sum of Outputs: The combined predicted relative error of all outputs is considered. Only one refinement point is generated in each iteration.
• Crowding Distance Separation Percentage: Advanced option determining the minimum allowable
distance between new refinement points, implemented as a constraint in the search for refinement
points.
If two candidate refinement points are closer together than the defined minimum distance,
only the first candidate is inserted as a new refinement point.
• Predicted Relative Error (%): Read-only value indicating the predicted relative error for all para-
meters.
• Converged: Read-only value indicating the state of the convergence. Possible values are Yes and
No.
• Inherit From Model Settings: Indicates if the maximum predicted relative error that you've defined at
the model level is applicable for this output parameter. This check box is selected by default.
• Maximum Predicted Relative Error (%): Displays only when the Inherit From Model Settings check
box is cleared. Determines the maximum predicted relative error that you accept for this output para-
meter. This value can be different than the maximum predicted relative error defined at the model level.
• Predicted Relative Error: Read-only value populated on update. Predicted relative error for this output
parameter.
The Convergence Curves chart is automatically generated and dynamically updated as the Kriging refinement runs. You can view the chart and its properties by selecting Refinement or a refinement point in the Outline pane.
There are two curves for each output parameter. One curve represents the percentage of the current predicted relative error. The other curve represents the maximum predicted relative error required for that parameter.
Additionally, there is a single curve that represents the maximum of the predicted relative error for
output parameters that are not converged.
During the run, you can click the red stop button in the Progress pane to interrupt the process so
that you can adjust the requested maximum error or change chart properties before continuing.
Non-Parametric Regression
Non-parametric regression (NPR) provides improved response quality and is initialized with one of
the available DOE types. The NPR algorithm is implemented in DesignXplorer as a metamodeling technique prescribed for outputs that are expected to show highly nonlinear behavior with respect to the inputs.
NPR belongs to a general class of Support Vector Method (SVM) type techniques. These are data
classification methods that use hyperplanes to separate data groups. The regression method works
similarly. The main difference is that the hyperplane is used to categorize a subset of the input sample
vectors that are deemed sufficient to represent the output in question. This subset is called the support
vector set.
The internal parameters of the response surface are fixed to constant values and are not optimized.
The values are determined from a series of benchmark tests and strike a compromise between the
response surface accuracy and computational speed. For a large family of problems, the current settings
provide good results. However, for some problem types (like ones dominated by flat surfaces or lower
order polynomials), some oscillations might be noticed between the DOE points.
To circumvent this, you can use a larger number of DOE points or, depending on the fitness landscape
of the problem, use one of the several optimal space-filling DOEs provided. In general, it is suggested
that the problems first be fitted with a quadratic response surface and the NPR fitting adopted only
when the goodness of fit from the quadratic response surface model is unsatisfactory. This ensures
that the NPR is only used for problems where low order polynomials do not dominate.
Neural Network
A neural network is a mathematical technique based on the natural neural network in the human
brain.
To interpolate a function, a network with three levels (input, hidden, and output) is built and the
connections between them are weighted.
Each arrow is associated with a weight. Each ring is called a cell (like a neuron).
If the inputs are xi, the hidden level contains the functions gj(xi). The output solution has the form

y = Σj wjk K(gj(xi))

where K is a predefined function, such as the hyperbolic tangent or an exponential-based function, chosen to obtain something similar to the binary behavior of the electrical brain signal (like a step function).
The function is continuous and differentiable.
The weight functions (wjk) are determined by an algorithm that minimizes (as in the least squares method) the distance between the interpolation and the known values (design points). This is called learning. The error is checked at each iteration with the design points that are not used for learning. Learning design points therefore need to be kept separate from error-checking design points.
The error decreases and then increases when the interpolation order is too high. The minimization
algorithm is stopped when the error is the lowest.
This method uses a limited number of design points to build the approximation. It works better when
the number of design points and the number of intermediate cells are high. It can give interesting
results with several parameters.
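The following minimal Python sketch (not DesignXplorer's implementation) illustrates these ideas: a single hidden level of tanh cells is fitted by gradient descent on the least-squares error over the learning points, and the weights with the lowest error on the held-out error-checking points are kept (early stopping). The data and network size are arbitrary.

# Minimal illustration of the ideas above; not DesignXplorer's implementation.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, (40, 1))                     # design points (one input parameter)
y = np.sin(3.0 * x) + 0.05 * rng.normal(size=x.shape)   # observed output with a little noise

learn, check = slice(0, 30), slice(30, 40)               # learning vs error-checking points
n_cells = 8                                              # number of cells in the hidden level
W1 = rng.normal(scale=0.5, size=(1, n_cells)); b1 = np.zeros(n_cells)
W2 = rng.normal(scale=0.5, size=(n_cells, 1)); b2 = np.zeros(1)

def forward(xs):
    h = np.tanh(xs @ W1 + b1)        # hidden cells with a tanh "K" function
    return h @ W2 + b2, h

best = (np.inf, None)
lr = 0.05
for it in range(5000):
    pred, h = forward(x[learn])
    err = pred - y[learn]
    # Backpropagate the least-squares error to get descent directions for the weights.
    gW2 = h.T @ err / len(err); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = x[learn].T @ dh / len(err); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    check_err = float(np.mean((forward(x[check])[0] - y[check]) ** 2))
    if check_err < best[0]:          # keep the weights with the lowest checking error
        best = (check_err, (W1.copy(), b1.copy(), W2.copy(), b2.copy()))

W1, b1, W2, b2 = best[1]             # restore the early-stopping weights
print(f"lowest error on checking points: {best[0]:.4f}")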
Meta Model
• Response Surface Type: Determines the type of response surface. This section assumes Neural
Network is selected.
• Number of Cells: Determines the number of neurons to use in the hidden layer of the neural network.
Sparse Grid
Sparse Grid provides refinement capabilities for continuous parameters, including those with manu-
facturable values. It does not support discrete parameters. Sparse Grid uses an adaptive response
surface, which means that it refines itself automatically. A dimension-adaptive algorithm allows it to
determine which dimensions are most important to the objective functions, thereby reducing com-
putational effort.
Sparse Grid is an adaptive algorithm based on a hierarchy of grids. The DOE type Sparse Grid Initial-
ization generates a DOE matrix containing all the design points for the smallest required grid: the
level 0 (the point at the current values) plus the level 1 (two points per input parameter). If the ex-
pected level of quality is not met, the algorithm further refines the grid by building a new level in
the corresponding directions. This process is repeated until one of the following occurs:
• The value specified for Maximum Depth is reached in one level. The maximum depth is the maximum
number of levels that can be created in the hierarchy. Once the maximum depth for a direction is reached,
there is no further refinement in that direction.
The relative error for an output parameter is the error between the predicted and the observed output
values, normalized by the known maximum variation of the output parameter at this step of the
process. Because there are multiple output parameters to process, DesignXplorer computes the worst
relative error value for all of the output parameters and then compares this against the value for
Maximum Relative Error. As long as at least one output parameter has a relative error greater than
the expected error, the maximum relative error criterion is not validated.
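A small illustrative helper for this convergence test (the names and data are hypothetical, not a DesignXplorer API):

# Sketch of the convergence test described above (illustrative, not product code).
import numpy as np

def worst_relative_error(predicted, observed, max_relative_error_pct):
    """predicted/observed: arrays of shape (n_points, n_outputs).
    Returns the worst relative error (%) over all outputs and whether the criterion is met."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    variation = observed.max(axis=0) - observed.min(axis=0)   # known variation per output
    variation[variation == 0.0] = 1.0                         # guard against flat outputs
    rel_err = np.abs(predicted - observed) / variation        # relative error per point and output
    worst = 100.0 * rel_err.max()                             # worst value over all outputs
    return worst, worst <= max_relative_error_pct

observed = [[1.00, 10.0], [2.00, 14.0], [1.50, 12.0]]
predicted = [[1.02, 10.1], [1.98, 13.9], [1.52, 12.2]]
worst, ok = worst_relative_error(predicted, observed, max_relative_error_pct=5.0)
print(f"worst relative error: {worst:.1f}% -> criterion met: {ok}")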
The Sparse Grid response surface requires the DOE type Sparse Grid Initialization. If you've defined another DOE type, the Sparse Grid response surface cannot be updated. However, the DOE type Sparse Grid Initialization can be used by other types of response surfaces.
Because Sparse Grid uses an auto-refinement algorithm, it is not possible to add refinement points
manually. As a result, for Sparse Grid, the following behaviors occur:
• The Insert as Refinement Point operation is not available on the right-click context menu. Also, if you use commands to attempt to insert a refinement point, an error occurs.
1. In the Outline pane for the Design of Experiments cell, select Design of Experiments.
2. Under Design of Experiments in the Properties pane, set Design of Experiments Type to Sparse Grid Initialization.
3. Either preview or update the DOE to validate your selections. This causes the Response Surface Type
property for Response Surface cells in downstream Design Exploration systems to default to Sparse
Grid.
Note:
If the Design of Experiments cell is shared by multiple systems, the DOE definition ap-
plies to the Response Surface cells for each of those systems.
1. In the Outline pane for the Response Surface cell, select Response Surface.
2. In the Properties pane under Meta Model, verify that Response Surface Type is set to Sparse Grid.
• For Maximum Relative Error (%), enter the value to be allowed for all of the output parameters.
The smaller the value, the more accurate the response surface.
• For Maximum Depth, enter the number of grid levels that can be created in a given direction. If
needed, you can adjust this value later, according to your update results.
• For Maximum Number of Refinement Points, enter the maximum number of refinement points
that can be generated as part of the refinement process.
For more information, see Sparse Grid Auto-Refinement Properties (p. 108).
At any time, you can click the red stop button in the Progress pane to interrupt the Sparse Grid
refinement so that you can change properties or see partial results. The refinement points that
have already been calculated are visible, and the displayed charts are based on the response surface's
current level of refinement. The refinement points that have not been calculated display the Update
Required icon to indicate which output parameters must be updated.
If a design point fails during the refinement process, Sparse Grid refinement stops in the area where
the failed design point is located, but refinement continues in the rest of the parameter space to
the degree possible. Failed refinement points display the Update Failed, Update Required icon.
You can attempt to pass the failed design points by updating the response surface once again.
The auto-refinement process continues until the Maximum Relative Error (%) objective is attained,
the Maximum Depth limit is reached for all input parameters, the Maximum Number of Refinement
Points is reached, or the response surface converges. If the Current Relative Error of every output parameter is lower than the Maximum Relative Error (%) defined for it, the refinement is converged.
Note:
If Sparse Grid refinement does not appear to be converging, you can do the following
to accept the current level of convergence:
1. In the Progress pane, click the red stop button to interrupt the update.

2. Either set the value for Maximum Relative Error (%) slightly above the value for Current Relative Error (%) or set the value for Maximum Number of Refinement Points equal to the value for Number of Refinement Points.
Note:
Because Sparse Grid uses an auto-refinement process, you cannot add refinement points
manually as with other DOE types. To generate verification points automatically, under
Verification Points in the Properties pane for the Response Surface cell, select the
Generate Verification Points check box. Then, update the response surface.
For more information on assessing the quality of a response surface, see Goodness of Fit for Output
Parameters in a Response Surface (p. 117) and Verification Points (p. 126).
• Maximum Relative Error (%): Maximum relative error allowable for the response surface. This value is
used to compare against the worst relative error obtained for all output parameters. So long as any
output parameter has a relative error greater than the expected relative error, this criterion is not valid-
ated. This property is a percentage. The default is 5.
• Maximum Depth: Maximum depth (number of hierarchy levels) that can be created as part of the Sparse
Grid hierarchy. Once the number of levels defined in this property is reached for a direction, refinement
does not continue in that direction. The default is 4. A minimum value of 2 is required because the
Sparse Grid Initialization DOE type already generates levels 0 and 1.
• Maximum Number of Refinement Points: Maximum number of refinement points that can be generated
for use with the Sparse Grid algorithm. The default is 1000.
• Converged: Indicates the state of the convergence. Possible values are Yes and No.
The Convergence Curves chart is automatically generated and dynamically updated as the Sparse Grid refinement runs. You can view the chart and its properties by selecting Refinement or a refinement point in the Outline pane.
There is one curve per output parameter to represent the current relative error (in percentage),
and one curve to represent the maximum relative error for all direct output parameters. Auto-re-
finement stops when the maximum relative error required (represented as a horizontal threshold
line) has been met.
You can disable one or several output parameter curves and keep only the curve of the maximum
relative error.
During the run, you can click the red stop button in the Progress pane to interrupt the process so
that you can adjust the requested maximum error or change chart properties before continuing.
The following sections provide recommendations on making your initial selection of a meta-modeling
algorithm, evaluating the response surface performance in terms of goodness of fit, and changing the
response surface as needed to improve the goodness of fit:
Working with Response Surfaces
Changing the Response Surface
Refinement Points
Performing a Manual Refinement
The number of design points should always exceed the number of inputs. Ideally, you should have
at least twice as many design points as inputs. Most of the standard DOE types are designed to
generate a sufficient number, but custom DOE types might not.
Only keep the input parameters that are playing a major role in your study. Disable any input
parameters that are not relevant by clearing their check boxes in the DOE's Outline pane. You can
determine which are relevant from a correlation or sensitivity analysis.
Requirements and recommendations regarding the number of input parameters vary according to
the DOE type selected. For more information, see Number of Input Parameters for DOE Types (p. 77).
To specify that verification points are to be created when the response surface is generated, select
the Generate Verification Points check box in the response surface's Properties pane.
If you want to manually select a response surface type, however, it is usually a good practice to begin
with the Standard Response Surface - Full 2nd-Order Polynomials response surface. Once the response
surface is built, you can assess its quality by reviewing the goodness of fit for each output parameter.
1. In the Project Schematic, right-click the Response Surface cell and select Edit.
2. In the Outline pane, under Quality, select the goodness of fit object.
The Table pane displays goodness-of-fit metrics for each output parameter. The Chart pane displays
the Predicted vs. Observed chart.
3. Review the goodness of fit, paying particular attention to the Coefficient of Determination property.
The closer this value is to 1, the better the response surface. For more information, see Goodness of Fit
Criteria (p. 117).
• If the goodness of fit is acceptable, add verification points and then recheck the goodness of fit. If
needed, you can further refine the response surface manually. For more information, see Performing
a Manual Refinement (p. 113).
• If the goodness of fit is poor, try changing your response surface. For more information, see Changing
the Response Surface (p. 111).
1. In the Project Schematic, right-click the Response Surface cell and select Edit.

2. In the Outline pane, select the Response Surface cell.

3. In the Properties pane under Meta Model, select a different choice for Response Surface Type.

4. Update the response surface.

5. Review the goodness of fit for each output parameter to see if the new response surface provides a better fit.
Once the goodness of fit is acceptable, you should add verification points and then recheck the
goodness of fit. If needed, you can further refine the response surface manually. For more information,
see Performing a Manual Refinement (p. 113).
Kriging
If Standard Response Surface Full 2nd-Order Polynomials does not produce a response surface with
an acceptable goodness of fit, try Kriging. After updating the response surface with this method, recheck
the goodness of fit. For Kriging, the Coefficient of Determination should be equal to 1. If it is not, the model
is over-constrained and not suitable for refinement via Kriging. Kriging fits the response surface through
all the design points, which means that many of the other metrics are always perfect. Therefore, it is
particularly important to run verification points with Kriging.
Non-Parametric Regression
If the model is over-constrained and not suitable for refinement via Kriging, try switching to Non-Para-
metric Regression.
If you decide to use one of the other response surface types available, consider your selection carefully
to ensure that the response surface suits your specific purpose. Characteristics for response surface
types follow.
Genetic Aggregation
• Effective when the variation of the output is smooth with regard to input parameters
Kriging
Note:
Kriging is an interpolation that matches the points exactly. Always use verification
points to check the goodness of fit.
Non-Parametric Regression
Neural Network
Sparse Grid
Refinement Points
Refinement points are points added to your model to enrich and improve your response surface.
They can either be generated automatically with the response surface update or added manually as
described in Performing a Manual Refinement (p. 113). As with design points, DesignXplorer must
perform a design point update (a real solve) to obtain the output parameters for the refinement
points.
On update, the refinement points are used to build the response surface and are taken into account
for the generation of verification points. Along with DOE points, refinement points are used as
learning points in calculations for the goodness of fit.
All refinement points are shown in the refinement points table, which you access by selecting Refinement → Refinement Points in the Outline pane.
After a first optimization study, you can insert the best candidate design as a refinement point to
improve the response surface quality in this area of the design space. To create a new refinement
point, you can use one of the following methods:
1. Right-click a response surface verification point, a response point, or an optimization candidate point and select Insert as Refinement Point.
The point is added to the refinement points table and is used in the next response surface
generation.
2. In the Table pane, enter input parameter values into the bottom row.
Note:
By default, output values are not editable in the table. However, you can change
the editing mode of the table. For more information, see Editable Output Parameter
Values (p. 300).
1. In the Table pane, right-click in the table and select Import Refinement Points.
2. Browse to and select the CSV file containing the refinement points to import. For more information,
see Importing Data from a CSV File (p. 301).
Note:
You can also copy information from a CSV file and paste it into the refinement points table.
To update the refinement points and then rebuild the response surface to take these points into ac-
count, click Update on the toolbar. Each out-of-date refinement point is updated, and the response
surface is rebuilt from the DOE points and refinement points.
Min-Max Search
The Min-Max search examines the entire output parameter space from a response surface to approximate
the minimum and maximum values of each output parameter. If the Min-Max Search check box is se-
lected, a Min-Max search is performed each time the response surface is updated. Clearing the check
box disables the Min-Max search. You can disable this feature in cases where the search could be very
time-consuming, such as when there are a large number of input parameters or when there are discrete
input parameters.
Note:
If you have discrete input parameters, an alert displays before a Min-Max search is per-
formed, reminding you that the search can be time-consuming. If you do not want this
message to display, you can clear the Confirm if Min-Max Search can take a long time
check box on the Design Exploration tab in the Options window. For more information,
see Design Exploration Options (p. 22).
The algorithm used to search the output parameter space for the minimum and maximum values depends
on the input parameters.
• If at least one input parameter is continuous with manufacturable values, MISQP is used.
Before updating your response surface, set the options for the Min-Max search. In the Outline pane,
select Min-Max Search.
When there are only discrete parameters, in the Properties pane, Number of Initial Samples and
Number of Start Points are not available because they are not applicable. DesignXplorer computes
the number of parameter combinations and then sorts the sample points to get the minimum and
maximum values. For example, if there are only two discrete input parameters (with four and three
levels respectively), 12 sample points (4*3) are sorted to get the minimum and maximum values.
When there is at least one continuous parameter, in the Properties pane, Number of Initial Samples
and Number of Start Points are available.
• For Number of Initial Samples, enter the size of the initial sample set to generate in the space of
continuous input parameters.
• For Number of Start Points, enter the number of starting points that the Min-Max search algorithm
is to use. Using more starting points lengthens the search times.
If all input parameters are continuous, one search is done for the minimum and one for the maximum.
When the number of starting points is greater than one, two searches (minimum and maximum) are
done for each starting point. To find the minimum or maximum of an output, the generated sample
points are sorted and the n first points of the sort are used as the starting points. The search algorithm
is then run twice for each starting point, once for the minimum and once for the maximum.
Example 1
If Number of Initial Samples is set to 100, DesignXplorer generates 100 initial sample points and then sorts them to get the starting points. For each starting point, a local optimization is done.

If Number of Start Points is set to 3, six local optimizations are done (three for the minimum and three for the maximum).
When there are discrete input parameters, the number of searches increases by the number of parameter
combinations. There are two searches per combination, one for the minimum and one for the maximum.
Example 2
• There are two discrete input parameters (with three and four levels respectively).
With multiple starting points, the number of searches is multiplied accordingly. For three starting points,
six local optimizations are done (three for the minimum and three for the maximum) for each combination.
The total number of discrete combinations is equal to 12 (3*4). For each combination, DesignXplorer
generates 100 initial sample points in the space of the continuous parameters and where discrete
parameter values are fixed to the discrete combination values. Consequently, 12*100 initial sample
points are generated and 12*6 local optimizations are done.
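The bookkeeping in this example can be sketched as follows (an illustrative helper, not a DesignXplorer API). The numbers reproduce Example 2, assuming 100 initial samples and 3 starting points as stated in the examples above.

# Illustrative count of the searches described above (names are hypothetical).
def min_max_search_count(discrete_levels, n_start_points, n_initial_samples, has_continuous=True):
    """discrete_levels: list with the number of levels of each discrete input parameter."""
    n_combinations = 1
    for levels in discrete_levels:
        n_combinations *= levels
    if not has_continuous:
        # Discrete only: the parameter combinations are simply sorted, no local search.
        return {"initial_samples": n_combinations, "local_searches": 0}
    # One minimum search and one maximum search per starting point, per discrete combination.
    return {"initial_samples": n_combinations * n_initial_samples,
            "local_searches": n_combinations * n_start_points * 2}

# Example 2: two discrete parameters (3 and 4 levels), 100 initial samples, 3 start points.
print(min_max_search_count([3, 4], n_start_points=3, n_initial_samples=100))
# -> {'initial_samples': 1200, 'local_searches': 72}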
Search Results
Once your response surface is updated, selecting Min-Max Search in the Outline pane displays the
sample points that contain the minimum and maximum values calculated for each output parameter
in the response surface table. The minimum and maximum values in the output parameter Properties
pane are also updated based on the results of the search. If the response surface is updated in any way,
including changing the fitting for an output parameter or performing a refinement, a new Min-Max
search is performed.
You can save the sample points shown in the response surface table by selecting Insert as Design
Points from the context menu. You can also save sample points as response points (or as refinement
points to improve the response surface) by selecting Explore Response Surface at Point from the
context menu.
The sample points obtained from the Min-Max search are used by the Screening optimization. If you
run a Screening optimization, the samples are automatically taken into account in the sample set used
to run or initialize the optimization. For more information, see Performing a Screening Optimiza-
tion (p. 165).
You can disable the Min-Max search by clearing the check box in the Enabled column in the Outline
pane. If you disable the Min-Max search, no search is done when the response surface is updated. If
you disable the search after performing an initial search, the results from the initial search remain in
the output parameter properties and are shown in the Table pane for the response surface when you
select Min-Max Search in the Outline pane.
Note:
• If the Design of Experiments cell is solved but the Response Surface cell is not solved, the
minimum and maximum values for each output parameter are extracted from the DOE
solution's design points and displayed in the Properties pane for the output parameters.
• For discrete parameters and continuous parameters with manufacturable values, there is
only one minimum and maximum value per output parameter, even if a discrete parameter
has many levels. There is not one Min-Max value set per combination of parameter values.
Quality Metrics for Response Surfaces
A response surface is built from design points in the DOE and refinement points, which are collectively
called learning points. Calculations for the goodness of fit compare the response surface outputs with
the DOE results used to create them.
For response surface types that try to find the best fit of the response surface to DOE points (such
as Standard Response Surface - Full 2nd-Order Polynomial), you can get an idea of how well the fit
was accomplished. However, for interpolated response surface methods that force the response surface
to pass through all of the DOE points (such as Kriging), the goodness of fit usually appears to be
perfect. In this case, goodness of fit indicates that the response surface passed through the DOE
points used to create it, but it does not indicate whether the response surface captures the parametric
solution.
If any of the input parameters is discrete, a different response surface is built for each combination
of the discrete levels and the quality of the response surface might be different from one configuration
to another.
Goodness of fit is closely related to the response surface type used to generate the response surface.
If the goodness of fit is not of the expected quality, you can try to improve it by changing the response
surface. For more information, see Changing the Response Surface (p. 111).
To add a new goodness of fit object, right-click Quality and select Insert Goodness of Fit. Right-click
the object to copy and paste, delete, or duplicate.
Note:
Criteria marked as advanced options are visible in the goodness of fit table only if you've
selected the Show Advanced Options check box on the Design Exploration tab in the
Options window. For more information, see Design Exploration Options (p. 22).
Several criteria are calculated for the points taken into account in the construction of the response
surface. The mathematical representations in the descriptions for these criteria use the following
notation:
The points used to create the response surface are likely to contain variation for each output
parameter, unless all output values are the same, which results in a flat response surface. This
variation is illustrated by the response surface that is generated. If the response surface were
to pass directly through each point, which is the case for the Kriging response surface, the
coefficient of determination would be 1, meaning that all variation is explained.
However, in some situations, you can have a larger value and still have a good response surface.
For example, this can be true when the mean of the output values is close to zero.
However, in some situations, you can have a larger value and still have a good response surface.
For example, this can be true when some of the output values are close to zero. You can obtain
100% of relative error if the observed value = 1e-10 and the predicted value = 1e-8. However,
if the range of output values is 1, this error becomes negligible.
The relative maximum absolute error and the relative average absolute error correspond to the
maximum error and average absolute error scaled by the standard deviation. For example, the
relative root mean square error becomes negligible if both of these values are small.
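The following sketch computes these quantities with their standard textbook definitions (which may differ in detail from DesignXplorer's internals) for a small set of observed and predicted values.

# Illustrative computation of common goodness-of-fit quantities discussed above.
import numpy as np

def goodness_of_fit(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = observed - predicted
    ss_res = float(np.sum(residual ** 2))
    ss_tot = float(np.sum((observed - observed.mean()) ** 2))
    std = float(observed.std(ddof=1))
    return {
        "coefficient_of_determination": 1.0 - ss_res / ss_tot,
        "root_mean_square_error": float(np.sqrt(np.mean(residual ** 2))),
        # "Relative" errors here are scaled by the standard deviation of the observed values.
        "relative_maximum_absolute_error_pct": 100.0 * float(np.max(np.abs(residual))) / std,
        "relative_average_absolute_error_pct": 100.0 * float(np.mean(np.abs(residual))) / std,
    }

obs = [10.0, 12.5, 15.0, 17.5, 20.0]
pred = [10.2, 12.3, 15.1, 17.4, 20.2]
for name, value in goodness_of_fit(obs, pred).items():
    print(f"{name}: {value:.4f}")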
You can display or hide the verification points in the Table and Chart panes by changing the settings in the Properties pane.
If a response surface has discrete input parameters, you can view the response surface for each
combination of discrete levels by selecting the discrete input values in the Properties pane.
For a Genetic Aggregation response surface, goodness of fit is also calculated for learning points
by using the cross-validation technique. The advantages of cross-validation are that it assesses:
• Richness of the DOE (instead of only measuring the quality of the fit)
When cross-validation produces bad metrics, it means that there are not enough points.
For more information on how goodness of fit is calculated for the different types of response surface
points, see Goodness of Fit Calculations (p. 125).
Chart
• Display Parameter Full Name: Specifies whether to display the full parameter name.
Input Parameters
This category is shown only when there are discrete input parameters. You can select a discrete
value to view the associated response surface.
General
This category is shown only when Response Surface Type is set to Standard Response Surface
- Full 2nd-Order Polynomial and the Show Advanced Options check box is selected on the
Design Exploration tab in the Options window. For more information, see Design Exploration
Options (p. 22).
The single advanced property under this category, Confidence Level, is used post-modeling as
an input for assessing the goodness of fit. The confidence level indicates how likely it is that the estimated value of an output parameter falls within the confidence interval. The default setting is 0.95, which
means that the interval is calculated so that the true value will fall within the interval 95 out of
100 times. However, when this advanced option is shown, you can set any value between 0 and
1.
Output Parameters
This category displays all output parameters so that you can indicate which to display.
For a Standard Response Surface - Full 2nd-Order Polynomials response surface, you can generate
the Advanced Goodness of Fit report (p. 122) for the selected output parameter.
In the goodness of fit table, each parameter is rated on how close it comes to the ideal value
for each goodness of fit metric. The rating is indicated by the number of gold stars or red
crosses next to the parameter. The worst rating is three red crosses. The best rating is three
gold stars.
Note:
The root mean square error has no rating because it is not a bounded characteristic.
When calculated for different types of response surface points, goodness-of-fit metrics quantify
different aspects of your response surface. You can interpret your goodness of fit table as follows:
For the Genetic Aggregation response surface, goodness-of-fit metrics calculated for learning
points via the cross-validation technique provide an especially effective way to assess the sta-
bility of your response surface without needing to add new verification points. If the metrics
are good, you can be confident in the quality of your model. If the metrics are not good, you
know that you need to enrich your model by adding new refinement points.
The Advanced Goodness of Fit report is a text-based report that shows goodness-of-fit metrics
for the response surface of a selected output parameter. This report can help you to assess
whether your response surface is reliable and accurate enough for you to proceed with confid-
ence. It is available for an output parameter only in a response surface generated when Re-
sponse Surface Type is set to Standard Response Surface - Full 2nd-Order Polynomials.
To generate the Advanced Goodness of Fit report for a given output parameter:
1. In the Outline pane for the response surface, select Quality → Goodness of Fit.
2. In the Table pane, right-click the column for the desired output parameter and select Generate
Advanced Goodness of Fit Report.
Once generated, the Advanced Goodness of Fit report displays in a separate window. The fol-
lowing properties provide general information about analyzing the goodness of fit:
Regression Model
Type of polynomial regression. Cross Quadratic Surface means that the response surface uses
constant terms, linear terms (X), pure quadratic terms (Xi*Xi ) and cross-quadratic terms (Xi*Xj
). You cannot change this property.
Input Transformation
Applied transformation on continuous input parameters before solving the response surface. To
change the transformation type, advanced properties (p. 22) must be shown. In the Outline pane
for the Response Surface cell, select Response Surface. Then, in the Properties pane under
Meta Model, change the selection for Inputs Transformation Type. This property and the sub-
sequent advanced property, Inputs Scaling, apply to all continuous input variables, with and
without manufacturable values. If you clear the check box for Inputs Scaling, no scaling of the input parameter data occurs.
Output Transformation
Applied transformation on the given output parameter before solving the response surface. To
change the transformation type for the output parameter, in the Outline pane for the Response
Surface cell, select it. Then, in the Properties pane under Output Settings, change the selection
for Transformation Type. Transformations do not apply to derived output parameters. If advanced
properties (p. 22) are shown, you can also change the Scaling property. When you clear this check
box, no scaling of the data for this output parameter occurs.
Analysis Type
Algorithm used to select the relevant regression terms. For Modified Linear Forward Stepwise
Regression, the individual regression terms are iteratively added to the regression model if they
are found to cause a significant improvement of the regression results. A partial F-test is used to
determine the significance of the individual regression terms. If a regression term is not significant,
it is ignored.
The following properties provide all the scaling models, transformation models, and regression
term coefficients used to build the regression model:
• Std. Dev. of Coefficient: Statistical estimation of the dispersion of the coefficient. A low
standard deviation indicates stability of the coefficient. A high standard deviation indicates that
the coefficient has a wide range of potential values.
• Prob. Coeff. =0: Corresponds to the probability that the coefficient is zero. If this probability
is high, it suggests that the coefficient is insignificant. More significant coefficients have a lower
probability of being zero.
• Confidence Interval [Lower Bound; Upper Bound]: Corresponds to the confidence interval
of each coefficient. For example, if the confidence level is 0.95, there is a 95% probability that
the coefficient is in this range. The smaller the confidence interval, the more likely it is that the
coefficient is accurate.
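For a simple ordinary least-squares fit, the coefficient statistics listed above can be reproduced with standard formulas, as in the following illustrative sketch (the data and model are arbitrary, and the details may differ from DesignXplorer's implementation).

# Minimal sketch of the coefficient statistics above for an ordinary least-squares fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, 30)
y = 2.0 + 3.0 * x + rng.normal(scale=0.2, size=30)    # true model: y = 2 + 3x + noise

X = np.column_stack([np.ones_like(x), x])             # design matrix: constant + linear term
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = X.shape[0] - X.shape[1]
sigma2 = float(np.sum((y - X @ beta) ** 2)) / dof     # residual variance estimate
cov = sigma2 * np.linalg.inv(X.T @ X)                 # covariance matrix of the coefficients

confidence_level = 0.95
t_crit = stats.t.ppf(0.5 + confidence_level / 2.0, dof)
for name, b, var in zip(["constant", "x"], beta, np.diag(cov)):
    std_dev = float(np.sqrt(var))
    t_stat = b / std_dev
    prob_zero = 2.0 * stats.t.sf(abs(t_stat), dof)    # probability that the coefficient is zero
    lower, upper = b - t_crit * std_dev, b + t_crit * std_dev
    print(f"{name}: coeff={b:.3f} std_dev={std_dev:.3f} "
          f"prob_coeff_0={prob_zero:.3g} CI=[{lower:.3f}; {upper:.3f}]")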
The following properties provide statistical data that enable you to measure the stability of the
regression model and to understand which input parameters are significant.
• Residual Value: Difference between the sample value and the approximated value.
• Approx. Std Dev: Standard deviation of the predicted value at this sample point. If the standard
deviation is small, the predicted value is statistically stable. Consequently, you can have confid-
ence in the predicted values in the area close to this point.
• Lower Bound and Upper Bound: Bounds of the confidence interval of the approximated value.
For example, if the confidence level is 0.95, there is a 95% probability that the approximated
value is in this range. The smaller the confidence interval, the more likely it is that the approx-
imated value is accurate.
• Hat Matrix Diagonal: These elements are the leverages that describe the influence each ob-
served value has on the fitted value for the same observation.
• Student's Residual: Corresponds to the residual divided by an estimate of its standard deviation.
• Student's Deleted Residual: Student's residual with the i-th observation removed.
• Cook's Distance: Measures the effect of the i-th observation on all fitted values. A point with
a value greater than 1 can be considered influential.
• P-value: Probability of obtaining a test statistic at least as extreme as the one that was actually
observed, assuming that the null hypothesis is true. This value represents the error percentage
you can make if you reject the null hypothesis, where the null hypothesis is that the sample
point has not influenced the observed result. Traditionally, following Fisher, one rejects the null
hypothesis if the p-value is less than or equal to a specified significance level, often 0.05, or
more stringent values, such as 0.02 or 0.01.
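The per-point diagnostics listed above follow standard regression formulas; the sketch below computes them for a small ordinary least-squares example (illustrative only, not necessarily identical to DesignXplorer's implementation).

# Illustrative per-point diagnostics matching the list above, using standard formulas.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, 25)
y = 1.0 + 4.0 * x + rng.normal(scale=0.3, size=25)

X = np.column_stack([np.ones_like(x), x])
n, p = X.shape
H = X @ np.linalg.inv(X.T @ X) @ X.T                  # hat matrix
leverage = np.diag(H)                                 # Hat Matrix Diagonal
residual = y - H @ y                                  # Residual Value
mse = float(np.sum(residual ** 2)) / (n - p)

student = residual / np.sqrt(mse * (1.0 - leverage))                   # Student's Residual
mse_deleted = (np.sum(residual ** 2) - residual ** 2 / (1.0 - leverage)) / (n - p - 1)
student_deleted = residual / np.sqrt(mse_deleted * (1.0 - leverage))   # Student's Deleted Residual
cooks = student ** 2 * leverage / (p * (1.0 - leverage))               # Cook's Distance

print("max leverage:", float(leverage.max()))
print("max |studentized residual|:", float(np.abs(student).max()))
print("max Cook's distance:", float(cooks.max()))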
Learning Points
For learning points, goodness of fit measures the quality of the response surface interpolation. To
calculate the goodness of fit, the error between the predicted and observed values is calculated
for each learning point.
Verification Points
For verification points, goodness of fit measures the quality of the response surface prediction. To
calculate the goodness of fit, the verification points are placed in locations that maximize the distance
to the learning points. After the verification points are calculated with a design point update, the
differences between verification point values and predicted values are calculated. For more inform-
ation, see Verification Points (p. 126).
With cross-validation, each predicted value is evaluated on a model built with all DOE points except the associated observed value. For example, the first response surface is built without the first design point, the second response surface is built without the second design point, and so on. The following notation is used:

• (xi, yi): the i-th learning point, where xi and yi are respectively the observed input and output parameter values.

• ŷ-i: the response surface built with all the learning points except the i-th learning point.

Standard goodness of fit on learning points compares yi to ŷ(xi), where ŷ is the response surface built with all the learning points. Goodness of fit based on cross-validation compares yi to ŷ-i(xi).
For more information, see Genetic Aggregation (p. 317) in the theory section.
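The following sketch illustrates the difference between the two comparisons, with a simple quadratic fit standing in for the response surface (illustrative only).

# Sketch of the leave-one-out comparison described above, using a polynomial fit
# as a stand-in response surface (illustrative, not the Genetic Aggregation algorithm).
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-1.0, 1.0, 15))
y = x ** 2 + 0.05 * rng.normal(size=x.size)          # learning points (x_i, y_i)

def fit_predict(x_train, y_train, x_eval, degree=2):
    return np.polyval(np.polyfit(x_train, y_train, degree), x_eval)

# Standard goodness of fit: compare y_i to the surface built from all points.
y_hat = fit_predict(x, y, x)

# Cross-validation: compare y_i to a surface built without the i-th point.
y_cv = np.array([
    fit_predict(np.delete(x, i), np.delete(y, i), x[i]) for i in range(x.size)
])

def r2(obs, pred):
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(f"R2 on learning points:    {r2(y, y_hat):.4f}")
print(f"R2 from cross-validation: {r2(y, y_cv):.4f}")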
Verification Points
Verification points enable you to verify that the response surface accurately approximates the output
parameter values. They compare the predicted and observed values of the output parameters. After
the response surface is created, the verification points are placed in locations that maximize the dis-
tance from existing DOE points and refinement points (Optimal Space-Filling algorithm). Verification
points can also be added manually or imported from a CSV file. For more information, see Creating
Verification Points (p. 126).
A design point update is a real solve that calculates each verification point. These verification point
results are then compared with the response surface predictions and the difference is calculated.
Verification points are useful in validating any type of response surface. In particular, however, you
should always use verification points to validate the accuracy of interpolated response surfaces, such
as Kriging or Sparse Grid.
1. In the Outline pane for the response surface, select Quality → Verification Points.
2. In the verification points table, enter the values of the input parameters into the New Verification
Point row.
3. Update the verification points and goodness-of-fit metrics by right-clicking the row and selecting
Update.
• The verification points table shows the input and output values for each verification point generated.
• The Predicted vs Observed chart shows the values predicted from the response surface versus the values
observed from the design points. For more information, see Predicted vs. Observed Chart (p. 127).
Verification points are not used to build the response surface until they are turned into refinement
points and the response surface is recalculated. If a verification point reveals that the current response
surface is of a poor quality, you can insert it as a refinement point so that it is taken into account
to improve the accuracy of the response surface.
1. In the Outline pane for the response surface, select Quality → Verification Points.
2. In the verification points table, right-click the verification point and select Insert as Refinement Point.
Note:
The Insert as Refinement Point option is not available for Kriging and Sparse Grid
response surfaces.
To view the Predicted vs. Observed chart, in the Outline pane for the response surface, under Quality,
select either Goodness Of Fit or Verification Points. A chart is available for each goodness of fit object.
The display is determined by your selections in the Properties pane.
By default, all output parameters are displayed on the chart, and the output values are normalized.
However, if only one output parameter is plotted, the output values are not normalized.
Verification points are not used in the generation of the response surface. Consequently, if they appear
close to the diagonal line, the response surface is correctly representing the parametric model. Oth-
erwise, not enough data is provided for the response surface to detect the parametric behavior of
the model. You must refine the response surface.
If you position the mouse cursor over a point of the chart, the corresponding parameter values appear
in the Properties pane, including the predicted and observed values for the output parameters.
If you right-click a point on the chart, you can select from the context menu options available for this
point. For example, you can insert the point as a refinement point or response point in the response
surface table, which is a good way to improve the response surface around a verification point with
an insufficient goodness of fit.
When a response surface is updated, one response point and one of each of the chart types are created
automatically. You can insert as many response points and charts as you want. To add a new chart:
1. In the Outline pane, right-click the response point under which you want to add the chart.

2. Select the insert option for the type of chart that you want to add.
If an instance of the chart already exists for any response point in the Outline pane, additional instances
of the chart have numbers appended to their names. For example, the second and third instances of
a Response chart are named Response 1 and Response 2.
To duplicate a chart that already exists in the Outline pane, right-click the chart and select Duplicate.
This operation creates a duplicate chart under the same response point.
Note:
Chart duplication triggers a chart update. If the update succeeds, both the original chart and
the duplicate are up-to-date.
Once you've created a chart, you can change the name of a chart cell by double-clicking it and entering
the new name. This does not affect the title of the chart, which is set as part of the chart properties.
You can also save a chart as a graphic by right-clicking it and selecting Save Image As. For more in-
formation, see Saving a Chart in the ANSYS Workbench documentation.
Each chart provides the ability to visually explore the parameter space using options provided in the
chart's Properties pane. For discrete parameters and continuous parameters with manufacturable values,
you use drop-down menus. For continuous variables, you use sliders. The sliders allow you to modify
an input parameter value and view its effect on the displayed output parameter.
All of the Response charts under a response point in the Chart pane use the same input parameter
values because they are all based on the current parameter values for that response point. Thus, when
you modify the input parameter values, the response point and all of its charts are refreshed to take
the new values into account.
Once your inputs and output are selected, you can use the sliders or enter values to change the values
of the input parameters that are not selected to explore how these parameters affect the shape and
position of the curve.
Additionally, you can opt to display all the design points currently in use (from the DOE and the response surface refinement) by selecting the Show Design Points check box in the Properties pane.
With a small number of input parameters, this option can help you to evaluate how closely the response
surface fits the design points in your project.
• 2D: Displays a two-dimensional contour graph that allows you to view how changes to a single input affect
a single output. For more information, see Using the 2D Response Chart (p. 133).
• 3D: Displays a three-dimensional contour graph that allows you to view how changes to two inputs affect
a single output. For more information, see Using the 3D Response Chart (p. 134).
When you solve a response surface, a Response chart is automatically added for the default response
point in the Outline pane for the response surface. You can add an additional response chart for
either the default response point or a different response point. To add another Response chart, right-
click the desired response point in the Outline pane and select Insert Response.
• Continuous parameters are represented by colored curves that reflect the continuous nature of the
parameter values.
• Discrete parameters are represented by bars that reflect the discrete nature of the parameter values.
There is one bar for each discrete value.
• Continuous parameters with manufacturable values are represented by a combination of curves and
markers. Transparent gray curves reflect the continuous values, and colored markers reflect the discrete
nature of the manufacturable values. There is one marker for each manufacturable value.
The following examples show how each type of parameter is displayed on the 2D Response chart.
The 3D Response chart has two inputs and can have combinations of parameters with like types
or unlike types. Response chart examples follow for possible combinations.
The 2D Slices Response chart has two inputs. Combinations of these inputs are categorized first by the parameter type of the selected input (the X Axis) and then further distinguished by the parameter type of the calculated input (the Slice Axis). For each X-axis parameter type, there are two different
renderings:
• The X axis in conjunction with continuous values (a continuous parameter). In this instance, you specify
the number of curves or “slices”.
• The X axis in conjunction with discrete values (either a discrete parameter or a continuous parameter
with manufacturable values). In this instance, the number of slices is automatically set to the number
of discrete levels or the number of manufacturable values.
1. In the Outline pane for the response surface, select the Response chart object.
For more information on available properties, see Response Chart: Properties (p. 141).
• For Chart Resolution Along X and Chart Resolution Along Y, specify the resolutions.
The Response chart automatically updates according to your selections. A smooth three-dimensional
contour of Z versus X and Y displays.
For more information on available properties, see Response Chart: Properties (p. 141).
The triad control at the bottom left of the Chart pane allows you to rotate the 3D response chart
in freehand mode or quickly view the chart from a particular plane. To zoom in or out on any part
of the chart, use the shift-middle mouse button or scroll wheel. See Setting Chart Properties for
details.
The value of the input on the slice axis is calculated from the number of curves defined for the X
and Y axes.
• When one or both of the inputs are continuous parameters, you specify the number of slices to display.
• When one or both of the input parameters are either discrete or continuous with manufacturable values,
the number of slices is determined by the number of levels defined for the input parameters.
Essentially, the first input on the X axis is varying continuously, while the number of curves, or
slices, defined for the slice axis represents the second input. Both inputs are then displayed on the
XY plane, with regard to the output parameter on the Y axis. You can think of the 2D Slices Response
chart as a projection of the 3D response surface curves onto a flat surface.
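The projection idea can be illustrated with a small, generic script. The following sketch assumes a made-up response function and uses matplotlib; it is not DesignXplorer code, but it shows how fixing the slice-axis input at several values turns a 3D surface into a family of 2D curves.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical response surface: one output as a function of two inputs.
    def response(x1, x2):
        return x1 ** 2 + 0.5 * x1 * x2 + np.sin(x2)

    x1 = np.linspace(0.0, 2.0, 25)        # chart resolution along X
    slices = np.linspace(0.0, 3.0, 10)    # values of the second input, one per slice

    # Each slice fixes the second input and plots the output versus the first input,
    # which is what the 2D Slices Response chart does with the 3D surface.
    for x2 in slices:
        plt.plot(x1, response(x1, x2), label="x2 = %.1f" % x2)

    plt.xlabel("Input on the X axis")
    plt.ylabel("Output")
    plt.legend(fontsize="small")
    plt.show()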
1. In the Outline pane for the response surface, select the Response Chart object.
For more information on available properties, see Response Chart: Properties (p. 141).
The 3D image is then rotated so that the X axis is along the bottom edge of the chart, mirroring
the perspective of the 2D Slices Response chart for a better comparison.
Finally, Mode is switched to 2D Slices. By comparing the following chart to the 3D version, you can see how the 2D Slices Response chart is actually a two-dimensional rendering of a three-dimensional image. From the following example, you can observe the following:
• Along the Y axis, there are 10 slices, corresponding to the value for Number of Slices.
• Along the X axis, each slice intersects with 25 points, corresponding to the value for Chart Resolution
Along X.
• Input Parameter 2: WB_Y (continuous with manufacturable values of 1, 1.5, and 1.8, with a range of 1 to 1.8)
• Design Points: Six design points on the X axis and six design points on the Y axis.
Because there is an input with manufacturable values, the number of slices is determined by the
number of levels defined.
The following figures show the initial Response chart in 2D, 3D, and 2D Slices modes.
To display the design points from the DOE and the response surface refinement, select the Show
Design Points check box in the Properties pane. The design points are superimposed on your
chart.
When working with manufacturable values, you can improve chart quality by extending the parameter range. Here, the range is increased by adding manufacturable values of -1 and 3, which become the new lower and upper bounds.
To improve the quality of your chart further, you can increase the number of points used in building it by entering values in the Properties pane. In the following figures, the number of points on the X and Y axes is increased from 6 to 25.
Chart Properties
Determines the properties of the chart.
• Display Parameter Full Name: Specifies whether to show the full parameter name or the short parameter name.
• Chart Resolution Along X: Determines the number of points on the X axis. The number of points controls
the amount of curvature that can be displayed. A minimum of 2 points is required and produces a
straight line. A maximum of 100 points is allowed for maximum curvature. The default is 25.
• Chart Resolution Along Y: Determines the number of points on the Y axis (3D and 2D Slices modes
only). The number of points controls the amount of curvature that can be displayed. A minimum of 2
points is required and produces a straight line. A maximum of 100 points is allowed for maximum
curvature. The default is 25.
• Number of Slices: Determines the number of slices displayed in the 2D Slices chart.
• Show Design Points: Determines whether all of the design points currently in use, both in the Design of Experiments and from the response surface refinement, are displayed on the chart.
Axes Properties
Determines the data to display on each chart axis. For each axis, under Value, you can change what
the chart displays on an axis by selecting an option from the drop-down list.
• For X Axis, available options are each of the input parameters enabled in the project.
• For Y Axis, available options depend on the chart mode:
– For the 3D mode, each of the input parameters enabled in the project.
– For the 2D Slices mode, each of the output parameters in the project.
• For the Z Axis (3D mode only), available options are each of the output parameters in the project.
• For Slice Axis (2D Slices mode only), available options are each of the input parameters enabled in
the project. This property is available only when both input parameters are continuous.
Input Parameters
For each input parameter, you can change the value.
• For continuous parameters, you move the slider. The number to the right of the slider represents
the current value.
• For discrete parameters or continuous parameters with manufacturable values, you use the keyboard
to enter a new value.
Output Parameters
For each output parameter, you can view the interpolated value.
You can use the slider bars in the Properties pane for the chart to adjust values for input parameters
to visualize different designs. You can also enter specific values. In the top left of the Chart pane, the
parameter legend box allows you to select the parameter that is in the primary (top) position. Only
the axis of the primary parameter is labeled with values.
• The Local Sensitivity chart is a powerful project-level tool, allowing you to see at a glance the effect of all the input parameters on output parameters. For more information, see Using the Local Sensitivity Chart (p. 145).
• The Local Sensitivity Curves chart helps you to further focus your analysis by allowing you to view independent parameter variations within the standard Local Sensitivity chart. It provides a means of viewing the effect of each input on specific outputs, given the current values of other parameters. For more information, see Using the Local Sensitivity Curves Chart (p. 149).
When you solve a response surface, a Local Sensitivity chart and a Local Sensitivity Curves chart are
automatically added for the default response point in the Outline pane for the response surface. To
add another chart (this can be either an additional chart for the default response point or a chart for
a different response point), right-click the desired response point in the Outline pane and select
either Insert Local Sensitivity or Insert Local Sensitivity Curves.
For information on how continuous parameters with manufacturable values are represented on local
sensitivity charts, see Understanding the Local Sensitivities Display (p. 143).
For discrete parameters, the Properties pane includes a drop-down menu that is populated with the
discrete values defined for the parameter. By selecting different discrete values for each parameter,
you can explore the different sensitivities given different combinations of discrete values. The chart
is updated according to the changed parameter values. You can check the sensitivities in a single
chart, or you can create multiple charts to compare different designs.
For continuous parameters with manufacturable values, continuous values are represented by a
transparent gray curve, while manufacturable values are represented by colored markers.
• Input parameters P2-D and P4-P (the yellow and blue curves) are continuous parameters.
• Input parameters P1-B and P3-L (the gray curves) are continuous parameters with manufacturable
values.
• The black markers indicate the location of the response point on each of the input curves.
For continuous parameters with manufacturable values, continuous values are represented by a
gray bar, while manufacturable values are represented by a colored bar in front of the gray bar.
Each bar is defined with the minimum and maximum extracted from the manufacturable values
and the average calculated from the support curve. The minimum and maximum of the output can
vary according to whether or not manufacturable values are used. In this case, both the colored
bar and the gray bar for the input are visible on the chart.
Also, if the parameter range extends beyond the manufacturable values that are defined, the bar
is topped with a gray line to indicate the sensitivity obtained while ignoring the manufacturable
values.
• Input parameters P2-D and P4-P (the yellow and blue bars) are continuous parameters.
• Input parameters P1-B and P3-L (the red and green bars) are continuous with manufacturable values.
• The bars for inputs P1-B and P3-L show differences between the minimum and maximum of the output
when manufacturable values are used (the colored bar in front) versus when they are not used (the gray
bar in back, now visible).
By default, the Local Sensitivity chart shows the effect of all input parameters on all output parameters, but it is also possible to specify the inputs and outputs to be considered. If you consider
only one output, the resulting chart provides an independent sensitivity analysis for each single
input parameter. To specify inputs and outputs, select or clear the Enabled check box.
For more information on available properties, see Local Sensitivity Chart: Properties (p. 148).
By default, Axes Range is set to Use Min Max of the Output Parameter and the maximum
known variation of P4 is 2.36. Using this chart, you want to determine what percentage of this
range of variation is produced by varying only P1, or only P2.
3. If P4 increases while P1 increases, the sign is positive. Otherwise, the sign is negative.
For this example, the sensitivity is 83%. This corresponds to the red bar for P1-WB_Thickness in
the Local Sensitivity chart.
Now plot P4 = f(P2), Max(P4)-Min(P4) ~= 0.4. This corresponds to 16.9% of 2.36, as represented
by the blue bar for P2-WB_Radius in the Local Sensitivity chart.
On each of these curves, only one input is varying. The other input parameters are constant.
However, you can change the value of any input and get an updated curve. This also applies to
the standard Local Sensitivity chart. All the sensitivity values are recalculated when you change
the value of an input parameter. If parameters are correlated, you'll see the sensitivity varying.
The relative weights of inputs can vary, depending on the design.
If Axes Range is set to Use Chart Data, the Min-Max search results are ignored. The input parameter generating the largest variation of the output is taken as the reference to calculate the
percentages for other input parameters.
In this example, P1 is the input generating the largest variation of P4 (1.96). The red bar for P1
is set to 100%, and the blue bar for P2 is set to 20.4%, meaning that P2 only generates 20.4%
of the variation that P1 can generate.
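The percentages in this example can be reproduced schematically. The following sketch uses a made-up response function and parameter ranges; it is only an illustration of the arithmetic described above, not the computation DesignXplorer performs internally.

    import numpy as np

    # Hypothetical response surface evaluated around the current response point,
    # varying one input at a time while the other stays at its current value.
    def p4(p1, p2):
        return 1.2 * p1 - 0.35 * p2 + 0.1 * p1 * p2

    p1_range = np.linspace(0.5, 2.0, 25)
    p2_range = np.linspace(1.0, 2.5, 25)
    p1_cur, p2_cur = 1.0, 1.5

    var_p1 = p4(p1_range, p2_cur).ptp()   # Max(P4) - Min(P4) when only P1 varies
    var_p2 = p4(p1_cur, p2_range).ptp()   # Max(P4) - Min(P4) when only P2 varies

    # Use Min Max of the Output Parameter: percentages are relative to the
    # maximum known variation of the output (2.36 in the example above).
    max_known_variation = 2.36
    print(100 * var_p1 / max_known_variation, 100 * var_p2 / max_known_variation)

    # Use Chart Data: percentages are relative to the input that generates
    # the largest variation of the output.
    reference = max(var_p1, var_p2)
    print(100 * var_p1 / reference, 100 * var_p2 / reference)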
Chart Properties
• Display Parameter Full Name: Specify whether to show the full parameter name or the short parameter name.
• Axes Range: Determines the lower and upper bounds of the Y axis. Available only for bar charts.
– If Use Min Max of the Output Parameter is selected, values on the Y axis are scaled based on Min-
Max search results. If the Min-Max search object was disabled, the output parameter bounds are
determined from existing design points. The axis bounds are fixed to -100, 100, or 0.
– If Use Chart Data is selected, values on the Y axis are scaled based on the input parameter that
generates the largest variation of the output parameter. The axis bounds are adjusted to fit the
displayed bars.
Input Parameters
Each of the input parameters is listed in this section. For each input parameter:
• Under Value, you can change the value by moving the slider or entering a new value with the keyboard.
The number to the right of the slider represents the current value.
• Under Enabled, you can enable or disable the parameter by selecting or clearing the check box. Disabled parameters do not display on the chart.
Output Parameters
Each of the output parameters is listed in this section. For each output parameter:
• Under Value, you can view the interpolated value for the parameter.
• Under Enabled, you can enable or disable the parameter by selecting or clearing the check box. Disabled parameters do not display on the chart.
You can modify various generic chart properties for this chart. For more information, see Setting
Chart Properties.
• Single output: Calculates the effect of each input parameter on a single output parameter of your choice.
• Dual output: Calculates the effect of each input parameter on two output parameters of your choice.
• Each curve represents the effect of an enabled input parameter on the selected output.
• For each curve, the current response point is indicated by a black point marker (all of the response
points have equal Y axis values).
• Continuous parameters with manufacturable values are represented by gray curves. The colored
markers are the manufacturable values that are defined.
• Where effects are the same for one or more inputs, the curves are superimposed. The curve of the input
displayed first on the list hides the curves for the other inputs.
To change the output being considered, go to the Properties pane and select a different output
parameter for the Y axis property.
• Each curve represents the effect of an enabled input parameter on the two selected outputs.
• The circle at one end of a curve marks the beginning of the curve, which corresponds to the lower bound of the input parameter.
• For each curve, the current response point is indicated by a black point marker.
• Continuous parameters with manufacturable values are represented by gray curves. The colored
markers are the manufacturable values that are defined.
• Where effects are the same for one or more inputs, the curves are superimposed. The curve of the input
displayed first on the list hides the curves for the other inputs.
To change the outputs being considered, go to the Properties pane and select a different output
parameter for one or both of the chart axes.
For more information, see Local Sensitivity Curves Chart: Properties (p. 154).
An example follows of a Local Sensitivity bar chart for a given response point. This chart shows
the effect of four input parameters (P1-B, P2-D, P4-P, and P5-E) on two output parameters (P6-
V and P8-DIS).
On the left side of the chart, you can see how input parameters affect the output parameter P6-
V:
• P2-D (yellow bar) has the most effect and the effect is positive.
• Input parameters P4-P and P5-E (the teal and blue bars) have no effect at all.
On the right side of the chart, you can see the difference in how the same input parameters affect the output parameter P8-DIS:
• P2-D (yellow bar) has the most effect, but the effect is negative.
• P1-B (red bar) has a moderate effect, but the effect is negative.
• P4-P (teal bar) now has a moderate effect, and the effect is positive.
• P5-E (blue bar) now has a moderate effect, and the effect is negative.
Single Output
When you view the Local Sensitivity Curves chart for the same response point, the chart defaults
to the Single Output version. This means that the chart shows the effect of all enabled inputs on
a single selected output. The output parameter is on the Y axis, with effect measured horizontally.
In the two examples that follow, you can see that the Single Output curve charts for output
parameters P6-V and P8-DIS show the same sensitivities as the Local Sensitivity bar chart.
For output P6-V, inputs P4-P and P5-E have the same level of effect. Consequently, the blue line
is hidden behind the teal line.
For output P8-DIS, inputs P1-B and P5-E have the same level of effect. Consequently, the blue
line is hidden behind the red line.
Dual Output
For more information, you can view the dual output version of the Local Sensitivity Curves chart.
The dual output version shows the effect of all enabled inputs on two selected outputs. In this
particular example, there are only two output parameters. If there were six outputs in the project,
however, you could narrow the focus of your analysis by selecting the two that are of most interest
to you.
The following figure shows the effect of the same input parameters on both of the output parameters used previously. Output P6-V is on the X axis, with effect measured vertically. Output P8-
DIS is on the Y axis, with effect measured horizontally. From this dual representation, you can
see the following:
• P2-D has the most significant effect on both outputs. The effect is positive for output P6-V and is
negative for output P8-DIS.
• P1-B has a moderate effect on both outputs. The effect is positive for output P6-V and is negative
for output P8-DIS.
• P4-P has no effect on output P6-V and has a moderate positive effect on output P8-DIS.
• P5-E has no effect on output P6-V and a moderate negative effect on output P8-DIS. Due to duplicate
effects, its curve is hidden for both outputs.
Chart Properties
• Display Parameter Full Name: Specify whether to show the full parameter name or the short parameter name.
• Axes Range: Determines the lower and upper bounds of the output parameters axes.
– If Use Min Max of the Output Parameter is selected, the values on the Y axis are scaled based on
Min-Max search results. If the Min-Max search object was disabled, the output parameter bounds
are determined from existing design points. The axis bounds are fixed to -100, 100, or 0.
– If Use Chart Data is selected, the values on the Y axis are scaled based on the input parameter that
generates the largest variation of the output parameter. The axis bounds are adjusted to fit the
displayed bars.
• Chart Resolution: Determines the number of points per curve. The number of points controls the
amount of curvature that can be displayed. A minimum of 2 points is required and produces a straight
line. A maximum of 100 points is allowed for maximum curvature. The default is 25.
Axes Properties
Determines what data is displayed on each chart axis. For each axis, under Value, you can change
what the chart displays on an axis by selecting an option from the drop-down.
• For X-Axis, available options are Input Parameters and each of the output parameters defined in
the project. If Input Parameters is selected, you are viewing a single output chart. Otherwise, you
are viewing a dual output chart.
• For Y-Axis, available options are each of the output parameters defined in the project.
Input Parameters
Each of the input parameters is listed in this section. For each input parameter:
• Under Value, you can change the value by moving the slider or entering a new value with the keyboard.
The number to the right of the slider represents the current value.
• Under Enabled, you can enable or disable the parameter by selecting or clearing the check box. Disabled parameters do not display on the chart.
Note:
Discrete input parameters display with their level values associated with a response point, but they cannot be enabled as chart variables.
Output Parameters
Each of the output parameters is listed in this section. For each output parameter, under Value,
you can view the interpolated value.
You can modify various generic chart properties for this chart. For more information, see Setting
Chart Properties.
Exporting Response Surfaces
A response surface exported from DesignXplorer can be used by any software that supports the Functional Mock-up Interface (FMI). For example, ANSYS Twin Builder and MathWorks MATLAB® both implement FMI. For more information, see the Twin Builder help and MathWorks FMI Toolbox documentation.
An update of a DX-ROM consumes fewer resources and is faster than updating the actual solvers. While
a solver-based evaluation can consume gigabytes of memory and take hours (or even days), a DX-ROM
evaluation consumes only kilobytes and is nearly instantaneous. To take advantage of this, you might
want to import a response surface back into Workbench and use it as a lightweight replacement for
one or more systems in a simulation project.
Response surface export is supported for all DesignXplorer response surface types except Neural Network
and Sparse Grid. Because DesignXplorer does not build a response surface for derived parameters, derived
parameters are not exported. All other parameter types are supported.
DesignXplorer can export a ROM response surface in the following file formats:
• Functional Mock-up Unit (version 1.0): *.fmu
• Functional Mock-up Unit (version 2.0): *.fmu
The .fmu format has the broadest applicability. It can be used by Twin Builder, MATLAB, or any software
with functional mock-up interface (FMI) support. It is the recommended format for export to external
software.
Note:
Using an FMU file exported from DesignXplorer implies the approval of the terms of use
supplied in the License.txt file. To access License.txt, use a zip utility to manually
extract all files from the .fmu package.
Once exported, the DX-ROM response surface file is available to be imported into the software or
reader of your choice. Disabled parameters and derived parameters are not included in the export.
1. Access the Export Response Surface dialog box using one of the following methods:
• In the Project Schematic, right-click the Response Surface cell and select Export Response Surface.
• In the Outline pane, right-click Response Surface and select Export Response Surface.
• In the Outline pane, under Response Surface, select the Response chart for the desired point. Then, in the Chart pane, right-click and select Export Response Surface.
2. In the Export Response Surface dialog box, specify a file type, location, and name. Then, click Save.
You can import your DX-ROM response surface file into the software or FMI-supported reader of your choice. For more information, see Tools for Importing a DX-ROM Response Surface into Workbench (p. 157).
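As a rough illustration of what such an import can look like, the following sketch uses the third-party FMPy package, which is an assumption and is not mentioned in this guide; the file name and the parameter names P1 and P2 are placeholders. It lists the variables exposed by an exported FMU and evaluates it once.

    from fmpy import read_model_description, simulate_fmu

    fmu = "response_surface.fmu"   # hypothetical file exported from DesignXplorer

    # List the variables exposed by the FMU (the exported input and output parameters).
    model = read_model_description(fmu)
    for variable in model.modelVariables:
        print(variable.name, variable.causality)

    # Evaluate the exported response surface for one set of input values.
    # Replace the placeholder names with the names printed above.
    result = simulate_fmu(fmu, start_values={"P1": 1.0, "P2": 2.5})
    print(result)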
ANSYS provides custom DX-ROM tools for reading the DesignXplorer Native Database (DXROM) file format. Each of these tools is delivered in the Response Surface Reader app, which is available for download from the ANSYS Store.
On the page for the app, you can optionally select the product version that you want to download. The default is the latest version.
4. Download the desired version, which requires agreeing to the software license agreement.
5. Decompress the downloaded ZIP file for the app to a directory of your choice.
The DX-ROM tools included in the app are now available to install, load, and use. For more information,
see the help document DX_ResponseSurfaceReaders.pdf, which is included in the app's doc
directory.
Using Goal-Driven Optimizations
DesignXplorer offers two different types of goal-driven optimization systems: Response Surface Optimization and Direct Optimization.
• A Response Surface Optimization system draws its information from its own Response Surface cell
and so is dependent on the quality of the response surface. The available optimization methods are
Screening, MOGA, NLPQL, and MISQP, which all use response surface evaluations rather than real solves.
• A Direct Optimization system has only one cell, which utilizes real solves rather than response surface
evaluations. The available optimization methods are Screening, NLPQL, MISQP, Adaptive Single-Objective,
and Adaptive Multiple-Objective.
Tip:
On the DesignXplorer page in the ANSYS Help, DesignXplorer Optimization Tutorials provides
several optimization examples that you can step through to learn how to use DesignXplorer
to analyze and optimize design spaces.
To make performing optimizations easy for non-experts, DesignXplorer automatically selects an optimization method by default. Based on the number and types of input parameters, the number of objectives
and constraints, any defined parameter relationships, and the run time index (for a Direct Optimization
system only), DesignXplorer selects the most appropriate method and sets its properties.
Any time that you make a change to any input used for automatic method selection, DesignXplorer
once again assesses the scenario to select the most appropriate method. For method availabilities and
capabilities, see Goal-Driven Optimization Methods (p. 164).
Note:
In Tools → Options → Design Exploration → Sampling and Optimization, you can change
Method Selection from Auto to Manual if you want this to be the default for newly inserted
optimization systems. As indicated in the next topic, you can always change the method
selection for a particular optimization system in the properties of the Optimization cell.
Although a Direct Optimization system does not have a Response Surface cell, it can draw information
from any other system or component that contains design point data. It is possible to reuse existing
design point data, reducing the time needed for the optimization, without altering the source of the
design points. For example:
• You can transfer design point data from an existing Response Surface Optimization system and improve upon it without actually changing the original response surface.
• You can use information from a Response Surface system that has been refined with the Kriging
method and validated. You can transfer the design point data to a Direct Optimization system and
then adjust the quality of the original response surface without affecting the attached direct optimization.
• You can transfer information from any DesignXplorer system or component containing design point
data that has already been updated, saving time and resources by reusing existing, up-to-date data
rather than reprocessing it.
Note:
The transfer of design point data between two Direct Optimization systems is not supported.
You can monitor optimization progress of a Direct Optimization system from the Table pane. During
the direct optimization, the Table pane displays all the design points as they are calculated, allowing
you to see how the optimization proceeds, how it converges, and so on. Once the optimization is
complete, the raw design point data is stored for future reference. You can access the data by selecting
Raw Optimization Data in the Outline pane. The Table pane displays the design points that were
calculated during the optimization.
Note:
This list is compiled from raw data and does not show feasibility, ratings, and so on for the included design points.
1. From under Design Exploration in the Workbench Toolbox, drag either the Response Surface Optimization system or Direct Optimization system and drop it in the Project Schematic. You can drag it:
• Directly under either the Parameter Set bar or an existing system under the Parameter Set bar, in which
case it does not share any data with any other systems in the Project Schematic.
• On the Design of Experiments cell of a system containing a response surface, in which case it does
share all data generated by the Design of Experiments cell.
• On the Response Surface cell of a system containing a response surface, in which case it does share all
data generated for the Design of Experiments and Response Surface cells.
For more information on data transfer, see Transferring Design Point Data for Direct Optimization (p. 163).
2. For a Response Surface Optimization system, if you are not sharing the Design of Experiments and
Response Surface cells, edit the DOE, setting it up as described in Design of Experiments Component
Reference (p. 32). Then, solve both the Design of Experiments and Response Surface cells.
3. For a Direct Optimization system, if you have not already shared data via the options in step 1, you can
create data transfer links to provide the system with design point data.
Note:
If no design point data is shared, design point data is generated automatically by the
Update operation.
4. On the Project Schematic, double-click the Optimization cell of the new system to open the component
tab.
5. Indicate whether DesignXplorer is to automatically select the most appropriate optimization method or whether you are to select the method manually.
Automatic Selection
To have DesignXplorer select the most appropriate method:
• For a Response Surface Optimization system, Maximum Number of Candidates is the only
other property available.
• For a Direct Optimization system, Maximum Number of Candidates and Run Time Index
are available. For Run Time Index, the default value is 5 – Medium. However, you can change
this value if you want. This value and the number and types of input parameters, number of
objectives and constraints, and any defined parameter relationships determine the method
selected and its properties.
Manual Selection
To select the method manually:
b. In the Properties pane, for Method Selection, select Manual. If you switch to manual selection after automatic method selection, the method and properties initially proposed by automatic selection are kept so that you need to modify only the properties you want to change.
c. Select the method and specify properties. For more information, see Goal-Driven Optimization
Methods (p. 164).
This read-only property displays an estimate of the number of either evaluations or design points
that will be calculated during the optimization process. However, this process can be stopped before
this estimate is reached. For a Direct Optimization system, you can change the selection for Run
Time Index to increase or decrease the number shown in Estimated Number of Design Points.
a. In the Outline pane, select either Objectives and Constraints or an object under it.
b. In the Table or Properties pane, define the optimization objectives and constraints. For more
information, see Defining Optimization Objectives and Constraints (p. 188).
• In the Outline pane, select Domain or an input parameter or parameter relationship under
it.
• In the Table or Properties pane, define the selected domain object. For more information,
see Defining the Optimization Domain (p. 184).
• A stopping criterion, such as the maximum number of iterations or maximum number of points, is
reached. To continue the convergence, you can set Method Selection to Manual and modify the
stopping criterion. Or, you can adjust the problem, perhaps by changing the starting point or domain
definition.
• The optimization converges before reaching the maximum number of points or iterations. In this case, in the Properties pane under Optimization Status, DesignXplorer sets the read-only property Converged to Yes.
• The optimization is unable to converge before reaching the maximum number of points or iterations. In this case, in the Properties pane under Optimization Status, DesignXplorer sets the read-only property Converged to No.
Transferring Design Point Data for Direct Optimization
Note:
The transfer of design point data between two Direct Optimization systems is not supported.
Data Transferred
The data transferred consists of all design points that have been obtained by a real solution. Design
points obtained from the evaluation of a response surface are not transferred. Design point data is
transferred according to the nature of its source component.
• Design of Experiments cell: All points, including those with custom output values.
• Response Surface cell: All refinement and verification points used to create or evaluate the response surface, because they are obtained by a real solution.
Note:
The points from the DOE are not transferred. To transfer these points, you must create
a data transfer link from the Design of Experiments cell.
• Parameters Correlation cell (linked to a Response Surface cell): No points because all of them are
obtained from a response surface evaluation rather than from a real solve.
Data Usage
Once transferred, the design point data is stored in an initial sample set. In the Properties pane for the
Optimization cell, Method Name is available under Optimization.
• If you set Method Name to NLPQL or MISQP, the samples are not used.
• If you set Method Name to Adaptive Single-Objective, the initial sample set is filled with transferred points, and additional points are generated, following the LHS workflow, to reach the requested number of samples.
• If you set Method Name to MOGA or Adaptive Multiple-Objective, the initial sample set is filled with transferred points, and additional points are generated to reach the requested number of samples.
The transferred points are added to the samples generated to initiate the optimization. For example, if
you have requested 100 samples and 15 points are transferred, a total of 115 samples are available to
initiate the optimization.
When there are duplicates between the transferred points and the initial sample set, the duplicates are
removed. For example, if you have requested 100 samples, 15 points are transferred, and 6 duplicates
are found, a total of 109 samples are available to initiate the optimization.
For more information on data transfer links, see Links in the Workbench User's Guide.
The following table indicates the general capabilities of each goal-driven optimization method.
For an External Optimizer, capabilities are determined by the optimizer, as defined in the optimization extension.
Method Name, which appears in the Properties pane for the Optimization cell, specifies the optimization method for the design study. DesignXplorer filters Method Name choices for applicability to the
current project, displaying only the methods that can be used to solve the optimization problem as it
is currently defined. For example, if your project has multiple objectives defined and an external optimizer
does not support multiple objectives, External Optimizer is excluded. When no objectives or constraints
are defined for a project, all optimization methods are available for Method Name. If you already know
that you want to use a particular external optimizer, you should select External Optimizer as the
method before setting up the rest of the project. Otherwise, it could be inadvertently filtered from the
list.
For instructions for setting the optimization properties for each algorithm, see:
Performing a Screening Optimization
Performing a MOGA Optimization
Performing an NLPQL Optimization
Performing an MISQP Optimization
Performing an Adaptive Single-Objective Optimization
Performing an Adaptive Multiple-Objective Optimization
Performing an Optimization with an External Optimizer
• Verify Candidate Points: Select to verify candidate points automatically at the end of an update for a
Response Surface Optimization system. This property is not applicable to a Direct Optimization
system.
• Number of Samples: Enter the number of samples to generate for the optimization. Samples are
generated very rapidly from the response surface and do not require an actual solve of design points.
The number of samples must be greater than the number of enabled input parameters. The
number of enabled input parameters is also the minimum number of samples required to
generate the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10,000. The
default is 1000 for a Response Surface Optimization system and 100 for a Direct Optimization
system.
• Maximum Number of Candidates: Maximum number of candidates that the algorithm is to generate.
The default is 3. For more information, see Viewing and Editing Candidate Points in the Table
Pane (p. 194).
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives and constraints. For more information, see Defining Optimization Objectives and Constraints (p. 188).
• In the Outline pane, select Domain or an input parameter or parameter relationship under it.
• In the Table or Properties pane, define the selected domain object. For more information, see Defining
the Optimization Domain (p. 184).
The result is a group of points or sample set. The Table pane displays the points that are most in
alignment with the objectives and constraints as the candidate points for the optimization. The
Properties pane displays Size of Generated Sample Set, which is read-only.
When the Screening method is used for a Response Surface Optimization system, the sample
points obtained from a response surface Min-Max search are automatically added to the sample
set used to initialize or run the optimization.
For example, if the Min-Max search results in 4 points and you run the Screening optimization
with Number of Samples set to 100, the final optimization sample set contains up to 104 points.
If a point found by the Min-Max search is also contained in the initial screening sample set, it is
only counted once in the final sample set.
Note:
When constraints exist, the Tradeoff chart indicates which samples are feasible (meet
the constraints) or infeasible (do not meet the constraints). There is a display option
for the infeasible points.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization Status,
review the outcome:
• Number of Evaluations: Number of design point evaluations performed. This value counts all points used in the optimization, including design points pulled from the cache. It can be used to measure the efficiency of the optimization method in finding the optimum design point.
• Number of Failures: Number of failed design points for the optimization. When design points fail, a
Screening optimization does not attempt to solve additional design points in their place. The Samples
chart for a Direct Optimization system does not include failed design points.
• Size of Generated Sample Set: Number of samples generated in the sample set. For a Direct Optimization system, this is the number of samples successfully updated. For a Response Surface Optimization system, this is the number of samples successfully updated plus the number of different (non-
duplicate) samples generated by the Min-Max search if it is enabled.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum Number of Candidates input property.
7. In the Outline pane, select Domain or any object under it to view domain data in the Properties and
Table panes.
8. For a Direct Optimization system, select Raw Optimization Data in the Outline pane. The Table pane
displays the design points that were calculated during the optimization. If the raw optimization data
point exists in the design points table for the Parameter Set bar, the corresponding design point name
is indicated in parentheses in the Name column.
Note:
This list is compiled from raw data and does not show feasibility, ratings, and so on for the included design points.
Note:
Several MOGA properties are advanced options. To see them, ensure that the Show Advanced Options check box is selected on the Design Exploration tab of the Options window. Advanced options display in italic type.
• Verify Candidate Points: Select to verify candidate points automatically at the end of the optimization
update.
• Type of Initial Sampling: Advanced option for generating different kinds of sampling. If you do not
have any parameter relationships defined, set to Screening (default) or Optimal Space-Filling. If you
have parameter relationships defined, the initial sampling must be performed by the constrained
sampling algorithms because parameter relationships constrain the sampling. For such cases, this
property is automatically set to Constrained Sampling.
• Random Generator Seed: Advanced option that displays only when Type of Initial Sampling is set
to Optimal Space-Filling. Value for initializing the random number generator invoked internally by
the Optimal Space-Filling (OSF) algorithm. The value must be a positive integer. You can generate
different samplings by changing the value or regenerate the same sampling by keeping the same
value. The default is 0.
• Maximum Number of Cycles: Advanced option that displays only when Type of Initial Sampling is set to Optimal Space-Filling. Number of optimization loops the algorithm needs, which in turn determines the discrepancy of the OSF. The optimization is essentially combinatorial, so a large number of cycles slows down the process. However, this makes the discrepancy of the OSF smaller. The value must be greater than 0. For practical purposes, 10 cycles is usually good for up to 20 variables. The default is 10.
• Number of Initial Samples: Initial number of samples to use. This number must be greater than the
number of enabled input parameters. The minimum recommended number of initial samples is 10
times the number of enabled input parameters. The larger the initial sample set, the better your chances
of finding the input parameter space that contains the best solutions.
The number of enabled input parameters is also the minimum number of samples required to
generate the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10000. The
default is 100.
If you switch from the Screening method to the MOGA method, MOGA generates a new sample
set. For the sake of consistency, enter the same number of initial samples as you used for the
Screening method.
• Number of Samples Per Iteration: Number of samples to iterate and update with each iteration. This
number must be greater than the number of enabled input parameters but less than or equal to the
number of initial samples. The default is 100 for a Response Surface Optimization system and 50
for a Direct Optimization system.
• Maximum Allowable Pareto Percentage: Convergence criterion. Percentage value that represents
the ratio of the number of desired Pareto points to the number of samples per iteration. When this
percentage is reached, the optimization is converged. For example, a value of 70 with Number of
Samples Per Iteration set to 200 would mean that the optimization should stop once the resulting
front of the MOGA optimization contains at least 140 points. Of course, the optimization stops before
that if the maximum number of iterations is reached.
If the Maximum Allowable Pareto Percentage is too low (below 30), the process can converge
prematurely. If the value is too high (above 80), the process can converge slowly. The value of
this property depends on the number of parameters and the nature of the design space itself.
The default is 70. Using a value between 55 and 75 works best for most problems. For more
information, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 356).
• Convergence Stability Percentage: Convergence criterion. Percentage value that represents the
stability of the population based on its mean and standard deviation. This criterion allows you to
minimize the number of iterations performed while still reaching the desired level of stability. When
the specified percentage is reached, the optimization is converged. The default percentage is 2. To
not take the convergence stability into account, set to 0. For more information, see Convergence Criteria in MOGA-Based Multi-Objective Optimization (p. 356).
• Maximum Number of Iterations: Stop criterion. Maximum number of iterations that the algorithm
is to execute. If this number is reached without the optimization having reached convergence, iterations
stop. This also provides an idea of the maximum possible number of function evaluations that are
needed for the full cycle, as well as the maximum possible time it can take to run the optimization.
For example, the absolute maximum number of evaluations is given by:
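One plausible form of this estimate, stated here as an assumption about how the initial sample set and the per-iteration samples combine rather than as a formula quoted from this guide, is:

    maximum evaluations = Number of Initial Samples + Number of Samples Per Iteration × (Maximum Number of Iterations - 1)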
• Mutation Probability: Advanced option for specifying the probability of applying a mutation on a
design configuration. The value must be between 0 and 1. A larger value indicates a more random
algorithm. If the value is 1, the algorithm becomes a pure random search. A low probability of mutation
(<0.2) is recommended. The default is 0.01. For more information on mutation, see MOGA Steps to
Generate a New Population (p. 359).
• Crossover Probability: Advanced option for specifying the probability with which parent solutions
are recombined to generate offspring solutions. The value must be between 0 and 1. A smaller value
indicates a more stable population and a faster (but less accurate) solution. If the value is 0, the parents
are copied directly to the new population. A high probability of crossover (>0.9) is recommended. The
default is 0.98.
• Maximum Number of Candidates: Maximum number of candidates that the algorithm is to generate.
The default is 3. For more information, see Viewing and Editing Candidate Points in the Table
Pane (p. 194).
• Type of Discrete Crossover: Advanced option for specifying the type of crossover for discrete parameters. This property is visible only if there is at least one discrete input variable or continuous input
variable with manufacturable values. Three crossover types are available: One Point, Two Points, and
Uniform. According to the type of crossover selected, the children are closer to or farther from their
parents. The children are closer for One Point and farther for Uniform. The default is One Point. For
more information on crossover, see MOGA Steps to Generate a New Population (p. 359).
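The crossover and mutation properties above can be pictured with a generic genetic-algorithm sketch. The following code is a simplified illustration of one-point crossover, uniform crossover, and mutation on discrete genes; it is not MOGA's actual implementation, and the gene values are made up.

    import random

    def one_point_crossover(parent_a, parent_b):
        # A single cut point is shared, so children stay relatively close to their parents.
        cut = random.randint(1, len(parent_a) - 1)
        return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

    def uniform_crossover(parent_a, parent_b):
        # Each gene is taken from either parent, so children are farther from their parents.
        child_a, child_b = [], []
        for a, b in zip(parent_a, parent_b):
            if random.random() < 0.5:
                child_a.append(a)
                child_b.append(b)
            else:
                child_a.append(b)
                child_b.append(a)
        return child_a, child_b

    def mutate(child, levels, probability=0.01):
        # With a small probability, replace a gene with a random allowed level.
        return [random.choice(levels) if random.random() < probability else gene
                for gene in child]

    parent_a, parent_b = [1, 2, 3, 1], [3, 1, 2, 2]
    print(one_point_crossover(parent_a, parent_b))
    print(uniform_crossover(parent_a, parent_b))
    print(mutate(parent_a, levels=[1, 2, 3]))

Together, a high crossover probability and a low mutation probability correspond to the recommended defaults (0.98 and 0.01), balancing recombination of parent designs against random perturbation.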
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information, see Defining
Optimization Objectives and Constraints (p. 188).
Note:
For MOGA, at least one output parameter must have an objective defined. Multiple
objectives are allowed.
• In the Outline pane under Domain, enable the desired input parameters.
• In the Table or Properties pane, define the selected domain object. For more information, see Defining
the Optimization Domain (p. 184).
The result is a group of points or sample set. The Table pane displays the points that are most in
alignment with the objectives as the candidate points for the optimization. The Properties pane
displays Size of Generated Sample Set, which is read-only.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization Status,
view the outcome:
• Obtained Pareto Percentage: Percentage representing the ratio of the number of Pareto points ob-
tained by the optimization to the number of samples per iteration.
• Number of Iterations: Number of iterations executed. Each iteration corresponds to the generation
of a population.
• Number of Evaluations: Number of design point evaluations performed. This value takes into account
all points used in the optimization, including design points pulled from the cache. It can be used to
measure the efficiency of the optimization method in finding the optimum design point.
• Number of Failures: Number of failed design points for the optimization. When a design point fails,
a MOGA optimization does not retain this point in the Pareto front and does not attempt to solve an-
other design point in its place. Failed design points are also not included on the Direct Optimization
Samples chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This is the number
of samples successfully updated for the last population generated by the algorithm. It usually equals
the Number of Samples Per Iteration.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum
Number of Candidates input property.
7. In the Outline pane, select Domain or any object under it to review domain data in the Properties and
Table panes.
8. For a Direct Optimization system, select Raw Optimization Data in the Outline pane. The Table pane
displays the design points that were calculated during the optimization. If the raw optimization data
point exists in the design points table for the Parameter Set bar, the corresponding design point name
is indicated in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on for
the included design points.
• Verify Candidate Points: Select to verify candidate points automatically at the end of the optimization
update.
• Finite Difference Approximation: When analytical gradients are not available, NLPQL approximates
them numerically. This property allows you to specify the method of approximating the gradient of
the objective function. Choices are:
– Central: Increases the accuracy of the gradient calculations by sampling from both sides of the
sample point but increases the number of design point evaluations by 50%. This method makes
use of the initial point, as well as the forward point and rear point. This is the default method for
preexisting databases and new Response Surface Optimization systems.
– Forward: Uses fewer design point evaluations but decreases the accuracy of the gradient calculations.
This method makes use of only two design points, the initial point and forward point, to calculate
the slope forward. This is the default method for new Direct Optimization systems.
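The practical difference between the two choices can be seen in a generic finite-difference sketch (plain Python, not the NLPQL implementation; the function f, the point, and the step size are arbitrary examples):

def forward_difference(f, x, h):
    # Uses only the initial point and the forward point.
    return (f(x + h) - f(x)) / h

def central_difference(f, x, h):
    # Samples on both sides of the point (forward and rear), at the cost of an extra evaluation.
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = lambda x: x ** 3   # true derivative at x = 2 is 12
print(forward_difference(f, 2.0, 1e-3))  # ~12.006, less accurate
print(central_difference(f, 2.0, 1e-3))  # ~12.000, more accurate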
• Initial Finite Difference Delta (%): Advanced option for specifying the relative variation used to
perturb the current point to compute gradients. Used in conjunction with Allowable Convergence
(%) to ensure that the delta in NLPQL's calculation of finite differences is large enough to be seen
above the noise in the simulation problem. This wider sampling produces results that are more clearly
differentiated so that the difference is less affected by solution noise and the gradient direction is
clearer. The value should be larger than both the value for Allowable Convergence (%) and the
noise magnitude of the model. However, smaller values produce more accurate results, so set Initial
Finite Difference Delta (%) only as high as necessary to be seen above simulation noise.
For a Direct Optimization system, the default percentage value is 1. For a Response Surface
Optimization system, the default percentage value is 0.001. The minimum is 0.0001, and the
maximum is 10.
For parameters with Allowed Values set to Manufacturable Values or Snap to Grid, the value
for Initial Finite Difference Delta (%) is ignored. In such cases, the closest allowed value is
used to determine the finite difference delta.
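How the percentage translates into an actual perturbation is not spelled out here; one plausible reading, shown in the sketch below purely as an assumption, is a step equal to the given percentage of the parameter's range:

def perturbed_point(value, lower_bound, upper_bound, delta_percent):
    # Assumed interpretation: the finite-difference step is delta_percent of the parameter range.
    step = (delta_percent / 100.0) * (upper_bound - lower_bound)
    return value + step

# Example: input ranging 0..100 with the Direct Optimization default of 1 percent.
print(perturbed_point(50.0, 0.0, 100.0, 1.0))  # 51.0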
• Allowable Convergence (%): Stop criterion. Tolerance to which the Karush-Kuhn-Tucker (KKT) optim-
ality criterion is generated during the NLPQL process. A smaller value indicates more convergence it-
erations and a more accurate (but slower) solution. A larger value indicates fewer convergence iterations
and a less accurate (but faster) solution.
For a Direct Optimization system, the default percentage value is 0.1. For a Response Surface
Optimization system, the default percentage value is 0.0001. The maximum percentage value
is 100. These values are consistent across all problem types because the inputs, outputs, and
gradients are scaled during the NLPQL solution.
• Maximum Number of Iterations: Stop criterion. Maximum number of iterations that the algorithm
is to execute. If convergence happens before this number is reached, the iterations stop. This also
provides an idea of the maximum possible number of function evaluations that are needed for the
full cycle. For NLPQL, the number of evaluations can be approximated according to the Finite Difference
Approximation gradient calculation method, as follows:
• Maximum Number of Candidates: Maximum number of candidates that the algorithm is to generate.
The default is 3. For more information, see Viewing and Editing Candidate Points in the Table
Pane (p. 194).
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information, see Defining
Optimization Objectives and Constraints (p. 188).
Note:
For NLPQL, exactly one output parameter must have an objective defined. Multiple
parameter constraints are permitted.
• In the Outline pane, select Domain or an input parameter or parameter relationship under it.
• In the Table or Properties pane, define the selected domain object. For more information, see Defining
the Optimization Domain (p. 184).
The result is a group of points or sample set. The points that are most in alignment with the ob-
jective are displayed in the table as the candidate points for the optimization. In the Properties
pane, the result Size of Generated Sample Set is read-only. This value is always equal to 1 for
NLPQL.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization Status,
view the outcome:
• Number of Iterations: Number of iterations executed. Each iteration corresponds to one formulation
and solution of the quadratic programming subproblem, or alternatively, one evaluation of gradients.
• Number of Evaluations: Number of design point evaluations performed. This value takes into account
all points used in the optimization, including design points pulled from the cache. It can be used to
measure the efficiency of the optimization method in finding the optimum design point.
• Number of Failures: Number of failed design points for the optimization. When a design point fails,
NLPQL changes the direction of its search. It does not attempt to solve an additional design point in
its place and does not include it on the Direct Optimization Samples chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This is the number
of iteration points obtained by the optimization and should be equal to the number of iterations.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum
Number of Candidates input property.
7. In the Outline pane, select Domain or any node under it to review domain information in the Properties
and Table panes.
8. For a Direct Optimization system, in the Outline pane, select Raw Optimization Data. The Table pane
displays the design points that were calculated during the optimization. If the raw optimization data
point exists in the design points table for the Parameter Set bar, the corresponding design point name
is indicated in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on for
the included design points.
• Verify Candidate Points: Select to verify candidate points automatically at the end of the optimization
update.
• Finite Difference Approximation: When analytical gradients are not available, MISQP approximates
them numerically. This property allows you to specify the method of approximating the gradient of
the objective function. Choices are:
– Central: Increases the accuracy of the gradient calculations by sampling from both sides of the
sample point but increases the number of design point evaluations by 50%. This method makes
use of the initial point, as well as the forward point and rear point. This is the default method for
preexisting databases and new Response Surface Optimization systems.
– Forward: Uses fewer design point evaluations but decreases the accuracy of the gradient calculations.
This method makes use of only two design points, the initial point and forward point, to calculate
the slope forward. This is the default method for new Direct Optimization systems.
• Initial Finite Difference Delta (%): Advanced option for specifying the relative variation used to
perturb the current point to compute gradients. Used in conjunction with Allowable Convergence
(%) to ensure that the delta in MISQP's calculation of finite differences is large enough to be seen
above the noise in the simulation problem. This wider sampling produces results that are more clearly
differentiated so that the difference is less affected by solution noise and the gradient direction is
clearer. The value should be larger than both the value for Allowable Convergence (%) and the
noise magnitude of the model. Because smaller values produce more accurate results, set Initial Finite
Difference Delta (%) only as high as necessary to be seen above simulation noise.
For a Direct Optimization system, the default percentage value is 1. For a Response Surface
Optimization system, the default percentage value is 0.001. The minimum is 0.0001, and the
maximum is 10.
For parameters with Allowed Values set to Manufacturable Values or Snap to Grid, the value
for Initial Finite Difference Delta (%) is ignored. In such cases, the closest allowed value is
used to determine the finite difference delta.
• Allowable Convergence (%): Stop criterion. Tolerance to which the Karush-Kuhn-Tucker (KKT) optim-
ality criterion is generated during the MISQP process. A smaller value indicates more convergence it-
erations and a more accurate (but slower) solution. A larger value indicates fewer convergence iterations
and a less accurate (but faster) solution.
For a Direct Optimization system, the default percentage value is 0.1. For a Response Surface
Optimization system, the default percentage value is 0.0001. The maximum value is 100. These
values are consistent across all problem types because the inputs, outputs, and gradients are
scaled during the MISQP solution.
• Maximum Number of Iterations: Stop criterion. Maximum number of iterations that the algorithm
is to execute. If convergence happens before this number is reached, the iterations stop. This also
provides an idea of the maximum possible number of function evaluations that are needed for the
full cycle. For MISQP, the number of evaluations can be approximated according to the Finite Difference
Approximation gradient calculation method, as follows:
• Maximum Number of Candidates: Maximum number of candidates that the algorithm is to generate.
The default is 3. For more information, see Viewing and Editing Candidate Points in the Table
Pane (p. 194).
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information, see Defining
Optimization Objectives and Constraints (p. 188).
Note:
For MISQP, exactly one output parameter must have an objective defined, but multiple
parameter constraints are permitted.
• In the Table or Properties pane, define the optimization domain. For more information, see Defining
the Optimization Domain (p. 184).
The result is a group of points or sample set. The points that are most in alignment with the ob-
jective are displayed in the table as the candidate points for the optimization. In the Properties
pane, the Size of Generated Sample Set result is read-only. For MISQP, this value is always equal
to 1.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization Status,
view the outcome:
• Number of Iterations: Number of iterations executed. Each iteration corresponds to one formulation
and solution of the quadratic programming subproblem, or alternatively, one evaluation of gradients.
• Number of Evaluations: Number of design point evaluations performed. This value takes into account
all points used in the optimization, including design points pulled from the cache. It can be used to
measure the efficiency of the optimization method in finding the optimum design point.
• Number of Failures: Number of failed design points for the optimization. When a design point fails,
MISQP changes the direction of its search. It does not attempt to solve an additional design point in
its place and does not include it on the Direct Optimization Samples chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This is the number
of design points updated in the last iteration.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum
Number of Candidates input property.
7. In the Outline pane, select Domain or any node under it to review domain information in the Properties
and Table panes.
8. For a Direct Optimization system, in the Outline pane, select Raw Optimization Data. The Table pane
displays the design points that were calculated during the optimization. If the raw optimization data
point exists in the design points table for the Parameter Set bar, the corresponding design point name
is indicated in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on for
the included design points.
The Adaptive Single-Objective method is available for input parameters that are continuous, including
those with manufacturable values. It can handle only one output parameter goal, although other
output parameters can be defined as constraints. It does not support the use of parameter relationships
in the optimization domain. For more information, see Adaptive Single-Objective Optimization
(ASO) (p. 352).
Note:
The Adaptive Single-Objective method requires advanced options. Ensure that the Show
Advanced Options check box is selected on the Design Exploration tab of the Options
window. Advanced options display in italic type.
• Number of Initial Samples: Number of samples generated for the initial Kriging and, after each domain
reduction, for the construction of the next Kriging.
You can enter a minimum of (NbInp+1)*(NbInp+2)/2 (also the minimum number of OSF
samples required for the Kriging construction) or a maximum of 10,000. The default is
(NbInp+1)*(NbInp+2)/2 for a Direct Optimization system. There is no default for a Re-
sponse Surface Optimization system.
Because of the Adaptive Single-Objective workflow (in which a new OSF sample set is generated
after each domain reduction), increasing the number of OSF samples does not necessarily improve
the quality of the results and significantly increases the number of evaluations.
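For reference, the minimum sample count given by the formula above can be computed with a short helper; this is only a worked illustration of the stated expression, not DesignXplorer functionality:

def min_initial_samples(num_inputs):
    # (NbInp + 1) * (NbInp + 2) / 2, the minimum OSF sample count for the Kriging construction.
    return (num_inputs + 1) * (num_inputs + 2) // 2

for num_inputs in (2, 5, 10):
    print(num_inputs, min_initial_samples(num_inputs))  # 2 -> 6, 5 -> 21, 10 -> 66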
• Random Generator Seed: Advanced option that displays only when Type of Initial Sampling is set
to Optimal Space-Filling. The value for initializing the random number generator invoked internally
by OSF. The value must be a positive integer. This property allows you to generate different samplings
by changing the value or to regenerate the same sampling by keeping the same value. The default is
0.
• Maximum Number of Cycles: Number of optimization loops that the algorithm needs, which in turn
determines the discrepancy of the OSF. The optimization is essentially combinatorial, so a large number
of cycles slows down the process. However, this makes the discrepancy of the OSF smaller. The value
must be greater than 0. For practical purposes, 10 cycles is usually good for up to 20 variables. The
default is 10.
• Number of Screening Samples: Number of samples for the screening generation on the current Kriging.
This value is used to create the next Kriging (based on error prediction) and to obtain verified candidates.
You can enter a minimum of (NbInp+1)*(NbInp+2)/2 (also the minimum number of OSF
samples required for the Kriging construction) or a maximum of 10,000. The default is
100*NbInp for a Direct Optimization system. There is no default for a Response Surface
Optimization system.
The larger the screening sample set, the better the chances of finding good verified points.
However, too many points can result in a divergence of the Kriging.
• Number of Starting Points: Determines the number of local optima to explore. The larger the number
of starting points, the more local optima explored. In the case of a linear surface, for example, it is not
necessary to use many points. This value must be less than the value for Number of Screening Samples
because the starting points are selected from the screening samples. The default is the value for Number
of Initial Samples.
• Maximum Number of Evaluations: Stop criterion. Maximum number of evaluations (design points)
that the algorithm is to calculate. If convergence occurs before this number is reached, evaluations
stop. This value also provides an idea of the maximum possible time it takes to run the optimization.
The default is 20*(NbInp +1).
• Maximum Number of Domain Reductions: Stop criterion. Maximum number of domain reductions
for input variation. (No information is known about the size of the reduction beforehand.) The default
is 20.
• Percentage of Domain Reductions: Stop criterion. Minimum size of the current domain according
to the initial domain. For example, with one input ranging between 0 and 100, the domain size is equal
to 100. The percentage of domain reduction is 1%, so the current working domain size cannot be less
than 1 (such as an input ranging between 5 and 6). The default is 0.1.
• Convergence Tolerance: Stop criterion. Minimum allowable gap between the values of two successive
candidates. If the difference between two successive candidates is smaller than the value for Conver-
gence Tolerance multiplied by the maximum variation of the parameter, the algorithm is stopped. A
smaller value indicates more convergence iterations and a more accurate (but slower) solution. A larger
value indicates fewer convergence iterations and a less accurate (but faster) solution. The default is
1E-06.
• Retained Domain per Iteration (%): Advanced option that allows you to specify the minimum per-
centage of the domain you want to keep after a domain reduction. The percentage value must be
between 10 and 90. A larger value indicates less domain reduction, which implies better exploration
but a slower solution. A smaller value indicates a faster and more accurate solution, with the risk of it
being a local one. The default percentage value is 40.
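To make the Percentage of Domain Reductions and Convergence Tolerance criteria above more concrete, the sketch below checks both for a single input; the values are arbitrary and this is not the Adaptive Single-Objective implementation:

initial_domain_size = 100.0            # input ranging 0..100
current_domain_size = 0.8              # size of the working domain after several reductions
percentage_of_domain_reduction = 1.0   # stop criterion, in percent
convergence_tolerance = 1e-6
previous_candidate, latest_candidate = 42.10, 42.100001

# Stop when the working domain falls below the allowed fraction of the initial domain.
domain_limit_reached = current_domain_size < (percentage_of_domain_reduction / 100.0) * initial_domain_size

# Stop when two successive candidates differ by less than the tolerance times the parameter variation.
converged = abs(latest_candidate - previous_candidate) < convergence_tolerance * initial_domain_size

print(domain_limit_reached, converged)  # True True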
• Maximum Number of Candidates: Maximum number of candidates that the algorithm is to generate.
The default is 3. For more information, see Viewing and Editing Candidate Points in the Table
Pane (p. 194).
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information, see Defining
Optimization Objectives and Constraints (p. 188).
Note:
• For the Adaptive Single-Objective method, exactly one output parameter must have an
objective defined.
• After you have defined an objective, a warning icon displays in the Message column of the
Outline pane if the recommended number of input parameters has been exceeded. For
more information, see Number of Input Parameters for DOE Types (p. 77).
• In the Table or Properties pane, define the optimization domain. For more information, see Defining
the Optimization Domain (p. 184).
The result is a group of points or sample set. The points that are most in alignment with the ob-
jectives are displayed in the table as the candidate points for the optimization. In the Properties
pane, the Size of Generated Sample Set result is read-only.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization Status,
view the outcome:
• Number of Evaluations: Number of design point evaluations performed. This value takes into account
all points used in the optimization, including design points pulled from the cache. It corresponds to
the total of LHS points and verification points.
• Number of Failures: Number of failed design points for the optimization. When a design point fails,
the Adaptive Single-Objective method changes the direction of its search. It does not attempt to solve
an additional design point in its place and does not include it on the Direct Optimization Samples
chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This is the number
of unique design points that have been successfully updated.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum
Number of Candidates input property.
7. For a Direct Optimization system, in the Outline pane, select Raw Optimization Data. The Table displays
the design points that were calculated during the optimization. If the raw optimization data point exists
in the design points table for the Parameter Set bar, the corresponding design point name is indicated
in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on for
the included design points.
The Adaptive Multiple-Objective method is available only for continuous input parameters, including
those with manufacturable values. It can handle multiple objectives and multiple constraints. For
more information, see Adaptive Multiple-Objective Optimization (p. 361).
Note:
The Adaptive Multiple-Objective method requires advanced options. Ensure that the Show
Advanced Options check box is selected on the Design Exploration tab of the Options
window. Advanced options display in italic type.
• Type of Initial Sampling: If you do not have any parameter relationships defined, set to
Screening (default) or Optimal Space-Filling. If you do have parameter relationships defined,
the initial sampling must be performed by the constrained sampling algorithms (because para-
meter relationships constrain the sampling). In such cases, this property is automatically set to
Constrained Sampling.
• Random Generator Seed: Advanced option that displays only when Type of Initial Sampling
is set to Optimal Space-Filling. Value for initializing the random number generator invoked in-
ternally by the Optimal Space-Filling (OSF) algorithm. The value must be a positive integer. This
property allows you to generate different samplings by changing the value or to regenerate the
same sampling by keeping the same value. The default is 0.
• Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs,
which in turn determines the discrepancy of the OSF. The optimization is essentially combinat-
orial, so a large number of cycles slows down the process. However, this makes the discrepancy
of the OSF smaller. The value must be greater than 0. For practical purposes, 10 cycles is usually
good for up to 20 variables. The default is 10.
• Number of Initial Samples: Initial number of samples to use. This number must be greater than
the number of enabled inputs. The minimum recommended number of initial samples is 10 times
the number of enabled input parameters. The larger the initial sample set, the better your chances
of finding the input parameter space that contains the best solutions.
The number of enabled input parameters is also the minimum number of samples required
to generate the Sensitivities chart. You can enter a minimum of 2 and a maximum of 10000.
The default is 100.
If you are switching the method from Screening to MOGA, MOGA generates a new sample
set. For the sake of consistency, enter the same number of initial samples as used for the
Screening method.
For example, a Maximum Allowable Pareto Percentage of 70 with Number of Samples Per Iteration set
at 200 samples would mean that the optimization should
stop once the resulting front of the MOGA optimization contains at least 140 points. Of course,
the optimization stops before that if the Maximum Number of Iterations is reached.
If the Maximum Allowable Pareto Percentage is too low (below 30), the process can
converge prematurely, and if it is too high (above 80), it can converge slowly. The value
of this property depends on the number of parameters and the nature of the design space
itself. The default is 70. Using a value between 55 and 75 works best for most problems.
For more information, see Convergence Criteria in MOGA-Based Multi-Objective Optimization
(p. 356).
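The arithmetic behind this example is straightforward, as the short check below shows:

max_allowable_pareto_percentage = 70
number_of_samples_per_iteration = 200

# Minimum Pareto front size at which the optimization is allowed to stop.
pareto_point_threshold = (max_allowable_pareto_percentage / 100.0) * number_of_samples_per_iteration
print(pareto_point_threshold)  # 140.0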
• Maximum Number of Iterations: Stop criterion. Maximum number of iterations that the algorithm
is to execute. If this number is reached without the optimization having reached convergence,
iterations stop. This also provides an idea of the maximum possible number of function evaluations
that are needed for the full cycle, as well as the maximum possible time it can take to run the
optimization. For example, the absolute maximum number of evaluations is given by: Number
of Initial Samples + Number of Samples Per Iteration * (Maximum
Number of Iterations - 1).
• Mutation Probability: Advanced option for specifying the probability of applying a mutation on
a design configuration. The value must be between 0 and 1. A larger value indicates a more random
algorithm. If the value is 1, the algorithm becomes a pure random search. A low probability of
mutation (<0.2) is recommended. The default is 0.01. For more information on mutation, see
MOGA Steps to Generate a New Population (p. 359).
• Crossover Probability: Advanced option for specifying the probability with which parent solutions
are recombined to generate offspring solutions. The value must be between 0 and 1. A smaller
value indicates a more stable population and a faster (but less accurate) solution. If the value is
0, the parents are copied directly to the new population. A high probability of crossover (>0.9) is
recommended. The default is 0.98.
• Maximum Number of Candidates: Maximum number of candidates for the algorithm to generate.
The default is 3. For more information, see Viewing and Editing Candidate Points in the Table
Pane (p. 194).
• Type of Discrete Crossover: Advanced option for specifying the kind of crossover for discrete
parameters. This property is visible only if there is at least one discrete input variable or continuous
input variable with manufacturable values. Three crossover types are available: One Point, Two
Points, and Uniform. According to the type of crossover selected, the children are closer to or
farther from their parents. Children are closer for One Point and farther for Uniform. The default
is One Point. For more information on crossover, see MOGA Steps to Generate a New Popula-
tion (p. 359).
For example, with 2 discrete input parameters with 4 and 5 levels respectively and 1 continuous parameter
with 6 manufacturable values,
Maximum Number of Permutations is equal to 120 (4*5*6).
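The count in this example is simply the product of the number of levels or manufacturable values of each parameter, as the quick check below illustrates:

levels_per_parameter = [4, 5, 6]  # two discrete parameters and one continuous parameter with manufacturable values

maximum_number_of_permutations = 1
for levels in levels_per_parameter:
    maximum_number_of_permutations *= levels

print(maximum_number_of_permutations)  # 120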
• In the Outline pane, select Objectives and Constraints or an object under it.
• In the Table or Properties pane, define the optimization objectives. For more information, see
Defining Optimization Objectives and Constraints (p. 188).
Note:
– For the Adaptive Multiple-Objective method, at least one output parameter must
have an objective defined. Multiple objectives are allowed.
– After you have defined an objective, a warning icon displays in the Message column
of the Outline pane if the recommended number of input parameters is exceeded.
For more information, see Number of Input Parameters for DOE Types (p. 77).
• In the Outline pane, select Domain or an input parameter or parameter relationship under it.
• In the Table or Properties pane, define the selected domain object. For more information, see
Defining the Optimization Domain (p. 184).
The result is a group of points or sample set. The points that are most in alignment with the
objectives are displayed in the table as the candidate points for the optimization. In the
Properties pane, the result Size of Generated Sample Set is read-only.
6. In the Outline pane, select Optimization. Then, in the Properties pane under Optimization Status,
view the outcome:
• Number of Evaluations: Number of design point evaluations performed. This value takes into account
all points used in the optimization, including design points pulled from the cache. It can be used to
measure the efficiency of the optimization method in finding the optimum design point.
• Number of Failures: Number of failed design points for the optimization. When a design point
fails, an Adaptive Multiple-Objective optimization does not retain this point in the Pareto front
to generate the next population, attempt to solve an additional design point in its place, or include
it on the Direct Optimization Samples chart.
• Size of Generated Sample Set: Number of samples generated in the sample set. This is the
number of samples successfully updated for the last population generated by the algorithm. It
usually equals the Number of Samples Per Iteration.
• Number of Candidates: Number of candidates obtained. This value is limited by the Maximum
Number of Candidates input property.
7. In the Outline pane, select Domain or any object under it to view domain data in the Properties
and Table panes.
8. For a Direct Optimization system, select Raw Optimization Data in the Outline pane. The Table
pane displays the design points that were calculated during the optimization. If the raw optimization
data point exists in the design points table for the Parameter Set bar, the corresponding design
point name is indicated in parentheses in the Name column.
Note:
This list is compiled of raw data and does not show feasibility, ratings, and so on
for the included design points.
Once an optimization extension is installed and loaded to the project as described in Working with
DesignXplorer Extensions (p. 305), you are ready to start using its extended functionality.
DesignXplorer filters the Method Name list for applicability to the current project, displaying only
those optimization methods that you can use to solve the optimization problem as it is currently
defined. When no objectives or constraints are defined for a project, all optimization methods are
listed. If you already know that you want to use a particular external optimizer, you should select it
as the method before setting up the rest of the project. Otherwise, the optimization method could
be inadvertently filtered from the list.
DesignXplorer shows only the optimization functionality that is specifically defined in the extension.
Additionally, DesignXplorer filters objectives and constraints according to the optimization method
selected, making only those objects supported by the selected optimization method available for
selection. For example, if you have selected an optimizer that does not support the Maximize objective
type, Maximize is not included in the Objective Type list.
If you already have a specific problem you want to solve, you should set up the project before selecting
an optimization method for Method Name. Otherwise, the desired objectives and constraints could
be filtered from the lists.
When you select Domain or any object under it, the Table pane displays the input parameters and
parameter relationships that are defined and enabled for the optimization. It does not display disabled
domain objects.
• Select Domain and then edit the input parameter domain in the Table pane.
• Select an input parameter under Domain and then edit the input parameter domain in either the Prop-
erties or Table pane.
For enabled input parameters in the Properties and Table panes, the following settings are available:
Lower Bound
Defines the lower bound for the optimization input parameter space. Increasing the lower bound confines
the optimization to a subset of the DOE domain. By default, the lower bound corresponds to the following
values defined in the DOE:
Upper Bound
Defines the upper bound for the input parameter space. By default, the upper bound corresponds to
the following values defined in the DOE:
Starting Value
Available only for NLPQL and MISQP. Specifies where the optimization starts for each input parameter.
Because NLPQL and MISQP are gradient-based methods, the starting point in the parameter space
determines the candidates found. With a poor starting point, NLPQL and MISQP might find a
local optimum, which is not necessarily the same as the global optimum. This setting gives
you more control over your optimization results by allowing you to specify exactly where in the
parameter space the optimization should begin.
• Must fall within the domain constrained by the enabled parameter relationships. For more in-
formation, see Defining Parameter Relationships (p. 185).
For each disabled input parameter, specify the desired value to use in the optimization. By default,
the value is copied from the current design point when the optimization system was created.
Note:
When the optimization is refreshed, disabled input values persist. However, they
are not updated to the current design point values.
Note:
To specify parameter relationships for outputs, you can create derived parameters. You
create derived parameters for an analysis system by providing expressions. The derived
parameters are then passed to DesignXplorer as outputs. For more information, see Creating
or Editing Parameters in the Workbench User's Guide.
A parameter relationship has one operator and two expressions. In the following example, you can
see that two parameter relationships have been defined, each involving input parameters P1 and P2.
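As a purely illustrative sketch of what such relationships express, the Python lines below evaluate two hypothetical relationships (P1 <= P2 and P1 + P2 <= 10) against one set of input values; the expressions and values are invented for this example and are not tied to any project:

candidate = {"P1": 3.0, "P2": 7.0}

# Each relationship pairs a left expression and a right expression with an operator (<= or >=).
relationships = [
    ("P1", "<=", "P2"),
    ("P1 + P2", "<=", "10"),
]

for left, operator, right in relationships:
    left_value = eval(left, {}, candidate)    # eval is used only to keep this sketch short
    right_value = eval(right, {}, candidate)
    satisfied = left_value <= right_value if operator == "<=" else left_value >= right_value
    print(left, operator, right, "->", left_value, right_value, satisfied)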
You can create, edit, enable and disable, and view parameter relationships.
• In the Outline pane, right-click Parameter Relationships and select Insert Parameter Relationship.
• In the Table pane under Parameter Relationships, enter parameter relationship data in the bottom row.
• In the Outline pane, select Domain or Parameter Relationships under it. Then, edit the parameter rela-
tionship in the Table pane.
• In the Outline pane under Parameter Relationships, select a parameter relationship. Then, edit the
parameter relationship in the Properties or Table pane.
The following table indicates the editing tasks that the Properties, Table, and Outline panes allow
you to perform.
Name
Editable in the Properties and Table panes. Each parameter relationship is given a default name such
as Parameter Relationship or Parameter Relationship 2, based on the order in which the parameter relationship was created.
When you define both the left and right expressions for the parameter relationship, the default name
is replaced by the relationship. For example, the default name can become P1<=P2. The name is updated accordingly
when either of the expressions is modified.
The Name property allows you to edit the name of the parameter relationship. Once you edit
this property, the name persists. To resume the automated naming system, you must delete the
custom name, leaving the property empty.
Operator
Editable in the Properties and Table panes. Allows you to select the expression operator from a drop-
down menu. Available values are <= and >=.
When the evaluation is complete, the value for each expression is displayed:
• Under Domain in the Table pane. To view expression values for a parameter relationship, click the plus
icon next to the name. The values display below the corresponding expression.
• Under Candidate Points in the Table pane. In the Properties pane, select the Show Parameter Relation-
ships check box. Parameter relationships that are defined and enabled, along with their expressions and
current expression values for NLPQL and MISQP, are shown in the candidate points table in the Table
pane. For more information, see Viewing and Editing Candidate Points in the Table Pane (p. 194).
If the evaluation fails, the Outline and Properties panes display an error message. Review the error
to identify problems with the corresponding parameter relationship.
Note:
• The evaluation can fail because the selected optimization method does not support
parameter relationships or because the optimization includes one or more invalid para-
meter relationships. Parameter relationships can be invalid if they contain quantities that
are not comparable, parameters for which the values are unknown, or expressions that
are incorrect.
• The gradient-based optimization methods, NLPQL and MISQP, require a feasible value for
Starting Value that falls within the domain constrained by the enabled parameter rela-
tionships. If this value is infeasible, the optimization cannot be updated.
Additional optimization properties can be set in the Properties pane. The optimization approach used
in the design exploration environment departs in many ways from traditional optimization techniques,
giving you added flexibility in obtaining the desired design configuration.
Note:
If you are using an external optimizer, DesignXplorer filters objectives and constraints accord-
ing to the optimization method selected, displaying only the types that the method supports.
For example, if you have selected an optimizer that does not support the Maximize objective
type, DesignXplorer does not display Maximize as a choice.
In both the Table and Properties pane, the following optimization options are available:
For example, assume that you have selected parameter P1 in the empty row and set the objective
type to Minimize. DesignXplorer assigns Minimize P1 as the name. If you change the objective
type to Maximize, DesignXplorer changes the name to Maximize P1. If you then add a constraint
type of Values >= Bound and set Lower Bound to 3, DesignXplorer changes the name to Maximize
P1; P1 >= 3.
In either the Table pane or Outline pane, you can manually change the name of an objective or
constraint. Any custom name that you assign is persisted, which means that DesignXplorer no longer
changes the name if you change the properties. To restore automated naming, delete the
custom name, leaving the option empty. When you click elsewhere, DesignXplorer restores the
automated name.
Available options depend on the type of parameter and whether it is an input or output.
Parameter
In the last row in the Table pane, this option allows you to select the input or output parameter for which
to add an objective or constraint. In the newly inserted row, you then define properties for this parameter.
For existing rows, this option is display-only. However, you can delete rows.
Objective Type
Available for continuous input parameters without manufacturable values and output parameters. Allows
you to define an objective by setting the objective type. See the following tables for available objective
types.
Objective Initial
Visible only when tolerance settings are enabled and the Solution Process Update property for the
Parameter Set bar is set to Submit to Design Point Service (DPS). This property does not affect
DesignXplorer but rather is included in the fitness terms that DesignXplorer sends when the update of an
optimization study is sent to DPS. For more information, see Initial Values for Objectives to Send to
DPS (p. 192).
Objective Target
For a parameter with an objective, allows you to set the best estimated goal value that the optimization
method can achieve for the objective. This value is not a stopping criterion. If the optimization method
can find a better value than the target value, it will do so. On the history chart and sparkline, the target
value is shown by a dashed line.
Objective Tolerance
Visible when tolerance settings are enabled. For a parameter with a Seek Target objective type, allows
you to set the level of accuracy for reaching the target value. The tolerance value is not a strict constraint
to satisfy but rather a goal to reach. The tolerance value must be positive. For more information, see Toler-
ance Settings (p. 191).
Constraint Type
Available for continuous input parameters with manufacturable values, discrete input parameters, and
output parameters. Allows you to define a constraint by setting the constraint type. See the following
tables for available constraint types.
• For a discrete input parameter or a continuous input parameter with manufacturable values, set the
lower or upper limit for the input value.
• For an output parameter, set the range for limiting the target value for the constraint.
Constraint Tolerance
Allows you to set a feasibility tolerance. The optimization method considers any point with a constraint
violation less than the tolerance value as a feasible point. The tolerance value must be positive.
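A minimal sketch of this behavior for an upper-bound constraint is shown below; the interpretation of the violation is an assumption made for the example, not the documented DesignXplorer formula:

def is_feasible(output_value, upper_bound, tolerance):
    # The violation is how far the output exceeds the bound; a point whose
    # violation stays below the tolerance is still treated as feasible.
    violation = max(0.0, output_value - upper_bound)
    return violation < tolerance

print(is_feasible(10.05, 10.0, 0.1))  # True: the violation of 0.05 is within the tolerance
print(is_feasible(10.50, 10.0, 0.1))  # False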
Constraints for Discrete Input Parameters or Continuous Input Parameters with Manufacturable
Values
Tolerance Settings
DesignXplorer optimization methods can use tolerance settings to improve convergence and the relev-
ance of results. The Decision Support Process can also use tolerance settings to sort candidate points
and define their rating values.
Note:
In the Workbench Options window under Design Exploration → Sampling and Optim-
ization, the Tolerance Settings check box is selected by default. This preference value
is used to initialize the Tolerance Settings check box in the properties for the Optim-
ization cell when an optimization system is newly inserted in the Project Schematic.
When this check box is selected, tolerance values can be entered for objectives of the
Seek Target type and for constraints. Additionally, if the Solution Process Update
property for the Parameter Set bar is set to Submit to Design Point Service (DPS),
the Initial option is shown for objectives. The next section describes how DesignXplorer
initializes and sends these values to DPS.
– If the selected optimization method requires a starting value, the initial value is synchronized with
the starting value.
– If the selected optimization method does not require a starting value, the initial value is set based
on values for the project’s current design point. The initial value must be consistent with the input
parameter type (continuous or discrete). For continuous input parameters, the initial value must also
be consistent with the setting for the Allowed Values property (Any, Manufacturable Values, or
Snap to Grid), meaning it must be set to the closest discrete level, closest manufacturable value, or
closest point on the grid.
• For an output parameter, if the design point is up-to-date or has already been updated, this
property is initialized from the values of the project's current design point. Otherwise, if the design
point has never been updated, this property is blank and is highlighted in yellow to indicate that a
value is required.
To update an Optimization cell using DPS, the type, target, and initial values for all objectives must
be consistent.
Note:
For Workbench projects created in 2019 R3 or earlier, no changes are required for an
up-to-date Optimization cell in which tolerance settings are enabled. However, if you
make any change to the optimization definition, initial values must be present before
an update can be submitted to DPS successfully.
For a given design point P, the fitness function can be written as a weighted sum of the form
F(P) = sum over all objectives and constraints of w_i * r_i(P), where r_i represents the rating for an
objective or a constraint and w_i corresponds to its relative weight.
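Assuming the weighted-sum reading above (a reconstruction rather than the exact documented expression), the fitness of a design point could be evaluated as in the following sketch:

def fitness(ratings, weights):
    # Assumed form: the sum of each objective's or constraint's rating multiplied by its relative weight.
    return sum(weight * rating for weight, rating in zip(weights, ratings))

# Example: two objectives and one constraint, with the constraint weighted higher.
print(fitness([0.8, 0.6, 1.0], [1.0, 1.0, 2.0]))  # 3.4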
Note:
While tolerance settings are available for external optimizers in DesignXplorer, they are not
available for external optimizers in the DesignXplorer API.
Postprocessing Properties
Under Decision Support Process in the Properties pane, the following postprocessing properties for
objectives and constraints are available:
Objective Importance
For a parameter with an objective, allows you to select the relative importance of this parameter
compared to other objectives. Choices are Default, Higher, and Lower.
Constraint Importance
For a parameter with a constraint defined, allows you to select the relative importance of this para-
meter compared to other constraints. Choices are Default, Higher, and Lower.
Constraint Handling
For a parameter with a constraint defined, allows you to specify the handling of the constraint for this
parameter. This option can be used for any optimization application and is best thought of as a con-
straint satisfaction filter on samples generated from optimization runs. It is especially useful for
screening samples to detect the edges of solution feasibility for highly constrained nonlinear optim-
ization problems. Choices are:
• Relaxed: Samples are generated in the full parameter space, with the constraint only being used
to identify the best candidates. When constraint handling is relaxed, the upper, lower, and equality
constraints of the candidate points are treated as objectives. Therefore, a candidate point that
violates the constraint is still considered feasible.
• Strict: Samples are generated in the reduced parameter space defined by the constraint. When
constraint handling is strict (default), the upper, lower, and equality constraints are treated as hard
constraints. If any of these constraints is violated, the candidate point is no longer shown.
• Screening requires that an objective or a constraint is defined for at least one parameter. Multiple
output objectives are allowed.
• MOGA and Adaptive Multiple-Objective require that an objective is defined for at least one parameter.
Multiple output objectives are allowed.
• NLPQL, MISQP, and Adaptive Single-Objective require that an objective is defined for one parameter.
Only a single output objective is allowed.
• When you select Optimization, the Table pane displays a summary of candidate data.
• When you select Candidate Points, the Table pane displays existing candidate points and allows you to
add new custom candidate points. The Chart pane displays results graphically. For more information, see
Using Candidate Point Results (p. 210).
Once candidate points are created, you can verify them and also have the option of inserting them into
the response surface as other types of points:
Viewing and Editing Candidate Points in the Table Pane
Retrieving Intermediate Candidate Points
Inserting Candidate Points as New Design, Response, Refinement, or Verification Points
Verifying Candidates by Design Point Update
The maximum number of candidate points that can be generated is determined by Maximum
Number of Candidates. The recommended maximum number of candidate points depends on the
optimization method selected for use. For example, only one candidate is generally needed for
gradient-based, single-objective methods (NLPQL and MISQP). For multiple-objective methods, you
can request as many candidates as you want. For each Pareto front that is generated, there are sev-
eral potential candidates.
Note:
Because the number of candidate points does not affect the optimization, you can
experiment by changing the value for Maximum Number of Candidates and then
updating the optimization. Providing that only this property changes, the update
performs only postprocessing operations, which means candidates are rapidly gener-
ated.
The Table pane displays each candidate point, along with its input and output values. Output para-
meter values calculated from design point updates display in black text. Output parameter values
calculated from a response surface are displayed in the custom color defined in Tools → Options →
Design Exploration → Response Surface. For more information, see Response Surface Options (p. 25).
The number of gold stars or red crosses displayed next to each goal-driven parameter indicates how
well the parameter meets the stated goal. Parameters with three gold stars are the best, and parameters
with three red crosses are the worst.
For each parameter with a goal defined, the optimization also calculates the percentage of variation
for all parameters with regard to an initial reference point. By default, the initial reference point for
NLPQL or MISQP is the Starting Point defined in the optimization properties. For Screening or MOGA,
the initial reference point is the most viable candidate, Candidate 1. You can set any candidate point
as the initial reference point by selecting it in the Reference column. The Parameter Value column
displays the parameter value and the stars or crosses indicating the quality of the candidate. In the
Variation from Reference column, green text indicates variation in the expected direction. Red text
indicates variation that is not in the expected direction. When there is no obvious direction (as for a
constraint), the percentage value displays in black text.
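The reported percentage of variation can be understood as the relative difference between a candidate's value and the reference point's value; the snippet below shows that common reading, which is an assumption rather than the documented formula:

def variation_from_reference(candidate_value, reference_value):
    # Percentage change of a candidate's parameter value relative to the reference point.
    return 100.0 * (candidate_value - reference_value) / reference_value

print(variation_from_reference(9.5, 10.0))   # -5.0 percent
print(variation_from_reference(11.2, 10.0))  # approximately 12.0 percent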
The Name property for each candidate point indicates whether it corresponds to a design point in
the Table pane for the Parameter Set bar. A candidate point corresponds to a design point when
they both have the same input parameter values. If the design point is deleted from the Parameter
Set bar or the definition of either point is changed, the link between the two points is broken, without
invalidating your model or results. Additionally, the indicator is removed from the candidate point's
name.
If parameter relationships are defined and enabled, you can opt to also view parameter relationships
in the candidate points table. In the Properties pane for the Optimization cell, select the Show
Parameter Relationships check box. In the Table pane, the candidate points table then displays
parameter relationships and their expressions. For NLPQL and MISQP, the candidate points table also
displays current values for each expression.
• In the Outline pane under Results, select Candidate Points. In the Table pane, enter data into the cells
of the bottom table row. For a Response Surface Optimization system, you can also right-click a candidate
point row in the Table pane and select Insert as Custom Candidate Point.
• In the Outline pane, select Optimization. For a Response Surface Optimization system, you can also
right-click a candidate point in the Table pane and select Insert as Custom Candidate Point.
• In the Outline pane under Results, select any chart. Right-click a point in the chart and select Insert as
Custom Candidate Point.
When a custom candidate point is created in a Response Surface Optimization system, the outputs
of custom candidates are automatically evaluated from the response surface. When a custom candidate
is created in a Direct Optimization system, the outputs of the custom candidates are not brought
up-to-date until the next real solve.
Once a custom candidate point is created, it is automatically plotted in the results in the Chart pane and can be treated like any other candidate point. You can edit its name, edit its input parameter values, and select options from the right-click context menu. In addition to the standard context menu options, an Update Custom Candidate Point option is available for out-of-date candidates in a Direct Optimization system, and a Delete option allows you to delete a custom candidate point.
To stop the optimization, click Show Progress in the lower right corner of the window to open the
Progress pane. To the right of the progress bar, click the red stop button. In the dialog box that
opens, select either Interrupt or Abort. Intermediate results are available in either case.
When the optimization is stopped, candidate points are generated from the data available at this
time, such as solved samples, results of the current iteration, the current populations, and so on.
To assess the state of the optimization at the time it was stopped, under Optimization in the Prop-
erties pane, look at the optimization status and counts.
To view the intermediate candidate points, under Results, select Candidate Points.
Note:
DesignXplorer might not be able to return verified candidate points for optimizations
that have been stopped.
When an optimization is stopped midway, the Optimization cell remains in an unsolved state. If you
change any settings before updating the optimization again, the optimization process must start
over. However, if you do not change any settings before the next update, DesignXplorer makes use
of the design point cache to quickly return to the current iteration.
For more information, see Using the History Chart (p. 201).
Working with Candidate Points
When either Optimization or Candidate Points is selected in the Outline pane, you can select one
or more candidate points in the Table pane and then right-click one of them to select an option for
inserting them as new points. You can also right-click a point in an optimization chart. The options
available on the context menu depend on the type of optimization. Possible options are:
• Explore Response Surface at Point. Inserts new response points in the Table pane for the Response
Surface cell by copying the input and output parameter values of the selected candidate points.
• Insert as Design Point. Inserts new design points in the Table pane for the Parameter Set bar by copying
the input parameter values of the selected candidate points. The output parameter values are not copied
because they are approximated values provided by the response surface.
• Insert as Refinement Point. Inserts new refinement points in the Table pane for the Response Surface
cell by copying the input parameter values of the selected candidate points.
• Insert as Verification Point. Inserts new verification points in the Table pane for the Response Surface
cell by copying the input and output parameter values of the selected candidate points.
• Insert as Custom Candidate Point. Inserts new custom candidate points in the candidate points table
by copying the input parameter values of the selected candidate points.
For a Response Surface Optimization system, the same insertion operations are available for the
raw optimization data table and most optimization charts, depending on the context. For instance,
it is possible to right-click a point in a Tradeoff chart to insert the corresponding sample as a response
point, refinement point, or design point. The same operations are also available from a Samples chart.
Note:
For a Direct Optimization system, only the Insert as Design Point and Insert as Custom
Candidate Point options are available.
With either Optimization or Candidate Points selected in the Outline pane, select one or more candidate points in the Table pane, right-click one of them, and select Verify by Design Points Update. This context menu option is available for both optimization-generated candidate points and custom candidate points.
DesignXplorer verifies candidate points by creating and updating design points with a real solve, using
the input parameter values of the candidate points. The output parameter values for each candidate
point are displayed in a separate row. For a Response Surface Optimization system, verified candidates
are placed next to the row containing the output values generated by the response surface. The se-
quence varies according to sort order. Output parameter values calculated from design point updates
are displayed in black text. Output parameter values calculated from a response surface are displayed
in the custom color defined in Tools → Options → Design Exploration → Response Surface. For
more information, see Response Surface Options (p. 25).
In a Response Surface Optimization system, if a large difference exists between the results of the verified and unverified rows for a point, the response surface might not be accurate enough in that area. In such cases, refinement or other adjustments might be necessary. If desired, you can insert the candidate point as a refinement point and then recompute the optimization so that the refinement point and new response surface are taken into account.
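One way to judge whether the difference between the verified and unverified rows is "large" is a relative error between the response surface prediction and the real-solve value. This is only a sketch; DesignXplorer does not prescribe a threshold, and the 5% limit below is an assumption for illustration:

def relative_error(predicted, verified):
    # Relative difference between the response surface output and the
    # output obtained from a design point update (real solve).
    return abs(predicted - verified) / abs(verified)

if relative_error(13250.0, 12100.0) > 0.05:   # assumed 5% acceptance threshold
    print("Consider inserting the candidate as a refinement point.")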
Note:
• Often candidate points do not have practical input parameters. For example, ideal thickness
could be 0.127 instead of the more practical 0.125. If desired, you can right-click the candidate
and select Insert as Design Point, edit the parameters of the design point, and then run this
design point instead of the candidate point.
To solve the verification points, DesignXplorer uses the same mechanism that is used to solve DOE
points. The verification points are either deleted or persisted after the run as determined by the
Preserve Design Points after a DX Run option for DesignXplorer. As usual, if the update of the
verification point fails, it is preserved automatically in the project. You can explore it as a design point
by editing the Parameter Set bar in the Project Schematic.
• The History chart is available for the following objects in the Outline pane: objectives and constraints (under Objectives and Constraints) and input parameters and parameter relationships (under Domain).
With the exception of the Candidate Points chart, the Convergence Criteria chart, and the History chart, it is possible to duplicate charts. Right-click the chart in the Outline pane and select Duplicate. Or, use the drag-and-drop operation, which attempts an update of the chart so that the duplication of an up-to-date chart results in the creation of an up-to-date chart.
Goal-Driven Optimization Charts and Results
Note:
• The Convergence Criteria chart is not available for the Screening method.
• When an external optimizer is used, the Convergence Criteria chart is generated if data is
available.
When Optimization or Convergence Criteria is selected in the Outline pane, the Chart pane displays
the Convergence Criteria chart. The rendering and logic of the chart varies according to whether you
are using a multiple-objective or single-objective optimization method.
Because the chart is updated after each iteration, you can use it to monitor the progress of the op-
timization. When the convergence criteria have been met, the optimization stops and the chart remains
available.
Note:
If you are using an external optimizer that supports multiple objectives, the Convergence
Criteria chart displays the data that is available.
Before running your optimization, you specify values for the convergence criteria. After selecting
Optimization in the Outline pane, edit the values under Optimization in the Properties pane.
To enable or disable the convergence criteria that display on the Convergence Criteria chart, select Convergence Criteria in the Outline pane. In the Properties pane under Criteria, select or clear the Enabled check box for a criterion.
Before running your optimization, you specify values for the convergence criteria relevant to your
selected optimization method. Although these criteria are not explicitly shown on the chart, they
affect the optimization and the selection of the best candidate.
To specify convergence criteria values, select Optimization in the Outline pane. Then, edit the
values under Optimization in the Properties pane.
• Red points representing the best candidates that are feasible points
Additionally, it gives you the option of monitoring the progress of the selected object while the op-
timization is still in progress. If you select an object during an update, the chart refreshes automatically
and shows the evolution of the objective, constraint, input parameter, or parameter relationship
throughout the update.
For the iterative optimization methods, the chart is refreshed after each iteration. For the Screening
method, it is updated only when the optimization update is complete. You can select a different object
at any time during the update to plot and view a different chart.
The History charts remain available when the update is completed. In the Outline pane, a sparkline
version of the History chart is displayed for each objective, constraint, input parameter, or parameter
relationship.
If the History chart indicates that the optimization has converged midway through the process, you
can stop the optimization and retrieve the results without having to run the rest of the optimization.
For more information, see Retrieving Intermediate Candidate Points (p. 196).
Note:
You can access the History chart by selecting an objective or a constraint under Objectives and Constraints in the Outline pane, or by selecting an input parameter or parameter relationship under Domain.
The History chart displays:
• Number of points in the sample set (as defined by the Size of Generated Sample Set property) along the X axis
• Objective values, which fall within the optimization domain defined for the associated parameter, along the Y axis
You can place the mouse cursor over any data point in the chart to view the X and Y coordinates.
Screening
For a Screening optimization, which is non-iterative, the History chart displays all the points of the
sample set. The chart is updated when all points have been evaluated. The plot reflects the non-
iterative process, with each point visible on the chart.
MOGA
For a MOGA optimization, the History chart displays the evolution of the population of points
throughout the iterations in the optimization. The chart is updated at the end of each iteration
with the most recent population (as defined by the Number of Samples per Iteration property).
NLPQL and MISQP
For an NLPQL or MISQP optimization, the History chart enables you to trace the progress of the
optimization from a defined starting point. The chart displays the objective value associated with
the point used for each iteration. The chart does not display the points used to evaluate the deriv-
ative values. It reflects the gradient optimization process, displaying a point for each iteration.
Adaptive Single-Objective
For an Adaptive Single-Objective optimization, the History chart enables you to trace the progress
of the optimization through a specified maximum number of evaluations. On the Input Parameter
History chart, the upper and lower bounds of the input parameter are represented by blue lines,
allowing you to see the domain reductions narrowing toward convergence.
The chart displays the objective value corresponding to LHS or verification points, showing all
evaluated points.
Adaptive Multiple-Objective
For an Adaptive Multiple-Objective optimization, the History chart displays the evolution of the
population of points throughout the iterations in the optimization. Each set of points (the number
of which is defined by the Number of Samples Per Iteration property) corresponds to the popu-
lation used to generate the next population. Points corresponding to a real solve are plotted as black
points. Points from the response surface are plotted with a square colored as specified in Tools →
Options → Design Exploration → Response Surface. For more information, see Response Surface
Options (p. 25).
The plot reflects the iterative optimization process, with each iteration visible on the chart. All
candidate points generated by the optimization are real design points.
The History chart sparkline is similar to the History chart in the Chart pane:
• Sparklines are gray if no constraints are present. However, if constraints are present:
– Sparklines are green when the constraint or parameter relationship is met.
– Sparklines are red when the constraint or parameter relationship is not met. When parameter relationships are enabled and taken into account, the optimization should not pick infeasible points.
• In the Outline pane, the sparkline for Minimize P9; P9 <= 14000 N is entirely green, indicating
that the constraints are met throughout the optimization history.
• In the Outline pane, the sparkline for Maximize P7; P7 >= 13000 is both red and green, indicating
that the constraints are violated at some points and met at others.
• In the Charts pane, the History chart is shown for the constraint Maximize P7; P7 >= 13000. The
points beneath the dotted gray line for the lower bound are infeasible points.
The History chart for an objective or constraint displays a red line to represent the evolution of the
parameter for which an objective or constraint has been defined. Constraints are represented by
gray dashed lines. The target value is represented by a blue dashed line.
In the following History chart for a MOGA optimization, the output parameter P9 – WB_BUCK is
plotted. The parameter is constrained such that it must have a value less than or equal to 1100.
The dotted gray line represents the constraint.
Given that Constraint Type is set to Maximize and Upper Bound is set to 1100, the area under
the dotted gray line represents the infeasible domain.
For an input parameter, the History chart displays a red line to represent the evolution of the parameter value. If an objective or constraint is defined for the parameter, the same chart displays when the objective, constraint, or input parameter is selected.
In the following History chart for an NLPQL optimization, the input parameter P3 – WB_L is plotted.
For P3 – WB_L, Starting Value is set to 100, Lower Bound is set to 90, and Upper Bound is set
to 110. The optimization converged upward to the upper bound for the parameter.
For a parameter relationship, the History chart displays two lines to represent the evolution of the left expression and right expression of the relationship. The number of points is plotted along the X axis, and the expression values along the Y axis.
In the following History chart for a Screening optimization, the parameter relationship P2 > P1 is
plotted.
To generate candidate points results, update the Optimization cell. Then, in the Outline pane under
Results, select Candidate Points.
• Green lines represent the candidate points generated by the optimization.
If you set Coloring Method to by Source Type, the samples are colored according to the source
from which they were calculated, following the color convention used for data in the Table pane.
Samples calculated from a simulation are represented by black lines. Samples calculated from a
response surface are represented by a line in the custom color specified in Tools → Options →
Design Exploration → Response Surface. For more information, see Response Surface Op-
tions (p. 25).
When you move your mouse over the results, you can pick out individual objects, which become
highlighted in orange. When you select a point, the parameter values for the point are displayed
in the Value column of the Properties pane.
Across the bottom of the results, a vertical line displays for each parameter. When you mouse over a vertical line, two handles appear at the top and bottom of the line. Drag the handles up or down to narrow the focus to the parameter ranges that interest you.
When you select a point on the results, the right-click context menu provides options for exporting
data and saving candidate points as design, refinement, verification, or custom candidate points.
Table
Determines the properties of the results displayed in the Table pane. For the Show Parameter
Relationships property, select the Value check box to display parameter relationships in the Table
pane for the results.
Chart
Determines the properties of the results displayed in the Chart pane. Select the Value check box
to enable the property.
• Display Parameter Full Name: Select to display the full parameter name rather than the short parameter
name.
• Show Starting Point: Select to show the starting point in the results (NLPQL and MISQP only).
• Show Verified Candidates: Select to show verified candidates in the results. This option is available
for a Response Surface Optimization system only. Candidate verification is not necessary for a Direct
Optimization system because the points result from a real solve, rather than an estimation.
• Coloring Method: Select whether the results should be colored by candidate type or source type:
– by Candidate Type: Different colors are used for different types of candidate points. This is the default
value.
– by Source Type: Output parameter values calculated from simulations are displayed in black. Output
parameter values calculated from a response surface are displayed in the custom color selected on
the Response Surface tab in the Options window. For more information, see Response Surface Op-
tions (p. 25).
Input Parameters
Each of the input parameters is listed in this section. In the Enabled column, you can select or clear
check boxes to enable or disable input parameters. Only enabled input parameters are shown on
the chart.
Output Parameters
Each of the output parameters is listed in this section. In the Enabled column, you can select or
clear check boxes to enable or disable output parameters. Only enabled output parameters are
shown on the chart.
You can change various generic chart properties for this chart.
Note:
• The Sensitivities chart is available only for the Screening and MOGA optimization methods.
• If the p-Value calculated for a particular input parameter is above the Significance Level spe-
cified in Tools → Options → Design Exploration, the bar for that parameter is shown as a flat
line on the chart. For more information, see Viewing Significance and Correlation Values (p. 60).
You can change the properties for the chart in the Properties pane.
• You can select which parameter to display on each axis of the chart by selecting the parameter from the
list next to the axis name.
• You can limit the Pareto fronts shown by moving the slider or entering a value in the field above the slider.
You can change various generic chart properties for this chart.
When an optimization is updated, you can view the best candidates (up to the requested number) from the sample set based on the stated objectives. However, these results are not truly representative of the solution set, because this approach obtains results by ranking the solutions with an aggregated weighted method. Schematically, this represents only a section of the available Pareto fronts. To display different sections, you change the weights for the Objective Importance or Constraint Importance property in the Properties pane. This postprocessing step helps in selecting solutions if you are sure of your preferences for each parameter.
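To make the phrase "aggregated weighted method" concrete, the sketch below ranks candidates by normalizing each objective to [0, 1] and summing the normalized values multiplied by importance weights. This illustrates the general idea only, not DesignXplorer's internal ranking algorithm; all names are hypothetical:

def rank_candidates(candidates, objectives):
    # candidates: list of dicts mapping parameter names to values.
    # objectives: list of (name, direction, weight) with direction "min" or "max".
    def score(c):
        total = 0.0
        for name, direction, weight in objectives:
            values = [x[name] for x in candidates]
            lo, hi = min(values), max(values)
            norm = 0.0 if hi == lo else (c[name] - lo) / (hi - lo)
            total += weight * (norm if direction == "max" else 1.0 - norm)
        return total
    return sorted(candidates, key=score, reverse=True)  # best candidate first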
The following figure shows the results of the tradeoff study performed on a MOGA sample set. The
first Pareto front (non-dominated solutions) is represented by blue points on the output-axis plot.
You can move the slider in the Properties pane to the right to add more fronts, effectively adding
more points to the Tradeoff chart. Additional points added in this way are inferior to the points in
the first Pareto front in terms of the objectives or constraints that you specified. However, in some
cases where there are not enough first Pareto front points, these additional points can be necessary
to obtain the final design. You can right-click individual points and save them as design points or
response points.
In 2D and 3D Tradeoff charts, MOGA always ensures that feasible points are shown as being of better
quality than the infeasible points. It uses different markers to indicate them in the chart. Colored
rectangles represent feasible points. Gray circles represent infeasible points. Infeasible points are
available if any of the objectives are defined as constraints. You can enable or disable the display of
infeasible points in the Properties pane.
Also, in both 2D and 3D Tradeoff charts, the best Pareto front is blue. The fronts gradually transition
to red for the worst Pareto front. The following figure is a typical 2D Tradeoff chart with feasible and
infeasible points.
For more information on Pareto fronts and Pareto-dominant solutions, see GDO Principles (p. 335).
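For readers who want a concrete picture of Pareto dominance, the sketch below extracts the first Pareto front (the non-dominated points) from a list of samples. It assumes every objective is to be minimized, which is a simplification for illustration; see GDO Principles (p. 335) for the full definitions.

def dominates(a, b):
    # Point a dominates b if it is no worse in every objective and
    # strictly better in at least one (all objectives minimized here).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_pareto_front(points):
    # Keep only the points that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

print(first_pareto_front([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0)]))  # drops (3.0, 3.0)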
This chart provides a multidimensional representation of the parameter space that you are studying.
It uses the parallel Y axes to represent all of the inputs and outputs. Each sample is displayed as a
group of lines, where each point is the value of one input or output parameter. The color of the line
identifies the Pareto front to which the sample belongs. You can also set the chart so that the lines
display the best candidates and all other samples.
While the Tradeoff chart can show only three parameters at a time, the Samples chart can show all
parameters at once, making it a better option for exploring the parameter space. Because of its inter-
activity, the Samples chart is a powerful exploration tool. Using the axis sliders in the Properties pane
to easily filter each parameter provides you with an intuitive way to explore alternative designs. The
Samples chart dynamically hides the samples that fall outside of the bounds. Repeating this operation
with each axis allows you to manually explore and find trade-offs.
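The axis-slider filtering described above amounts to hiding every sample that falls outside the selected range on any axis. A minimal sketch of that logic, with hypothetical names and data:

def filter_samples(samples, bounds):
    # samples: list of dicts of parameter values; bounds: dict mapping a
    # parameter name to the (low, high) range selected with its axis slider.
    def inside(sample):
        return all(low <= sample[p] <= high for p, (low, high) in bounds.items())
    return [s for s in samples if inside(s)]

kept = filter_samples([{"P1": 1.2, "P7": 13500.0}, {"P1": 0.8, "P7": 12100.0}],
                      {"P7": (13000.0, 14000.0)})
print(kept)  # only the first sample remains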
The Properties pane for the Samples chart has two choices for Mode: Candidates and Pareto Fronts.
You can display the candidates with the full sample set or display the samples by Pareto front (same
as the Tradeoff chart). If you set Mode to Pareto Fronts, the Coloring method property becomes
available, allowing you to specify by Pareto Front or by Samples for coloring the chart.
In the following Samples chart, Mode is set to Pareto Fronts and Coloring method is set to by
Pareto Front. The gradient ranges are from blue for the best to red for the worst.
When Mode is set to Candidates, Coloring method is not shown because coloring is always by
samples.
Chart Properties
• Display Parameter Full Name: Select to show the full parameter name rather than the short parameter
name.
• Mode: Specifies whether to display the chart by candidates or Pareto fronts. Choices are Candidates and Pareto Fronts. If Pareto Fronts is selected, Coloring method becomes available as the last option so that you can specify how to color the chart.
• Number of Pareto Fronts to Show: Specifies the number of Pareto fronts to display on the chart.
Input Parameters
Lists the input parameters. In the Enabled column, you can select or clear check boxes to enable or
disable input parameters. Only enabled input parameters are shown on the chart.
Output Parameters
Lists the output parameters. In the Enabled column, you can select or clear check boxes to enable
or disable output parameters. Only enabled output parameters are shown on the chart.
Specifies various generic chart properties. For more information, see Setting Chart Properties.
Using ROMs
The following topics explain how to produce parametric ROMs (reduced order models) from Fluent
steady analyses and then use them to evaluate models in 2D or 3D to rapidly explore the variation of
results:
ROM Overview
ROM Workflow
ROM Production Example for Fluent
Opening the ROM in the ROM Viewer
Exporting the ROM
Consuming a ROMZ File in the Standalone ROM Viewer
Consuming a ROMZ File in Fluent
Consuming an FMU 2.0 File in Twin Builder
Analyzing and Troubleshooting ROM Production
Quality Metrics for ROMs
ROM Limitations and Known Issues
ROM Overview
You can produce a ROM by learning the physics of a given model and extracting its global behavior
from offline simulations. As a standalone digital object, a ROM can be consumed outside of its production
environment for computationally inexpensive, near real-time analysis. The following figure shows a
ROM for a steady fluids simulation of a heat exchanger. Results are for the velocity magnitude in the
shell.
In a parametric ROM, you can evaluate the model and rapidly explore the variation of the results, de-
pending on input parameter values. In this particular example, to explore the variation of the result,
you move the sliders in the left pane to change input parameter values and then click Evaluate to
display updated results. You can also use tools to show the mesh and set the mesh translucency. Above
the list of regions, Show Result provides for showing and hiding result values.
Because calculating a ROM result requires only simple algebraic operations (such as vector summations and response surface evaluations), this step is computationally inexpensive compared to the FOM (full order model) processing step. Not only is the ROM processing step several orders of magnitude cheaper, but the ROM can also easily be delivered to and consumed by any number of users, yielding significant returns on your initial investment in its production.
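The following sketch illustrates the kind of arithmetic a ROM evaluation involves: the field result is reconstructed as a weighted sum of precomputed modes, with the weights supplied by response surfaces evaluated at the requested input parameter values. This is a conceptual illustration only; the actual ROM data structures and interpolation used by DesignXplorer are internal, and all names below are hypothetical.

import numpy as np

def evaluate_rom(modes, weight_functions, inputs):
    # modes: array of shape (n_modes, n_mesh_nodes), computed offline from snapshots.
    # weight_functions: one callable per mode, approximating its coefficient
    #                   as a function of the input parameter values.
    # inputs: dict of input parameter values chosen in the viewer.
    coefficients = np.array([w(inputs) for w in weight_functions])
    return coefficients @ modes  # field values at every mesh node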
ROM Workflow
The ROM workflow consists of two distinct stages:
ROM Production
ROM Consumption
ROM Production
In Workbench, you use a DesignXplorer 3D ROM system to drive ROM production from either a 2D
or 3D simulation. In the Workbench Toolbox, the 3D ROM system is visible under Design Exploration.
A 3D ROM system is based on a Design of Experiments (DOE) and its design points, which automate
the production of solution snapshots and the ROM itself.
You define the input parameters and content of the ROM in the simulation environment, which is
currently limited to Fluent. A medium-sized ROM generally has 3 to 6 input parameters, while a very
large ROM might have more than 15.
When many input parameters are enabled, you might need to increase the number of ROM snapshot
files to maintain ROM accuracy. If you decide that you no longer want to vary an input parameter,
you can disable it.
A ROM must always have at least one output parameter. While output parameters have no impact on
ROM production, you can use them to monitor results while DesignXplorer updates the design points.
While ROM setup is specific to the ANSYS product, the ROM production workflow is generic. This
means that as ROM support is extended to additional ANSYS products in future releases, the steps
that you take to produce a ROM will be the same in all simulation environments.
Note:
• It is not possible to update DesignXplorer systems when the project includes a 3D ROM
system and Update Option is set to Submit to Remote Solve Manager. You must change
Update Option to Run in Foreground and might want to set Submit to Remote Solve
Manager at the solution component level.
• If a Workbench project with a ROM was created in a version earlier than 2019 R3, the
previously existing ROM system name (ROM Builder) and DOE cell name (Design of Ex-
periments (RB)) are shown.
Because ROM production requires several simulations, this stage can be computationally expensive.
However, once the ROM is built, it can be consumed at negligible cost.
ROM Consumption
The workflows for consuming ROMs can differ from one application to another. Currently, to consume
a Fluent ROM, you export the ROM (p. 236) as either a ROMZ file or an FMU 2.0 file, depending on
which consumption environment is targeted.
• A ROMZ file is a compressed package containing the data and libraries that allow the ROM to be
consumed in the standalone ROM Viewer. For more information, see Consuming a ROMZ File in the
Standalone ROM Viewer (p. 237). A ROMZ file can also be imported into Fluent in Workbench so that
results can be evaluated directly in Fluent. For more information, see Reduced Order Model (ROM)
Evaluation in Fluent in the Fluent User's Guide.
• An FMU 2.0 file is similar to a ROMZ file as it too contains the entire ROM. This file can be imported
into ANSYS Twin Builder, which provides for displaying the fields in the ROM Viewer while the sim-
ulation is running and after the simulation is complete. For more information, see Consuming an
FMU 2.0 File in Twin Builder (p. 239).
Because an exported ROM is a standalone digital object, deployment for consumption by many users
is quick and easy.
Note:
For advanced Fluent users, a beta workflow exists for manually defining or generating, in standalone Fluent, all of the files needed to produce the Fluent ROM in Workbench. For more information about this advanced workflow, see the DesignXplorer Beta Features Manual.
After you build a ROM, you can open it in the ROM Viewer. For more information, see Opening the
ROM in the ROM Viewer (p. 235).
You can also open ROM snapshot files and the ROM Builder log file in the ROM Viewer. On snapshot
files, you can perform many additional operations. For more information, see Analyzing and
Troubleshooting ROM Production (p. 250).
To assess ROM accuracy, you can view quality metrics and run both verification points and refinement
points. For more information, see Quality Metrics for ROMs (p. 263).
Once you have verified ROM accuracy, you can export the ROM for consumption in the standalone
ROM Viewer, Fluent in Workbench, or Twin Builder. For more information, see Exporting the
ROM (p. 236).
ROM Production Example for Fluent
Prerequisites
The following steps explain how to download and extract sample files and then how to enable
DesignXplorer advanced options:
3. Start Workbench.
b. On the Design Exploration page, select the Show Advanced Options check box.
c. Click OK.
DesignXplorer now displays advanced options in italic type in various panes. Advanced ROM options
provide for opening the ROM Builder log file and setting a user-generated ROM mesh file.
To estimate minimum production time for a ROM, you can multiply the time it takes to update one
design point by the number of learning points. For this example, producing the ROM takes approx-
imately 40 minutes on 10 cores.
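A one-line sanity check of the estimate just described, using an assumed per-design-point update time that is consistent with the 40-minute figure for this example:

minutes_per_design_point = 1.25   # assumed: measured by updating one design point
learning_points = 32              # DOE points (plus any refinement points)
print(minutes_per_design_point * learning_points)  # 40.0 minutes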
For your convenience, project archive files without solved design points (HeatExchanger.wbpz)
and with solved design points (HeatExchanger_DOE_Solved.wbpz) are included in the directory
where you extracted the sample files.
Note:
If you want to learn how to use a ROM rather than how to create a ROM, you can skip to
Consuming a ROMZ File in the Standalone ROM Viewer (p. 237). You can then consume the
supplied ROMZ file (HeatExchanger.romz). This file is included in the directory where
you extracted the sample files.
1. In Workbench, select File → Open, navigate to the directory where you extracted the sample files,
and open the project archive file HeatExchanger.wbpz.
a. In the Project Schematic, double-click the Parameter Set bar to open it.
b. In the Outline pane, look at the input and output parameters, which have been well defined.
4. In the Project Schematic, right-click the Mesh cell and select Update.
5. When the update finishes, double-click the Setup cell to open Fluent, clicking Yes when asked
whether to load the new mesh.
6. When the Fluent Launcher window opens, if necessary, adjust the number of processors based
on your local license and hardware configurations and then click Start.
a. To make Reduced Order Model (Off) visible in the tree under Setup → Models, enable and
load the ROM addon module by executing this command in the Fluent console:
b. On the ribbon's Solution tab, click the button in the Initialization group for initializing the
entire flow field and then click Calculate in the Run Calculation group.
c. When the calculation completes, click OK to close the dialog box. Then, in the tree under
Setup → Models, double-click Reduced Order Model (Off).
d. In the Reduced Order Model window that opens, select the Enable Reduced Order Model
check box to expand the window so that you can set up the ROM.
On the Setup tab, each pane displays a filtering option and buttons for toggling which
values to show, how to show these values, and selecting and deselecting all shown
values. Any Fluent custom function fields that you have defined are included in the list
of variables available for selection.
e. Select the variables and zones to include in the ROM and then click Add to move them to the Selected for ROM list.
Note:
Because you cannot add selections to the ROM later, ensure that you add all variables and zones of interest. If you want to delete a particular selection from the Selected for ROM list, click it and then click Delete beneath the list.
Everything in the Selected for ROM list will be included in the ROM.
g. In the Fluent toolbar, click the button for synchronizing the Workbench cell ( ) to push your
changes to the Design of Experiments (3D ROM) cell.
h. If you are asked to save your changes, select the option for saving changes for current and
future calculations.
a. On the Project Schematic, right-click the Fluid Flow (Fluent) system and select Update.
b. When the update finishes, select View → Files and locate the ROM snapshot file (ROMSNP)
that has been created.
This global file captures the results for the current parameters configured for DP 0. It is one of a
set of snapshot files that is needed to build the ROM.
1. In the Workbench Toolbox under Design Exploration, double-click 3D ROM to add a system of this
type to the Project Schematic.
A 3D ROM system is inserted under the Parameter Set bar. In the Design of Experiments (3D
ROM) cell, (3D ROM) indicates that the design points in this cell are used to build the ROM.
The DOE for a 3D ROM system can share data only with the DOE for another 3D ROM system.
Tip:
You can import data from an external CSV file into the Design of Experiments
(3D ROM) cell. For more information, see Importing Data from a CSV File (p. 301)
and Exporting and Importing ROM Snapshot Archive Files (p. 254).
The default value for Design of Experiments Type is Optimal Space-Filling Design.
The default value for Number of Samples depends on the number of input parameters.
DesignXplorer calculates this value by multiplying the number of enabled input para-
meters by eight. In the example, four input parameters are enabled. Consequently,
Number of Samples is set to 32.
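The default sample count calculation described above, spelled out:

enabled_input_parameters = 4
number_of_samples = 8 * enabled_input_parameters   # Optimal Space-Filling default
print(number_of_samples)  # 32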
d. For each input parameter in the Outline pane, select it and edit the lower and upper bounds
in the Properties pane. For this example, use the following values.
f. In the Properties pane, select the Preserve Design Points After DX Run check box and
set Report Image to FFF-Results:Figure001.png.
In the Table pane, the Report Image column is added. The image for each design point
will display temperature results on the symmetry face. These figures are defined in the
Results cell in CFD-Post. For more information, see Figure Command in the CFD-Post
User's Guide and Viewing Design Point Images in Tables and Charts (p. 289).
3. In the toolbar, click Preview to generate the 32 design points without updating them.
Tip:
To avoid the wait, you can save and close this project and then open the project
archive file HeatExchanger_DOE_Solved.wbpz and save it to a new file name.
Because the 32 design points are already updated in this file, you can skip to step 7.
For each design point, a report image and a ROM snapshot file (ROMSNP) are produced. For
more information, see ROM Snapshot Files (p. 250).
5. When the update finishes, close the Design of Experiments (3D ROM) cell.
a. On the Project Schematic, double-click the ROM Builder cell to open it.
The Table pane displays a summary of all the variables that are included in the ROM.
c. In the Properties pane, verify that Solver System is set to the correct ANSYS product.
The default is Fluid Flow (Fluent) (FFF) because this is the only system in which a ROM is set up.
d. For Construction Type, accept the default value, which is Fixed Number of Modes.
Note:
If the number of design points for the Design of Experiments (3D ROM) cell were smaller than 10, you would set Number of Modes to the number of design points.
When the update completes, toolbar buttons are enabled for opening the ROM in the ROM
Viewer and exporting the ROM.
Tip:
On the Project Schematic, right-clicking the ROM Builder cell displays a context
menu that also includes the options for opening and exporting the ROM.
For more information, see Opening the ROM in the ROM Viewer (p. 235) and Exporting the
ROM (p. 236).
8. To assess the ROM accuracy, in the Outline pane, select Goodness Of Fit. Then, in the chart, check
the error metrics at the DOE points.
Learning points include DOE points plus refinement points. You can right-click a bar in the chart
to open the ROM Viewer and assess the ROM accuracy for a specific DOE or verification point.
For more information, see Quality Metrics for ROMs (p. 263).
b. In the Properties pane, select the Preserve Design Points After DX Run check box and the
Generate Verification Points check box.
f. When the update finishes, in the Outline pane, select Verification Points; then in the Table pane,
check the verification point values and report images.
g. In the Outline pane, select Goodness Of Fit; then in the chart, check the error metrics at the veri-
fication points.
h. In the bar chart, right-click the bar with the highest error and select Display Prediction Errors in
ROM Viewer.
When the ROM Viewer opens, you can see the difference between the ROM approximation and a
real solve.
To improve the ROM accuracy, you can choose to run some additional design points. You can do
so in either the Design of Experiments cell or the ROM Builder cell.
3. In the toolbar, click Preview to generate the design points without updating them.
The additional points are added where the distances between existing design points are the
highest.
In the Table pane for the Design of Experiments (3D ROM) cell, you can see the statuses of ROM
snapshot files and perform many other operations. For more information, see ROM Snapshot
Files (p. 250). ROM snapshot files and the ROM Builder log file provide for analyzing and
troubleshooting ROM production (p. 250).
If you want to edit output values for a design point or add, import, or copy design points, you must
change the Design of Experiments Type property to Custom or Custom + Sampling. To edit
one or more output values for a design point, you right-click the output value and then select either
the option for setting it or all output values as editable. For more information, see Editable Output
Parameter Values (p. 300). For any design points that you add manually, you must set ROM snapshot files. For more information, see Setting Specific ROM Snapshot Files for Design Points (p. 253). When you edit a design point, you must update the ROM.
Opening the ROM in the ROM Viewer
To open the ROM in the ROM Viewer, do one of the following:
• In the toolbar for the ROM Builder cell, click Open in ROM Viewer.
• In the Project Schematic, right-click the ROM Builder cell and select Open in ROM Viewer.
• The first button is for saving the graphics view as an image file (PNG or BMP), which you
can do both before and after an evaluation.
• The selection control is for choosing a result from a past evaluation so that you can view it once again. You can select from up to the last 40 results.
Below the toolbar, the title bar displays the coordinates of the ROM Builder cell in the Project
Schematic followed by the name of this cell, which is editable. For instance, the title bar in this
example displays C3: ROM Builder.
2. Under Input Parameters, use the sliders to change input parameter values.
3. Under Results, select the region (zone) and field (variable) that you want to evaluate.
4. Under Tools, use the controls to show or hide the mesh and result and to set the mesh translucency.
5. Under Regions, use the controls to select all zones or only one or more particular zones.
6. Click Evaluate. The ROM rapidly calculates and displays the updated results.
• To zoom in and out, use the mouse scroll wheel or middle mouse button.
• To rotate, press the mouse scroll wheel or middle mouse button while moving the mouse.
• To translate, press the Ctrl key and right mouse button at the same time while moving the mouse.
In the pane displaying the model, additional buttons are also available:
When you finish your analysis, you can close the ROM Viewer. If you want, you can export this ROM
to a ROMZ file or FMU 2.0 file so that others can consume it as a standalone model.
• In the toolbar for the ROM Builder cell, click Export ROM.
• In the Project Schematic, right-click the ROM Builder cell and select Export ROM.
2. In the Export ROM dialog box, navigate to the location where you want to save the file.
4. For Save as type, select the file type to which to export the file. Choices are:
• ROM Files (*.romz). A ROMZ file can be consumed by anyone who has access to the standalone
ROM Viewer. For more information, see Consuming a ROMZ File in the Standalone ROM View-
er (p. 237). A ROMZ file can also be imported into Fluent in Workbench so that results can be evaluated
directly in Fluent. For more information, see Reduced Order Model (ROM) Evaluation in Fluent in
the Fluent User's Guide.
• FMU Version 2 Files (*.fmu). An FMU (Functional Mock-up Unit) 2.0 file can be consumed by anyone
who has access to ANSYS Twin Builder or any other tool that can read this file type. When you import
the FMU 2.0 file into Twin Builder, you can display the fields in the ROM Viewer while the simulation
is running and after the simulation is complete. For more information, see Consuming an FMU 2.0
File in Twin Builder (p. 239).
Note:
Using an FMU 2.0 file exported from DesignXplorer implies the approval of the terms
of use supplied in the License.txt file. To access License.txt, use a zip utility to
manually extract all files from the .fmu package.
Consuming a ROMZ File in the Standalone ROM Viewer
Consumption of the exported ROM file can require a lot of memory, depending on the ROM setup and the ROM mode count.
1. Double-click the ROMViewerLauncher script for your operating system. The standalone ROM Viewer starts.
3. Navigate to and select the ROMZ file that you want to consume.
4. Click Open.
The ROMZ file opens in the standalone ROM Viewer, which provides the same functionality as the ROM
Viewer in Workbench.
Note:
While this topic assumes that you have opened a ROMZ file, you can also open a ROM
project file (ROMPJT) or a ROM mesh file (ROMMSH). To open a ROM project file, the
ROM mesh file must reside in the project directory. If the ROM mesh file is not found,
an error displays, indicating that you must move or copy the ROM mesh file to this dir-
ectory.
Tip:
You can create a BAT file that automatically opens a specific ROMZ file. In your BAT file,
add these lines:
@echo off
REM Launch the standalone ROM Viewer and open the specified ROMZ file
call "%AWP_ROOT201%\Addins\DesignXplorer\bin\Win64\ROMViewerLauncher.bat" -romz "C:\PATH_TO_ROMZ_FILE"
Consuming an FMU 2.0 File in Twin Builder
Note:
You cannot import a ROMZ file into the Fluent system used to produce the 3D ROM in the same instance of Workbench.
Note:
If you want a sample 3D ROM, you can consume the supplied FMU 2.0 file (HeatEx-
changer.fmu) in Twin Builder. This file resides in the directory where you extracted
the sample files.
An FMU 2.0 file that is created by exporting a 3D ROM has all the enabled input parameters from the
DOE and works only within the ranges for these inputs. The FMU outputs are the minimum, maximum,
and average values for each selected variable on each selected zone. Existing scalar outputs from
Workbench are ignored. Currently, there is no way to add custom scalar outputs.
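Because FMU 2.0 is an open standard, a generic FMU tool can at least inspect the interface of the exported file. The sketch below uses the third-party fmpy package (not an ANSYS tool; it may not be able to simulate this particular FMU, which can require the ANSYS ROM runtime) to list the inputs and the min/max/average outputs:

from fmpy import read_model_description

md = read_model_description("HeatExchanger.fmu")   # supplied sample FMU
print(md.fmiVersion)                               # expected: 2.0
for variable in md.modelVariables:
    print(variable.causality, variable.name)       # inputs and per-zone outputs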
Otherwise, you can use an FMU 2.0 file that is created by exporting a 3D ROM in the same manner as
any other FMU file. For more information on these files and using 3D ROMs in Twin Builder, search the
Twin Builder help for these topics:
• FMU Components
2. Navigate to and select the FMU 2.0 file for the 3D ROM.
3. Click Open.
6. To have the ROM Viewer launch during the simulation, set these properties for the FMU component:
• TwinBuilder_SendingPeriod: Set the time period in seconds at which Twin Builder is to send the data to the viewer. The default value is 0, which means data is sent at each simulation time instance.
7. To have the ROM Viewer stay open once the simulation is finished, click Edit Active Setup in the
ribbon and select the Enable continue to solve check box.
8. Connect all FMU inputs with the remainder of the system model.
9. Click Analyze.
Twin Builder launches the ROM Viewer that is set up in the properties for the FMU component.
Note:
Closing the ROM Viewer will not end the Twin Builder simulation. In Twin Builder's
Message Manager, a warning indicates that the viewer has been closed or disconnec-
ted. After the simulation finishes, the Twin Builder simulation is kept in the Continue
mode. You can extend the simulation by defining a new Tend. Stopping the Twin
Builder simulation will close all open instances of the ROM Viewer.
If you set up the FMU component to launch the original viewer, it looks and works similarly to the ROM Viewer in Workbench, which is described in Opening the ROM in the ROM Viewer (p. 235). However,
the Time property and replay buttons appear below the title bar (project name). You must click the
toolbar button to update the 3D scene to the most recent Twin Builder simulation data.
For Time, the unit is always seconds. You can enter the time instance from which you want to start
replaying simulations. If you directly enter a value for Time, results for the closest time instance display.
Using the replay buttons below this property, you can step through time instances. Unlike the latest
viewer, the original viewer does not update automatically.
If instances are sent at a speed faster than the ROM Viewer can evaluate them, the viewer will skip
instances and stay synchronized with the Twin Builder simulation. All instances, including those
skipped during the live update, are available for replay.
The following image shows an exploded view of the ROM Viewer. The subsequent table explains
each area.
Area
A Button for opening ROM files. While this button must not be used in Twin Builder, it will be
supported for use in other ANSYS products to open the following file types: ROMZ, ROMPJT,
and ROMMSH.
B 3D scene provides these interactions:
• To hide a region, press and hold the Shift key and then left-click the region.
• To display all regions, right-click the 3D scene and select Show all parts. The context menu
also has options for choosing how to render all parts.
• To set a rotation point, in the 3D scene, click where you want to set it and select Set a
Rotation Point.
• To pan, press and hold the Control key, left-click in the 3D scene, and then move the mouse
• Geometry rendering controls for setting the draw style of all parts. Draw style choices are
Surface, Transparent Surface, Lines, Outline, Surface Mesh, and Outline Mesh. The current
selection applies to all regions of the 3D scene, except for postprocessing regions, where
opacity can be controlled on individual regions.
• Postprocessing tools for configuring results, cutting planes, isosurfaces, isovolumes, and
particle traces. Later sections describe how to use these tools.
D Status box displays the number of points, elements, and nodes in the ROM.
E Color bar for displaying results. For more information, see Configuring ROM Results (p. 245).
F Control for disabling and enabling automatic updates. For more information, see Disabling
and Enabling Automatic Updates (p. 243).
G Table displaying the time instances that Twin Builder has sent.
H Controls for replaying the parametric trajectory. For more information, see Replaying the
Parametric Trajectory (p. 244).
I Analysis area for updating results in the 3D scene. After providing input parameter values,
you click EVALUATE to update ROM results in the 3D scene.
• In the lower left corner of the window, clicking Auto update switches between disabling and enabling
automatic updates.
• Clicking the play or next button in the left pane disables automatic updates.
When Auto update is disabled, the 3D scene no longer updates in real time. However, any new in-
stances that Twin Builder sends to the viewer are still written to the table.
• Enter input parameter values in the left pane and then click EVALUATE to update the 3D scene.
• Replay the trajectory from a specific point. When you place the mouse cursor over a table row,
clicking the three dots that appear to the left displays a menu with options for replaying the trajectory
from this point or displaying the results for this point.
When you enable Auto update again, the 3D scene returns to updating in real time.
In the table, the Current label marks the point that is currently shown in the viewer.
During replay, the play button becomes a pause button, and clicking the next or previous button
automatically pauses the replay. When paused, the play button allows you to resume the replay.
Clicking the stop button and then the play button restarts the replay from the beginning.
Note:
Additionally, when you place the mouse cursor over any table row, three dots appear on the left.
When you click the three dots, you can select Evaluate or Play from here from the menu:
• Selecting Evaluate displays the results for this point in the 3D scene.
• Selecting Play from here moves through the trajectory, starting from this point.
Configuring Cutting Planes
• The first set of controls is used to select, add, or delete a cutting plane in the 3D scene.
• Position provides both a slider and entry fields for moving the cutting plane position in the 3D scene.
You can click Pick to select the cutting plane position in the 3D scene.
• Normal specifies the cutting plane orientation. You can use the preselected X, Y, and Z axes or enter
axis values manually. You can click the Inv button to invert the cutting plane.
• Map vector specifies the vector result to render on the cutting plane. It is displayed using 3D arrows.
To be able to select a vector, a field of vectors must be available in the ROM setup.
• Grid spacing, which is visible only when a vector result is selected, provides both a slider and an
entry field for specifying the arrow density.
• Opacity provides both a slider and an entry field for specifying the opacity for the cutting plane.
• Clip indicates whether to hide one side of the geometry. This check box is cleared by default. The
side of the geometry that is hidden can be controlled using the Normal Inv button.
Configuring Isosurfaces
Clicking the Isosurfaces button ( ) in the postprocessing tools opens the Isosurfaces window, where
you can configure isosurfaces. An isosurface is a three-dimensional surface that represents points of a
constant value (such as pressure, temperature, velocity, or density). To view the isosurface, some regions
must be hidden, or the global draw style must be set to Transparent Surface, Lines, or Outline.
• The first set of controls is used to select, add, or delete an isosurface in the 3D scene.
• Iso value provides both a slider and an entry field for specifying the constant value used to generate
the isosurface.
• Iso scalar specifies the result used for generating the isosurface.
• Map vector specifies the vector result for displaying arrows on the isosurface. To be able to select a
vector, a field of vectors must be available in the ROM setup.
• Opacity provides both a slider and an entry field for specifying the opacity for the isosurface.
Configuring Isovolumes
Clicking the Isovolumes button ( ) in the postprocessing tools opens the Isovolumes window, where
you can configure isovolumes. An isovolume is a three-dimensional region whose nodes and elements
are constrained to a constant interval range in a scalar field. To view the isovolume, some regions
must be hidden, or the global draw style must be set to Transparent Surface, Lines, or Outline.
• The first set of controls is used to select, add, or delete an isovolume in the 3D scene.
• Min iso value provides both a slider and an entry field for specifying the minimum value for the in-
terval range.
• Max iso value provides both a slider and an entry field for specifying the maximum value for the
interval range.
• Iso scalar specifies the result used for generating the isovolume.
• Map vector specifies the vector result for displaying arrows in the isovolume. To be able to select a
vector, a field of vectors must be available in the ROM setup.
• Opacity provides both a slider and entry field for specifying the opacity for the isovolume.
Configuring Particle Traces
• The first set of controls is used to select, add, or delete a particle trace in the 3D scene.
• Under Results:
– Compute vector specifies the vector result used to compute the particle trace.
• Under Style:
– Radius provides both a slider and an entry field for specifying the thickness of the particle trace.
– Trace both directions indicates whether to draw the particle trace both upstream (backward) and
downstream (forward) of the seed points. This check box is selected by default. If you clear it, the
particle trace is drawn only downstream (forward).
– Num U provides both a slider and an entry field for specifying the number of seed points in the U
direction of the grid.
– Num V provides both a slider and an entry field for specifying the number of seed points in the V
direction of the grid.
– Spacing provides both a slider and an entry field for specifying the particle trace density.
– Pick allows you to select the center of the grid by clicking any surface of the model.
• Num levels provides both a slider and an entry field for specifying the number of colors used in the
color bar.
• Min value provides both a slider and an entry field for specifying the minimum value for displaying
results.
• Max value provides both a slider and an entry field for specifying the maximum value for displaying
results.
• Autoscale indicates whether to automatically scale the minimum and maximum values for the current
result. This check box is selected by default. If you clear this check box, the minimum and maximum
values will be the same for each evaluation.
• Logarithmic mapping indicates whether to use a logarithmic scale. This check box is cleared by
default, which means a logarithmic mapping is not used.
• Node averaged values indicates whether to average elemental results to nodes. This setting has no effect on nodal results.
1. In the upper right corner of the ROM Viewer, click the gear icon.
2. For Refresh Display, increase the number of milliseconds between two result displays.
When creating a ROM in the ROM Builder, you update the Design of Experiments (3D ROM) cell in
the 3D ROM system, which generates a ROM snapshot file for each design point.
Analyzing and Troubleshooting ROM Production
In the ROM Builder cell, the ROM snapshot column is also available in the refinement and verification point
tables. Placing the mouse cursor over a cell in the column displays the name of the snapshot file.
Descriptions follow for each status icon that can display in the snapshot column:
• Up-to-date ( ): The design point has a snapshot file associated with it, and no issues with this file
were detected.
• Update required ( ): The design point is not yet solved and must be updated. Updating the design
point will generate a new snapshot file.
• Update failed ( ): The design point has been calculated or has an imported snapshot file associated with it, but the file is not usable.
• Attention required ( ): The design point has editable outputs but does not have a snapshot file
associated with it. To continue, you must select a snapshot file for this design point.
When an information icon ( ) appears to the right of the status icon, you can click it to view remarks,
errors, or guidance.
When a snapshot file is invalid or missing, the DOE cell cannot be updated. To continue, you must
do one of the following:
• Set a valid snapshot file manually, which is described in Manually Adding Design Points and Setting
Snapshot Files (p. 253)
The same is true if there is an invalid or missing snapshot file for a refinement point or verification point in the ROM Builder cell.
• Invalid DOE design points and refinement points prevent building the ROM.
• Invalid verification points prevent building the goodness of fit (p. 263).
Additionally, you can display information about a ROM snapshot file from the Files pane of the
Project Schematic:
1. To open the Files pane, select View → Files.
2. In the Name column, right-click the cell with the snapshot file in which you are interested and select Display File Info.
The file information displays results for each variable and zone that was included in the ROM.
When you finish reviewing this information, you can close the file.
Tip:
You can gain additional insight by viewing values for ROM snapshots in the ROM
Viewer. For more information, see Viewing Observed and Predicted Snapshot Values in
the ROM Viewer (p. 255).
However, for added design points, ROM snapshot files must be set. Multiple 3D ROM systems can
use the same snapshot files.
To add design points and set snapshot files, you can use these methods:
Manually Adding Design Points and Setting Snapshot Files
Exporting and Importing ROM Snapshot Archive Files
Importing Design Points and Snapshot File Settings from CSV Files
In certain situations, these methods ask how you would like to proceed:
• Before copying or importing design points into a DOE, DesignXplorer parses and validates the data.
For more information, see the parsing and validation information in Copying Design Points (p. 86).
• If you set a snapshot file that is in the ROM production folder for a different project and this same
file does not already exist in the ROM production folder for the current project, a dialog box opens,
asking whether you want to move or copy the file.
• In a case where a snapshot file with the same name already exists in the ROM production folder,
a dialog box opens, asking whether you want to use the existing file or rename the new file. Before
deciding which action to take, you can click Show Details and compare information for the two
files. If you select rename, the imported file is copied and assigned a new name that corresponds
to the current name plus a suffix with time stamp information.
Any time that you edit design points or refinement points, you must update the ROM.
Caution:
If you edit verification points without defining output parameter values and setting
snapshot files, you must update these points to get the new goodness of fit. If you
add verification points that are up-to-date, the goodness of fit is updated automat-
ically.
If you select multiple rows, output parameter values for all selected rows are set as editable. To make output parameter values for all rows editable, select Set All Output Values as Editable instead. For more information, see Editable Output Parameter Values (p. 300).
2. In the Table pane, right-click any editable output parameter value in the row and select Set Snapshot
File.
3. In the dialog box that opens, select the appropriate snapshot file and click Open.
1. In the Table pane for the Design of Experiments (3D ROM) cell, right-click and select Export
as Snapshot Archive.
2. In the dialog box that opens, specify a directory location and file name and then click Save.
1. In the Table pane for the Design of Experiments (3D ROM) cell, right-click and select Import
Design Points and Snapshot Files → Browse.
2. In the dialog box that opens, navigate to the desired snapshot archive file and click Open.
A check for consistency is made against the first valid snapshot that is imported.
Note:
You can use a software tool like 7-Zip to open and modify a snapshot archive file.
If you want, you can save the modified file as a ZIP file. DesignXplorer supports
importing snapshot archive files with either SNPZ or ZIP extensions.
Importing Design Points and Snapshot File Settings from CSV Files
In CSV files that contain design points to import, you can include, for each design point, the name of the snapshot file to set, which avoids having to set it manually. The referenced snapshot files must be in the ROM production folder of the project into which you are importing the design points.
1. Ensure that the snapshot files referenced in the CSV file are copied into the ROM production folder
of the project into which to import design points.
To easily find this directory, in the Table pane for the DOE cell, you can right-click a cell in
the ROM snapshot column and select Open Containing Folder.
2. Proceed with the import of the CSV file. For more information, see Importing Data from a CSV
File (p. 301).
When you select Clear All Unused ROM Snapshots from the context menu, DesignXplorer removes all snapshot files that are not associated with any of the points in the following tables:
DesignXplorer keeps any snapshot file that was generated with the same inputs as a point in any
of these tables. The tables listed in the first three bullets have ROM snapshot columns, so it’s clear
which snapshot files are kept. While the table listed in the last bullet does not have a ROM snapshot
column, DesignXplorer still keeps snapshot files associated with its points.
The Clear All Unused ROM Snapshots option is available on the context menu after you set up a
solver for ROM production. It is disabled if there are no unused snapshot files to remove.
Note:
The Retain Data option does not modify the behavior of this functionality.
• Display Observed Values in ROM Viewer: Observed values for the snapshot file. Observed values
correspond to the solver values associated with the snapshot file and are used to build the ROM.
• Display Predicted Values in ROM Viewer: Values obtained by evaluating the ROM for the snapshot.
• Display Prediction Errors in ROM Viewer: Differences between the observed and predicted
snapshot values shown as absolute values.
Note:
If the design point is from a DOE cell that is shared by at least two 3D ROM systems,
these options provide for selecting the system associated with the DOE cell in which
you are interested. If the 3D ROM system is not up-to-date, the options based on the
system are disabled.
When Quality → Goodness Of Fit is selected in the Outline pane for the ROM Builder cell, in the
Chart pane, you can right-click a bar for a particular ROM snapshot file and select from the same
options for displaying values or errors in the ROM Viewer.
When the ROM Viewer opens, the title bar displays something like this: C3: DOE Point 1 (observed).
A summary follows of each portion of the title:
• Coordinates of the cell in the Project Schematic associated with the snapshot file
• The display-only value (number) in the Name column of the DesignXplorer table
Input parameter values are display-only. You can choose results and regions and click Evaluate.
Tip:
To compare values for different snapshot files, you can open them in separate in-
stances of the ROM Viewer. You can also compare observed and predicted values
in the Table pane for the Goodness Of Fit object.
When DesignXplorer advanced options are shown, ROM Mesh File is visible in the Properties pane
for the Design of Experiments (3D ROM) cell.
The mesh file must reside in the ROM production folder for the current project. If you select a mesh
file in a different project and this same file does not already exist in the current project, a dialog box
opens, asking whether you want to move or copy the file. In a case where a mesh file with the same
name already exists in the ROM production folder, the dialog box indicates that the existing file will
be overwritten.
Caution:
Ensure that the custom mesh file that you select is compatible with the ROM.
To stop using the custom mesh file, clear the ROM Mesh File property. While DesignXplorer does not remove the file from the project, it unregisters the file so that you can remove it manually.
Tip:
If the mesh file already exists in the ROM production folder for the current project, you
can directly enter the file name in ROM Mesh File. If the mesh file is in the ROM production folder for a different project, you must enter the full path.
1. In the Outline pane for the ROM Builder cell, select ROM Builder.
When DesignXplorer advanced options are shown, ROM Builder.log is visible in the Properties
pane under Log File.
Tip:
When advanced options are not shown, in the Files pane for the Project Schematic,
you can right-click Rom Builder.log and select Open Containing Folder to go
to the location where this log file is stored. You can then double-click the file to open
it.
When Display Level Log File is set to Medium, the ROM Builder writes metrics about SVD errors.
• When Construction Type is set to Global, the ROM Builder computes one SVD per field.
• When Construction Type is set to Local, the ROM Builder computes one SVD per field and per support.
The table of SVD errors has these columns: Nb Modes, Rel. Proj. Err., Abs. Proj. Err., Rel. Proj. Err. RMS, Abs. Proj. Err. RMS, LOO Rel. Proj. Err., LOO Abs. Proj. Err., LOO Rel. Proj. Err. RMS, and LOO Abs. Proj. Err. RMS. Each row corresponds to one number of modes (1, 2, and so on).
For a given row, the errors shown in the table correspond to errors obtained by using the first M modes of the SVD, with M equal to the number of modes shown in the Nb Modes column.
The following notation is used in the mathematical representations in descriptions for these criteria:
• corresponds to the vector of values of the projection of the i-th snapshot onto the subspace based on the first M modes of the SVD.
• corresponds to the approximated value of the ROM on the j-th entity of the i-th snapshot.
NB: This metric corresponds to the one exposed in the DesignXplorer interface to control the
level of ROM accuracy when Construction Type is set to Fixed Accuracy. The number of modes
for building the ROM is selected such that the associated Rel. Proj. Err. RMS is less than the
defined Maximum Relative Error.
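The following is a minimal sketch of this mode-selection rule, not DesignXplorer's implementation. The snapshot-matrix layout, the relative-error formula, and the function name are assumptions for illustration only.

# Sketch: pick the smallest number of SVD modes whose relative projection
# error RMS stays below a maximum relative error (Fixed Accuracy behavior).
import numpy as np

def select_mode_count(snapshots: np.ndarray, max_relative_error: float) -> int:
    """snapshots: (n_entities, n_snapshots) matrix, one column per snapshot."""
    u, s, vt = np.linalg.svd(snapshots, full_matrices=False)
    for m in range(1, len(s) + 1):
        basis = u[:, :m]                            # first M modes
        projected = basis @ (basis.T @ snapshots)   # projection of each snapshot
        # relative projection error per snapshot, then RMS over snapshots
        rel_err = (np.linalg.norm(snapshots - projected, axis=0)
                   / np.linalg.norm(snapshots, axis=0))
        if np.sqrt(np.mean(rel_err**2)) < max_relative_error:
            return m
    return len(s)

# Example: 200 entities, 12 snapshots of (numerically) rank-3 data, 1% target
rng = np.random.default_rng(0)
snaps = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 12))
print(select_mode_count(snaps, 0.01))   # typically prints 3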
The LOO (leave-one-out) metrics are similar to the previous metrics except for one difference: the projection of the i-th snapshot is computed by using an SVD based on all snapshots except the i-th snapshot. This technique allows you to see the stability of the SVD and the accuracy of the projection on points other than those used by the SVD.
The number of selected modes used to build the ROM is specified below the table of SVD errors. For
example, it might indicate: Number of selected modes = 3.
When Display Level Log File is set to High, the ROM Builder writes additional metrics about ROM
approximation errors.
One table of ROM errors is shown per support and per field. Each row displays metric errors per
snapshot on its entities.
The table of ROM errors has one row per snapshot (Snapshot = 1, 2, and so on) and these columns: Err. Nrm2, Rel. Err. Nrm2, Err. NrmInf, Rel. Err. NrmInf, DOF error 1st Quartile, DOF error Median, DOF error 3rd Quartile, and DOF error 95th Percentile.
Err. Nrm2
Computes the root of the sum of squared errors on the entities of the snapshot.
Err. NrmInf
Computes the maximum absolute error measured on the entities of the snapshot.
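The original equations for these two norms are not reproduced in this extract. Written from the descriptions above, and using v_ij for the reference value and v~_ij for the ROM approximation on the j-th entity of the i-th snapshot (symbols introduced here for illustration only), they take the form:

\mathrm{Err.\ Nrm2}(i)   = \sqrt{\sum_{j}\bigl(v_{i,j}-\tilde{v}_{i,j}\bigr)^{2}},
\qquad
\mathrm{Err.\ NrmInf}(i) = \max_{j}\bigl|v_{i,j}-\tilde{v}_{i,j}\bigr|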
Minimum
Corresponds, for a given metric, to the minimum value obtained on all snapshots
Maximum
Corresponds, for a given metric, to the maximum value obtained on all snapshots
Average
Corresponds, for a given metric, to the average value obtained on all snapshots
1st quart
Corresponds, for a given metric, to the upper bound of 25% of all snapshots
median
Corresponds, for a given metric, to the upper bound of 50% of all snapshots
3rd quart
Corresponds, for a given metric, to the upper bound of 75% of all snapshots
Reference values on the DOFs of the snapshots are used to build the ROM. The following table allows you to quickly see the statistics associated with the reference values of the entities of each snapshot.
Min. Value
Minimum value of entities of the snapshot
Max. Value
Maximum value of entities of the snapshot
Mean Value
Mean value of entities of the snapshot
1st Quartile
Upper bound of 25% of entities of the snapshot (=Q1)
Median
Upper bound of 50% of entities of the snapshot
3rd Quartile
Upper bound of 75% of entities of the snapshot (=Q3)
Lower Limit
Equal to Q1-1.5*IQR with IQR = Q3-Q1
Upper Limit
Equal to Q3+1.5*IQR
Lower and upper limits correspond to the "inner fences" that mark off the "reasonable" values from the outlier values. An outlier is a value that is distant from other values. In some cases, an outlier can correspond to an erratic point and reveal a real problem with the snapshot.
Outliers
Indicates when the Min. Value is less than the Lower Limit or when the Max. Value is greater than the
Upper Limit.
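A minimal sketch of this inner-fence rule follows. The quartile method and the function and key names are illustrative, not taken from DesignXplorer.

# Sketch: per-snapshot reference-value statistics and the outlier flag
import numpy as np

def snapshot_reference_stats(values: np.ndarray) -> dict:
    q1, median, q3 = np.percentile(values, [25, 50, 75])
    iqr = q3 - q1
    lower_limit = q1 - 1.5 * iqr
    upper_limit = q3 + 1.5 * iqr
    return {
        "Min. Value": values.min(),
        "Max. Value": values.max(),
        "Mean Value": values.mean(),
        "1st Quartile": q1,
        "Median": median,
        "3rd Quartile": q3,
        "Lower Limit": lower_limit,
        "Upper Limit": upper_limit,
        # Outliers: flagged when a value falls outside the inner fences
        "Outliers": bool(values.min() < lower_limit or values.max() > upper_limit),
    }

print(snapshot_reference_stats(np.array([1.0, 1.1, 0.9, 1.2, 1.05, 5.0])))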
If goodness of fit is poor, you can enrich your ROM and improve its accuracy by manually adding refine-
ment points (p. 113). Adding refinement points to a ROM Builder cell is equivalent to adding design
points to a Design of Experiments cell. Both design points and refinement points are taken into account
when you update the ROM.
To view goodness of fit for the output parameters in a ROM, in the Outline pane for the ROM
Builder cell, select Goodness Of Fit.
In both the Table and Chart panes, the design points and refinement points are grouped together
and labeled as Learning Points.
• The Table pane displays the differences (errors) between the real solve solutions and the ROM solu-
tions, calculated at all nodes and all design points (average, maximum, and so on), for each field
(variable) for the selected region (zone). Each parameter is rated on how close it comes to the ideal
value for each goodness of fit metric. The rating is indicated by the number of gold stars or red crosses
next to the parameter. The worst rating is three red crosses. The best rating is three gold stars.
• The Chart pane displays the error that is measured for each snapshot. You can choose to turn on
and off the display of learning points and verification points in the chart properties. You can display
the chart as either a bar chart of the error for each snapshot or a cumulative distribution. Depending
on the mode, placing the mouse cursor over a particular point displays the error value or cumulative
error percentage.
Goodness of fit differs for each field. In the Properties pane, Chart and General categories indicate
what to display in the chart and table.
– Error per Snapshot: Allows you to quickly see if the level of error is uniform for all snapshots and
which snapshots have the highest and lowest errors.
– Cumulative Distribution Error: Allows you to quickly see what percentage of snapshots have an
error smaller than a given value.
• Error Type: Indicates the type of error to display. Choices are L-Infinity Norm Error and Relative
L2-Norm Error (%). The first choice displays the absolute error. The other choice displays the normal-
ized error. For more information, see ROM Goodness of Fit Criteria (p. 265).
Under General, Region indicates the region of interest to display in the table. You can select All
Regions or a particular region. In the table, you see error metrics for each field included in the ROM.
If you want to add a new goodness-of-fit object, right-click Quality and select Insert Goodness of
Fit. You can also right-click an existing goodness-of-fit object and then copy and paste, duplicate, or
delete it. After duplicating a goodness-of-fit object, you can modify its properties, such as the region to display in the table and the field to show in the chart.
Note:
During computation of the goodness of fit, you can interrupt the update of the ROM
Builder cell if the computation is taking too long. You will still be able to open the ROM
in the ROM Viewer (p. 235) or export the ROM (p. 236). You can then update the ROM
Builder cell later to compute the goodness of fit.
Two types of errors are calculated for the points taken into account in the construction of the response
surface: L-Infinity Norm Error and Relative L2-Norm Error (%). The mathematical representations
for these criteria use the following notation:
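The notation block and equations are not reproduced in this extract. As a hedged illustration, if y_i denotes the observed (solver) value and y^_i the value predicted by the ROM at point i of N points (symbols introduced here for illustration only), the two criteria commonly take the forms:

\text{L-Infinity Norm Error} = \max_{i=1..N}\,\lvert y_i - \hat{y}_i \rvert,
\qquad
\text{Relative L2-Norm Error (\%)} = 100\,
\frac{\sqrt{\sum_{i=1}^{N}\bigl(y_i - \hat{y}_i\bigr)^{2}}}{\sqrt{\sum_{i=1}^{N} y_i^{2}}}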
1. In the Outline pane for the ROM Builder cell, select Verification Points.
2. In the Table pane, for each verification point to add, enter values for its input parameters in the
New Verification Point row.
3. On the toolbar, click Update to perform a real solve of each verification point.
Verification point results are then compared with the ROM predictions, and the difference is calculated and displayed in the verification points table.
1. In the Outline pane for the ROM Builder cell, select Refinement Points.
2. In the Table pane, for each refinement point to add, enter values for its input parameters in the
New Refinement Point row.
3. On the toolbar, click Update to update each out-of-date refinement point and then rebuild the
ROM from both the design points and refinement points.
In the ROM Builder log file (p. 257), metrics take into account refinement points.
ROM Limitations and Known Issues
Note:
For ROM limitations specific to Fluent, see ROM Limitations in the Fluent User's Guide.
• Input parameters coming from a non-Fluent cell must be disabled (for instance a Geometry cell in a Fluid
Flow system). All enabled input parameters must be defined in Fluent.
• A 3D ROM system does not support geometric parameter updates. All geometric parameters enabled in
the Design of Experiments (3D ROM) cell must be disabled.
• After importing a Fluent case file with the ROM already created, the Design of Experiments (ROM) cell
may appear as though it is undefined. If this occurs for a case that already has the DOE defined, open the
Fluent Setup cell and then close Fluent. The Design of Experiments (3D ROM) cell will change to Update
Required.
• Clearing the Geometry or Mesh cell in a Fluent Fluid Flow system created in a previous release may
prevent the ROM from being created. Create the Fluent mesh again using the latest release to allow for
ROM creation.
• You cannot import a ROMZ file into the Fluent system used to produce the 3D ROM in the same instance of Workbench.
• The hardware and software prerequisites for the initial ROM Viewer are the same as those for
Workbench-based applications. For more information, see these topics in the Linux and Windows
installation guides:
• The graphics card prerequisites for the ROM Viewer are the same as those for ANSYS Discovery AIM. For more information, see Installation Prerequisites in the ANSYS Discovery Installation Guide. The graphics driver must also be up-to-date.
• If using remote desktop, remote display requirements for the ROM Viewer are the same as those for
Workbench-based applications. For more information, see Using Remote Display Technologies with
ANSYS Workbench Products.
• The ROM Viewer cannot be opened on Linux from a remote display tool. For example, you cannot
use a remote display tool to look at a Linux screen from a Windows machine.
• If multiple regions are selected in the left panel, the color scale is different for each zone.
• The ROM Viewer might not be able to display ROM results on every region. In such a case, only the
mesh is shown.
• The ROM Viewer might not be able to display every region. In such a case, the region is not listed
in the left panel.
• Although the ROM Viewer supports polyhedral mesh elements, they are displayed with triangles.
Note that this does not affect results.
• The ROM Viewer does not support displaying results on edges. It can display results only on elements
and faces. For results on edges, the ROM Viewer displays the following message:
• Opening very large ROMs in the ROM Viewer can take several minutes. For example, opening a 5-
GB ROMZ file with 5 million cells can take 40 to 45 minutes.
• When you start the ROM Viewer on Linux, an unexpected shutdown can occur. On some Linux op-
erating system variants such as Red Hat, removing the package totem-mozplugin resolves the
issue:
yum remove totem-mozplugin
• The button for opening ROM files is accessible, even though it must not be used in Twin Builder.
• When Twin Builder opens the ROM Viewer, it controls the parameter values, not you. While the
parameter values should be read-only, this is not currently the case.
• When increasing the minimum display time for a set of results, clicking the pause or stop button
might not have an immediate effect.
• When clicking the play button, if the time instance to be replayed is already shown, nothing happens.
The workaround is to first display another time instance point by clicking the previous or next button
and then clicking the play button.
Using Six Sigma Analysis
This section contains information about running a Six Sigma Analysis. For more information, see Six
Sigma Analysis Component Reference (p. 48) and Six Sigma Analysis (SSA) Theory (p. 367).
Performing a Six Sigma Analysis
Using Statistical Postprocessing
Statistical Measures
The following distribution types are available for defining uncertainty (input) variables:
• Uniform
• Triangular
• Normal
• Truncated Normal
• Lognormal
• Exponential
• Beta
• Weibull
• You measured the snow height on both ends of the beam 30 different times.
From these histograms, you can conclude that an exponential distribution is suitable to describe
the scatter of the snow height data for H1 and H2. From the measured data, you determine that
the average snow height of H1 is 100 mm and the average snow height of H2 is 200 mm. You
can directly derive the parameter λ by dividing 1 by the mean value. This leads to λ1 = 1/100 = 0.01 for H1, and λ2 = 1/200 = 0.005 for H2.
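A minimal sketch of this calculation, using the values from the example above:

# lambda = 1 / mean, derived from the average measured snow heights (mm)
mean_h1 = 100.0   # average snow height for H1
mean_h2 = 200.0   # average snow height for H2
lambda_h1 = 1.0 / mean_h1   # 0.01
lambda_h2 = 1.0 / mean_h2   # 0.005
print(lambda_h1, lambda_h2)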
Truncated Normal: The lower bound is smaller than the upper bound, and these bounds are not too far away from the mean in terms of standard deviation.
If inconsistencies are found, a warning dialog box opens. The Messages pane provides additional
information.
In the Project Schematic, the Design of Experiments cell has a state of Attention Required, in-
dicating that you must edit the distribution definition to resolve any inconsistencies.
Example:
The truncated normal distribution is used in this example. It is defined by the following attributes:
• Mean
• Standard deviation
In the following figure, you can see these attributes in the Properties and Chart panes for the
Design of Experiments cell in the Six Sigma Analysis system:
In this example, Distribution Upper Bound (3) is far away from the Mean (1.5) when compared
to the Standard Deviation (0.075). The validation check fails because (Upper Bound −
Mean)/Standard Deviation = 20. To fix the inconsistency, you must either reduce the bounds,
modify the mean, or modify the standard deviation.
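A minimal sketch of this consistency check follows. The allowed number of standard deviations and the lower-bound value in the example are assumptions for illustration; DesignXplorer's actual limit is not stated in this extract.

def truncated_normal_bounds_ok(mean, std_dev, lower, upper, max_sigmas=10.0):
    # max_sigmas is an assumed limit, used only to illustrate the check
    if lower >= upper:
        return False
    return (abs(upper - mean) / std_dev <= max_sigmas
            and abs(mean - lower) / std_dev <= max_sigmas)

# Example from the text: (3 - 1.5) / 0.075 = 20, so the check fails.
# The lower bound of 0.0 is hypothetical.
print(truncated_normal_bounds_ok(mean=1.5, std_dev=0.075, lower=0.0, upper=3.0))  # False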
• To update each cell in the analysis separately, right-click the cell and select Update.
• To update the entire analysis at once, right-click the system header in the Project Schematic and select
Update.
• To update the entire project at once, in the Project Schematic, click Update Project on the toolbar.
Tables (SSA)
In the Six Sigma Analysis component tab, you can view probability tables for any input or output parameter selected in the Outline pane. In the Properties pane for the parameter, select Quantile-Percentile or Percentile-Quantile for Probability Table.
• To add a value to the Quantile-Percentile table, type the desired value into the New Parameter Value
cell at the end of the table. A row with the value that you entered is added to the table in the appro-
priate location.
• To add a new value to the Percentile-Quantile table, type the desired value into the appropriate cell
(New Probability Value or New Sigma Level) at the end of the table.
• To delete a row from either table, right-click the row and select Remove Level.
You can also overwrite any value in an editable column. Corresponding values are then displayed in
the other columns in this row.
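The quantile-percentile relationship behind these tables can be sketched as follows for a sample of parameter values. The sampling and interpolation details DesignXplorer uses are not shown here, and the function names are illustrative only.

import numpy as np

samples = np.sort(np.random.default_rng(2).normal(100.0, 15.0, 10_000))

def percentile_of(quantile_value: float) -> float:
    """Cumulative probability (%) of observing a value <= quantile_value."""
    return 100.0 * np.searchsorted(samples, quantile_value, side="right") / samples.size

def quantile_of(percentile: float) -> float:
    """Parameter value below which `percentile` percent of the samples fall."""
    return float(np.percentile(samples, percentile))

print(percentile_of(100.0))   # close to 50 %
print(quantile_of(97.7))      # close to mean + 2 sigma (about 130)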
You can change various generic chart properties for this chart.
Note:
If the p-Value calculated for a particular input parameter is above the value specified for
Significance Level in Tools → Options → Design Exploration, the bar for that parameter
is shown as a flat line on the chart. For more information, see Viewing Significance and
Correlation Values (p. 60).
Statistical Measures
When the Six Sigma Analysis cell is updated, the following statistical measures display in the Properties
pane for each parameter. For descriptions of these measures, see SSA Theory (p. 381).
• Mean
• Standard deviation
• Skewness
• Kurtosis
• Shannon entropy
• Signal-to-noise ratios
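A minimal sketch of these measures computed from a sample of output values follows. The exact formulations DesignXplorer uses (for example, for Shannon entropy and the signal-to-noise ratios) are given in the SSA theory section; the versions below are common textbook definitions, shown only for illustration.

import numpy as np

def statistical_measures(samples: np.ndarray, bins: int = 20) -> dict:
    mean = samples.mean()
    std = samples.std(ddof=1)
    z = (samples - mean) / std
    skewness = np.mean(z**3)
    kurtosis = np.mean(z**4)              # non-excess kurtosis (normal = 3)
    hist, _ = np.histogram(samples, bins=bins)
    p = hist[hist > 0] / len(samples)
    shannon_entropy = -np.sum(p * np.log(p))
    # Taguchi "larger-is-better" form, used here only as one illustrative SNR
    snr_larger_is_better = -10.0 * np.log10(np.mean(1.0 / samples**2))
    return {
        "Mean": mean,
        "Standard deviation": std,
        "Skewness": skewness,
        "Kurtosis": kurtosis,
        "Shannon entropy": shannon_entropy,
        "Signal-to-noise ratio (larger-is-better)": snr_larger_is_better,
    }

print(statistical_measures(np.random.default_rng(1).normal(10.0, 2.0, 1000)))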
Working with DesignXplorer
The following sections describe the types of work you do in DesignXplorer:
Working with Parameters
Working with Design Points
Working with Sensitivities
Working with Tables
Working with Remote Solve Manager and DesignXplorer
Working with Design Point Service and DesignXplorer
Working with DesignXplorer Extensions
Working with Design Exploration Results in Workbench Project Reports
Note:
If you modify your analysis after it is solved, parameters can change. DesignXplorer displays
the refresh required icon ( ) on cells with changed data.
• When a direct input or direct output parameter is added to or deleted from the project, or when the
unit of a parameter is changed, all results, including design point results and the cache of design point
results, are invalidated on a Refresh operation. An Update operation is required to recalculate all design
points.
• When a derived output parameter is added to or deleted from the project, or when the expression of
a derived output parameter is modified, all results but the cache of design point results are invalidated
on a Refresh operation. So, on an Update operation, all design point results are retrieved without any
new calculation. Other design exploration results are recalculated.
• Because DesignXplorer is primarily concerned with the range of variation in a parameter, changes to
a parameter value in the model or the Parameter Set bar are not updated to existing design exploration
systems with a Refresh or Update operation. The parameter values that were used to initialize a new
design exploration system remain fixed within DesignXplorer unless you change them manually.
Tip:
You can change the way that units are displayed in your design exploration systems from
the Units menu. Changing the units display in this manner causes the existing data in each
system to be shown in the new units system. It does not require an update of the design
exploration systems.
When you make non-parametric changes, the first cell in the design exploration system indicates that
a refresh is required. However, the Refresh operation invalidates design point results and the cache of
design points. An Update operation is then required to recalculate design points.
Because generating new design points can be time-consuming and costly, when you know that recal-
culating design points is unnecessary, you can approve generated data instead. Rather than invalidating
design point results, the Approve Generated Data operation retains the already generated design
points as up-to-date and retains the design point results in the cache of design points.
If non-parametric changes exist, the Approve Generated Data option is available in the right-click
context menu for the first cell in a design exploration system. Additionally, when the cell is being edited,
a button is available on the toolbar:
Note:
When this operation runs, all cells that were up-to-date before the non-parametric changes are once
again up-to-date.
• In the Outline pane for each cell with user-approved data, an alert icon ( ) displays in the Message
column for the root node. If you click this icon to view warnings, you see a cautionary message indicating that the data contains user-approved generated data that may include non-parametric changes.
• In the Table pane for the cell, the user-approved icon ( ) displays in the Name column of all design
points with user-approved data.
A Design of Experiments cell with user-approved design points is once again marked as up-to-date.
A Response Surface cell can be marked as up-to-date if it depends on a Design of Experiments cell
with user-approved design points or if it contains user-approved points, such as refinement points.
When a cell in a design exploration system contains user-approved data, any cell that depends on it
also displays the user-approved icon. For example, if a Design of Experiments cell contains user-approved
data, any Response Surface cell that depends on this DOE displays this icon. Any new design points
that you might insert in a cell's table do not display icons because their results are to be based on a
real solve.
Assume that you switch to another DOE type and generate new design points, without keeping the
previous design points. The DOE then contains only new design points, so no user-approved icons
display. However, cells that depend on this DOE still display user-approved icons.
• The Response Surface cell displays the icon because it still contains user-approved refinement points.
If you deleted these refinement points and updated the response surface, the Response Surface cell
would no longer display the icon.
• The Optimization cell displays the icon because it still contains user-approved verified candidate points.
If you deleted these candidate points and updated the optimization, the Optimization cell would no
longer display the icon.
All reports that DesignXplorer automatically generates include the same cautionary message that is
displayed when you click the alert icon ( ) in the Message column for the root node of a cell with
user-approved data. For example, the Workbench project report contains this cautionary message in
corresponding component sections. Additionally, in table images, the project report displays the user-
approved icon ( ) in the Name column of user-approved points.
If you export table or chart data, the first row of the CSV file also includes the same cautionary message.
Note:
• In some situations, non-parametric changes are not detected. For example, if you edit the input
file of a Mechanical APDL system outside of Workbench, this non-parametric change is not detec-
ted. To synchronize design exploration systems with the new state of the project, you must perform
a Clear Generated Data operation followed by an Update operation. In a few rare cases, inserting,
deleting, duplicating or replacing systems in a project is not reliably detected.
• The Clear Generated Data operation does not clear the design point cache. To clear the design
point cache, right-click in an empty area of the Project Schematic and select Clear Design Points
Cache for All Design Exploration Systems.
Input Parameters
By defining and adjusting input parameters, you specify the analysis of the model under investigation.
This section describes how to define and change input parameters.
1. In the Outline pane for the Design of Experiments cell, select the parameter.
Both the Properties pane and Table pane display level information for the discrete para-
meter selected in the Outline pane. In the Properties pane, you see the number of levels
(discrete values). In the Table pane, you see the integer values for each level. You can add,
delete, and edit levels for the discrete parameter, even if it is disabled.
Note:
3. In the Table pane, define the levels for the discrete parameter:
• To add a level, select the empty cell in the bottom row of the Discrete Value column, type an
integer value, and press Enter. In the Properties pane, Number of Levels is updated automat-
ically.
• To delete a level, right-click any part of the row containing the level to remove and select Delete.
• To edit a level, select the cell with the value to change, type an integer value, and press Enter.
Note:
In the Table pane, the Discrete Value column is not sorted as you add, delete, and edit
levels. To sort it manually, click the down-arrow on the right of the header cell and select
a sorting option. Once you sort the column, integer values are auto-sorted as you add,
delete, and edit levels.
1. In the Outline pane for the Design of Experiments cell, select the parameter.
The values for Lower Bound and Upper Bound define the range of the analysis.
DesignXplorer initializes the range based on the current value for the parameter, using
−10% for the lower bound and +10% for the upper bound. If a parameter has a current
value of 0.0, the initial range is computed as 0.0 → 10.0.
Because DesignXplorer is not aware of the physical limits of parameters, you must check
that the assigned range is compatible with the physical limits of the parameter. Ideally, the
current value of the parameter is at the midpoint of the range between the upper and
lower bounds. However, this is not a requirement.
3. If you need to change the range, select the bound to change, type in a number, and press Enter.
You are not limited to entering integer values. However, the relative variation must be equal to
or greater than 1e-10 in the same units as the parameter. If the relative variation is less than 1e-
10, you can either adjust the range or disable the parameter.
• To adjust the range, select the parameter in the Outline pane and then edit the values for the
bounds in the Properties pane.
• To disable the parameter, clear the Enable check box in the Outline pane.
In the Properties pane, Allowed Values specifies whether you want to impose a limitation
beyond the range defined by the lower and upper bounds. When this property is editable,
the default setting is Any, which means any value within the range is allowed.
4. If you want to further limit values, change the setting for Allowed Values. Descriptions follow for
the other choices that are possible. The subsequent table summarizes when a choice is shown.
• Snap to Grid. When selected, Grid Interval displays, specifying the distance that must exist
between adjacent design points when generating new design points. The units are the same
as those for the parameter. The default value is calculated as follows: (Upper Bound −
Lower Bound)/1000. This value is then rounded to the closest power of 10. For more in-
formation, see Defining a Grid Interval (p. 281).
• Manufacturable Values. When selected, levels display in the Table pane so that only the real-
world manufacturing or production values that you specify are taken into account during
postprocessing. For more information, see Defining Levels for Manufacturable Values (p. 282).
The following summarizes possible states for Allowed Values and choice availability based on its state in a Response Surface Optimization system:
• Design of Experiments cell: Editable. Available choices are Any and Manufacturable Values.
• Response Surface cell: Read-only. Uses the same value as selected in the DOE.
• Optimization cell: Editable if Any is selected in the DOE, with choices Any and Snap to Grid. Read-only if Manufacturable Values is selected in the DOE, in which case Manufacturable Values applies.
Defining a Grid Interval
When Allowed Values is set to Snap to Grid, DesignXplorer works on a grid of values between the lower and upper bounds, selectively picking new design points from the grid.
Note:
• Use of this method does not affect existing design points. They are reused without
adjustment, even when they do not match the grid.
• Not all optimization methods support setting Allowed Values to Snap to Grid. For
example, NLPQL does not support this setting.
The default value for Grid Interval is calculated as follows: (Upper Bound − Lower Bound)/1000.
The resulting value is then rounded to the closest power of 10. For example, assume a lower
bound of 3.52 and an upper bound of 4.08. After subtracting 3.52 from 4.08, the calculation
(0.56/1000) yields 0.00056, which is rounded to 0.001.
You can specify a different value for Grid Interval. However, the value cannot be negative or
zero. It must also be less than or equal to this value: (Upper Bound − Lower Bound).
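A minimal sketch of the default calculation and of snapping a value onto the resulting grid follows. How DesignXplorer anchors the grid and rounds values are assumptions here; the default-interval formula and the 3.52 to 4.08 example come from the text above.

import math

def default_grid_interval(lower, upper):
    # (Upper Bound - Lower Bound) / 1000, rounded to the closest power of 10
    raw = (upper - lower) / 1000.0
    return 10.0 ** round(math.log10(raw))

def snap_to_grid(value, lower, interval):
    # Snap a value onto the grid; anchoring the grid at the lower bound is an
    # assumption for illustration
    return lower + round((value - lower) / interval) * interval

print(default_grid_interval(3.52, 4.08))                   # 0.001
print(snap_to_grid(3.5237, lower=3.52, interval=0.001))    # approximately 3.524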
When you edit the grid interval, the lower bound, upper bound, or starting point can become
invalid. In this case, DesignXplorer highlights in yellow the values which need your attention.
Edit each highlighted property to enter a value matching the grid interval. If you enter an invalid
value, DesignXplorer automatically snaps it to the grid.
When Snap to Grid is set, DesignXplorer excludes optimization methods that do not support
this setting from the list of methods available. However, you might have already selected such
a method before you set Snap to Grid. In this case, DesignXplorer cannot update the Optimization
cell. When you place the mouse cursor over Optimization in the Outline pane, a tooltip indicates
that the selected optimization method does not support continuous input parameters with Al-
lowed Values set to Snap to Grid. It then indicates that you must either select another optimiz-
ation method or change Allowed Values to some choice other than Snap to Grid.
DesignXplorer also cannot update the Optimization cell in other situations where at least one
continuous input variable has Allowed Values set to Snap to Grid. Tooltips for the Optimization
cell indicate how to resolve these additional problems:
• A lower or upper bound is not on the grid defined by the grid interval.
Defining Levels for Manufacturable Values
In the Properties pane, Number of Levels is initially set to 2 because this is the minimum number of levels allowed. The Table pane displays all levels specified for manufacturable values.
The values for the two initial levels default to the lower and upper bounds. You can add, delete,
and edit levels, even if this continuous input parameter is currently disabled.
• To add a level, select the empty cell in the bottom row of the Manufacturable Values column, type
a numeric value, and press Enter. In the Properties pane, Number of Levels is updated automatically.
• To delete a level, right-click any part of the row containing the level to remove and select Delete. If
you delete the level representing either the upper bound or lower bound, the range is not narrowed.
• To edit a level, select the manufacturable value to change, type a numeric value, and press Enter.
Note:
• In the Properties pane, Value is populated with the parameter value defined in the
Parameters table. This value cannot be edited in the DOE.
• If you enter a numeric value for a level that is outside of the range defined in the
Properties pane, you can opt to either automatically extend the range to encompass
the new value or cancel the commit of the new value. If you opt to extend the range,
all existing DOE results and design points are deleted.
• If you adjust the range when manufacturable values are defined, manufacturable
values falling outside the new range are automatically removed and all results are
invalidated.
If you decide that you no longer want to limit analysis of the sample set to only manufacturable
values, you can set Allowed Values to Any. This removes the display of levels from the Tables
pane. If you ever set Allowed Values to Manufacturable Values again, the Table pane once
again displays the levels that you previously defined.
Once results are generated, you can change the setting for Allowed Values without invalidating
the DOE or response surface. You can also add, delete, or edit levels without needing to regenerate
the entire DOE and response surface, provided that you do not alter the range. As long as the
range remains the same, DesignXplorer reuses the information from the previous updates.
Because the following types of results are based on manufacturable values, if you make any edits
to manufacturable values, you must regenerate them:
• Min/Max objects
• From the Outline pane for the Design of Experiments cell, you enable or disable an input parameter
by selecting or clearing the check box to the right of the parameter.
• In the Properties pane for the parameter selected in the Outline pane, you specify further attributes,
such as the levels (integer values) for a discrete input parameter or the range (upper and lower bounds)
and allowed values for a continuous input parameter.
Making any of the following changes to an input parameter in a Design of Experiments, Design
of Experiments (SSA), or Parameters Correlation cell would require clearing all generated data
associated with this system:
For example, assume that you make one of these changes in a Design of Experiments cell for a
goal-driven optimization system. Because the change would clear all generated data in all cells of
the system, a dialog box displays, asking you to confirm the change. The change is committed only
if you click Yes. If you click No, the change is discarded. If desired, you can duplicate the system
and then make the parameter change in the new system. Alternatively, you can change the DOE
type to Custom before making a parameter change to retain the design points falling within the
new range.
Note:
Depending on the DOE type (p. 71), the number of generated design points is directly related to the number of selected input parameters. The design and analysis workflow (p. 17)
shows that specifying many input parameters makes heavy demands on computer time
and resources, including system analysis, DesignModeler geometry generation, and CAD
system generation. Also, large ranges for input parameters can lead to inaccurate results.
Six Sigma Analysis refers to input variables as uncertainty variables. For more information, see Defining
Uncertainty Variables (p. 269).
Output Parameters
Each output parameter corresponds to a response surface, which is expressed as a function of the
input parameters. Some typical output parameters are equivalent stress, displacement, and maximum
shear stress.
When you select an output parameter in the Outline pane for a cell that displays parameters, you
can see maximum and minimum values for this output parameter in the Properties pane. The max-
imum and minimum values shown depend on the state of the design exploration system.
They are the "best" minimum and maximum values available in the context of the current cell. This
means that they are the best values between what the current cell eventually produced and what
the parent cell provided. A Design of Experiments cell produces design points, and a best minimum
value and maximum value are extracted from these design points. A Response Surface cell produces
Min-Max search results if this option is enabled. If a refinement is run, new points are generated, and
a best minimum value and maximum value are extracted from these points. An Optimization cell
produces a sample set, and again a minimum value and maximum value better than those provided
by the parent response surface are found in these samples. Consequently, it is important to remember
the state of the design exploration system when viewing the minimum and maximum values in the
parameter properties.
The number and the definition of the design points created depend on the number of input parameters
and the properties of the DOE. For more information, see Using a Central Composite Design DOE (p. 82).
You can preview the generated design points by clicking Preview on the toolbar before updating a
DOE, a response surface (if performing a refinement), or another design exploration feature that generates
design points during an update.
Some design exploration features provide the ability to edit the list of design points and the output
parameter values of these points. For more information, see Working with Tables (p. 299).
Before starting the update of design points, you can set the design point update option, change the
design point update order, and specify initialization conditions for a design point update. For more in-
formation, see Specifying Design Point Update Options (p. 287).
• From a component tab, right-click the root node in the Outline pane and select Update.
• In the Project Schematic, click Update Project on the toolbar to update all systems in the project.
Note:
• The Update All Design Points operation updates the design points for the Parameter Set bar.
It does not update the design points for design exploration systems.
• If an update of a design exploration system reuses design points that are partially updated, the
update required icon ( ) displays beside the output parameters that are partially updated.
DesignXplorer always updates partially updated design points to completion and publishes an
informational message.
During the update, design points are updated simultaneously if the analysis system is configured to
perform simultaneous solutions. Otherwise, they are updated sequentially.
When you update a Design of Experiments, Response Surface, or Parameters Correlation cell, its
design points table is updated dynamically. As the points are solved, generated design points appear
and their results display.
As each design point is updated, parameter values are written to a CSV log file. If you want to use this
data to continue your work with the response surface, you can import it back into this cell's design
points table.
You can change the update order using options in the design points table. For more information, see
Changing the Design Point Update Order in the Workbench User's Guide.
• If Design Point Initiation is set to From Current (default), when a design point is updated, it is ini-
tialized with the data of the design point that is designated as current.
• If Design Point Initiation is set to From Previously Updated, when a design point is updated, it is
initialized with the data of the previously updated design point. In some cases, it can be more efficient
to update each design point starting from the data of the previously updated design point, rather
than restarting from the current design point each time.
Note:
Retained design points with valid retained data do not require initialization data.
For more information, see Specifying the Initialization Conditions for a Design Point Update in the
Workbench User's Guide.
DesignXplorer allows you to preserve generated design points so that they are automatically saved
to the Parameter Set bar for later exploration or reuse. The preservation of design points must be
enabled first at the project level. You can then configure the preservation of design points for indi-
vidual components.
• You enable this functionality at the project level in Tools → Options → Design Exploration. Under
Design Points, select the Preserve Design Points After DX Run check box.
• You enable this functionality at the component level in the cell properties. Right-click the cell and
select Edit. In the Properties pane, under Design Points, select the Preserve Design Points After
DX Run check box.
When design points are preserved, they are included in the design points table for the Parameter
Set bar. A design points table for a cell indicates whether its design points correspond to design points
for the Parameter Set bar. Design points include DOE points, refinement points, direct correlation
points, and candidate points. Design points correspond when they share the same input parameter
values.
When a correspondence exists, the point's name specifies the design point to which it is related. If
the source design point is deleted from the Parameter Set bar or the definition of either design point
is changed, the indicator is removed from the point's name and the link between the two points is
broken, without invalidating your model or results.
• You enable this functionality at the project level in Tools → Options → Design Exploration. Under Design
Points, select both the Preserve Design Points After DX Run and Retain Data for Each Preserved
Design Point check boxes.
• You enable this functionality at the component level in the cell properties. Right-click the cell and select
Edit. In the Properties pane, under Design Points, select both the Preserve Design Points After DX
Run and Retain Data for Each Preserved Design Point check boxes.
Note:
The behavior of a newly inserted cell follows the project-level settings. The behavior of
existing cells is not affected. Existing cells follow their configuration at the component
level.
When a DesignXplorer cell is updated, preserved design points are added to the project's design
points table and the calculated data for each of these design points is retained.
Once design point data has been retained, you have the option of using the data within the project
or exporting design point data to a separate project:
• To switch to another design within the project, right-click a design point in the design points table and
select Set as Current. This allows you to review and explore the associated design.
• To export retained design point data to a separate project, go to the design points table, right-click one
or more design points with retained data, and select Export Selected Design Points.
For more information about using retained design point data, see Preserving Design Points and Re-
taining Data (p. 296) in the Workbench User's Guide.
In a chart, you can right-click a design point and select Insert as Design Point. Some charts that
support this operation are the Samples chart, Tradeoff chart, Response chart, and Correlation Scatter
chart.
When inserting new design points into the project, the input parameter values of the selected can-
didate design points are copied. The output parameter values are not copied because they are ap-
proximated values provided by the response surface. To view the existing design points in the project,
edit the Parameter Set bar in the Project Schematic.
The availability of the Insert as Design Point option from an optimization chart depends on the context.
For instance, you can right-click a point in a Tradeoff chart to insert it as a design point in the project.
The same operation is available from a Samples chart, provided that it is not being displayed as a Spider
chart.
Note:
If your cell is out-of-date, the charts and tables that you see are out-of-date. However, you
can still explore and manipulate the existing table and chart data and insert design points.
• Design points exist in the Parameter Set bar. Because design points are not preserved there by default,
you can accomplish this by selecting the Preserve Design Points After DX Run check box for the
DesignXplorer component and then updating the component. If you want to enable this property
for all new DesignXplorer components, you do so in Tools → Options → Design Exploration. For
more information, see Design Exploration Options (p. 22).
• A Workbench project report exists. For information about Workbench project reports, see Working
with Project Reports in the Workbench User's Guide.
• The image to display in DesignXplorer tables and charts is selected for the Report Image property
for a DesignXplorer component that uses design points. In the following figure, you can see that
Preserve Design Points After DX Run is selected and Report Image has a PNG file selected. You
can select from all PNG files that were generated for the Workbench project report.
Initially, Report Image is set to None, which means tables using design points do not display the
Report Image column. However, once you select a PNG file for Report Image, all tables using design
points display this column. If the selected image exists for a design point, the column displays a
thumbnail.
From a chart, you open the image for a point by right-clicking the point and selecting Show Report
Image. If this context menu option is not available, the selected point is not linked to a design point.
During a design point update, the Report Image column displays new thumbnails as images for
design points are generated. While the update is running, you can open an image.
As a consequence, if the same design points are reused when previewing or updating a design ex-
ploration system, they immediately show up as up-to-date and the cached output parameter values
display.
The cached data is invalidated automatically when relevant data changes occur in the project. You
can also force the cache to be cleared by right-clicking in an empty area of the Project Schematic
and selecting Clear Design Points Cache for All Design Exploration systems.
While a Direct Optimization system is updating, DesignXplorer refreshes the results in the design
points table as they are calculated. Design points pulled from the cache also display. The Table pane
generally refreshes dynamically as design points are submitted for update and as they are updated.
However, if design points are updated via Remote Solve Manager, the RSM job must be completed
before results in the Table pane are refreshed.
Once the optimization is completed, the raw design point data is saved. When you select Raw Optim-
ization Data in the Outline pane, the Table pane displays the raw data. While you cannot edit the
raw data, you can export it to a CSV file by right-clicking in the table and selecting Export Table
Data as CSV. For more information, see Exporting Design Point Parameter Values to a Comma-Separ-
ated Values File in the Workbench User's Guide. Once the raw data is exported, you can import it as a
custom DOE.
You can also select one or more rows in the table and right-click to select one of the following options
from the context menu:
• Insert as Design Point: Creates new design points in the project by copying the input parameter
values of the selected candidate points to the design points table for the Parameter Set bar. The
output parameter values are not copied because they are approximated values provided by the re-
sponse surface.
• Insert as Custom Candidate Point: Creates new custom candidate points in the candidate points
table by copying the input parameter values of the selected candidate points.
Note:
The design point data is in raw format, which means it is displayed without analysis or
optimization results. Consequently, it does not show feasibility, ratings, Pareto fronts, and
so on.
• Update the current design point by right-clicking it and selecting Update Selected Design Point.
Note:
The Update Project operation is processed differently than a design point update. When
you update the whole project, design point data is not logged. You must use the right-
click context menu to log data on the current design point.
• Update either selected design points or all design points using the Update Selected Design Point option
on the right-click context menu.
• Open an existing project. For all up-to-date design points in the project, data is immediately logged.
Formatting
The generated log file is in the extended CSV file format used to export table and chart data and to import
data from external CSV files to create new design, refinement, and verification points. While this file is
primarily formatted according to CSV standards, it supports some non-standard formatting conventions.
For more information, see Exporting Design Point Parameter Values to a Comma-Separated Values File
in the Workbench User's Guide.
File Location
The log file is named DesignPointLog.csv and is written to the directory user_files for the
Workbench project. You can locate the file in the Files pane of Workbench by selecting View → Files.
To import data from the design point log file, you set Design of Experiments Type to Custom.
The list of parameters in the file must exactly match the order and parameter names in
DesignXplorer. For example, the order and names might be P1, P7, and P3.
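For illustration only, a minimal custom DOE file matching that example might look like the following (the parameter IDs, design point names, and values here are hypothetical; see Extended CSV File Format in the Workbench User's Guide for the exact syntax and the optional comment lines that exported files contain):

   Name,P1,P7,P3
   DP 1,10.5,2,0.25
   DP 2,12.0,3,0.3

Each data row defines one design point, with values given in the project's defined units.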
To import the log file, you might need to manually extract a portion.
• Review the column order and parameter names in the header row of the Table pane.
• Export the first row of your custom DOE to create a file with the correct order and parameter names
for the header row.
2. Find the file DesignPointLog.csv in the directory user_files for the project and then compare
it to your exported DOE file. Verify that the column order and parameter names exactly match those in
DesignXplorer.
3. If necessary, update the column order and parameter names in the design point log file. (A scripted check of the header row is sketched after this procedure.)
If parameters were added to or removed from the project, the file contains several blocks of data,
distinguished by header lines, to reflect this.
4. Manually remove any unnecessary sections from the file, keeping only the block of data that is consistent
with your current parameters.
Note:
The header line is produced when the log file is initially created and reproduced when
a parameter is added or removed from the project. If parameters have been added or
removed, you must verify the match between DesignXplorer and the log file header
row again.
2. Right-click any cell in the design points table and select Import Design Points and then Browse.
3. Browse to the directory user_files for the project and select the file DesignPointLog.csv.
The design point data is loaded into the design points table.
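If you prefer to script the comparison of the header row against the DOE parameter order (steps 2 and 3 above) rather than checking it by hand, a short Python sketch such as the following can flag mismatches before you attempt the import. The file name and the expected parameter list are assumptions for the example and must be adapted to your project.

   import csv

   # Parameter IDs in the exact order shown in the header row of the Table pane.
   expected = ["P1", "P7", "P3"]  # hypothetical order; replace with your project's parameters

   with open("DesignPointLog.csv", newline="") as f:
       reader = csv.reader(f)
       # Skip comment lines; the first remaining row is treated as the header.
       header = next(row for row in reader if row and not row[0].startswith("#"))

   # Drop a leading "Name" column if present and keep only the parameter ID tokens
   # (an exported header cell may read "P1 - thickness", so split off the ID).
   ids = [cell.split("-")[0].strip() for cell in header if cell.strip()]
   if ids and ids[0].lower() == "name":
       ids = ids[1:]

   if ids == expected:
       print("Header matches the DOE parameter order.")
   else:
       print("Mismatch: file header is", ids, "but the DOE expects", expected)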
If overall system performance is not a concern, you can reduce processing time by
directing ANSYS Mechanical and ANSYS Meshing to restart less frequently or not at all. Select
Tools → Options and then in the Mechanical and Meshing tabs, under Design Points, do one
of the following:
• To restart less frequently, set the number of design points to update before restarting to a higher
value, such as 10.
• To prevent restarts completely, clear the check box for periodically restarting during a design point
update.
Caution:
Because every design point is retained within the project, this approach can affect
performance and disk resources when used on projects with 100 or more design
points.
The preservation of design points and design point data can be defined as the default behavior at
the project level or can be configured for individual cells.
• To set this as the default behavior at the project level, select Tools → Options → Design Exploration.
Under Design Points, select both the Preserve Design Points After DX Run and Retain Data for Each
Preserved Design Point check boxes.
– When you opt to preserve the design points, the design points are added to the design points table
for the Parameter Set bar.
– When you opt to also retain the design point data, the calculated data for each design point is saved
in the project.
• To configure this functionality at the component level, right-click the cell and select Edit. In the Prop-
erties pane, under Design Points, select both the Preserve Design Points After DX Run and Retain
Data for Each Preserved Design Point check boxes.
Once data has been retained for a design point, you can set it as the current design point. This
enables you to review the associated design within the project and further investigate any update
problems that might have occurred.
You can find this information by hovering your mouse over a failed design point in the design
points table. A primary error message tells you what output parameters have failed for the
design point and specifies the name of the first failed component. Additional information might
also be available in the Messages pane.
In Tools → Options → Design Exploration, the Retry Failed Design Points check box globally
sets whether DesignXplorer is to make additional attempts to solve all design points that failed
during the previous run. This check box is applicable to all DesignXplorer systems except Six
Sigma Analysis systems and Parameters Correlation systems that are linked to response
surfaces.
When Retry Failed Design Points is selected, Number of Retries and Retry Delay become
available so that you can specify the number of times that the update should be retried and
the delay in seconds between each attempt.
You can override this global option in the Properties pane for a cell. In the Properties pane,
under Failed Design Points Management, setting Number of Retries to 0 disables automat-
ically retrying the update for failed design points. When any other integer value is set, Retry
Delay is available so that you can specify the delay in seconds between each attempt.
To avoid potentially invalidating the original project, you should create a duplicate test project
using the Save As menu option. You can either duplicate the entire project or narrow your focus
by creating separate projects for one or more failed design points. To create separate projects, select
the failed design points in the table. Then, right-click any of these selections and select Export
Selected Design Points.
The sensitivities available for Six Sigma Analysis and goal-driven optimizations are statistical sensitivities,
which are global sensitivities. The single parameter sensitivities available for response surfaces are local
sensitivities.
Global, statistical sensitivities are based on a correlation analysis using the generated sample points,
which are located throughout the entire space of input parameters.
Local parameter sensitivities are based on the difference between the minimum and maximum value
obtained by varying one input parameter while holding all other input parameters constant. As such,
the values obtained for local parameter sensitivities depend on the values of the input parameters that
are held constant.
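As a sketch of this local measure (the notation here is assumed and the exact normalization used in DesignXplorer charts may differ), the single-parameter sensitivity of an output y to input x_i about a reference point can be written as:

s_i = \max_{x_i} \hat{y}(x_i, \bar{x}_{j \neq i}) - \min_{x_i} \hat{y}(x_i, \bar{x}_{j \neq i})

where all inputs other than x_i are held at their reference values \bar{x}_j.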
Global, statistical sensitivities do not depend on the values of the input parameters because all possible
values for the input parameters are already taken into account when determining the sensitivities.
• Sensitivity charts are not available if all input parameters are discrete.
You can add a new row to a table by entering values in the * row. You enter values in columns for input
parameters. Once you enter an input value in the * row, the row is added to the table and the remaining
input parameters are set to their initial values. You can then edit this row in the table, changing input
parameter values as needed. Output parameter values are calculated when the cell is updated.
Output parameter values calculated from a design point update are displayed in black text. Output
parameter values calculated from a response surface are displayed in the custom color specified in
Tools → Options → Design Exploration → Response Surface. For more information, see Response
Surface Options (p. 25).
points, candidate points, and more. Points correspond when they share the same input parameter
values. When a correspondence exists, the name of the point indicates the design point to which it
is related.
If the source design point is deleted from the Parameter Set bar or the definition of either design
point is changed, the link between the two points is broken without invalidating your model or results.
Additionally, the indicator is removed from the name of the point.
• Design points table for a Design of Experiments cell or Design of Experiments (3D ROM) cell when
the Design of Experiments Type property is set to either Custom or Custom + Sampling
Because output values are provided by a real solve, editing output values is not normally necessary.
However, you might want to insert existing results from external sources such as experimental data
or known designs. Rows with editable output values are not calculated during an update of the DOE.
By default, output values are calculated during an update. However, you can make output values in
the previously specified tables editable using options on the context menu:
• Set Output Values as Editable makes the output values editable in selected rows. If you right-click
one or more rows with editable output values, you can select Set Output Values as Calculated if
you want output values to be set as calculated once again. However, this invalidates the DOE, requiring
you to update it.
• Set All Output Values as Calculated sets output values for all rows as calculated.
• Set All Output Values as Editable makes the output values editable in all rows.
– If you enter an input value in the * row, the row is added to the table and the remaining input
parameters are set to their initial values. The output parameter values are blank. You must enter
values in all columns before updating the DOE.
– If you enter an output value in the * row, the row is added to the table and the values for all input
parameters are set to their initial values. The remaining output parameter values are blank. You
must enter values in all columns before updating the DOE.
Note:
• A table can contain derived parameters. Derived parameters are always calculated, even if Set
All Output Values as Editable is selected.
• Editing output values for a row changes the cell's state to indicate that an update is required.
The cell must be updated, even though no calculations are done.
• If the points are solved and you select Set All Output Values as Editable and then select Set
All Output Values as Calculated without making any changes, the outputs are marked out-
of-date. You must update the cell to recalculate the points.
To manipulate the data in a large number of table rows, you should use the export and import cap-
abilities that are described in the next two sections.
The import option is available from the context menu when you right-click a cell in the Project
Schematic. It is also available when you right-click in the Table pane for a cell where the import
feature is implemented. For example, you can right-click a Response Surface cell in the Project
Schematic and select Import Verification Points. Or, you can right-click in the Table pane for a
Design of Experiments cell or a Design of Experiments (3D ROM) cell and select Import Design
Points from CSV.
Note:
For the Import Design Points from CSV option to be available for a DOE cell, the
Design of Experiments Type property must be set to Custom or Custom + Sampling.
For example, assuming that the DOE cell is set to a custom DOE type, to import design points from
an external CSV file into the DOE cell, do the following:
1. In the Project Schematic, right-click the DOE cell into which to import design points and select
Import Design Points from CSV.
During the import, the CSV file is parsed and validated. If the format is invalid or the described para-
meters are not consistent with the current project, a list of errors is displayed and the import operation
is terminated. If the file validates, the data is imported.
• The file must conform to the extended CSV file format. In particular, a header line identifying each
parameter by its ID (P1, P2, …, Pn) is mandatory to describe each column. For more information, see
Extended CSV File Format in the Workbench User's Guide.
• The order of the parameters in the file might differ from the order of the parameters in the project.
• Values must be provided in the units defined, which you can see by selecting Units → Display Values
As Defined.
• If values for output parameters are provided, they must be provided for all output parameters except
derived output parameters. Any values provided for derived output parameters are ignored.
• Even if the header line states that output parameter values are provided, it is possible to omit them
on a data line.
• If parameter values for imported design points are out of range or do not fill the range, a dialog box
opens, giving you options for automatically expanding and shrinking the parameter ranges in the
DOE to better fit the imported values. This same dialog box can also appear when you are copying
design points from the Parameter Set bar into a DOE. For more information, see Parsing and Validation
of Design Point Data (p. 86).
• Once design points are imported, any provided output parameter values are editable. For design
points imported without output parameter values, values are read-only. You must either update the
DOE cell or set output parameter values as editable (p. 300) and then enter values to continue.
To send updates via RSM, you must first install and configure RSM. For more information, see the Install-
ation and Licensing Help and Tutorials page on the ANSYS customer site. From the customer site menu,
select Downloads → Installation and Licensing Help and Tutorials. For general information about
materials and services available to our customers, go to the main page of the customer site.
For more information on submitting design points for remote update, see Updating Design Points in
Remote Solve Manager in the Workbench User's Guide.
For more information on configuring the solution cell update location, see Solution Process in the
Workbench User's Guide.
For more information on configuring the design point update option, see Setting the Design Point
Update Option in the Workbench User's Guide.
Previewing Updates
When a Design of Experiments, Response Surface, or Parameters Correlation cell requires an update,
in some cases, you can preview the results of the update. A preview prepares the data and displays it
without updating the design points. This allows you to experiment with different settings and options
before actually solving a DOE or generating a response surface or parameters correlation.
• In the Project Schematic, either right-click the cell to update and select Preview or select the cell and click
Preview on the toolbar.
• In the Outline pane for the cell, either right-click the root node and select Preview or select this node and
click Preview on the toolbar.
Pending State
Once a remote update for a Design of Experiments, Response Surface, or Parameters Correlation
cell begins, the cell enters a pending state. However, some degree of interaction with the project is still
available. For example, you can:
• Open the Options window, access the component tab for the Parameter Set bar, and archive the project.
• Follow the overall process of an update in the Progress pane by clicking Show Progress in the lower right
corner of the window.
• Interrupt, abort, or cancel the update by clicking the red stop button to the right of the progress bar in the
Progress pane and, in the dialog box that then opens, clicking the button for the desired operation.
• Follow the process of individual design point updates in the Table pane for the cell. Each time a design
point is updated, the Table pane displays the results and status.
• Exit the project and either create a new project or open another existing project. If you exit the project while
it is in a pending state due to a remote design point update, you can later reopen it. The update will then
resume automatically.
You can also exit a project while a design point update or a solution cell update via RSM is in progress. For inform-
ation on behavior when exiting during an update, see Updating Design Points in Remote Solve
Manager or Exiting a Project During a Remote Solve Manager Solution Cell Update in the Workbench
User's Guide.
Note:
• For iterative update processes such as Kriging refinement, Sparse Grid response surface updates,
and Correlation with Auto-Stop, design point updates are sent to RSM iteratively. The calculations
of each iteration are based on the convergence results of the previous iteration. Consequently,
if you exit Workbench during the pending state for an iterative process, when you reopen the
project, only the current iteration is completed.
• The pending state is not available for updates that contain verification points or candidate points.
You can still submit these updates to RSM. However, you do not receive intermediate results
during the update process, and you cannot exit the project. Once the update is submitted via
RSM, you must wait for the update to complete and the results to be returned from the remote
server.
For more information, see Working with ANSYS Remote Solve Manager and User Interface Overview in
the Workbench User's Guide.
To use DPS, in the properties for either the Parameter Set bar or a DesignXplorer system cell, you set
Update Option to Submit to Design Point Service (DPS). Any update of design points then sends
the Workbench project definition and unsolved design points to DPS for evaluation.
For more information, see the DCS for Design Points Guide and the following topics:
Sending an Optimization Study to DPS
Importing DPS Design Points into DesignXplorer
If these values are set and are consistent, when the update is submitted to DPS, DesignXplorer
automatically sends the objectives and constraints for the study to the DPS project as fitness terms.
If these values are either not set or are inconsistent, DesignXplorer cannot send the objectives and
constraints, which means DPS cannot start the update.
If the Optimization cell is already up-to-date, you can manually send objectives and constraints for
the study to the DPS project. In the Outline pane of the Optimization cell, right-click Optimization
and select Send Study to DPS Project. The Messages pane indicates if the study has been sent
successfully.
In the DPS web app, the project's Configuration tab displays a Fitness heading, under which you
can see any fitness terms sent from DesignXplorer, along with any fitness terms that are entered directly
in the DPS project. On the project's Design Points tab, you can easily sort design points by their
calculated fitness values to find the best candidates. For more information, see Defining Fitness in
the DCS for Design Points Guide.
Note:
Changes to fitness terms do not invalidate evaluated DPS design points but rather
trigger the re-evaluation of fitness values for all DPS design points.
• Custom DOEs
You can also import design points updated in DPS into the design points table for the Workbench
Parameter Set bar. For more information, see Importing DPS Design Points into the Workbench
Project in the DCS for Design Points Guide.
Note:
For information on using ACT, see the ANSYS ACT Developer's Guide, the ANSYS ACT XML Ref-
erence Guide, and the ANSYS ACT Reference Guide.
To install an extension:
In most cases, you select a binary extension in which a defined optimizer is compiled into a
WBEX file. WBEX files cannot be modified.
The extension is installed in your App Data directory. Once installed, it is shown in the Extensions
Manager and can be loaded to your projects.
1. From the Workbench Project tab, select Extensions → Manage Extensions. The Extensions Manager
opens, showing all installed extensions.
2. Select the check box for the extension to load and then close the Extensions Manager.
The extension should now be loaded to the project, which means that it is available to be selected
as an optimization method. You can select Extensions → View Log File to verify that the extension
loaded successfully.
Note:
The extension must be loaded separately to each project unless you have specified it as
a default extension in the Options window. You can also specify whether loaded extensions
are saved to your project. For more information, see Extensions in the Workbench User's
Guide.
In some cases, a necessary extension might not be available when a project is reopened. For example,
the extension could have been unloaded or the extension might not have been saved to the project
upon exit. When this happens, the option in the drop-down menu is replaced with an alternate label,
<extensionname>@<optimizername> or <extensionname>@<doename> instead of the actual
name of the external optimizer or DOE. You can still do postprocessing with the project for data ob-
tained when the problem was solved previously. However, the properties are read-only and further
calculations cannot be performed unless you reload the extension or select a different optimizer or
DOE type.
• If you reload the extension, the properties become editable and calculations can be performed. You have
the option of saving the extension to the project so it does not have to be reloaded again the next time
the project is opened.
• When you select a different optimization method or sampling type (one not defined in the missing exten-
sion), the alternate label disappears from the list of options.
Selecting File → Export Report generates the project report, which you can edit and save. For more
information, see Working with Project Reports in the Workbench User's Guide.
The project report begins with a summary, followed by separate sections for global, system, and com-
ponent information. The project report also has appendices.
Project Summary
The project summary includes the project name, date and time created, and product version.
Global Information
This section includes an image of the schematic and tables corresponding to the Files, Outline of All
Parameters, Design Points, and Properties panes.
System Information
A system information section exists in the project report for each design exploration system in the
project. For example, a project with Response Surface and Response Surface Optimization systems
has two system information sections.
Component Information
Each system information section contains subsections for the components (cells) in the system. For ex-
ample, because a Response Surface Optimization system has three cells, its system information section
has three subsections. Because a Parameters Correlation system has only one cell, its system information
section has only one subsection. Each component information subsection contains such data as para-
meter or model properties and charts.
Appendices
The final section of the project report consists of matrices, tables, and additional information related
to the project.
DesignXplorer Theory
When performing a design exploration, a theoretical understanding of the methods available is beneficial.
The underlying theory of the methods is categorized as follows:
Parameters Correlation Filtering Theory
Response Surface Theory
Goal-Driven Optimization Theory
Six Sigma Analysis (SSA) Theory
Theory References
• The relevance of the correlation value between input and output parameters
• The R2 contribution of input parameters in prediction of output parameter values (gain in prediction)
For each correlation value, DesignXplorer computes the p-value of this correlation.
The p-value of a correlation value allows DesignXplorer to quantify the relevance of the correlation.
• rXY = observed correlation between X (an input parameter) and Y (an output parameter)
The p-value corresponds to the probability of obtaining the same rXY assuming that the null hypo-
thesis is true. The p-value is given by .
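A standard form of this p-value, given only as a sketch (it assumes a two-sided t-test on n sample points and is not necessarily the exact expression implemented in DesignXplorer), is:

t = r_{XY} \sqrt{\frac{n-2}{1-r_{XY}^{2}}}, \qquad p = 2\left(1 - F_{t_{n-2}}(\lvert t \rvert)\right)

where F_{t_{n-2}} is the cumulative distribution function of Student's t distribution with n − 2 degrees of freedom.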
When trying to find a relationship between X and Y, two types of errors are possible:
• Type I Error (false positive): You think that there is a relationship between X and Y, when there is in fact
no relation between X and Y.
• Type II Error (false negative): You think that there is no relationship between X and Y, when there is in
fact a relationship between X and Y.
The larger the correlation value rXY , the less likely it is that the null hypothesis is true.
The relevance value of the correlation value exposed at the DesignXplorer level is a transformation
of the p-value from [0; 1] to [0; 1]:
When running a filtering method with a relevance threshold of 0.5 on the correlation value only,
each selected major input parameter must have a p-value of its correlation coefficient with at least
one filtering output parameter that is equal to or less than 0.05.
R2 Contribution
For each filtering output parameter, DesignXplorer builds an internal meta-model (polynomial response
surface) based on the sample points generated during the correlation update.
A first response surface is built based on the major input parameters returned by the filtering on
the correlation values.
Once a first response surface has been built, DesignXplorer tests the contribution of each input
parameter in the prediction of the output parameter, and refines the response surface by removing
the insignificant input parameters and by adding new significant input parameters.
• = R-squared of the response surface based on current major input parameters without the
major input parameter
• = R-squared of the response surface based on current major input parameters and the minor
input parameter
DesignXplorer computes the Fisher Test to test the significance of a major input parameter and the
gain in prediction:
Under the null hypothesis that the response surface does not provide a significantly better R-squared
than the response surface , the statistic has an F distribution, with the corresponding degrees of freedom.
The null hypothesis is rejected if the value calculated from the data is greater than the critical value of
the distribution for a false-rejection probability equal to 0.05. This means that if the p-value of
the Fisher Test is greater than 0.05, the major input parameter becomes a minor input parameter.
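As a sketch of the standard nested-model statistic behind such a test (the symbols are assumed here: R_1^2 is the R-squared of the larger model with p regression terms, R_0^2 that of the reduced model, q the number of terms added or removed, and n the number of sample points):

F = \frac{(R_1^{2} - R_0^{2})/q}{(1 - R_1^{2})/(n - p - 1)}

Under the null hypothesis, F follows an F distribution with q and n − p − 1 degrees of freedom.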
DesignXplorer computes the Fisher Test to test the significance of a major input parameter and the
gain in prediction:
Under the null hypothesis that the response surface does not provide a significantly better R-squared
than the response surface , the statistic has an F distribution, with the corresponding degrees of freedom.
For each output parameter, DesignXplorer computes the contribution of each input parameter
( or ).
For a given output, to compute the relevance of contribution, DesignXplorer scales contribution
by the best known contribution for this output, and transforms this value with the following
function:
• = relevance of the i-th input parameter on the j-th filtering output parameter
• = best relevance of the i-th input parameter for any filtering output parameter
The list of major inputs corresponds to the input parameters whose relevance is greater than or
equal to the relevance threshold.
If the size of the list exceeds the maximum allowed number of input parameters, DesignXplorer selects
only the input parameters with the best relevance.
A very simple designed experiment is the screening design. In this design, a permutation of the lower
and upper limits (two levels) of each input variable (factor) is considered to study their effect on the
output variable of interest. While this design is simple and popular in industrial experimentation, it only
captures a linear effect, if any, between the input variables and the output variables. Furthermore, the
effect of an interaction between any two input variables on the output variables, if any, cannot be
characterized.
To compensate for this insufficiency of the screening design, it is enhanced to include the center point
of each input variable in the experiments. The center point of each input variable allows a quadratic
effect (a minimum or maximum inside the explored space) between input variables and output variables
to be identified, if one exists. This enhancement is commonly known as a response surface design
because it provides a quadratic model of the responses.
The quadratic response model can be calibrated using a full-factorial design (all combinations of each
level of each input variable) with three or more levels. However, full-factorial designs generally require
more samples than necessary to accurately estimate model parameters. To address this deficiency,
statistical procedures have been developed to devise much more efficient experiment designs that use
three or five levels of each factor, but not all combinations of levels; these are known as fractional
factorial designs. Among these fractional factorial designs, the two most popular DOE types (p. 314)
are Central Composite Designs (CCDs) (p. 314) and Box-Behnken designs (p. 316).
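For orientation, the number of sample points in a classical CCD with k factors built on a fractional factorial core of fraction f is (a sketch of the usual count only; the exact point counts generated by DesignXplorer also depend on the design type and template):

N = 2^{k-f} + 2k + 1

that is, the factorial (or fractional factorial) corner points, two axial points per factor, and one center point.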
Example 1: Circumscribed
Inscribed CCDs, on the contrary, are designed using as "true" physical lower and upper
limits for experiments. The five-level coded values of inscribed CCDs are evaluated by scaling down
CCDs by the value of evaluated from circumscribed CCDs. For the inscribed CCDs, the five-level
coded values are labeled as . The following is a geometrical representation
of an inscribed CCD of three factors:
Example 2: Inscribed
Face-centered CCDs are a special case of CCDs in which the axial (star) distance is equal to 1. As a result, the face-centered CCDs
become a three-level design that is located at the center of each face formed by any two factors.
The following is a geometrical representation of a face-centered CCD of three factors:
Example 3: Face-Centered
Box-Behnken Design
Unlike the CCD, the Box-Behnken Design is quadratic and does not contain embedded factorial or
fractional factorial design. As a result, the Box-Behnken Design has a limited capability of orthogonal
blocking, compared to CCD. The main difference of Box-Behnken Design from CCD is that Box-
Behnken is a three-level quadratic design in which the explored space of factors is represented by
. The "true" physical lower and upper limits corresponding to . In this design,
however, the sample combinations are treated such that they are located at midpoints of edges
formed by any two factors. The following is a geometry representation of Box-Behnken designs of
three factors:
The circumscribed CCD generally provides a high quality of response prediction. However, the
circumscribed CCD requires factor levels to be set outside the physical range. Due to physical and
economic constraints, some industrial studies prohibit the use of corner points, where all factors are
at an extreme. In such cases, the factor spacing/range can be planned in advance so that each coded
factor falls within feasible levels.
The inscribed CCD uses points only within the specified factor levels. This means that it does not
have the "corner point constraint" issue that the circumscribed CCD has. However, the improvement
compromises the accuracy of the response prediction near the lower and upper limits of each
factor. While the inscribed CCD provides a good response prediction over the central subset of
factor space, the quality is not as high as the response prediction provided by the circumscribed CCD.
Like the inscribed CCD, the face-centered CCD does not require points outside the original factor
range. Compared to the inscribed CCD, the face-centered CCD provides a relatively high quality of
response prediction over the entire explored factor space. The drawback of the face-centered CCD
is that it gives a poor estimate of the pure quadratic coefficient, which is the coefficient of square
term of a factor.
For relatively the same accuracy, Box-Behnken Design is more efficient than CCD in cases involving
three or four factors because it requires fewer treatments of factor level combinations. However,
like the inscribed CCD, the prediction at the extremes (corner points) is poor. The property of
"missing corners" can be useful if these should be avoided due to physical or economic constraints,
because the potential for data loss in those cases is avoided.
For information on the Neural Network response surface, see Neural Network (p. 104).
Genetic Aggregation
The Genetic Aggregation response surface can be written as an ensemble using a weighted average
of different metamodels.
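In the standard weighted-ensemble form (a sketch; the symbols are assumed rather than taken from the guide's own notation):

\hat{y}_{ens}(x) = \sum_{i=1}^{N_m} w_i \, \hat{y}_i(x), \qquad \sum_{i=1}^{N_m} w_i = 1, \quad w_i \ge 0

where \hat{y}_i is the prediction of the i-th metamodel, w_i its weight factor, and N_m the number of metamodels in the ensemble.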
To estimate the best weight factor values, DesignXplorer minimizes the Root Mean Square Error
(RMSE) of the Design of Experiments points (or design points) on , and the RMSE of the same
design points based on the cross-validation of .
With:
where:
= design point
= output parameter value at
= prediction of the response surface built without the design point
= number of design points
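Using the quantities just listed (written here in an assumed notation as a sketch), the two error measures can be expressed as:

\mathrm{RMSE}_i = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \left(y_j - \hat{y}_i(x_j)\right)^{2}}, \qquad \mathrm{RMSE}^{cv}_i = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \left(y_j - \hat{y}_i^{(-j)}(x_j)\right)^{2}}

where x_j is the j-th design point, y_j its output parameter value, \hat{y}_i the i-th response surface, and \hat{y}_i^{(-j)} the same response surface built without the j-th design point.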
Cross-Validation
DesignXplorer uses the Leave-One-Out and K-Fold cross-validation methods.
Leave-One-Out Method:
• For a given i-th response surface, DesignXplorer computes N sub-metamodels, where each sub-
response surface corresponds to the i-th response surface fitted to N − 1 design points.
• The cross-validation error of the j-th design point is the error at this point of the sub-response
surface built without the j-th design point.
K-Fold Method:
• Builds k sub-metamodels of the i-th response surface, where each sub-response surface corresponds
to the i-th response surface fitted to N − N /k design points.
• The cross-validation error at the j-th design point is the error at this point of the sub-response surface
built without the subset of N /k design points containing the j-th design point.
• To improve the relevance of the k-fold strategy, the N /k design points used as validation points of
each fold are selected by using the maximum of minimum distance.
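The following short Python sketch illustrates the Leave-One-Out idea on a simple least-squares surrogate (this is an illustration of the method only, not DesignXplorer code; the data and the quadratic model are hypothetical):

   import numpy as np

   # Hypothetical 1-input DOE: input values and observed output values.
   x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
   y = np.array([1.0, 1.4, 2.1, 2.9, 4.2])

   def fit_quadratic(xs, ys):
       # Least-squares fit of y = a*x^2 + b*x + c, returning the coefficients.
       return np.polyfit(xs, ys, 2)

   cv_errors = []
   for j in range(len(x)):
       mask = np.arange(len(x)) != j          # leave the j-th design point out
       coeffs = fit_quadratic(x[mask], y[mask])
       prediction = np.polyval(coeffs, x[j])  # predict the left-out point
       cv_errors.append(y[j] - prediction)

   rmse_cv = np.sqrt(np.mean(np.square(cv_errors)))
   print("Leave-One-Out cross-validation RMSE:", rmse_cv)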
If you denote the cross-validation error of the i-th response surface at the j-th design point (that is,
the error of the sub-response surface built without the j-th design point), then:
DesignXplorer computes the weight factor values analytically. This computation is based on a similar
approach proposed by Viana et al. (2009).
where 1 is the identity matrix and C the matrix of the mean square errors:
With:
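In the form published by Viana et al. (2009), the weight vector is (a sketch in that reference's notation, where 1 is a vector of ones with one entry per metamodel and C is assembled from the cross-validation errors):

w = \frac{C^{-1}\mathbf{1}}{\mathbf{1}^{T} C^{-1} \mathbf{1}}, \qquad C_{ik} \approx \frac{1}{N}\, e_i^{T} e_k

where e_i is the vector of cross-validation errors of the i-th metamodel at the N design points.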
Genetic Algorithm
There are many types of metamodels, including Polynomial Regression, Kriging, Support Vector
Regression, and Moving Least Squares. For each response surface, there are different settings. For
example, on the Kriging response surface, you can control the type of kernel (Gaussian, exponential,
and so on), the type of kernel variation (anisotropic or isotropic), and the type of polynomial regres-
sion (linear, quadratic, and so on).
To increase the chance of getting the most effective response surface, DesignXplorer generates a
population of metamodels with different types and settings. This population corresponds to the
first population of the genetic algorithm run by DesignXplorer. The next populations are obtained
by cross-over and mutation of the previous population.
Cross-Over Operator
There are two types of cross-over. The first one is the cross-over between two response surfaces
of the same type (for example, two Kriging), and the second one is the cross-over between two
response surfaces of different types (for example, a Kriging and a polynomial regression).
In the first case, DesignXplorer exchanges a part of settings from the first parent to the second
parent (for example, the exchange of kernel type between two Kriging response surfaces).
In the second case, DesignXplorer creates a new response surface (an ensemble), which is a com-
bination of the two parents (for example, the combination of a Kriging and a polynomial regression
response surface).
Mutation Operator
DesignXplorer mutates one or several settings of the response surface (or of the response surfaces,
in the case of a combination of response surfaces).
To maintain a diversity of response surfaces, the genetic algorithm removes some response surfaces
of a type that is over-represented in the population, while retaining response surfaces of types that
are less represented.
In the best case, the population contains metamodels that are similar in terms of prediction accuracy
but whose predicted values differ: this increases the chance of error cancellation in the ensemble.
For external references, see Genetic Aggregation Response Surface References (p. 386).
Automatic Refinement
Gaussian Process regression methods, such as Kriging, provide a prediction distribution. The genetic
aggregation can contain several surrogate models that do not afford a local prediction distribution.
Ben Salem et al. [1] introduced a universal prediction (UP) for all surrogate models. This distribution
relies on cross-validation sub-model predictions. Based on this empirical distribution, they define
an adaptive sampling technique for global refinement (UP-SMART).
The Genetic Aggregation refinement is a hybrid variant of UP-SMART that consists of adding, at each
step, a point:
With:
• = distance penalization
Bibliography
[1] M. Ben Salem, O. Roustant, F. Gamboa, & L. Tomaso (2017). "Universal prediction distribution for
surrogate models." SIAM/ASA Journal on Uncertainty Quantification, 5(1), 1086-1109.
The following figure represents the absolute predicted error of a response surface with only one
input parameter:
For N refinement points to submit simultaneously, the generation of the i-th refinement point
depends on the (i-1) first pending refinement points, with i > 2. The response surface is updated
only when all N refinement points have been generated.
While the previous figure shows that the first refinement point is based on the APE, in the next
figure, refinement points are based on the absolute weighted predicted error (AWPE), which takes
into account the influence of the pending refinement points on the response surface. AWPE is a
transformation of APE that reduces the predicted error around the pending refinement point to
favor the domain with a high predicted error and a low density of pending refinement points. The
i-th refinement point, with i > 1, is selected as the worst point with the maximum AWPE.
Algorithm Summary:
3. j = 0
   c. i = 2
      iii. i = i + 1
   h. j = j + N
where:
• W(X’) is equal to 1 in the full input parameter space except close to point X’, where W decreases to be equal to 0 at X’
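The following Python sketch illustrates the AWPE transformation described above. The linear ramp used for W and the radius parameter are assumptions chosen only to show the mechanism, not DesignXplorer's exact weighting function.

    import numpy as np

    def awpe(x, ape, pending_points, radius):
        # Illustrative sketch: multiply the absolute predicted error APE(x) by a
        # weight W(x') for each pending refinement point x'. W is 1 far from x'
        # and decreases to 0 at x', so regions near pending points are penalized.
        value = ape(x)
        for pending in pending_points:
            distance = np.linalg.norm(np.asarray(x) - np.asarray(pending))
            value *= min(1.0, distance / radius)  # assumed linear ramp from 0 to 1
        return value

The i-th refinement point is then chosen as the candidate with the maximum AWPE value.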
General Definitions
The error sum of squares SSE is:
(1)
where:
For a linear regression analysis, the relationship between these sums of squares is:
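In standard notation (a generic restatement with observed values y_i, fitted values \hat{y}_i, and mean \bar{y}; the exact symbols used by DesignXplorer may differ), these sums of squares are:

    SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2,   SSR = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2,   SST = \sum_{i=1}^{n} (y_i - \bar{y})^2,

and, for a linear regression fit, SST = SSR + SSE.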
For a linear regression analysis, the regression model at any sampled location , with in the
-dimensional space of the input variables can be written as:
where:
= row vector of regression terms of the response surface model at the sampled
location
Kriging
Kriging postulates a combination of a polynomial model plus departures of the form given by:
While "globally" approximates the design space, creates "localized" deviations so that
the Kriging model interpolates the sample data points. The covariance matrix of is given
by:
where is the correlation matrix and is the spatial correlation of the function between
any two of the sample points and . is an symmetric, positive definite matrix with
ones along the diagonal. The correlation function is a Gaussian correlation function:
The in are the unknown parameters used to fit the model, is the number of design variables,
and and are the components of sample points and . In some cases, using a single
correlation parameter gives sufficiently good results. You can specify the use of a single correlation
parameter or one correlation parameter for each design variable in the Options dialog box. Select
Tools → Options → Design Exploration → Response Surface and then, under Kriging Options,
make the desired selection for Kernel Variation Type.
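As an illustration of the two kernel variation choices, the following Python sketch builds a Gaussian correlation matrix from either a single correlation parameter or one parameter per design variable. The function name and array layout are assumptions for illustration, not DesignXplorer's implementation.

    import numpy as np

    def gaussian_correlation_matrix(X, theta):
        # X is an n-by-k array of sample points; theta is either a scalar
        # (single correlation parameter) or a length-k vector (one parameter
        # per design variable). Entry (i, j) is
        # exp(-sum_k theta_k * (x_i^k - x_j^k)^2), so the matrix is symmetric
        # with ones on the diagonal.
        theta = np.atleast_1d(np.asarray(theta, dtype=float))
        diff = X[:, None, :] - X[None, :, :]
        return np.exp(-np.sum(theta * diff ** 2, axis=-1))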
can be written:
Predicted Error
Kriging, or Gaussian process regression (GPR), is widely used, especially in spatial statistics. It is
based on the early works of Krige [1]. The mathematical framework can be found in [2,3]. Kriging
models predict the outputs of a function f: based on a set of n observations. Within the GP
framework, the posterior distribution is given by the conditional distribution of Y given the obser-
vations:
In the Gaussian process framework, Kriging or Gaussian process regression provides a mean function
(prediction) and a predicted variance that DesignXplorer uses to assess the local prediction.
The general form assumes that the true model response is a realization of a Gaussian process de-
scribed by the following equation:
In this case, the Kriging predictions (m) and variance ( ) are given by:
The standard deviation is used as a tool to assess the errors. In fact, for a Gaussian distribution,
predictions within three standard deviations of the mean cover 99.7% of the possible predictions. The predicted error displayed in DesignXplorer is three times the standard deviation.
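In generic Gaussian-process notation (stated here as standard background, not necessarily the exact expressions used by DesignXplorer), with covariance matrix K of the n observations y, covariance vector k(x) between x and the observations, and prior mean \mu, the conditional mean and variance are:

    m(x) = \mu(x) + k(x)^T K^{-1} (y - \mu),
    s^2(x) = k(x, x) - k(x)^T K^{-1} k(x).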
For a Genetic Aggregation response surface, the predicted error is quantified using the universal prediction (UP) distribution [4]. The UP distribution can be applied to any surrogate model. It is based on a weighted empirical probability measure supported by the CV submodel predictions.
As indicated in Automatic Refinement (p. 320) in the genetic aggregation theory section, the predicted
error is the criterion Y.
Bibliography
[1] D. G. Krige. "A Statistical Approach to Some Basic Mine Valuation Problems on the Witwatersrand."
Journal of the Chemical, Metallurgical and Mining Society of South Africa, 52:119–139, 1951
[3] M. L. Stein. "Interpolation of Spatial Data: Some Theory for Kriging." Springer Science & Business
Media, New York, 2012.
[4] M. Ben Salem, O. Roustant, F. Gamboa, & L. Tomaso (2017). "Universal prediction distribution for
surrogate models." SIAM/ASA Journal on Uncertainty Quantification, 5(1), 1086-1109.
Non-Parametric Regression
Let the input sample (as generated from a DOE method) be , where each
is an -dimensional vector and represents an input variable. The objective is to determine the
equation of the form:
(2)
(3)
where: is the kernel map and the quantities and are Lagrange multipliers whose derivations are shown in later sections.
To determine the Lagrange multipliers, you start with the assumption that the weight vector
must be minimized such that all (or most) of the sample points lie within an error zone around the
fitted surface. The following figure provides a simple demonstration. It fits a regression line for a
group of sample points with a tolerance of ε, which is characterized by slack variables ξ* and ξ.
(4)
where is an arbitrary constant. To characterize the tolerance properly, you define a loss
function in the interval (-ε, ε), which de facto becomes the residual of the solution. In the present
implementation, you use the ε-insensitive loss function, which is given by the equation:
(5)
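The standard form of the ε-insensitive loss (a generic restatement consistent with the description above) is:

    L_ε(y - f(x)) = 0                    if |y - f(x)| ≤ ε,
    L_ε(y - f(x)) = |y - f(x)| - ε       otherwise,

so residuals inside the ε-tube around the fitted surface are not penalized.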
The primal problem in Equation 4 (p. 329) can be rewritten as the following using generalized loss
functions:
(6)
To solve this efficiently, a Lagrangian dual formulation is done on this to yield the following expres-
sion:
(7)
After some simplification, the dual Lagrangian can be written as the following Equation 8 (p. 330):
(8)
This is a constrained quadratic optimization problem, and the design variables are the vector . Once
this is computed, the constant can be obtained by the application of the Karush-Kuhn-Tucker (KKT)
conditions. When the ε-insensitive loss functions are used instead of the generic loss functions l( ),
this can be rewritten in a much simpler form as the following:
(9)
which is to be solved by a QP optimizer to yield the Lagrange multipliers and the constant
is obtained by applying the KKT conditions.
Sparse Grid
The Sparse Grid metamodeling is a hierarchical Sparse Grid interpolation algorithm based on
piecewise multilinear basis functions.
Piecewise linear hierarchical basis (from level 0 to level 3):
The calculation of the coefficient values associated with a piecewise linear basis is hierarchical: the coefficients are obtained by the differences between the values of the objective function and the evaluation of the current Sparse Grid interpolation.
For a multi-dimensional problem, the Sparse Grid metamodeling is based on piecewise multilinear
basis functions that are obtained by a sparse tensor product construction of one-dimensional multi-
level basis.
The generation of the Sparse Grid is obtained by the following tensor product:
Example 6: Tensor Product Approach to Generate the Piecewise Bilinear Basis Functions W2,0
To generate a new Sparse Grid, any Sparse Grid that meets the order relation must be generated first.
For example, in a two-dimensional problem, the generation of the grid requires several steps:
1. Generation of
2. Generation of and
3. Generation of n and
The calculation of the coefficient values associated with a piecewise multilinear basis is similar to the
calculation of the coefficients of a linear basis: the coefficients are obtained by the differences between
the values of the objective function on the new grid and the evaluation (of the same grid) with the
current Sparse Grid interpolation (based on the old grids).
For a higher-dimensional problem, you can observe that not all input variables carry equal weight.
A regular Sparse Grid refinement can lead to too many support nodes. This is why the Sparse Grid
metamodeling uses a dimension-adaptive algorithm to automatically detect separability and which
dimensions are more or less important, in order to reduce the computational effort for the objective
functions.
The hierarchical structure is used to obtain an estimate of the current approximation error. This
current approximation error is used to choose the relevant direction in which to refine the Sparse Grids. If
the approximation error has been found with Sparse Grid Wl, the next iteration consists in the
generation of new Sparse Grids obtained by incrementing each dimension level of Wl (one by
one) as far as possible: the refinement of Wl can generate two new Sparse Grids
(if the prerequisite grids already exist).
The Sparse Grid metamodeling stops automatically when the desired accuracy is reached or when
the maximum depth is met in all directions (the maximum depth corresponds to the maximum
number of hierarchical interpolation levels to compute: if the maximum depth is reached in one
direction, the direction is not refined further).
Each new generation of the Sparse Grid provides as many linear basis functions as there are
discretization points.
All Sparse Grids generated by the tensor product contain only one point, which allows the refinement
to be more local.
Sparse Grid metamodeling is more efficient with a more local refinement process that uses fewer
design points and reaches the requested accuracy faster.
• Screening
• MOGA
• NLPQL
• MISQP
• Adaptive Single-Objective
• Adaptive Multiple-Objective
The Screening, MISQP, and MOGA methods can be used with discrete parameters. The Screening, MISQP,
MOGA, Adaptive Multiple-Objective, and Adaptive Single-Objective methods can be used with continuous
parameters with manufacturable values.
The GDO process allows you to determine the input parameter values that best satisfy objectives applied
to the output parameters. For example, in a structural engineering design problem, you can determine
the combination of design parameters that best satisfies minimum mass, maximum natural frequency,
maximum buckling and shear strengths, and minimum cost, with maximum value constraints on the
von Mises stress and maximum displacement.
This section describes GDO and its use in performing single- and multiple-objective optimization.
GDO Principles
GDO Guidelines and Best Practices
Goal-Driven Optimization Theory
GDO Principles
You can apply goal-driven optimization to design optimization by using any of the following methods:
• Screening. This is a non-iterative direct sampling method that uses a quasi-random number gener-
ator based on the Hammersley algorithm.
• MOGA. This is an iterative Multi-Objective Genetic Algorithm that can optimize problems with con-
tinuous input parameters.
– Part of the population is simulated by evaluations of the Kriging response surface, which is con-
structed of all design points submitted by MOGA.
MOGA is better for calculating the global optima, while NLPQL and MISQP are gradient-based al-
gorithms ideally suited for local optimization. So you can start with Screening or MOGA to locate the
multiple tentative optima and then refine with NLPQL or MISQP to zoom in on the individual local
maximum or minimum value.
The GDO framework uses a Decision Support Process (DSP) based on satisfying criteria as applied to
the parameter attributes using a weighted aggregate method. In effect, the DSP can be viewed as a
postprocessing action on the Pareto fronts as generated from the results of the various optimization
methods.
Usually Screening is used for preliminary design, which can lead you to apply one of the other ap-
proaches for more refined optimization results. Note that running a new optimization causes a new
sample set to be generated.
In either approach, the Tradeoff chart, as applied to the resulting sample set, shows the Pareto-
dominant solutions. However, in MOGA, the Pareto fronts are better articulated and most of the
feasible solutions lie on the first front, as opposed to the usual results of Screening, where the solutions
are distributed across all the Pareto fronts. This is illustrated in the following two figures.
This first figure shows the 6,000 sample points generated by the Screening method.
This second figure shows a final sample set for the same problem after 5,400 evaluations by MOGA.
The following figure demonstrates the necessity of generating Pareto fronts. The optimal Pareto front
shows two non-dominated solutions. The first Pareto front or Pareto frontier is the list of non-dominated
points for the optimization.
A dominated point is a point that, when considered in regard to another point, is not the best solution
for any of the optimization objectives. For example, if point A and point B are both defined, point B
is a dominated point when point A is the better solution for all objectives.
In this example, the two axes represent two output parameters with conflicting objectives: the X axis
represents Minimize P6 (WB_V), and the Y axis represents Maximize P9 (WB_BUCK). The chart shows
two optimal solutions, point 1 and point 2. These solutions are non-dominated, which means that
both points are equally good in terms of Pareto optimality, but for different objectives. Point 1 is the
better solution for Minimize P6 (WB_V), and point 2 is the better solution for Maximize P9
(WB_BUCK). Neither point is strictly dominated by any other point, so both are included on the first
Pareto front.
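The dominance test and the extraction of the first Pareto front can be illustrated with the following Python sketch (objective vectors are assumed to be minimized; negate any objective that is maximized):

    def dominates(a, b):
        # a dominates b if a is at least as good in every objective and
        # strictly better in at least one (all objectives minimized).
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def first_pareto_front(points):
        # The first Pareto front is the set of points not dominated by any other point.
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]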
Convergence Rate % and Initial Finite Difference Delta % in NLPQL and MISQP
Typically, the use of NLPQL or MISQP is suggested for continuous problems when there is only one
objective function. The problem might or might not be constrained and must be analytic. This
means that the problem must be defined only by continuous input parameters and that the objective
functions and constraints should not exhibit sudden "jumps" in their domain.
The main difference between these algorithms and MOGA is that MOGA is designed to work with
multiple objectives and does not require full continuity of the output parameters. However, for
continuous single objective problems, the use of NLPQL or MISQP gives greater accuracy of the
solution as gradient information and line search methods are used in the optimization iterations.
MOGA is a global optimizer designed to avoid local optima traps, while NLPQL and MISQP are local
optimizers designed for accuracy.
For NLPQL and MISQP, the default convergence rate, which is specified by the Allowable Convergence (%) property, is set to 0.1% for a Direct Optimization system and 0.0001% for a Response
Surface Optimization system. The maximum value for this property is 100%. The convergence check is computed
based on the (normalized) Karush-Kuhn-Tucker (KKT) condition. This implies that the fastest convergence rate of the gradients or the functions (objective function and constraints) determines the termination of the algorithm.
The default convergence rate is used in conjunction with the initial finite difference delta percentage
value, which is specified by the Initial Finite Difference Delta (%) advanced property. This property
defaults to 1% for a Direct Optimization system and 0.001% for a Response Surface Optimization
system. You use this property to specify a percentage of the variation between design points to
ensure that the Delta used in the calculation of finite differences is large enough to be seen over
simulation noise. The specified percentage is defined as a relative gradient perturbation between
design points.
The advantage of this approach is that for large problems, it is possible to get a near-optimal
feasible solution quickly without being trapped into a series of iterations involving small solution
steps near the optima. To work most effectively with NLPQL and MISQP, keep the following guidelines
in mind:
• If the Initial Finite Difference Delta (%) is greater than the Allowable Convergence (%), the relative
gradient perturbation gets iteratively smaller, until it matches the allowable convergence rate. At this
point, the relative gradient value stays the same through the rest of the analysis.
• If the Initial Finite Difference Delta (%) is less than or equal to the Allowable Convergence (%),
the current relative gradient step remains constant through the rest of the analysis.
• Both the Initial Finite Difference Delta (%) and Allowable Convergence (%) should be higher than
the magnitude of the noise in your simulation.
When setting the values for these properties, you have the usual trade-offs between speed and
accuracy. Smaller values result in more convergence iterations and a more accurate (but slower)
solution, while larger values result in fewer convergence iterations and a less accurate (but faster)
solution. At the same time, however, you must be aware of the amount of noise in your model.
For the input variable variations to be visible in the output variables, both values must be
greater than the magnitude of the simulation's noise.
In general, DesignXplorer's default values for Initial Finite Difference Delta (%) and Allowable
Convergence (%) cover the majority of optimization problems. For example, if you know that
the noise magnitude in your direct optimization problem is 0.0001, then the default values (Al-
lowable Convergence (%) = 0.001 and Initial Finite Difference Delta (%) = 0.01) are good.
When the defaults are not a good match for your problem, of course, you can adjust the values
to better suit your model and your simulation needs. If you require a more numerically accurate
solution, you can set the convergence rate to as low as 1.0E-10% and then set the Initial Finite
Difference Delta (%) accordingly.
Note:
An optimization problem is undefined when none of its parameters have an objective defined. An
optimization problem can also be unable to satisfy a constraint.
In the Project Schematic, both the state and the Quick Help for the Optimization cell would indicate
that input is required. Likewise, in the Outline view for the cell, both the state and Quick Help for
the root node would indicate that input is required. If you tried to update the optimization with
no objective set, an error message would display in the Message view.
As demonstrated by the following figure, some combinations of Constraint Type, Lower Bound,
and possibly Upper Bound settings specify constraints that cannot be satisfied. The results obtained
from these settings indicate only the boundaries of the feasible region in the design space. Currently,
only the Screening method can solve a pure constraint satisfaction problem.
In a Screening optimization, sample generation is driven by the domain definition, which is the
lower and upper bounds for the parameters. Sample generation is not affected by parameter objective
and constraint settings, unlike non-Screening optimization methods. However, parameter objective
and constraint settings do affect the generation of candidate points.
Typically, the Screening method is best suited for conducting a preliminary design study. It is a
low-resolution, fast, and exhaustive study that you can use to quickly locate approximate solutions.
Because the Screening method does not depend on any parameter objectives, you can change the
objectives after performing the analysis to view the candidates that meet different objective sets,
allowing you to quickly perform preliminary design studies. It's easy to keep changing the objectives
and constraints to view the different corresponding candidates, which are drawn from the original
sample set.
For example, you can run a Screening optimization for 3,000 samples and then use Tradeoff or
Samples charts to view the Pareto fronts. You can use the solutions slider in the Properties view
for the chart to display only the prominent points in which you are interested, which are usually
the first few fronts. When you run MOGA or Adaptive Multiple-Objective, you can limit the number
of Pareto fronts that are computed in the analysis.
Once all of your objectives are defined, update the Optimization cell to generate up to the requested
number of candidate points. You can save any of the candidates by right-clicking and selecting
Explore Response Surface at Points, Insert as Design Points, or Insert as Refinement Points.
When working with a Response Surface Optimization system, you should validate the best obtained
candidate results by saving the corresponding design points, solving them at the project level, and
comparing the results.
subject to:
The symbols and denote the vectors of the continuous and the integer variables, respectively.
DesignXplorer allows you to define a constrained design space with linear or non-linear constraints.
The constraint sampling is a heuristic method based on Shifted-Hammersley (Screening) and MISQP
sampling methods.
For a given screening of sample points generated in the hypercube of input parameters, only a
part of this sampling is within the constrained design space (feasible domain).
Otherwise, DesignXplorer needs to create additional points to reach the requested number of sample points.
The Screening method is not guaranteed to find either enough sample points or at least one feasible
point. For this reason, DesignXplorer solves an MISQP problem for each constraint on input para-
meters:
subject to:
subject to:
Once the point closest to the center of mass of the feasible points has been found, DesignXplorer
projects part of the infeasible points onto the skin of the feasible domain.
Given an infeasible point and a feasible point, you can build a new feasible point:
To find a point close to the skin of the feasible domain, the algorithm starts with a factor equal to 1 and decreases
its value until the new point is feasible.
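A minimal Python sketch of this projection, assuming the new point is a convex combination of the feasible and infeasible points and that the factor is decreased in fixed steps (the step size is an assumption for illustration):

    import numpy as np

    def project_to_skin(infeasible, feasible, is_feasible, step=0.05):
        # Start at t = 1 (the infeasible point) and decrease t until the point
        # x(t) = feasible + t * (infeasible - feasible) satisfies the constraints,
        # leaving it close to the boundary (skin) of the feasible domain.
        infeasible = np.asarray(infeasible, dtype=float)
        feasible = np.asarray(feasible, dtype=float)
        t = 1.0
        point = infeasible.copy()
        while t > 0.0 and not is_feasible(point):
            t -= step
            point = feasible + t * (infeasible - feasible)
        return point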
To optimize the distribution of points on the skin of the feasible domain, the algorithm chooses the
furthermost infeasible points in terms of angles:
Once enough points have been generated on the skin of the feasible domain, the algorithm generates
internal points. The internal points are obtained by combination of internal points and points
generated on the skin of the feasible domain.
The conventional Hammersley sampling algorithm is constructed by using the radical inverse
function. Any integer can be represented as a sequence of digits by the following
equation:
(10)
For example, consider the integer 687459, which can be represented this way as , and
so on. Because this integer is represented with radix 10, you can write it as
and so on. In general, for a radix representation, the equation
is:
(11)
The inverse radical function is defined as the function that generates a fraction in (0, 1) by reversing
the order of the digits in Equation 11 (p. 343) about the decimal point, as shown.
(12)
Thus, for a -dimensional search space, the Hammersley points are given by the following expression:
(13)
where indicates the sample points. Now, from the plot of these points, it is seen that the
first row (corresponding to the first sample point) of the Hammersley matrix is zero and the last
row is not 1. This implies that, for the -dimensional hypercube, the Hammersley sampler generates
a block of points that are skewed more toward the origin of the cube and away from the far edges
and faces. To compensate for this bias, a point-shifting process is proposed that shifts all Hammersley
points by the amount that follows:
(14)
This moves the point set more toward the center of the search space and avoids unnecessary bias.
Thus, the initial population always provides unbiased, low-discrepancy coverage of the search space.
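The radical inverse and the conventional Hammersley construction can be sketched in Python as follows. The shift of 1/(2N) applied to every coordinate is an assumption used only to illustrate the point-shifting idea, not the exact shift amount used by DesignXplorer.

    def radical_inverse(i, base):
        # Reverse the base-`base` digits of the integer i about the decimal
        # point, producing a fraction in [0, 1).
        result, factor = 0.0, 1.0 / base
        while i > 0:
            result += (i % base) * factor
            i //= base
            factor /= base
        return result

    def shifted_hammersley(n, dim, primes=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)):
        # Conventional Hammersley points: the first coordinate is i/n and the
        # remaining coordinates are radical inverses in successive prime bases.
        # The 1/(2n) shift moves the block of points toward the center of the
        # unit hypercube (an illustrative choice of shift amount).
        shift = 0.5 / n
        points = []
        for i in range(n):
            coords = [i / n + shift]
            coords += [radical_inverse(i, primes[d]) + shift for d in range(dim - 1)]
            points.append(coords)
        return points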
minimize:
subject to:
where:
It is assumed that the objective function and constraints are continuously differentiable. The idea is
to generate a sequence of quadratic programming subproblems obtained by a quadratic approx-
imation of the Lagrangian function and a linearization of the constraints. Second order information
is updated by a quasi-Newton formula and the method is stabilized by an additional (Armijo) line
search.
The method presupposes that the problem size is not too large and that it is well-scaled. Also,
the accuracy of the methods depends on the accuracy of the gradients. Because analytical
gradients are unavailable for most practical problems, it is imperative that the numerical (finite
difference based) gradients are as accurate as possible.
Before the actual derivation of the NLPQL equations, Newton's iterative method for the solution
of nonlinear equation sets is reviewed. Let be a multivariable function such that it can be
expanded about the point in a Taylor's series.
(15)
where it is assumed that the Taylor series actually models a local area of the function by a
quadratic approximation. The objective is to devise an iterative scheme by linearizing the vector
Equation 15 (p. 345). To this end, it is assumed that at the end of the iterative cycle, the Equa-
tion 15 (p. 345) would be exactly valid. This implies that the first variation of the following expres-
sion with respect to Δx must be zero.
(16)
The first expression indicates the first variation of the converged solution with respect to the in-
crement in the independent variable vector. This gradient is necessarily zero because the converged
solution clearly does not depend on the step-length. Therefore, Equation 17 (p. 345) can be written
as the following:
(18)
where the index indicates the iteration. Equation 18 (p. 345) is therefore used in the main
quadratic programming scheme.
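Written out in generic notation (a standard statement of the Newton step consistent with the quadratic approximation described above), the iterative scheme is:

    \Delta x_k = -\left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k), \qquad x_{k+1} = x_k + \Delta x_k.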
NLPQL Derivation:
Consider the following single-objective nonlinear optimization problem. It is assumed that the
problem is smooth and analytic throughout and is a problem of decision variables.
minimize:
subject to:
where:
(19)
where and are the numbers of inequality and equality constraints. In many cases the inequality
constraints are bounded above and below; in such cases, it is customary to split the constraint into
two inequality constraints. To approximate the quadratic sub-problem assuming the presence of
only equality constraints, the Lagrangian for Equation 19 (p. 346) is written as:
(20)
where is a non-zero vector of Lagrange multipliers. Thus,
Equation 20 (p. 346) becomes a functional, which depends on two sets of independent vectors.
To minimize this expression, you seek the stationarity of this functional with respect to the two
sets of vectors. These expressions give rise to two sets of vector expressions as the following:
(21)
Equation 21 (p. 346) defines the Karush-Kuhn-Tucker (KKT) conditions that are the necessary con-
ditions for the existence of the optimal point. The first equation is composed of nonlinear al-
gebraic equations and the second one is made up of nonlinear algebraic equations. The matrix
is a matrix defined as the following:
(22)
Thus, in Equation 22 (p. 346), each column is a gradient of the corresponding equality constraint.
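A standard restatement of the Lagrangian, its stationarity conditions, and the constraint-gradient matrix in generic symbols (the document's own notation may differ) is:

    L(x, \lambda) = f(x) + \lambda^T h(x),
    \nabla_x L = \nabla f(x) + A(x)\,\lambda = 0, \qquad \nabla_\lambda L = h(x) = 0,
    A(x) = \left[\nabla h_1(x), \ldots, \nabla h_m(x)\right],

where h(x) = 0 collects the equality constraints and each column of A(x) is the gradient of one constraint.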
For convenience, the nonlinear equations of Equation 21 (p. 346) can be written as the following:
(23)
where Equation 23 (p. 346) is a system of nonlinear equations. The independent variable
set can be written as:
(24)
(25)
Referring to the section on Newton based methods, the vector in Equation 24 (p. 346) is
updated as the following:
(26)
The increment of the vector is given by the iterative scheme in Equation 18 (p. 345). Referring to
Equation 23 (p. 346), Equation 24 (p. 346), and Equation 25 (p. 346), the iterative equation is expressed
as the following:
(27)
This is only a first order approximation of the Taylor expansion of Equation 23 (p. 346). This is in
contrast to Equation 18 (p. 345) where a quadratic approximation is done. This is because in
Equation 24 (p. 346), a first order approximation has already been done. The matrices and the
vectors in Equation 27 (p. 346) can be expanded as the following:
(28)
and
(29)
This is obtained by taking the gradients of Equation 25 (p. 346) with respect to the two variable
sets. The sub-matrix is the Hessian of the Lagrange function in implicit form.
To demonstrate how Equation 29 (p. 347) is formed, let us consider the following simple case.
Given a vector of two variables and , you write:
It is required to find the gradient or Jacobian of the vector function. To this effect, the deriv-
ation of the Jacobian is evident because in the present context, the vector indicates a set of
nonlinear (algebraic) equations and the Jacobian is the coefficient matrix that "links" the increment
of the independent variable vector to the dependent variable vector. Thus, you can write:
Thus, the Jacobian matrix is formed. In some applications, the equation is written in the following
form:
where the column indicates the gradient of the component of the dependent variable
vector with respect to the independent vector. This is the formalism you use in determining
Equation 29 (p. 347).
(30)
Solution of Equation 30 (p. 347) iteratively solves Equation 31 (p. 348) in a series of linear steps
until the increment is negligible. The update schemes for the independent variable
and Lagrange multiplier vectors and are written as the following:
(31)
The individual equations of the Equation 30 (p. 347) are written separately now. The first equation
(corresponding to minimization with respect to ) can be written as:
(32)
The last step in Equation 32 (p. 348) is done by using Equation 31 (p. 348). Thus, using Equa-
tion 32 (p. 348) and Equation 30 (p. 347), the iterative scheme can be rewritten in a simplified form
as:
(33)
Thus, Equation 33 (p. 348) can be used directly to compute the value of the Lagrange
multiplier vector. Note that by using Equation 33 (p. 348), it is possible to compute the update of
and the new value of the Lagrange multiplier vector in the same iterative step. Equa-
tion 33 (p. 348) shows the general scheme by which the KKT optimality condition can be solved
iteratively for a generalized optimization problem.
where the definitions of the matrices in Equation 34 (p. 348) and Equation 35 (p. 348) are given
earlier. To solve the quadratic minimization problem, let us form the Lagrangian as:
(36)
Now, the KKT conditions can be derived (as done earlier) by taking gradients of the Lagrangian
in Equation 36 (p. 348) as the following:
(37)
In a condensed matrix form Equation 37 (p. 348) can be written as the following:
(38)
Equation 38 (p. 348) is the same as Equation 33 (p. 348). This implies that the iterative scheme in
Equation 38 (p. 348) actually solves a quadratic subproblem (Equation 34 (p. 348) and
Equation 35 (p. 348)) in the domain. If the real problem is quadratic, then this iterative
scheme solves the exact problem.
On addition of inequality constraints, the Lagrangian of the actual problem can be written as the
following:
(39)
The inequality constraints have been converted to equality constraints by using a set of slack
variables as the following:
(40)
The squared term is used to ensure that the slack variable remains positive, which is required to
satisfy Equation 40 (p. 349). The Lagrangian in Equation 39 (p. 349) acts as an enhanced objective
function. It is seen that the only case where the additional terms might be active is when the
constraints are not satisfied.
The KKT conditions as derived from Equation 39 (p. 349) (by taking first variations with respect to
the independent variable vectors) are:
(41)
From the KKT conditions in Equation 41 (p. 349), it is evident that there are equations
for a similar number of unknowns. Therefore, this equation set possesses a unique solution. Let
this (optimal) solution be marked as . At this point, a certain number of constraints are active
while others are inactive. Let the number of active inequality constraints be and the total
number of active equality constraints be . By an active constraint, it is assumed that the constraint
is at its threshold value of zero. Therefore, let and be the sets of active and inactive equality
constraints (respectively) and and be the sets of active and inactive inequality constraints
respectively. Therefore, you can write the following relations:
(42)
where indicates the count of the elements of the set under consideration. These sets
partition the constraints into active and inactive sets. Therefore, you can write:
(43)
Thus, the last three equations in Equation 41 (p. 349) can be represented by the Equation 43 (p. 349).
These are the optimality conditions for constraint satisfaction. From these equations, you can
now eliminate such that the Lagrangian in Equation 39 (p. 349) depends on only three independ-
ent variable vectors. From the last two conditions in Equation 43 (p. 349), you can write the follow-
ing condition, which is always valid for an optimal point:
(44)
Using Equation 44 (p. 350) in the set Equation 41 (p. 349), the KKT optimality conditions can be
written as the following:
(45)
Thus, the new set contains only unknowns. Now, following the same logic as in
Equation 24 (p. 346), let us express Equation 45 (p. 350) in the same form as in
Equation 23 (p. 346). This represents a system of nonlinear equations. The independent
variable set can be written in vector form as the following:
(46)
Newton's iterative scheme is also used here. Therefore, the same equations as in Equation 26 (p. 346)
and Equation 27 (p. 346) also apply here. Following Equation 27 (p. 346), you can write:
(47)
Taking the first variation of the KKT equations in Equation 45 (p. 350) and equating to zero, the
sub-quadratic equation is formulated as the following:
(48)
At the step, the first equation can be written (by linearization) as:
(49)
Thus, the linearized set of equations for Newton’s method to be applied can be written in an
explicit form as:
(51)
So, in the presence of both equality and inequality constraints, Equation 51 (p. 350) can be used in a
quasi-Newton framework to determine the increments and Lagrange multipliers when stepping from
one iteration to the next.
The Hessian matrix is not computed directly but is estimated and updated with a BFGS-type
update during the line search.
minimize:
subject to:
where:
The symbols and denote the vectors of the continuous and integer variables, respectively.
It is assumed that problem functions and are continuously differentiable
subject to all . It is not assumed that integer variables can be relaxed. In other words,
problem functions are evaluated only at integer points and never at any fractional values in
between.
MISQP solves MINLP by a modified sequential quadratic programming (SQP) method. After linear-
izing constraints and constructing a quadratic approximation of the Lagrangian function, mixed-
integer quadratic programs are successively generated and solved by an efficient branch-and-cut
method. The algorithm is stabilized by a trust region method as originally proposed by Yuan for
continuous programs. Second order corrections are retained. The Hessian of the Lagrangian
function is approximated by BFGS updates subject to the continuous and integer variables. MISQP
is also able to solve non-convex nonlinear mixed-integer programs.
For external references, see MISQP Optimization Algorithm References (p. 387).
ASO supports a single objective and multiple constraints. It is available for continuous parameters,
including those with manufacturable values. It does not support the use of parameter relationships
in the optimization domain and is available only for a Direct Optimization system.
Like MISQP, ASO solves constrained nonlinear programming problems of the form:
minimize:
subject to:
where:
The purpose is to refine and reduce the domain intelligently and automatically to provide the
global extrema.
ASO Workflow
ASO Steps
OSF Sampling
OSF (Optimal Space-Filling Design) is used for the Kriging construction. In the original OSF, the
number of samples equals the number of divisions per axis and there is one sample in each divi-
sion.
When a new OSF is generated after a domain reduction, the reduced OSF has the same number
of divisions as the original and keeps the existing design points within the new bounds. New
design points are added until there is a point in each division of the reduced domain.
In the following example, the original domain has eight divisions per axis and contains eight
design points. The reduced domain also has eight divisions per axis and includes two of the ori-
ginal design points. To have a design point in each division, six new design points need to be
added.
Note:
The total number of design points in the reduced domain can exceed the number in
the original domain if multiple existing points wind up in the same division. In the
previous example, if two existing points wound up in the same division of the
new domain, seven new design points (rather than six) would have been added to
have a point in each of the remaining divisions.
Kriging Generation
A response surface is created for each output, based on the current OSF and consequently on
the current domain bounds.
MISQP
MISQP is run on the current Kriging response surface to find potential candidates. A few MISQP
processes are run at the same time, beginning with different starting points, and consequently,
giving different candidates.
All the obtained candidates are either validated or not, based on the Kriging error predictor. The
candidate point is checked to see if further refinement of the Kriging surface changes the selection
of this point. A candidate is considered as acceptable if there aren't any points, according to this
error prediction, that call it into question. If the quality of the candidate is not called into question,
the domain bounds are reduced. Otherwise, the candidate is calculated as a verification point.
When a new verification point is calculated, it is inserted in the current Kriging response
surface as a refinement point and the MISQP process is restarted.
When candidates are validated, new domain bounds must be calculated. If all of the can-
didates are in the same zone, the bounds are reduced, centered on the candidates. Other-
wise, the bounds are reduced as an inclusive box of all candidates. At each domain reduc-
tion, a new OSF is generated (conserving design points between the new bounds) and a
new Kriging response surface is generated based on this new OSF.
Taking a closer, more formal look at the multi-objective optimization problem, let the following
denote the set of all feasible solutions, which are those that do not violate constraints:
(52)
(53)
If there exists a point that is optimal for all objective functions simultaneously, this is expressed:
(54)
This indicates that such a point is certainly a desirable solution. Unfortunately, this is a utopian situation
that rarely exists, as it is unlikely that all objective functions reach their minimum values at a common point.
The question is left: What solution should be used? That is, how should an "optimal" solution
be defined? First, consider the so-called ideal (utopian) solution. To define this solution, separately
attainable minima must be found for all objective functions. Assuming there is one, let be
the solution of the scalar optimization problem:
(55)
Here, the result is called the individual minimum for the scalar problem. The vector of these individual minima
is called ideal for a multi-objective optimization problem, and the point that determines
this vector is the ideal solution.
It is usually not true that Equation 56 (p. 356) holds, although it would be useful, as the multi-
objective problem would have been solved by considering a sequence of scalar problems. It is
necessary to define a new form of optimality, which leads to the concept of Pareto Optimality.
Introduced by V. Pareto in 1896, it is still the most important part of multi-objective optimization.
(56)
A point is said to be Pareto Optimal for the problem if there is no other vector such
that for all
(57)
This definition is based on the intuitive conviction that the point is chosen as the optimal
if no criterion can be improved without worsening at least one other criterion. Unfortunately, the
Pareto optimum almost always gives not a single solution, but a set of solutions. Usually Pareto
optimality is spoken of as being global or local depending on the neighborhood of the solutions
, and in this case, almost all traditional algorithms can at best guarantee a local Pareto optimality.
However, this MOGA-based system, which incorporates global Pareto filters, yields the global
Pareto front.
• Population 1: When the optimization is run, the first population is not taken into account. Because
this population was not generated by the MOGA algorithm, it is not used as a range reference for
the output range (for scaling values).
• Population 2: The second population is used to set the range reference. The minimum, maximum,
range, mean, and standard deviation are calculated for this population.
• Populations 3 – 11: Starting from the third population, the minimum and maximum output values
are used in the next steps to scale the values (on a scale of 0 to 100). The mean variations and
standard deviation variations are checked. If both of these are smaller than the value for the
Convergence Stability Percentage property, the algorithm is converged.
At each iteration and for each active output, convergence occurs if:
where:
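A minimal Python sketch of this stability check for one output (an assumed form: the scaling by the range of the reference population and the comparison of mean and standard-deviation variations follow the description above, but the exact formula is not guaranteed):

    def stability_converged(previous, current, reference_min, reference_max,
                            stability_percentage):
        # `previous` and `current` are (mean, standard deviation) pairs for one
        # output in two successive populations. Variations are scaled by the
        # output range taken from the reference (second) population and compared
        # with the Convergence Stability Percentage.
        scale = reference_max - reference_min
        mean_variation = abs(current[0] - previous[0]) / scale
        std_variation = abs(current[1] - previous[1]) / scale
        threshold = stability_percentage / 100.0
        return mean_variation < threshold and std_variation < threshold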
The first Pareto front solutions are archived in a separate sample set internally and are distinct
from the evolving sample set. This ensures minimal disruption of Pareto front patterns already
available from earlier iterations. You can control the selection pressure (and, consequently, the
elitism of the process) to avoid premature convergence by altering the Maximum Allowable
Pareto Percentage property. For more information about this and other MOGA properties, see
Performing a MOGA Optimization (p. 167).
MOGA Workflow
MOGA Steps
MOGA is run and generates a new population via cross-over and mutation. After the first iter-
ation, each population is run when it reaches the number of samples defined by the Number
of Samples Per Iteration property. For details, see MOGA Steps to Generate a New Popula-
tion (p. 359).
4. Convergence Validation
MOGA converges when the Maximum Allowable Pareto Percentage or the Convergence
Stability Percentage has been reached.
If the optimization is not converged, the process continues to the next step.
If the optimization has not converged, it is validated for fulfillment of stopping criteria.
When the Maximum Number of Iterations criterion is met, the process is stopped without
having reached convergence.
If the stopping criteria have not been met, MOGA is run again to generate a new population
(return to Step 2).
6. Conclusion
Steps 2 through 5 are repeated in sequence until the optimization has converged or the
stopping criteria have been met. When either of these things occurs, the optimization con-
cludes.
The process MOGA uses to generate a new population has two main steps: Cross-over and
Mutation.
1. Cross-over
A cross-over operator linearly combines two parent chromosome vectors to produce
two new offspring according to the following equations (a sketch of both the cross-over and mutation operators follows this list):
Consider the following two parents (each consisting of four floating genes), which have
been selected for cross-over:
• Cross-over for Discrete Parameters and Continuous Parameters with Manufacturable Values
The concatenation of these chains forms the chromosome, which crosses over with another
chromosome.
– Uniform: A uniform cross-over operator decides (with some probability, which is known
as the "mixing ratio") which parent contributes each of the gene values in the offspring
chromosomes. This allows the parent chromosomes to be mixed at the gene level rather
than the segment level (as with one and two-point cross-over). For some problems, this
additional flexibility outweighs the disadvantage of destroying building blocks.
2. Mutation
Mutation alters one or more gene values in a chromosome from its initial state. This can result
in entirely new gene values being added to the gene pool. With these new gene values, the
genetic algorithm might be able to arrive at a better solution than was previously possible.
Mutation is an important part of the genetic search, as it helps to prevent the population
from stagnating at any local optima. Mutation occurs during evolution according to a user-
defined mutation probability.
where is the child, is the parent, and δ is a small variation calculated from a polynomial
distribution.
• Mutation for Discrete Parameters and Continuous Parameters with Manufacturable Values
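The following Python sketch illustrates the two operators for continuous genes. The convex-combination form of the cross-over and the Deb-style polynomial mutation are assumed forms chosen for illustration, not necessarily the exact equations used by MOGA.

    import random

    def linear_crossover(parent1, parent2):
        # Two children as complementary linear combinations of the parents.
        a = random.random()
        child1 = [a * x + (1.0 - a) * y for x, y in zip(parent1, parent2)]
        child2 = [(1.0 - a) * x + a * y for x, y in zip(parent1, parent2)]
        return child1, child2

    def polynomial_mutation(parent, lower, upper, eta=20.0, probability=0.1):
        # Each gene is perturbed, with the given probability, by a small
        # variation delta drawn from a polynomial distribution and scaled by
        # the variable range; the result is clamped to the bounds.
        child = list(parent)
        for k in range(len(child)):
            if random.random() < probability:
                u = random.random()
                if u < 0.5:
                    delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
                else:
                    delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
                child[k] = min(upper[k], max(lower[k],
                               child[k] + delta * (upper[k] - lower[k])))
        return child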
AMO supports multiple objectives and multiple constraints. It is limited to continuous parameters,
including those with manufacturable values. It is available only for a Direct Optimization system.
Note:
AMO does not support discrete parameters because with discrete parameters, it is
necessary to construct a separate response surface for each discrete combination.
When discrete parameters are used, MOGA is the more efficient optimization method.
For more information, see Multi-Objective Genetic Algorithm (MOGA) (p. 357).
AMO Workflow
AMO Steps
The initial population of MOGA is used for constructing the Kriging response surfaces.
2. Kriging Generation
A Kriging response surface is created for each output, based on the first population and then
improved during simulation with the addition of new design points.
For more information, see Kriging (p. 98) or Kriging (p. 324).
3. MOGA
MOGA is run, using the Kriging response surface as an evaluator. After the first iteration,
each population is run when it reaches the number of samples defined by the Number of
Samples Per Iteration property.
5. Error Check
Each point is validated for error. If the error for a given point is acceptable, the approxim-
ated point is included in the next population to be run through MOGA (return to Step 3).
If the error is not acceptable, the points are promoted as design points. The new design
points are used to improve the Kriging response surface (return to Step 2) and are included
in the next population to be run through MOGA (return to Step 3).
6. Convergence Validation
MOGA converges when the maximum allowable Pareto percentage has been reached.
When this happens, the process is stopped.
If the optimization is not converged, the process continues to the next step.
If the optimization has not converged, it is validated for fulfillment of the stopping criteria.
When the maximum number of iterations has been reached, the process is stopped without
having reached convergence.
If the stopping criteria have not been met, the MOGA algorithm is run again (return to
Step 3).
8. Conclusion
Steps 2 through 7 are repeated in sequence until the optimization has converged or the
stopping criteria have been met. When either of these things occurs, the optimization con-
cludes.
• Screening: Sample set corresponds to the number of Screening points plus the Min-Max search
results. Search results that duplicate existing points are omitted.
• Single-objective optimization (NLPQL, MISQP, ASO): Sample set corresponds to the iteration points.
• Multiple-objective optimization (MOGA, AMO): Sample set corresponds to the final population.
The Decision Support Process sorts the sample set using the cost function to extract the best can-
didates. The cost function takes into account both the Importance level of objectives and constraints
and the feasibility of points. (The feasibility of a point depends on how constraints are handled.
When the Constraint Handling property is set to Relaxed, all infeasible points are included in the
sort. When the property is set to Strict, all infeasible points are removed from the sort.) Once the
sample set has been sorted, you can change the Importance level and Constraint Handling
properties for one or more constraints or objectives without causing DesignXplorer to create more
design points. The Decision Support Process sorts the existing sample set again.
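A minimal Python sketch of this sorting step (the function names and the cost-function interface are assumptions for illustration; the actual weighted cost function is described in the equations that follow):

    def rank_candidates(samples, cost, is_feasible, constraint_handling="Relaxed"):
        # Under Strict constraint handling, infeasible samples are removed from
        # the sort; under Relaxed handling they remain. `cost` is the weighted
        # objective function accounting for the Importance levels of objectives
        # and constraints; lower cost means a better candidate.
        if constraint_handling == "Strict":
            pool = [s for s in samples if is_feasible(s)]
        else:
            pool = list(samples)
        return sorted(pool, key=cost)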
Given input parameters, output parameters, and their individual targets, the collection of ob-
jectives is combined into a single, weighted objective function, , which is sampled by means of
a direct Monte Carlo method using a uniform distribution. The candidate points are subsequently
ranked by ascending magnitudes of the values of . The function for (where all continuous input
parameters have usable values of type "Continuous") is given by the following:
(59)
where:
(60)
(61)
where:
The fuzziness of the combined objective function derives from the weights , which are simply
defined as follows:
(62)
The labels used are defined in Defining Optimization Objectives and Constraints (p. 188).
The targets represent the desired values of the parameters, and are defined for the continuous input
parameters as follows:
(63)
And, for the output parameters, you have the following desired values:
(64)
where:
= user-specified target
= constraint lower bound
= constraint upper bound
Thus, Equation 62 (p. 365) and Equation 63 (p. 365) constitute the input parameter objectives for the
continuous input parameters and Equation 62 (p. 365) and Equation 64 (p. 365) constitute the output
parameter objectives and constraints.
The following section considers the case where discrete input parameters and continuous input
parameters with manufacturable values are possible. Assume that a continuous input parameter
with manufacturable values is defined as follows:
(65)
(66)
where, as before:
(67)
Thus, the GDO objective equation becomes the following (for parameters with discrete usable values):
(68)
Therefore, Equation 61 (p. 364), Equation 62 (p. 365), and Equation 66 (p. 366) constitute the input
parameter objectives for parameters that might be continuous or possess discrete usable values.
The norms, objectives, and constraints as in Equation 66 (p. 366) and Equation 67 (p. 366) are also adopted to define the input goals for input parameters of the type Discrete, which are those parameters whose usable alternatives indicate a whole number of some particular design feature (number of holes in a plate, number of stiffeners, and so on).
Thus, Equation 61 (p. 364), Equation 62 (p. 365), and Equation 66 (p. 366) constitute the input parameter goals for discrete parameters.
Therefore, the GDO objective function equation for the most general case (where there are continu-
ous and discrete parameters) can be written as the following:
(69)
where:
From the normed values, it is obvious that the lower the value of the combined objective function, the better the design with respect to the desired values and importance. Thus, a quasi-random uniform sampling of design points is done by a Hammersley algorithm and the samples are sorted in ascending order of the combined objective function value. The desired number of designs are then drawn from the top of the sorted list. A crowding technique is employed to ensure that no two sampled design points are very close to each other in the space of the input parameters.
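For illustration only, a minimal sketch of a two-dimensional Hammersley point set follows; it assumes the classic construction from the base-2 radical inverse and is not taken from DesignXplorer's implementation.

# Sketch (not part of DesignXplorer): the first points of a 2-D Hammersley set,
# a quasi-random, low-discrepancy sequence of the kind described above.
def radical_inverse_base2(i):
    """Van der Corput radical inverse of i in base 2."""
    result, f = 0.0, 0.5
    while i > 0:
        result += f * (i & 1)
        i >>= 1
        f *= 0.5
    return result

def hammersley_2d(n):
    return [(i / float(n), radical_inverse_base2(i)) for i in range(n)]

points = hammersley_2d(8)   # eight well-spread points in the unit square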
Following the same procedure, you get a rating scale for a design candidate value of 0.9333 of [two crosses] (away from the target). Therefore, the extreme cases are as follows:
1. For a design candidate value of 0.9 (the worst), the rating scale is [three crosses].
2. For a design candidate value of 1.1 (the best), the rating scale is [three stars].
Note:
Objective-driven parameter values with inequality constraints receive either three stars
(the constraint is met) or three red crosses (the constraint is violated).
SSA uses statistical distribution functions (such as the Gaussian or normal distribution, the uniform distribution, and so on) to describe uncertain parameters.
SSA allows you to determine whether your product satisfies Six Sigma quality criteria. A product has
Six Sigma quality if only 3.4 parts out of every 1 million manufactured fail. This quality definition is
based on the assumption that an output parameter relevant to the quality and performance assessment
follows a Gaussian distribution, as shown.
An output parameter that characterizes product performance is typically used to determine whether a
product's performance is satisfactory. The parameter must fall within the interval bounded by the lower
specification limit (LSL) and upper specification limit (USL). Sometimes only one of these limits exists.
An example of this is a case when the maximum von Mises stress in a component must not exceed the
yield strength. The relevant output parameter is the maximum von Mises stress and the USL is the yield
strength. The lower specification limit is not relevant. The area below the probability density function
falling outside the specification interval is a direct measure of the probability that the product does not
conform to the quality criteria, as shown above. If the output parameter does follow a Gaussian distri-
bution, then the product satisfies a Six Sigma quality criterion if both specification limits are at least six
standard deviations away from the mean value.
In reality, an output parameter rarely follows a Gaussian distribution exactly. However, the definition
of Six Sigma quality is inherently probabilistic. It represents an admissible probability that parts do not
conform to the quality criteria defined by the specified limits. The nonconformance probability can be
calculated no matter which distribution the output parameter actually follows. For distributions other
than Gaussian, the Six Sigma level is not really six standard deviations away from the mean value, but
it does represent a probability of 3.4 parts per million, which is consistent with the definition of Six
Sigma quality.
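As an illustration of these probabilities, the following sketch evaluates one-sided Gaussian tail probabilities with SciPy (an assumed third-party library, not part of DesignXplorer). The often-quoted 3.4 parts per million corresponds to 4.5 standard deviations, that is, six standard deviations combined with the conventional 1.5-sigma long-term shift of the mean.

# Sketch (SciPy assumed): tail probability of a Gaussian output at a given sigma level.
from scipy.stats import norm

for k in (3.0, 4.5, 6.0):
    p_fail = norm.sf(k)   # one-sided probability of exceeding mean + k * sigma
    print("%.1f sigma -> %.3g parts per million" % (k, p_fail * 1e6))

# 4.5 sigma gives roughly 3.4 ppm; the classic "Six Sigma = 3.4 ppm" figure
# assumes the conventional 1.5-sigma long-term shift of the mean.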
This section describes a Six Sigma Analysis system in DesignXplorer and how to use it to perform a
SSA.
SSA Principles
Guidelines for Selecting SSA Variables
Sample Generation
Weighted Latin Hypercube Sampling (WLHS)
Postprocessing SSA Results
SSA Theory
SSA Principles
Computer models are described with specific numerical and deterministic values. For example, mater-
ial properties are entered using certain values, and the geometry of the component is assigned a
certain length or width. An analysis based on a given set of specific numbers and values is called a
deterministic analysis. The accuracy of a deterministic analysis depends upon the assumptions and
input values used for the analysis.
While scatter and uncertainty naturally occur in every aspect of an analysis, deterministic analyses do
not take them into account. To deal with uncertainties and scatter, use SSA to answer the following
questions:
• If the input variables of a finite element model are subject to scatter, how large is the scatter of the output
parameters? How robust are the output parameters? Here, output parameters can be any parameter that
ANSYS Workbench can calculate. Examples are the temperature, stress, strain, or deflection at a node, the
maximum temperature, stress, strain, or deflection of the model, and so on.
• If the output is subject to scatter due to the variation of the input variables, then what is the probability
that a design criterion given for the output parameters is no longer met? How large is the probability that
an unexpected and unwanted event, such as failure, takes place?
• Which input variables contribute the most to the scatter of an output parameter and to the failure prob-
ability? What are the sensitivities of the output parameter with respect to the input variables?
SSA can be used to determine the effect of one or more variables on the outcome of the analysis. In
addition to SSA techniques available, ANSYS Workbench offers a set of strategic tools to enhance the
efficiency of the SSA process. For example, you can graph the effects of one input parameter versus
an output parameter, and you can easily add more samples and additional analysis loops to refine
your analysis.
In traditional deterministic analyses, uncertainties are either ignored or accounted for by applying
conservative assumptions. You would typically ignore uncertainties if you know for certain that the
input parameter has no effect on the behavior of the component under investigation. In this case,
only the mean values or some nominal values are used in the analysis. However, in some situations,
the influences of uncertainties exist but are still neglected, as for the thermal expansion coefficient,
for which the scatter is usually ignored.
If you are performing a thermal analysis and want to evaluate the thermal stresses, the relevant relation is σtherm = E α ΔT, because the thermal stresses are directly proportional to the Young's modulus E as well as to the thermal expansion coefficient α of the material.
The following table shows the probability that the thermal stresses will be higher than expected,
taking uncertainty variables into account.
Uncertainty variables taken into account | Probability that the thermal stresses are more than 5% higher than expected | Probability that the thermal stresses are more than 10% higher than expected
Young's modulus (Gaussian distribution with 5% standard deviation) | ~16% | ~2.3%
Young's modulus and thermal expansion coefficient (each with Gaussian distribution with 5% standard deviation) | ~22% | ~8%
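A hedged Monte Carlo sketch follows, assuming the proportionality of the thermal stress to E and α described above and using NumPy (not part of DesignXplorer). It produces probabilities in the same range as the table; the exact table values depend on the underlying analysis.

# Sketch (NumPy assumed): Monte Carlo estimate of the probabilities above.
# Thermal stress is taken as proportional to E * alpha; nominal values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
E     = rng.normal(1.0, 0.05, n)   # Young's modulus, 5% standard deviation
alpha = rng.normal(1.0, 0.05, n)   # thermal expansion coefficient, 5% standard deviation

cases = {"E only": E, "E and alpha": E * alpha}
for name, s in cases.items():
    print(name,
          "P(>5% higher) ~ %.1f%%" % (100 * np.mean(s > 1.05)),
          "P(>10% higher) ~ %.1f%%" % (100 * np.mean(s > 1.10)))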
Reliability is typically a concern when product or component failures have significant financial con-
sequences (costs of repair, replacement, warranty, or penalties) or worse, can result in injury or loss
of life.
If you use a conservative assumption, the difference in thermal stresses shown above tells you that
uncertainty or randomness is involved. Conservative assumptions are usually expressed in terms of
safety factors. Sometimes regulatory bodies demand safety factors in certain procedural codes. If you
are not faced with such restrictions or demands, then using conservative assumptions and safety
factors can lead to inefficient and costly over-design. By using SSA methods, you can avoid over-
design while still ensuring the safety of the component.
SSA methods even enable you to quantify the safety of the component by providing a probability
that the component will survive operating conditions. Quantifying a goal is the necessary first step
toward achieving it.
More information about choosing and defining uncertainty variables can be found in the following
sections.
Uncertainty Variables for Response Surface Analyses
Measured Data
If you have measured data, then you must first know how reliable that data is. Data scatter is
not just an inherent physical effect but also includes inaccuracy in the measurement itself. You
must consider that the person taking the measurement might have applied a "tuning" to the
data. For example, if the data measured represents a load, the person measuring the load could
have rounded the measurement values. This means that the data you receive is not truly the
measured values. The amount of this tuning could provide a deterministic bias in the data that
you need to address separately. If possible, you should discuss any bias that might have been
built into the data with the person who provided the data to you.
If you are confident about the quality of the data, then how you proceed depends on how
much data you have. In a single production field, the amount of data is typically sparse. If you
have only a small amount of data, use it only to evaluate a rough figure for the mean value
and the standard deviation. In these cases, you could model the uncertainty variable as a
Gaussian distribution if the physical effect you model has no lower and upper limit, or use the
data and estimate the minimum and maximum limit for a uniform distribution.
In a mass production field, you probably have a lot of data. In these cases you could use a
commercial statistical package that allows you to actually fit a statistical distribution function
that best describes the scatter of the data.
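As a minimal sketch of this workflow, assuming SciPy is available (it is not part of DesignXplorer) and using hypothetical measurement values:

# Sketch (SciPy assumed): fitting a candidate distribution to measured data and
# comparing it to a simple mean / standard-deviation summary.
import numpy as np
from scipy import stats

data = np.array([102.1, 98.7, 101.4, 99.9, 100.8, 97.5, 103.2, 100.1])  # hypothetical measurements

print("mean = %.2f, std = %.2f" % (data.mean(), data.std(ddof=1)))

# Fit a lognormal distribution if you know the quantity is lognormally distributed.
shape, loc, scale = stats.lognorm.fit(data, floc=0.0)
print("lognormal fit: shape = %.4f, scale = %.2f" % (shape, scale))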
The mean value and the standard deviation are most commonly used to describe the scatter
of data. Frequently, information about a physical quantity is given as a value such as 100 ± 5.5.
Often, this form means that the value 100 is the mean value and 5.5 is the standard deviation.
Data in this form implies a Gaussian distribution, but you must verify this (a mean value and
standard deviation can be provided for any collection of data regardless of the true distribution
type). If you have more information, for example, you know that the data is lognormal distrib-
uted, then SSA allows you to use the mean value and standard deviation for a lognormal dis-
tribution.
Sometimes the scatter of data is also specified by a mean value and an exceedance confidence
limit. The yield strength of a material is sometimes given in this way. For example, a 99% ex-
ceedance limit based on a 95% confidence level is provided. This means that, from the measured
data, you can be sure by 95% that in 99% of all cases the property values exceed the specified
limit and only in 1% of all cases do they drop below the specified limit. The supplier of this
information is using the mean value, the standard deviation, and the number of samples of
the measured data to derive this kind of information. If the scatter of the data is provided in
this way, the best way to pursue this further is to ask for more details from the data supplier.
Because the given exceedance limit is based on the measured data and its statistical assessment,
the supplier might be able to provide you with the details that were used.
If the data supplier does not give you any further information, then you could consider assuming that the number of measured samples was large. In that case, given the exceedance limit and the mean value, the standard deviation can be derived using the factor C that corresponds to the exceedance probability:
Exceedance Probability | C
99.5% | 2.5758
99.0% | 2.3263
97.5% | 1.9600
95.0% | 1.6449
90.0% | 1.2816
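A minimal sketch of this derivation follows, assuming a Gaussian distribution, a large sample, and SciPy (not part of DesignXplorer); the mean and exceedance limit values are hypothetical.

# Sketch (SciPy assumed): deriving a standard deviation from a mean value and a
# lower exceedance limit. The relation sigma = (mean - limit) / C is an assumption
# consistent with the C values tabulated above (standard normal quantiles).
from scipy.stats import norm

mean_value      = 355.0   # hypothetical mean yield strength
exceedance_lim  = 320.0   # hypothetical 99% exceedance limit (99% of parts exceed it)
exceedance_prob = 0.99

C = norm.ppf(exceedance_prob)                 # 2.3263 for 99%, matching the table
sigma = (mean_value - exceedance_lim) / C
print("C = %.4f, estimated standard deviation = %.2f" % (C, sigma))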
No Data
In situations where no information is available, there is never just one right answer. Following
are hints about which physical quantities are usually described in terms of which distribution
functions. This information might help you with the particular physical quantity that you have
in mind. Additionally, a list follows of which distribution functions are usually used for which
kind of phenomena. You might need to choose from multiple options.
Geometric Tolerances
• If you are designing a prototype, you could assume that the actual dimensions of the manufactured
parts would be somewhere within the manufacturing tolerances. In this case it is reasonable to use
a uniform distribution, where the tolerance bounds provide the lower and upper limits of the dis-
tribution function.
• If the manufacturing process generates a part that is outside the tolerance band, one of two things
can happen: the part must either be fixed (reworked) or scrapped. These two cases are usually on
opposite ends of the tolerance band. An example of this is drilling a hole. If the hole is outside the
tolerance band, but it is too small, the hole can just be drilled larger (reworked). If, however, the
hole is larger than the tolerance band, then the problem is either expensive or impossible to fix. In
such a situation, the parameters of the manufacturing process are typically tuned to hit the tolerance
band closer to the rework side, steering clear of the side where parts need to be scrapped. In this
case, a Beta distribution is more appropriate.
• Often a Gaussian (normal) distribution is used. The fact that this distribution has no bounds (it spans minus infinity to infinity) is theoretically a severe violation of the fact that geometrical extensions
are described by finite positive numbers only. However, in practice, this lack of bounds is irrelevant
if the standard deviation is very small compared to the value of the geometric extension, which is
typically true for geometric tolerances.
Material Data
• In some cases the material strength of a part is governed by the weakest-link theory. This theory
assumes that the entire part fails whenever its weakest spot fails. For material properties where the
weakest-link assumptions are valid, the Weibull distribution might be applicable.
• For some cases, it is acceptable to use the scatter information from a similar material type. For ex-
ample, if you know that a material type very similar to the one you are using has a certain material
property with a Gaussian distribution and a standard deviation of ±5% around the measured mean
value, then you can assume that for the material type you are using, you only know its mean value.
In this case, you could consider using a Gaussian distribution with a standard deviation of ±5%
around the given mean value.
Load Data
For loads, you usually only have a nominal or average value. You could ask the person who
provided the nominal value the following questions: Out of 1000 components operated under
real life conditions, what is the lowest load value any one of the components sees? What is the
most likely load value? That is, what is the value that most of these 1000 components are
subject to? What is the highest load value any one component would be subject to? To be
safe, you should ask these questions not only of the person who provided the nominal value
but also of one or more experts who are familiar with how your products are operated under
real-life conditions. From all the answers you get, you can then consolidate what the minimum,
the most likely, and the maximum value probably is. As verification, compare this picture with
the nominal value that you would use for a deterministic analysis. The nominal value should
be close to the most likely value unless using a conservative assumption. If the nominal value
includes a conservative assumption (is biased), its value is probably close to the maximum
value. Finally, you can use a triangular distribution using the minimum, most likely, and max-
imum values obtained.
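A minimal sketch of this approach follows, using SciPy's triangular distribution (SciPy is not part of DesignXplorer) and hypothetical load values:

# Sketch (SciPy assumed): building a triangular distribution from the minimum,
# most likely, and maximum load values gathered from experts.
from scipy.stats import triang

lo, mode, hi = 8.0, 10.0, 15.0        # hypothetical load values (kN)
c = (mode - lo) / (hi - lo)           # SciPy's shape parameter: mode position in [0, 1]
load = triang(c, loc=lo, scale=hi - lo)

print("mean load = %.2f kN" % load.mean())
print("P(load > 13 kN) = %.3f" % load.sf(13.0))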
Distribution Functions
Beta Distribution
Figure: probability density function fX(x) of the Beta distribution with shape parameters r and t, bounded by xmin and xmax.
You provide the shape parameters r and t and the distribution lower bound xmin and upper bound xmax of the random variable X.
The Beta distribution is very useful for random variables that are bounded at both sides. If linear
operations are applied to random variables that are all subjected to a uniform distribution, then
the results can usually be described by a Beta distribution. For example, if you are dealing with
tolerances and assemblies where the components are assembled and the individual tolerances
of the components follow a uniform distribution (a special case of the Beta distribution), the
overall tolerances of the assembly are a function of adding or subtracting the geometrical extension
of the individual components (a linear operation). Hence, the overall tolerances of the assembly
can be described by a Beta distribution. Also, as previously mentioned, the Beta distribution can
be useful for describing the scatter of individual geometrical extensions of components as well.
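A minimal sketch follows, showing how a bounded Beta distribution of this kind could be set up with SciPy (not part of DesignXplorer); the shape parameters and bounds are hypothetical:

# Sketch (SciPy assumed): a Beta distribution with shape parameters r and t,
# rescaled to the bounds [xmin, xmax] of the random variable.
from scipy.stats import beta

r, t = 2.0, 5.0                       # hypothetical shape parameters
xmin, xmax = 9.95, 10.05              # hypothetical tolerance bounds

dist = beta(r, t, loc=xmin, scale=xmax - xmin)
print("mean = %.4f, std = %.5f" % (dist.mean(), dist.std()))
samples = dist.rvs(size=5, random_state=0)   # a few random samples from the distribution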
Exponential Distribution
Figure: probability density function fX(x) of the exponential distribution, shifted to the lower bound xmin.
You provide the decay parameter and the shift xmin (or distribution lower bound) of the random variable X.
The exponential distribution is useful in cases where there is a physical reason that the probability
density function is strictly decreasing as the uncertainty variable value increases. The distribution
is mostly used to describe time-related effects. For example, it describes the time between inde-
pendent events occurring at a constant rate. It is therefore very popular in the area of systems
reliability and lifetime-related systems reliability, and it can be used for the life distribution of
non-redundant systems. Typically, it is used if the lifetime is not subjected to wear-out and the
failure rate is constant with time. Wear-out is usually a dominant life-limiting factor for mechan-
ical components that would preclude the use of the exponential distribution for mechanical parts.
However, where preventive maintenance exchanges parts before wear-out can occur, then the
exponential distribution is still useful to describe the distribution of the time until exchanging
the part is necessary.
Gaussian (Normal) Distribution
Figure: probability density function fX(x) of the Gaussian (normal) distribution, centered at the mean value µ.
You provide values for the mean value µ and the standard deviation of the random variable X.
The Gaussian, or normal, distribution is a fundamental and commonly-used distribution for stat-
istical matters. It is typically used to describe the scatter of the measurement data of many
physical phenomena. Strictly speaking, every random variable follows a normal distribution if it
is generated by a linear combination of a very large number of other random effects, regardless of which distribution these random effects originally follow. The Gaussian distribution is also valid
if the random variable is a linear combination of two or more other effects if those effects also
follow a Gaussian distribution.
Lognormal Distribution
Figure: probability density function fX(x) of the lognormal distribution.
You provide values for the logarithmic mean value and the logarithmic deviation. These parameters are the mean value and the standard deviation of the logarithm of the random variable X.
The lognormal distribution is another basic and commonly-used distribution, typically used to
describe the scatter of the measurement data of physical phenomena, where the logarithm of
the data would follow a normal distribution. The lognormal distribution is suitable for phenomena
that arise from the multiplication of a large number of error effects. It is also used for random
variables that are the result of multiplying two or more random effects (if the effects that get
multiplied are also lognormally distributed). It is often used for lifetime distributions such as the
scatter of the strain amplitude of a cyclic loading that a material can endure until low-cycle-fatigue
occurs.
Uniform Distribution
Figure: probability density function fX(x) of the uniform distribution between xmin and xmax.
You provide the distribution lower bound xmin and upper bound xmax of the random variable X.
The uniform distribution is a fundamental distribution for cases where the only information
available is a lower and an upper bound. It is also useful to describe geometric tolerances. It can
also be used in cases where any value of the random variable is as likely as any other within a
certain interval. In this sense, it can be used for cases where "lack of engineering knowledge"
plays a role.
Triangular Distribution
Figure: probability density function fX(x) of the triangular distribution.
You provide the minimum value (or distribution lower bound) xmin, the most likely value, and the maximum value (or distribution upper bound) xmax.
The triangular distribution is most helpful to model a random variable when actual data is not
available. It is very often used to capture expert opinions, as in cases where the only data you
have are the well-founded opinions of experts. However, regardless of the physical nature of the
random variable you want to model, you can always ask experts questions like "Out of 1000
components, what are the lowest and highest load values for this random variable?" and other
similar questions. You should also include an estimate for the random variable value derived from
a computer program, as described above. For more details, see Choosing a Distribution for a
Random Variable (p. 370).
Truncated Gaussian Distribution
Figure: probability density function fX(x) of the truncated Gaussian distribution between the truncation limits xmin and xmax.
You provide the mean value and the standard deviation of the non-truncated Gaussian distribution and the truncation limits xmin and xmax (or distribution lower bound and upper bound).
The truncated Gaussian distribution typically appears where the physical phenomenon follows
a Gaussian distribution, but the extreme ends are cut off or are eliminated from the sample
population by quality control measures. As such, it is useful to describe the material properties
or geometric tolerances.
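A minimal sketch follows, using SciPy's truncated normal distribution (SciPy is not part of DesignXplorer) with hypothetical values:

# Sketch (SciPy assumed): a Gaussian distribution truncated at lower/upper limits,
# for example a toleranced dimension screened by quality control.
from scipy.stats import truncnorm

mu, sigma = 10.0, 0.03                # hypothetical mean and std of the non-truncated Gaussian
xmin, xmax = 9.95, 10.05              # truncation limits (tolerance bounds)

a, b = (xmin - mu) / sigma, (xmax - mu) / sigma   # standardized truncation limits
dist = truncnorm(a, b, loc=mu, scale=sigma)
print("P(x <= 9.97) = %.4f" % dist.cdf(9.97))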
Weibull Distribution
Figure: probability density function fX(x) of the Weibull distribution with Weibull exponent m and characteristic value xchr, starting at the lower bound xmin.
You provide the Weibull characteristic value xchr, the Weibull exponent m, and the minimum value xmin (or distribution lower bound). There are several special cases. For xmin = 0, the distribution coincides with a two-parameter Weibull distribution. The Rayleigh distribution is a special case of the Weibull distribution with m = 2 and xmin = 0.
In engineering, the Weibull distribution is most often used for strength or strength-related lifetime
parameters, and is the standard distribution for material strength and lifetime parameters for
very brittle materials (for these very brittle materials, the "weakest-link theory" is applicable). For
more details, see Choosing a Distribution for a Random Variable (p. 370).
Sample Generation
For SSA, the sample generation is based on the Latin Hypercube Sampling (LHS) technique by default.
In the Properties view for a Six Sigma Analysis cell, Sampling Type can be set to either LHS or
WLHS (Weighted Latin Hypercube Sampling).
LHS is a more advanced and efficient form of Monte Carlo analysis methods. With LHS, the points are
randomly generated in a square grid across the design space, but no two points share input parameters
of the same value. This means that no point shares a row or a column of the grid with any other
point. Generally, LHS requires 20% to 40% fewer simulation loops than the Direct Monte Carlo
technique to deliver the same results with the same accuracy. However, that number is largely
problem-dependent.
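For illustration, the following sketch generates a small Latin Hypercube sample with SciPy's qmc module (SciPy 1.7 or later, not part of DesignXplorer); the parameter bounds are hypothetical:

# Sketch (SciPy assumed): a Latin Hypercube sample for two input parameters,
# rescaled from the unit square to the parameter ranges.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
unit_samples = sampler.random(n=20)                              # points in the unit square
samples = qmc.scale(unit_samples, [1.0, 200.0], [2.0, 400.0])    # lower and upper bounds per parameter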
In WLHS, the input variables are discretized unevenly in their design space. The size of each cell (or hypercube, in multiple dimensions) of probability of occurrence is evaluated according to the topology of the output (response): the cells of probability of occurrence are relatively smaller around the minimum and maximum of the response. Because the evaluation of cell size is response-oriented, WLHS is somewhat unsymmetrical (biased).
In general, WLHS is intended to stretch the distribution farther out into the tails (lower and upper) with fewer runs than LHS. This means that, given the same number of runs, WLHS is expected to reach a smaller probability value than LHS. Due to this bias, however, the evaluated probability can differ somewhat from the LHS result.
Histogram
A histogram plot is most commonly used to visualize the scatter of a SSA variable. A histogram is
derived by dividing the range between the minimum value and the maximum value into intervals
of equal size. Then SSA determines how many samples fall within each interval, that is, how many
"hits" landed in each interval.
SSA also allows you to plot histograms of your uncertainty variables so you can double-check that
the sampling process generated the samples according to the distribution function you specified.
For uncertainty variables, SSA not only plots the histogram bars, but also a curve for values derived
from the distribution function you specified. Visualizing histograms of the uncertainty variables is
another way to verify that enough simulation loops have been performed. If the number of simulation
loops is sufficient, the histogram bars have the following characteristics:
• They are close to the curve that is derived from the distribution function.
• They have no major gaps, that is, intervals with no hits while neighboring intervals have many hits.
However, if the probability density function is flattening out at the far ends of a distribution (for
example, the exponential distribution flattens out for large values of the uncertainty variable) then
there might logically be gaps. Hits are counted only as positive integer numbers and as these
numbers gradually get smaller, a zero hit can happen in an interval.
Cumulative Distribution Function
The value of the cumulative distribution function at a given location is the probability that the values of the random variable stay below that location. Whether this probability represents the failure probability or the reliability of your component depends on how you define failure.
For example, if you design a component such that a certain deflection should not exceed a certain
admissible limit, then a failure event occurs if the critical deflection exceeds this limit. Thus, for this
example, the cumulative distribution function is interpreted as the reliability curve of the component.
On the other hand, if you design a component such that the eigenfrequencies are beyond a certain
admissible limit, then a failure event occurs if an eigenfrequency drops below this limit. So for this
example, the cumulative distribution function is interpreted as the failure probability curve of the
component.
The cumulative distribution function also lets you visualize what the reliability or failure probability
would be if you chose to change the admissible limits of your design.
A cumulative distribution function plot is an important tool to quantify the probability that the
design of your product does or does not satisfy quality and reliability requirements. The value of
a cumulative distribution function of a particular output parameter represents the probability that
the output parameter remains below a certain level as indicated by the values on the X axis of the
plot.
For example, if the probability that Shear Stress Maximum remains less than a limit value of 1.71E+5 is about 93%, then there is a 7% probability that Shear Stress Maximum exceeds the limit value of 1.71E+5.
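A minimal sketch of this kind of probability read-out follows; the sample values are synthetic and only stand in for SSA results (NumPy assumed, not part of DesignXplorer):

# Sketch (NumPy assumed): estimating the probability that an output parameter
# stays below a limit, directly from the sampled values.
import numpy as np

shear_stress = np.random.default_rng(1).normal(1.5e5, 1.4e4, size=1000)  # hypothetical samples
limit = 1.71e5

p_below = np.mean(shear_stress <= limit)
print("P(Shear Stress Maximum <= %.3g) = %.1f%%" % (limit, 100 * p_below))
print("P(exceeding the limit)          = %.1f%%" % (100 * (1 - p_below)))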
Probability Table
Instead of reading data from the cumulative distribution chart, you can also obtain important in-
formation about the cumulative distribution function in tabular form. A probability table is available
that is designed to provide probability values for an even spread of levels of an input or output
parameter. You can view the table in either Quantile-Percentile (Probability) mode or Percentile-
Quantile (Inverse Probability) mode. The probability table lets you find out the parameter levels
corresponding to probability levels that are typically used for the design of reliable products. If you
want to see the probability of a value that is not listed, you can add it to the table. Likewise, you
can add a probability or sigma-level and see the corresponding values. You can also delete values
from the table. For more information, see Using Statistical Postprocessing (p. 272).
Note:
Both tables have more rows if the number of samples is increased. If you are designing
for high product reliability, which is a low probability that the product does not conform
to quality or performance requirements, then the sample size must be adequately large
to address those low probabilities. Typically, if your product does not conform to the
requirements denoted with "Preq," then the minimum number of samples should be
determined by . For example, if your product has a probability of
that it does not conform to the requirements, then the minimum number
of samples should be .
Sensitivities charts help you improve the reliability and quality of your product. You can view a sensitivities plot for any output parameter in your model. For more information, see Sensitivities Chart (SSA) (p. 51).
The sensitivities available under the Six Sigma Analysis and Goal Driven Optimization views are
statistical sensitivities. Statistical sensitivities are global sensitivities, whereas the parameter sensit-
ivities available under the Responses view are local sensitivities. The global, statistical sensitivities
are based on a correlation analysis using the generated sample points, which are located throughout
the entire space of input parameters. The local parameter sensitivities are based on the difference
between the minimum and maximum value obtained by varying one input parameter while holding
all other input parameters constant. As such, the values obtained for local parameter sensitivities
depend on the values of the input parameters that are held constant. Global, statistical sensitivities
do not depend on the values of the input parameters, because all possible values for the input
parameters are already taken into account when determining the sensitivities.
Design exploration displays sensitivities as both a bar chart and pie chart. The charts describe the
sensitivities in an absolute fashion (taking the signs into account). A positive sensitivity indicates
that increasing the value of the uncertainty variable increases the value of the result parameter for
which the sensitivities are plotted. Conversely, a negative sensitivity indicates that increasing the
uncertainty variable value reduces the result parameter value.
Using a sensitivity plot, you can answer the following important questions.
How can I make the component more reliable or improve its quality?
If the results for the reliability or failure probability of the component do not reach the expected
levels, or if the scatter of an output parameter is too wide and therefore not robust enough for a
quality product, then you should make changes to the important input variables first. Modifying
an input variable that is insignificant would be a waste of time.
Of course, you are not in control of all uncertainty parameters. A typical example where you have
very limited means of control involves material properties. For example, if it turns out that the en-
vironmental temperature (outdoor) is the most important input parameter, then there is probably
nothing you can do. However, even if you find out that the reliability or quality of your product is
driven by parameters that you cannot control, this data has importance—it is likely that you have
a fundamental flaw in your product design! You should watch for influential parameters like these.
If the input variable you want to tackle is a geometry-related parameter or a geometric tolerance,
then improving the reliability and quality of your product means that it might be necessary to
change to a more accurate manufacturing process or use a more accurate manufacturing machine.
If it is a material property, then there might be nothing you can do about it. However, if you only
had a few measurements for a material property and consequently used only a rough guess about
its scatter, and the material property turns out to be an important driver of product reliability and
quality, then it makes sense to collect more raw data.
How can I save money without sacrificing the reliability or the quality of the product?
If the results for the reliability or failure probability of the component are acceptable or if the
scatter of an output parameter is small and therefore robust enough for a quality product, then
there is usually the question of how to save money without reducing the reliability or quality. In
this case, you should first make changes to the input variables that turned out to be insignificant
because they do not affect the reliability or quality of your product. If it is the geometrical properties
or tolerances that are insignificant, you can consider applying a less expensive manufacturing process.
If a material property turns out to be insignificant, then this is not typically a good way to save
money, because you are usually not in control of individual material properties. However, the loads
or boundary conditions can be a potential for saving money, but in which sense this can be exploited
is highly problem-dependent.
SSA Theory
The purpose of a SSA is to gain an understanding of the effects of uncertainties associated with the
input parameter of your design. This goal is achieved using a variety of statistical measures and
postprocessing tools.
Statistical Postprocessing
Convention: Set of data {x1, x2, ..., xn} containing n observations.
1. Mean Value
Mean is a measure of average for a set of observations. The mean of a set of observations is
defined as follows:
(70)
2. Standard Deviation
Standard deviation is a measure of dispersion from the mean for a set of observations. The
standard deviation of a set of observations is defined as follows:
(71)
3. Sigma Level
Sigma level is calculated as the inverse cumulative distribution function of a standard Gaussian
distribution at a given percentile. Sigma level is used in conjunction with standard deviation to
measure data dispersion from the mean. For example, for a pair consisting of a quantile value and a sigma level, the quantile value is about that many standard deviations away from the sample mean.
4. Skewness
Skewness is a measure of degree of asymmetry around the mean for a set of observations. The
observations are symmetric if distribution of the observations looks the same to the left and right
of the mean. Negative skewness indicates the distribution of the observations being left-skewed.
Positive skewness indicates the distribution of the observations being right-skewed. The skewness
of a set of observations is defined as follows:
(72)
5. Kurtosis
Kurtosis is a measure of the relative peakedness or flatness of the distribution of a set of observations compared to a normal distribution. A positive kurtosis indicates a relatively peaked distribution of the observations. As such, the kurtosis of a set of
observations is defined with calibration to the normal distribution as follows:
(73)
6. Shannon Entropy
The probability mass is used in a normalized fashion, such that not only the shape but also the range of variability of the distribution is accounted for. This is shown in Equation 75 (p. 382), which uses the relative frequency of the parameter falling into a certain interval together with the width of that interval. As a result, Shannon entropy can have a negative value. Following are some comparisons of the Shannon entropy, where S2 is smaller than S1.
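For illustration, the following sketch computes the statistical measures listed above for a set of sampled values using NumPy and SciPy (assumed third-party libraries); the exact bias corrections used by DesignXplorer may differ from these library defaults.

# Sketch (NumPy/SciPy assumed): basic statistical measures for a set of samples.
import numpy as np
from scipy import stats

x = np.random.default_rng(2).lognormal(mean=0.0, sigma=0.3, size=2000)  # synthetic samples

print("mean       = %.4f" % np.mean(x))
print("std. dev.  = %.4f" % np.std(x, ddof=1))
print("skewness   = %.4f" % stats.skew(x))
print("kurtosis   = %.4f" % stats.kurtosis(x))        # calibrated to 0 for a normal distribution
print("sigma level of the 99th percentile = %.4f" % stats.norm.ppf(0.99))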
Three signal-to-noise ratios are provided in the statistics of each output in your SSA. These ratios
are as follows:
• Nominal is Best
• Smaller is Better
• Larger is Better
Signal-to-noise (S/N) ratios are measures used to optimize control parameters to achieve a robust design. These measures were first proposed by Dr. Genichi Taguchi of the Nippon Telegraph and Telephone Company, Japan, to reduce design noise in manufacturing processes. Such design noise is normally expressed in statistical terms such as mean and standard deviation (or variance). In
computer aided engineering (CAE), these ratios have been widely used to achieve a robust design
in computer simulations. For a design to be robust, the simulations are carried out with an objective
to minimize the variances. Minimizing the variance of designs/simulations can be done with or without targeting a certain mean. In design exploration, minimum variance targeted at a certain
mean (called Nominal is Best) is provided, and is given as follows:
(76)
Nominal is Best is a measure used for characterizing design parameters such as model dimension
in a tolerance design, in which a specific dimension is required, with an acceptable standard de-
viation.
In some designs, however, the objectives are to seek a minimum or a maximum possible at the
price of any variance.
• For the cases of minimum possible (Smaller is Better), which is a measure used for characterizing output
parameters such as model deformation, the S/N is expressed as follows:
(77)
• For the cases of maximum possible (Larger is Better), which is a measure used for characterizing output
parameters such as material yield, the S/N is formulated as follows:
(78)
These three S/N ratios are mutually exclusive. Only one of the ratios can be optimized for any
given parameter. For a better design, these ratios should be maximized.
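For reference, a sketch of the classical Taguchi forms of these three ratios follows (NumPy assumed); DesignXplorer's exact expressions in Equation 76 through Equation 78 above are not reproduced here and may include additional terms.

# Sketch (NumPy assumed): the classical Taguchi signal-to-noise ratios.
import numpy as np

y = np.array([9.8, 10.1, 10.05, 9.95, 10.2])   # hypothetical output samples

mean, var = y.mean(), y.var(ddof=1)
sn_nominal = 10 * np.log10(mean**2 / var)             # Nominal is Best
sn_smaller = -10 * np.log10(np.mean(y**2))            # Smaller is Better
sn_larger  = -10 * np.log10(np.mean(1.0 / y**2))      # Larger is Better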
(80)
Note:
The minimum and maximum values strongly depend on the number of samples. If you
generate a new sample set with more samples, then chances are that the minimum
value is lower in the larger sample set. Likewise, the maximum value of the larger
sample set is most likely higher than for the original sample set. Hence, the minimum
and maximum values should not be interpreted as absolute physical bounds.
The sensitivity charts displayed under Six Sigma Analysis are global sensitivities based on statistical measures. For more information, see Single Parameter Sensitivities (p. 142). A global sensitivity takes two aspects into account:
• The amount by which the output parameter varies across the variation range of an input parameter.
• The variation range of an input parameter. Typically, the wider the variation range, the larger the effect
of the input parameter.
The statistical sensitivities displayed under Six Sigma Analysis are based on the Spearman-Rank
Order Correlation coefficients that take both those aspects into account at the same time.
Basing sensitivities on correlation coefficients follows the concept that the more strongly an output
parameter is correlated with a particular input parameter, the more sensitive it is with respect to
changes of that input parameter.
(81)
where:
Because the sample size is finite, the correlation coefficient is a random variable. Hence, the correlation coefficient between two random variables usually yields a small but nonzero value, even if the two variables are not correlated at all in reality. In this case, the correlation coefficient
would be insignificant. Therefore, you need to find out whether a correlation coefficient is significant or not. To determine the significance of the correlation coefficient, assume the hypothesis that the correlation between the two variables is not significant at all, meaning that they are not correlated (null hypothesis). In this case the variable:
(82)
is approximately distributed like the Student's t-distribution with n − 2 degrees of freedom, where n is the number of samples. The cumulative distribution function of the Student's t-distribution is:
(83)
where:
There is no closed-form solution available for Equation 83 (p. 385). See Abramowitz and Stegun
(Pocketbook of Mathematical Functions, abridged version of the Handbook of Mathematical Functions,
Harry Deutsch, 1984) for more details.
The larger the correlation coefficient, the less likely it is that the null hypothesis is true. Also, the larger the correlation coefficient, the larger the value of t from Equation 82 (p. 385), and consequently the larger the associated cumulative probability. The probability that the null hypothesis is true follows from this cumulative probability. If that probability exceeds a certain significance level, for example 2.5%, you can assume that the null hypothesis is true. However, if the probability is below the significance level, then it can be assumed that the null hypothesis is not true and that consequently the correlation coefficient is significant. This limit can be changed in Design Exploration Options (p. 22).
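A minimal sketch of this correlation-and-significance check follows, using SciPy's Spearman rank-order correlation (not part of DesignXplorer) on synthetic input/output samples:

# Sketch (SciPy assumed): Spearman rank-order correlation between an input and an
# output sample set, with the p-value used to judge statistical significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x_in  = rng.uniform(0.0, 1.0, 200)                     # sampled input parameter
y_out = 3.0 * x_in**2 + rng.normal(0.0, 0.2, 200)      # sampled output parameter

rho, p_value = stats.spearmanr(x_in, y_out)
significance_level = 0.025
print("rho = %.3f, p-value = %.3g" % (rho, p_value))
print("significant:", p_value < significance_level)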
The cumulative distribution function of sampled data is also called the empirical distribution
function. To determine the cumulative distribution function of sampled data, you need to order
the sample values in ascending order. Let a given sampled value of the random variable have a rank of i, meaning that it is the i-th smallest of all sampled values. The cumulative distribution function value that corresponds to this sample is the probability that the random variable takes values below or equal to it. Because you have only a limited number of samples, the estimate for this probability is itself a random variable. According to Kececioglu (Reliability Engineering Handbook, Vol. 1, 1991, Prentice-Hall, Inc.), the cumulative distribution function associated with the i-th ranked sample is:
(84)
The cumulative distribution function of sampled data can only be given at the individual sampled
values using Equation 84 (p. 385). Hence, the evaluation of the probability
that the random variable is less than or equal to an arbitrary value x requires an interpolation
between the available data points.
If, for example, x lies between two adjacent sampled values, the probability that the random variable is less than or equal to x is:
(85)
The cumulative distribution function of sampled data can only be given at the individual sampled
values using Equation 84 (p. 385). Hence, the evaluation of the inverse cu-
mulative distribution function for any arbitrary probability value requires an interpolation between
the available data points.
The evaluation of the inverse of the empirical distribution function is most important in the tails
of the distribution, where the slope of the empirical distribution function is flat. In this case, a
direct interpolation between the points of the empirical distribution function similar to Equa-
tion 85 (p. 386) can lead to inaccurate results. Therefore, the inverse standard normal distribution
function is applied for all probabilities involved in the interpolation. If a requested probability, for which you are looking for the inverse cumulative distribution function value, lies between the cumulative probabilities associated with two adjacent sampled values, the inverse cumulative distribution function value can be calculated using:
(86)
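The following sketch illustrates the idea with NumPy and SciPy (not part of DesignXplorer). It uses the common rank-based probability i/(n + 1) as a stand-in; the exact formula of Equation 84 (p. 385) is not reproduced here.

# Sketch (NumPy/SciPy assumed): an empirical distribution function built from ranked
# samples, with an inverse evaluation that interpolates in the standard-normal-
# transformed probability space, as described above.
import numpy as np
from scipy.stats import norm

samples = np.sort(np.random.default_rng(4).normal(100.0, 5.0, size=50))
n = samples.size
F = np.arange(1, n + 1) / (n + 1.0)      # empirical CDF values at the sorted samples

def inverse_cdf(p):
    """Value x such that the empirical probability of staying below x is p."""
    z = norm.ppf(p)
    return np.interp(z, norm.ppf(F), samples)   # interpolate in the transformed space

print(inverse_cdf(0.95))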
Theory References
This section provides external references for further reading and study on the theory underpinning
DesignXplorer's design exploration capabilities:
Genetic Aggregation Response Surface References
MISQP Optimization Algorithm References
Reference Books
Reference Articles
Note:
Acar, E.,
Structural and Multidisciplinary Optimization, Vol. 42, No. 6, 2010, pp. 879-896.
GA reference 2
"Multiple Surrogates: How Cross-Validation Errors Can Help Us to Obtain the Best
Predictor,"
Structural and Multidisciplinary Optimization, Vol. 39, No. 4, 2009, pp. 439-457.
MISQP reference 1
"A comparative study of numerical algorithms for nonlinear and nonconvex mixed-
integer optimization,"
MISQP reference 2
MISQP: A Fortran Subroutine of a Trust Region SQP Algorithm for Mixed-Integer Nonlinear
Programming - User's Guide
MISQP reference 3
O. Exler, K. Schittkowski,
Reference Books
A list follows of reference books, ranging from primers to books for advanced users.
Probabilistic Structural Mechanics Handbook – Theory and Industrial Applications Edited by Raj
Sundararajan
This book provides a very good introduction into probabilistic analysis and design of all kinds of indus-
trial applications. The coverage is quite balanced and includes the theory of probabilistic methods,
structural reliability methods, and industrial application examples. For more information on probabilistic
methods (sampling-based), see "Simulation and the Monte Carlo Method" (p. 387) by Reuven Rubinstein.
For more information on structural reliability methods (such as FORM/SORM), see Probability, Reliability,
and Statistical Methods in Engineering Design (p. 388) by Achintya Haldar and Sankaran Mahadevan.
Probability, Reliability, and Statistical Methods in Engineering Design by Achintya Haldar and Sankaran
Mahadevan
This book presents a good introduction on risk assessments and structural reliability designs. It clearly
addresses the methods widely used to conduct risk-based and reliability-based designs, such as
FORM/SORM.
Applied Linear Statistical Models by John Neter, Michael Kutner, William Wasserman and Christopher
Nachtsheim
This book is basically the backbone of the second-order polynomial response surface in DesignXplorer. It
comprehensively covers regression analysis (stepwise regression in particular), input-output transform-
ation for better regression, and statistical goodness-of-fit measures. It also covers sampling schemes for
DOEs. For more on DOEs, CCD, and Box-Behnken design, see Design and Analysis of Experiment by
Douglas Montgomery. CCD and Box-Behnken design are traditional DOE sampling schemes. They differ
from Design and Analysis of Computer Experiments (DACE), which is used for an interpolation-based
response surface, such as Kriging.
Design and Analysis of Computer Experiments by Thomas Santner, Brian Williams and William Notz
This book describes how to create a space filling design for an interpolation-based response surface.
This includes non-parametric regression and Kriging/Radial Basis, which are supported by DesignXplorer,
and Pursuit Projection Regression, which is not supported by DesignXplorer.
Reference Articles
A list follows of reference articles on design exploration topics such as probability, reliability, and
statistics.
Articles reference 1
Probability Concepts in Engineering Planning and Design; Volume II: Decision, Risk, and
Reliability
Articles reference 2
Ayyub, B. M. (editor)
Articles reference 3
Articles reference 4
1997.
Articles reference 5
SIAM, 1975.
Articles reference 6
Barnes, J.W.
Articles reference 7
Beasley, M.
Articles reference 8
Articles reference 9
Billinton, R.
Articles reference 10
Calabro, S. R.
Articles reference 11
Casciati, F.
Articles reference 12
Catuneanu, V. M.
Reliability Fundamentals
Articles reference 13
Chorafas, D. N.
Articles reference 14
Cox, S. J.
Butterworth-Heinemann, 1991.
Articles reference 15
Dai, S-H.
Articles reference 16
Ditlevson, O.
Articles reference 17
Gumbel, E.
Statistics of Extremes
Articles reference 18
Articles reference 19
Articles reference 20
Harr, M. E.
Articles reference 21
Hart, Gary C.
Articles reference 22
Henley, E. J.
Articles reference 23
Articles reference 24
Kapur, K. C.
Articles reference 25
Leemis, L. M.
Articles reference 26
Leitch, R. D.
Articles reference 27
Litle, W. A.
Articles reference 28
1982.
Articles reference 29
Lucia, A. C.
Articles reference 30
Articles reference 31
Madsen, H. O.
Articles reference 32
Articles reference 33
Marek, P.
Articles reference 34
McCormick, N. J.
Articles reference 35
Melchers, R. E.
Articles reference 36
Misra, K. B.
Articles reference 37
Modarres, M.
What Every Engineer Should Know about Reliability and Risk Analysis
Articles reference 38
ASCE, 1986.
Articles reference 39
Reliability of Structures
Articles reference 40
Papoulis, A.
Articles reference 41
Pugsley, A. G.
Articles reference 42
Rackwitz, R.
Articles reference 43
Rao, S. S.
Reliability-Based Design
Articles reference 44
Rubinstein, R. Y.
Articles reference 45
Schlaifer, R.
Articles reference 46
Schneider, J.
Articles reference 47
Shooman, M. L.
Articles reference 48
Sinha, S. K.
Articles reference 49
Smith, D. J.
Articles reference 50
Spencer, B. F.
Articles reference 51
Thoft-Christensen, P.
Articles reference 52
Thoft-Christensen, P.
Articles reference 53
Thoft-Christensen, P.
Articles reference 54
Articles reference 55
Tichy, M.
Wittmann, F. H.
Troubleshooting
An ANSYS DesignXplorer Update operation returns an error message saying that one or several design
points have failed to update.
Click Show Details in the message dialog box to see the list of design points that failed to update and the
associated error messages. Each failed design point is automatically preserved for you at the project level,
so you can return to the project and edit the Parameter Set bar to see the failed design points and invest-
igate the reason for the failures.
This error means that the project failed to update correctly for the parameter values defined in the
listed design points. There are a variety of failure reasons, from a lack of a license to a CAD model
generation failure. If you try to update the system again, the update attempts to update only the failed design points, and if the update is successful, the operation completes. If a design point fails persistently, investigate by copying its values to the current design point, attempting a project update, and then editing the cells in the project that are not correctly updated.
When you open a project, one or more DesignXplorer cells are marked with a black X icon in the Project
Schematic.
The black X icon next to a DesignXplorer cell indicates a disabled status. This status is given to DesignXplorer
cells that fail to load properly because the project has been corrupted by the absence of necessary files
(most often the parameters.dxdb and parameters.params files). The Messages pane displays a
warning message indicating the reason for the failure and lists each of the disabled cells.
When a DesignXplorer cell is disabled, you cannot update, edit, or preview it. Try one of the following
methods to address the issue.
Method 1: (recommended) Navigate to the Files pane, locate the missing files if possible, and copy
them into the correct project folder. The parameters.dxdb and parameters.params files are
normally located in the subdirectory <project name>_files\dpall\global\DX. With the
necessary files restored, the disabled cells should load successfully.
Method 2: Delete all DesignXplorer systems containing disabled cells. Once the project is free of
corrupted cells, you can save the project and set up new DesignXplorer systems.
To open or view a different Excel workbook (other than the ones added to the project):
1. Open a new, separate instance of Excel.
2. From this new instance, select File → Open and then select the workbook.
Note:
You can still view the Excel workbooks added to the project by selecting Open File in
Excel from the right-click context menu.
To recover from this situation, you can abandon the pending updates by selecting Tools → Abandon
Pending Updates.
To verify derived parameter definition, right-click the Parameter Set bar and select Edit to open
the Parameter Set tab. In the Outline pane, derived parameters that are incorrectly defined have
a red cell containing an error message in the Value column. In the Properties pane for such a derived
parameter, the same error message displays in the Expression property cell.
Based on the information in the error message, correct derived parameter definitions as follows:
1. In the Outline pane, select the derived parameter that is incorrectly defined.
2. In the Parameters pane, enter a valid expression for the Expression property.
3. Refresh the project or the unfulfilled DesignXplorer cell to apply the changes.