An Application of Selected Artificial Intelligence Techniques To Engineering Analysis
Doctor of Philosophy
in
The Department of Civil Engineering
The University of British Columbia
April, 1989
© Bruce W.R. Forde
In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the
University of British Columbia, I agree that the Library shall make it freely available for reference
and study. I further agree that permission for extensive copying of this thesis for scholarly
purposes may be granted by the Head of the Department of Civil Engineering or by his or her
representatives. It is understood that copying or publication of this thesis for financial gain shall
not be allowed without my written permission.
April, 1989
Abstract
This thesis explores the application of some of the more practical artificial intelligence (AI)
techniques developed to date in the field of engineering analysis. The limitations of conventional
computer-aided analysis programs provide the motivation for knowledge automation and
development of a hybrid approach for constructing and controlling engineering analysis software.
The kinds of knowledge used in the analysis process, the programs that control this
knowledge, and the resources that perform numerical computation are described as part of a hybrid
system for engineering analysis. Modelling, solution, and interpretation activities are examined for problems in solid and structural mechanics. An intelligent finite element analysis program called "SNAP" is developed to demonstrate the proposed hybrid approach. A step-by-step discussion is given for the design, implementation, and operation of the SNAP software to provide a clear understanding of the principles involved.
The general conclusion of this thesis is that a variety of artificial intelligence techniques can
be used to significantly improve the engineering analysis process, and that much research is still to
be done. A series of projects suitable for completion by graduate students in the field of structural engineering is suggested for further research.
Contents
Abstract ii
Contents iii
Figures v
Tables vi
Acknowledgements vii
1. Introduction 1
1.1. Problem 1
1.1.1. Specialization 1
1.1.2. Reliability 4
1.1.3. Knowledge-Transfer 5
1.1.4. Responsibility 6
1.2. Solution 7
1.2.1. Conventional Systems 7
1.2.2. Intelligent Consultants/Interfaces 8
1.2.3. Hybrid Systems 9
2.2. Reasoning 16
2.2.1. Logic and Theorem-Proving 16
2.2.2. Inference 19
2.2.3. Search and Problem-Solving 20
2.3. Representation 22
2.3.1. Rules 22
2.3.2. Frames 24
2.3.3. Scripts 26
2.7. Applications of AI Techniques in Engineering Analysis 45
2.7.1. Object-Oriented Programming 45
2.7.2. Generic Application Frameworks 45
2.7.3. Event-Driven Architectures 46
2.7.4. Knowledge-Based Expert Systems 46
3. Engineering Analysis 47
3.1. Knowledge 47
3.1.1. Modelling 48
3.1.2. Solution 49
3.1.3. Interpretation 51
3.2. Controllers 53
3.2.1. Controller Interface 54
3.2.2. Event-Driven Operation 55
3.2.3. Example 56
4.2. Implementation 71
4.2.1. Graphical Interface 71
4.2.2. External Procedures 79
4.2.3. Object Lists 81
4.2.4. Analytical Modelling and Interpretation 84
4.2.5. Numerical Modelling, Solution, and Interpretation 88
4.2.6. Program Control 94
References 118
Appendix A : The Finite Element Formulation 128
Appendix B: Isoparametric Finite Elements 132
Figures
1. Engineering Analysis Environment 3
2. Engineering Analysis Process 7
3. Existing Analysis Solutions 8
4. Proposed Analysis Solution 10
5. Truth Tables for the Basic Connectives 17
6. The Vocabulary of Logic 18
7. Forward-Chaining 20
8. Backward-Chaining 21
9. Rule Structure 23
10. A Typical Knowledge Framework 25
11. A Typical Knowledge Script 26
12. Graphical Representation of an Object 30
13. Graphical Representation of a Class 30
14. A Typical Object 33
15. A Typical Class 34
16. An Expandable Application Framework 35
17. KBES Classification within AI Technology 36
18. KBES Components 37
19. KBES used by an Event-Driven Analysis Program 39
20. The Evolution of Programming Paradigms 41
21. The NExpert Object Open AI Environment 44
22. Engineering Analysis Scenario 47
23. Modelling 48
24. Analytical Solution 50
25. Numerical Solution 50
26. Interpretation 51
27. A Typical Controller 53
28. TController Class Description 54
29. Event-Driven Operation 55
30. Modelling Controller 56
31. A Typical Intelligent Resource 58
32. TResource Class Description 59
33. Goal-Driven Operation 60
34. Solution Controller Resources 61
35. SNAP User-Interface 65
36. SNAP Abstraction Levels 66
37. SNAP Control Structure 68
38. SNAP Event Management 69
39. Window User-Interface 72
40. TWindow Class Description 73
41. Dialog User-Interface 74
42. TDialog Class Description 75
43. TDwgSizeDialog Class Description 76
44. Menu User-Interface 76
45. TApplication Class Description 77
46. Application Event-Driven Architecture 78
47. TVector Class Description 79
48. TMatrix Class Description 79
49. Banded Matrix Storage 80
50. TBandedMatrix Class Description 81
51. TLink Class Description 81
52. TList Class Description 82
53. TModelPart Class Hierarchy 85
54. TShape Class Hierarchy 86
55. TModelWindow Class Description 87
56. Node List Management 89
57. Shape Function Class Hierarchy 90
58. TGauss Class Hierarchy 91
59. TElement Class Hierarchy 92
60. TMesh Class Description 93
61. TNumericalModel Class Description 93
62. Descending Ownership Links 94
63. Ascending Ownership Links 95
64. Numerical Analysis Task Performance 96
65. Modelling Task Event Loop 97
66. Solution Task Event Loop 98
67. Interpretation Task Event Loop 98
68. TInferenceEngine Class Description 99
69. TRule Class Description 99
70. TStatement Class Description 100
71. TAction Class Description 100
72. Forward-Chaining Algorithm 101
73. Backward-Chaining Algorithm 101
74. SNAP User-Interface 102
75. Grid Spacing Dialog 103
76. Axes Dialog 103
77. Boundary Object 104
78. Thickness Dialog 105
79. Material Dialog 105
80. Boundary and Solid Objects 106
81. Concentrated Load Dialog 106
82. Boundary, Solid, and Load Objects 107
83. Displacements Dialog 108
84. Stresses Dialog 108
85. Stress Display 109
86. Errors Dialog 109
87. Error Display 110
Tables
1. Control Paradigms 42
2. Data Structures 42
3. TLink Class Methods 82
4. TList Class Methods 83
5. Rules 111
Acknowledgements
The research described in this thesis was made possible by scholarships from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Deutscher Akademischer Austauschdienst (DAAD), and the Eidgenössische Technische Hochschule Zürich (ETH). Additional funding was provided by Dr.-Ing. Siegfried F. Stiemer during the final stages of the degree.
Many people influenced the direction of this research during my studies at the University of
British Columbia and the Universität Stuttgart. In particular, Dr.-Ing. Siegfried F. Stiemer,
Dr. Alan D. Russell, Dr. Ricardo O. Foschi, Dr. Noel D. Nathan, Dr. F. Sassani, Mr. David J.
Halliday, P.Eng., and Mr. Harry K. Ng, P.Eng. provided important ideas that shaped the topic.
David Fayegh, Ibrahim Al-Hammad, and Tony Martin provided additional constructive criticism
and practical perspectives that helped to improve the final thesis. Finally, I must thank my friends
and family for bearing with me during the completion of this thesis.
1. Introduction
The proposed solution is a hybrid approach that combines object-oriented programming, knowledge-based expert systems, and goal-driven event-generation. This chapter describes the problems with the existing methods for engineering analysis, the potential knowledge-based solutions, and the objectives and organization of this thesis.
1.1. Problem
The collage of interrelated complex activities shown in Fig. 1 displays an engineering analysis
environment focused on the research, development, and application of a central tool: the
engineering analysis program. Although conventional analysis software plays this key role, it is
notoriously complicated and unreliable. Improvements desired by the engineering profession must
start with an examination of the knowledge used by, and transferred between, the research, development, and application environments.
1.1.1. Specialization
Modern engineering environments are designed to exploit expertise. Common sense supports this
delegation philosophy — tasks can usually be divided into specialized components. Specialization
and knowledge have an odd relationship in the scientific community. Consider two individuals:
one who performs specialized research, and another who wants to apply state-of-the-art techniques
in practice. If the specialist gains knowledge during the course of research, then the practitioner
has lost an equal amount of knowledge from the state-of-the-art. Recovery from this loss is only
possible if the practitioner becomes a specialist. Generalization of this scenario leads to two
conclusions: a great deal of specialized research is never used in practical applications due to time
and human factors; and the state-of-the-art is really determined by the current technology for
knowledge integration.
Some of the specialized environments considered in this thesis include: academic education
and research groups, software and hardware divisions of the computer industry, and professional engineering practice. The time, financial, and other constraints found in each of these environments produce a variety of unique, highly
specialized programs.
Engineering education usually involves the construction of simple analysis tools to teach
fundamental principles. These programs concentrate on specific analysis problems that relate to the
educational curriculum or to the current research interests of the instructor. Little attention is paid
to implementation details due to the time and financial constraints of the academic environment.
General-purpose programs have been developed in exceptional cases; however, academic research is usually limited by the time and funding constraints of a degree programme. Specialized programs are rarely used outside of their original development environment, although the published theoretical foundations of these programs are often adopted elsewhere.
Professional research and development institutions have the resources to pursue more
complex problems than those found in the academic community, so they should produce higher quality analysis software.
Fig. 1. Engineering Analysis Environment [education, research, design, analysis, and implementation activities surrounding the central engineering analysis program].
Analysts and designers develop expertise while using analysis tools. Although their experience is rarely documented formally, user forums and training courses deal extensively with case studies that illustrate the highlights
and pitfalls of application software. Some of this expertise is transmitted back to the program
developers, at which point it is converted into new features, simplified interfaces, and enhanced
documentation.
Technological changes in the computer industry influence the kinds of solutions employed in
engineering analysis. Numerical procedures that would have strained the capacity of mainframe
computers only a few years ago are now being performed on personal computer workstations.
Theoretical methods previously abandoned due to their complexity have suddenly become practical. Nevertheless, the specialized knowledge developed in the research, development, and professional practice environments is not effectively utilized. The key word in this context is "specialized" — knowledge associated with the complete analysis process is simply beyond the grasp of any single individual.
1.1.2. Reliability
Although engineering analysis has become an essential part of many industries, it is generally
acknowledged that the majority of existing computer software for analysis is complicated and
error-prone. Recent desire to improve the reliability of methods for engineering analysis has
prompted a dual effort, which is aimed at both analysts and software [53]. This approach is
somewhat shortsighted, as it fails to recognize that mandatory extensive training programmes for
all analysts are impractical, and that software applications are already subjected to extensive testing.
The quality of solutions obtained by analysts is best when the problem domain is within a
previously encountered range. Without sufficient background knowledge for a particular problem,
analysts have difficulty in making predictions about the nature of the results. In a situation where
there are no expectations, there can be no comparison of characteristics, trends, and properties as is
required to gain confidence in the solution. Analysts rightfully tend to be conservative, so in cases
where they have little experience they will often choose a well-known method to avoid the unknown.
The cause of reliability problems can be related to the lack of knowledge transferred from the
research source to the end-user during the creation of an analysis program. Typical programs
evolve in a cycle of: research, development, and application. Complete software programs are
built around the results and theories of several research projects. These programs grow with time
to include new theories and to handle more complex problems. Consider the implications of a new
highly specialized theory. If the program developers hear about this theory, and they consider it to
be relevant, then time and money will be allocated to the implementation of application software.
By the time the end-user sees the product, many errors in theory and implementation exist. Since
the end-user is neither the specialist who devised the theory nor the development expert who
implemented the software, use of this product is a potential source of errors. Clearly, the cause of this failure cannot be attributed solely to poor software design or inadequate training of the analyst;
rather it stems from a basic failure in communication starting at the research source.
1.1.3. Knowledge-Transfer
Knowledge is usually passed from research and development environments to the analyst via
documentation. If the theory and implementation of analysis procedures are clearly described, and
if the analyst has time to read about them, then perhaps these procedures will be of use in a
practical application. However, most analysts do not have time to continually upgrade their
education, so technological advancements are not at their disposal. A better strategy provides
knowledge to the analyst when it is needed in the form of expert consultation. Unfortunately, this
approach is often too slow or too costly for extensive application, as even experts find it difficult to keep pace with the expanding body of analysis knowledge.
The knowledge-transfer process could be automated using artificial intelligence (AI). When
McCarthy coined the name "artificial intelligence" in the late 1950's [12], overly optimistic
researchers predicted the overnight development of machines with human intellectual capacities.
Three decades have already passed without success, so a new generation of software companies
has started looking in more practical directions. Techniques originally developed under the
heading of AI are filtering into the mainstream computer industry where emphasis is placed on results rather than on technology. Exploiting the practical aspects of AI to improve the engineering analysis process is the subject of this thesis.
1.1.4. Responsibility
Computer-aided engineering analysis is a relatively new technology. Not long ago, analysis
referred to hand calculations performed by an engineer. The invention of the finite element method
in the late 1950's added a new dimension to engineering analysis. Over the next two decades,
analysis procedures matured with the aid of database management, graphics, and other computing
tools. Application software integrated modelling and computation to provide powerful analysis
and design aids; however, the knowledge required to properly apply these tools quickly grew
beyond the expertise of the average analyst. By the end of the 1970's, it was clear that analysis
technology had reached an impasse, as expertise cannot be demanded from all analysts in a world of ever-increasing specialization.
Automation of the analysis process has not been achieved to date for one reason — the
responsibility for computerized tasks still belongs to the analyst. Even expert analysts are unable
to cope with the vast knowledge associated with analysis theory and implementation, so it is
unreasonable to expect an average engineer to reliably interpret analysis results unless provided
with some help. User-interaction must be raised to a higher level of abstraction and the computer
must assume responsibility for all low-level numerical and logical computations.
1.2. Solution
Engineering analysis may be defined as the examination of a physical structure to determine general
characteristics of its behaviour. This process is usually separated into modelling, solution, and
interpretation activities that are performed on the physical, analytical, and numerical levels of
abstraction shown in the conceptual model of Fig. 2. A clear understanding of the problems with
conventional analysis and the potential solutions offered by artificial intelligence techniques may be
obtained by comparing the user/computer interface for past, present, and future analysis solutions.
Fig. 2. Engineering Analysis Process [modelling, solution, and interpretation activities linking the physical, analytical, and numerical levels of abstraction].
1.2.1. Conventional Systems
The majority of existing engineering analysis programs directly interact with the user, as shown in
Fig. 3a. This kind of program shifts the burden of numerical computation to the machine, and
allows the analyst to work creatively at the analytical abstraction level, concentrating on the behaviour of the structure. Unfortunately, a complete algorithm for modelling, solution, and interpretation does not exist. In place of an algorithm, expert analysts employ heuristics to
guide them through analysis problems. These heuristics relate to all aspects associated with the
theory and implementation of the analysis process. Given considerable experience with a specific
analysis program, an analyst will likely find its solutions to be reliable. However, an
inexperienced analyst, lacking the appropriate heuristic knowledge, will often find the same
analysis program to be extremely unreliable. Thus, the reliability of an analysis program appears
to be related to the knowledge of the analyst as well as to the robustness of the program.
Fig. 3. Existing Analysis Solutions: (a) Conventional Programs — the analyst operates the analysis program directly; (b) Intelligent Consultants — the analyst is aided by a consultant in operating the analysis program; (c) Intelligent Interfaces — the intelligent program operates the analysis program for the analyst.
1.2.2. Intelligent Consultants/Interfaces
The reliability of an analysis program can be improved by providing analysts with an expert consultation facility. Experts have heuristic knowledge that relates to numerical modelling and interpretation activities. Several artificial intelligence programs such as SACON [8] have been
developed to act as consultants. Although expert consultation can help an inexperienced analyst
use a sophisticated analysis program, the analyst remains fully responsible for all conventional
analysis tasks as shown in Fig. 3b. This implies that the consultant approach offers only a
temporary solution to existing problems, as analysis programs will surely become more complex to operate in the future.
A more recent approach to this problem involves the use of intelligent interface programs as
shown in Fig. 3c. In contrast to consultant systems, where the engineer communicates directly
with the analysis program, interface systems are intended to act as intermediaries that shield the
engineer from the complexity of the analysis program [96]. This approach permits the engineer to
work on a higher level of abstraction, and to let the AI program deal with the tedious details
associated with numerical modelling and interpretation. Although expert systems can provide quite
reliable solutions by heuristic means, artificial intelligence technology is still in its infancy and
engineers cannot yet solely rely on such programs. This means that the determination of whether
the intelligent interface has properly modelled and interpreted the analysis problem still lies with the engineer.
1.2.3. Hybrid Systems
The primary flaw in the existing systems shown in Fig. 3 is that analysis programs are complicated
and unreliable. They are too overwhelming for direct use (even with expert consultation), and they
are too unreliable for heuristic operation by machines. Analysis programs can be divided into
many logical components. This division process need only continue until all components are made
small enough that their purpose and function can be uniquely defined. Once this has been
completed, all facts are contained in a single cognitive space and a framework of knowledge-based
expert systems such as that shown in Fig. 4 can be used to obtain the best available solution for a
given problem. The most significant difference between this proposed system and the existing
systems is that in the new approach the computer has assumed responsibility for all low-level numerical and logical computations.
Transferring the responsibility for program operation to the computer implies a potentially
higher degree of reliability from analysis results due to the limitations of the human mind.
However, this is only true if the computer and human use the same data and the same reasoning process. Within the limits of current technology, it is quite likely that the computer will be more reliable than the human. This means that it is essential to separate the analysis process into problems that can be represented reliably with the available techniques.
At the lowest levels of the hybrid solution exist intelligently interfaced components of the
analysis problem, called resources. These components are self-contained units that have
limited access to the rest of the system, and they are singularly responsible for the management of
their own data and procedures. Object-oriented programming techniques described later in this
thesis provide exactly this paradigm — domain-specific data and procedures may be encapsulated within objects.
At one level higher than any analysis component exists an intelligent controller program that
is responsible for the selection and application of a group of lower level components or controllers.
Knowledge-based expert system technology described later in this thesis provides the goal- and event-driven control required for this task.
Fig. 4. Proposed Analysis Solution [a hierarchy of KBES controllers directing lower-level controllers and intelligent analysis resources].
1.3. Objectives and Organization
This thesis focuses on the application of a few practical techniques from the field of artificial
intelligence to the methods used for engineering analysis. The problems examined include the specialization, reliability, knowledge-transfer, and responsibility issues described above.
The second chapter of this thesis describes current artificial intelligence technology, and identifies
techniques that may be appropriate for engineering analysis. Traditional fields of AI are examined
along with the more practical tools that have emerged during the last decade.
Fundamental principles of logic and theorem-proving are given as a prelude to the main techniques for search, inference, and general problem-solving found in typical artificial intelligence applications. The basics of scientific reasoning are strengthened by a discussion of methods for knowledge representation. Almost all existing applications of AI in engineering belong to a group of elementary products called knowledge-based expert systems (KBES) — programs that mimic the decision-making processes of an expert in a particular domain. The classification and components of these systems are presented with reference to a proposed hybrid system for engineering analysis. The development tools examined include: conventional AI languages (Lisp, Prolog, Smalltalk), hybrid languages (Object Pascal, C++) and higher-level approaches (NExpert Object). Emphasis is placed on the hybrid environment (Object Pascal and C) used later in this thesis for the development of an intelligent finite element analysis program.
1.3.2. Hybrid Systems for Engineering Analysis
The third chapter of this thesis outlines a proposed technique for the automation of the engineering
analysis process. A general (but realistic) understanding is needed for the kinds of knowledge, the
required KBES technology, and the potential interface with intelligent resources in the engineering
analysis domain. This is achieved by using a detailed scenario to illustrate the potential operation of such a hybrid system.
Controllers and intelligent resources shown in Fig. 4 are described for a generic engineering
analysis problem. The data, inference, and control strategies used by KBES controllers are
explained using an example for the selection and application of a group of lower level components
or controllers. A description of the protocol for low-level resources is given for subsequent implementation in chapter 4.
The fourth chapter of this thesis deals with the development of an intelligent finite element analysis
program called "SNAP" [30]. SNAP represents the third generation in a series of programs that
use isoparametric elements for the numerical analysis of two-dimensional linear problems in solid
and structural mechanics [24, 25]. The roots of SNAP can be traced back to a conventional
analysis program called "NAP". A second program called "Object NAP" was developed to
demonstrate the advantages of the object-oriented approach. The third and final program applied
several AI techniques that prompted the name "Smart" NAP — SNAP. The primary features of
SNAP are: object-oriented design within the GenApp event-driven generic application framework;
hybrid control for modelling, solution, and interpretation; and goal-driven event-generation using a simple knowledge-based expert system. A step-by-step discussion is given for the design, implementation, and operation of the SNAP software to provide a clear understanding of the main principles involved.
2. Artificial Intelligence Techniques
Research in the field of artificial intelligence (AI) is currently focussed on the practical
aspects of making computers more useful and on understanding the principles that
make intelligence possible [100]. Computing techniques developed as part of this trend
can potentially be used to improve the reliability and efficiency of engineering analysis
software. Unfortunately, the majority of AI tools exist solely on the drawing boards of
imaginative developers. This chapter surveys some of the most popular approaches in
AI, selects a few tools that appear useful for the analysis problem domain, and shapes these tools into a form that is suitable for direct application in this thesis. Emphasis is placed on reasoning, knowledge representation, object-oriented methods, and knowledge-based expert systems.
2.1. Background
Artificial intelligence technology has a short and stormy history. Scientific foundations for the
formal study of logic, laid down by philosophers such as Boole [11], were not applied to computers until 1950 when Turing explicitly
stated that a computer could be programmed to exhibit intelligent behaviour [89]. This statement
created intense optimism and prompted predictions for the overnight development of machines with
human intellectual capacities. The first time "artificial intelligence" was used in print came in 1956
with the proposal for a conference organized by McCarthy [12]. Aside from founding the field of
AI, those who attended the Dartmouth conference were probably amongst the first researchers to underestimate the difficulty of producing intelligent machines.
After years of arduous labour with general-purpose systems, aimed at academic topics such
as language comprehension and vision, attention was drawn towards projects with potential for
immediate success. In the early 1980's, application of expert knowledge to difficult problems by
"expert systems" brought new life into the A l dream. Several research teams formed companies
which advertised products that claimed to be nothing short of a "brain in a box". However,
bringing this visionary technology to the marketplace was not as easy as expected, and optimism
faded once again. Recently, a second generation of AI companies has enjoyed success based on
the use of solution-oriented products which employ the best features of all available technologies.
Methods originally developed under the guise of AI have started to filter into the mainstream computer industry.
Although artificial intelligence has not appeared overnight, considerable advancement has
been achieved. The current state-of-the-art, some major topics, and descriptions of the practical tools adopted in this thesis are presented in the sections that follow.
Research in artificial intelligence serves two groups of people [100]. Computer scientists and
engineers need to know about AI in order to make computers more useful. Psychologists, linguists, and philosophers need to know about AI in order to understand the principles of intelligence. A symbiotic relationship is required between these two areas of research in the quest for an understanding of intelligence. This leads to the conclusion that learning how to make computers more useful is itself a step towards understanding intelligence.
Cognitive computers are not likely to appear in the near future, but many short-term
objectives will certainly produce "smarter" computers over the next few years. Mainstream
computing environments have already seen the effects of research that was once found only in
artificial intelligence laboratories. Xerox's PARC (Palo Alto Research Center) is responsible for
many key concepts of object-oriented languages and for many aspects of user-interface on the
current generation of personal computers. This includes: bit-mapped screens, multibutton mice,
menus, overlapping windows, refined class system concepts, and the object-oriented philosophy.
Some of the most important problems to be solved in the field of AI deal with the classical
objectives — the central problem of artificial intelligence is the simultaneous production of good
candidates for each of three ingredients: the representation language, the inference regime, and the
particular domain knowledge [12]. The fact that methods for knowledge representation and
reasoning have not yet been standardized indicates that a significant amount of research is still
needed. As the state-of-the-art technology gradually improves for this central problem,
development tools must be created to deal with knowledge in ways that are suited to the needs of
the computer scientists and engineers who want to apply AI in their domain.
At this time, the primary objective for AI in the field of engineering analysis is the
establishment of "artificial intelligence literacy" in the engineering research community. Over the
next few years this increased awareness will produce more applications of AI to the modelling,
solution, and interpretation tasks in the analysis problem, resulting in "smarter" analysis software.
The field of artificial intelligence encompasses many areas of study. Some of the major topics in AI include:
• searching
• problem-solving
• logic and uncertainty
• common-sense knowledge
• knowledge-based expert systems
• robotics
• vision and pattern recognition
• real-world interfacing
• natural-language processing
• machine learning
Some of these topics represent building blocks which can be applied to other areas of AI or even in conventional software. For example, the basic methods for representing and manipulating logic have been used extensively in AI applications such as expert systems, as described in the following section.
2.2. Reasoning
Intelligence is often equated with the ability to reason or to think about knowledge, so the creation
of artificial intelligence requires some form of automated reasoning. Reasoning may also be
considered as computation used to discover or to formulate new ideas based on known facts. The
tools for formulating an argument, finding the necessary information, and drawing conclusions are described in this section.
2.2.1. Logic and Theorem-Proving
Scientific reasoning relies heavily on the use of predicate calculus — a language for expressing and dealing with logic as formulae. This language has a rich vocabulary, of which only a small portion is necessary in this context. More detailed descriptions of the philosophy of logic and its applications may be found in the references.
Predicates
A predicate is a function that maps object arguments into Boolean values (TRUE or FALSE). Each
of the arguments to a predicate must be a term. Terms may be constant symbols, variables, or
function results. Constant symbols include names, values, and descriptions such as: node-1,
10, and yielding. Variables may take on quantities described by constant symbols. Functions
used as terms in predicate expressions evaluate to constant symbols. In fact, a predicate is merely
a simple function which evaluates to a Boolean constant. A predicate with a set of proper terms is
called an atomic formula and is the basic building block of all logical statements. Examples of atomic formulae are:
On ( A, x )
Equal ( x, y )
The first formula will return TRUE if the constant A is "On" the variable x, while the second will return TRUE only if the two variables x and y are "Equal". Quotation marks are used in the preceding sentence because the internal operation of a function may not necessarily match what its name implies.
AI languages such as Lisp provide flexible methods for defining predicate functions which do exactly what the programmer wants; hence, "Equal" could be defined to mean having equal values rather than being the identical object.
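As a concrete illustration, a predicate can be implemented as a Boolean-valued function in almost any language. The short sketch below is written in the Pascal style used later in this thesis; the OnTop and Equal functions and the single stored relation are purely illustrative (OnTop is used in place of On only to avoid a reserved word in some Pascal dialects).

  program PredicateDemo;

  type
    Symbol = string[20];      { a constant symbol such as 'node-1' or 'yielding' }

  { OnTop(a, x): TRUE if object a rests on object x (one stored fact) }
  function OnTop(a, x: Symbol): Boolean;
  begin
    OnTop := (a = 'block-A') and (x = 'table');
  end;

  { Equal(x, y): TRUE if the two terms have the same value }
  function Equal(x, y: Symbol): Boolean;
  begin
    Equal := (x = y);
  end;

  begin
    writeln(OnTop('block-A', 'table'));     { TRUE  }
    writeln(Equal('node-1', 'node-2'));     { FALSE }
  end.

Each function maps its term arguments into a Boolean value, which is exactly the behaviour required of a predicate.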
Connectives
Atomic formulae may be combined using connectives to form more sophisticated statements. The
English names for the common connectives are: and, or, not, and implies; however, predicate calculus uses the symbols &, ∨, ¬, and => to simplify the construction of universal statements. If p and q are formulae in predicate calculus language, then ( p & q ), ( p ∨ q ), ( ¬p ), and ( p => q ) are also formulae. The Boolean values for each of these combinations can be expressed in a truth table such as Fig. 5.
p & q P V q
TRUE FALSE TRUE FALSE
q
TRUE FALSE
Quantifiers
In order to make statements of any practical use, references to variables must be made by using quantifiers. The universal quantifier ∀ indicates that something applies in all cases. The following example indicates that every object on which A rests is the support of A:
∀ x [ On ( A, x ) => Equal ( x, Support ( A )) ]
The existential quantifier ∃ indicates that something applies in at least one case. The following
example indicates that there is at least one object that is a bird [100]:
∃ x [ Bird ( x ) ]
The vocabulary of logic is summarized graphically in Fig. 6. Objects, variables, and function
results are terms combined with predicates to make atomic formulae. Connectives and quantifiers
may be used with atomic formulae to create more sophisticated formulae. Finally, these formulae
are used as statements in logical inference procedures described in the next section.
Fig. 6. The Vocabulary of Logic [terms and predicates form atomic formulae; connectives and quantifiers build well-formed formulae and statements such as ∀ x [ On ( A, x ) => Equal ( x, Support ( A )) ]].
2.2.2. Inference
There are several kinds of inference techniques used in artificial intelligence including: deduction,
abduction, and induction. The difference between these inference techniques is determined
by the robustness of the approaches. Deduction is "legal inference" because given true axioms the
inferences drawn will also be true [17]. The deduction paradigm is:
Deduction
From: a, ( a => b )
Infer: b
Abduction is a type of inference that offers plausible explanations for observed facts when these facts happen to be consequences of axioms (as opposed to conditions used in deduction). This is not "legal inference" because the implication predicate ( => ) does not guarantee that the consequence b can arise only from the condition a. The abduction paradigm is:
Abduction
From: b, ( a => b )
Infer: a
Generalization of observed facts is called induction. Obviously, this is also not "legal inference",
but as with abduction, it may be a very useful kind of logic. The induction paradigm is:
Induction
From: P ( a ), P ( b ), . . .
Infer: V x [P(x )]
2.2.3. Search and Problem-Solving
Problem-solving requires making logical decisions based on the available facts, and the methods for efficiently applying logic may be oriented towards specific problem types or programming implementations [77]. Given a set of observed
facts and known rules represented using predicate calculus, there are many inference strategies
which could be used to find the desired solution. Two basic deductive inference schemes are forward-chaining and backward-chaining.
Forward-chaining applies deductive reasoning to unify concepts and to derive new axioms.
A simple example [94] can be used to illustrate how this works. A set of rules can be
consecutively executed to infer new facts by dynamically manipulating the context information as
shown in Fig. 7. Starting with the lowest rule (A => D), the fact "D" may be deduced since "A" is
in the context. Since "D" may now be added to the facts where " C " already exists, the second rule
(C & D => F) can be used to deduce "F". Similarly, the third rule (F & B => Z) can now be used
to deduce "Z". This kind of inference is useful if a general understanding of a problem area is
required; however, randomly applying inference rules may generate many useless formulae.
Fig. 7. Forward-Chaining [94].
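A minimal procedural sketch of this forward-chaining cycle is given below. It uses the rule set and single-letter facts of Fig. 7; the TRule record and the Known helper are illustrative simplifications and are unrelated to the rule classes developed later in this thesis.

  program ForwardChain;

  const
    MaxRules = 3;

  type
    TRule = record
      lhs: string[10];   { conjunction of single-letter conditions, e.g. 'CD' }
      rhs: Char;         { fact deduced when all of the conditions hold       }
    end;

  var
    rules: array[1..MaxRules] of TRule;
    facts: string[26];
    fired, ok: Boolean;
    i, j: Integer;

  { TRUE if fact c is already present in the context }
  function Known(c: Char): Boolean;
  var k: Integer;
  begin
    Known := False;
    for k := 1 to Length(facts) do
      if facts[k] = c then Known := True;
  end;

  begin
    { rule base of Fig. 7 }
    rules[1].lhs := 'FB';  rules[1].rhs := 'Z';
    rules[2].lhs := 'CD';  rules[2].rhs := 'F';
    rules[3].lhs := 'A';   rules[3].rhs := 'D';

    facts := 'ABC';                            { initial context }

    repeat                                     { fire rules until nothing new can be deduced }
      fired := False;
      for i := 1 to MaxRules do
        if not Known(rules[i].rhs) then
        begin
          ok := True;
          for j := 1 to Length(rules[i].lhs) do
            if not Known(rules[i].lhs[j]) then ok := False;
          if ok then
          begin
            facts := facts + rules[i].rhs;     { add the new fact to the context }
            fired := True;
          end;
        end;
    until not fired;

    writeln('Derived facts: ', facts);         { prints ABCDFZ }
  end.

Because every rule is re-examined after each new deduction, the program derives D, then F, then Z, exactly as in Fig. 7; with a large rule base this blind generation of facts is the weakness noted above.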
Posed with the question of whether a specific fact is implied by the current context,
backward-chaining (Fig. 8) is a more appropriate inference method. With this method, the user simply asks whether a given fact exists. If the fact does not exist, a series of new goals is created from the conditions of the rules whose consequences match the desired fact. Pursuing these goals in turn leads to a deductive inference chain which is identical to the one produced by forward-chaining. The only difference between the two approaches lies in the order in which the rules are examined.
Fig. 8. Backward-Chaining [94].
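For comparison, the question "is Z implied by the current context?" can be answered by a recursive backward-chaining function. The fragment below is a sketch only: it assumes the rules array, facts string, MaxRules constant, and Known function of the forward-chaining example above, and it also assumes that the rule set contains no circular dependencies.

  { TRUE if goal g is a known fact, or can be deduced by proving every
    condition of some rule whose consequence is g }
  function Prove(g: Char): Boolean;
  var
    i, j: Integer;
    ok: Boolean;
  begin
    Prove := Known(g);
    if not Known(g) then
      for i := 1 to MaxRules do
        if rules[i].rhs = g then
        begin
          ok := True;
          for j := 1 to Length(rules[i].lhs) do
            if not Prove(rules[i].lhs[j]) then ok := False;
          if ok then Prove := True;
        end;
  end;

With the context of Fig. 8, Prove('Z') recursively creates the goals F, B, C, D, and A, and returns TRUE; the same chain of deductions is produced, but only the rules relevant to the goal are ever examined.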
2.3. Representation
Humans use knowledge to reason and to achieve goals, yet the substance or representation of their
knowledge is largely unknown. Machines must also have some technique for storing the knowledge that they use. A person with extensive knowledge about a specific problem domain is called an expert. A machine can only hope to compete with the expert if the domain-specific knowledge can be expressed using a representation which reflects the natural structure of the problem and its solution. Some common ways to represent knowledge associated with expertise are described in the following sections.
2.3.1. Rules
Memory organization is a key facet of knowledge representation. Tasks requiring expertise for
efficient performance may be automated using rule-based systems. This approach is usually
applied in areas where logic cannot be easily presented in an algorithm. Heuristics can be captured directly in the form of rules.
Theorem-proving systems use deductive statements to arrive at hypotheses. This kind of rule
is useful for classification problems such as the identification of a disease or the selection of a suitable design alternative.
At the other end of the spectrum, production systems use evocative statements to achieve
goals [17]. This kind of rule is useful for invoking operations when certain conditions have been
satisfied, such as telling someone to go on a diet if they have exceeded their normal weight level by a specified amount.
A mixture of deductive and evocative statements is adopted in this thesis to provide the most flexible basis for rule-based control.
The structure of a rule is graphically shown in Fig. 9. Components of this structure include
the LHS (left-hand side) and the RHS (right-hand side). The LHS consists of a list of conditions which may contain all sorts of data (numerical, symbolical, etc.), but the result of each statement is Boolean. The RHS consists of a hypothesis and a list of actions to perform when the hypothesis has been proven TRUE. All Boolean statements on the LHS must be TRUE to imply the RHS of the rule. If this occurs, the hypothesis (a single Boolean statement) is set to TRUE, and the associated actions are performed.
Fig. 9. Rule Structure [if Condition 1 ... Condition M (the LHS) then Hypothesis and do Action 1 ... Action N (the RHS)].
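A rule of this form maps directly onto a simple data structure. The following sketch builds the "diet" rule mentioned above; the record layout and field names are illustrative only and are not the TRule class implemented later in this thesis.

  program RuleDemo;

  const
    MaxConditions = 5;
    MaxActions    = 3;

  type
    TStatement = string[60];    { a condition or hypothesis that evaluates to TRUE or FALSE }
    TAction    = string[60];    { an operation performed when the rule fires                }

    TRule = record
      conditions:  array[1..MaxConditions] of TStatement;   { LHS }
      nConditions: Integer;
      hypothesis:  TStatement;                               { RHS }
      actions:     array[1..MaxActions] of TAction;
      nActions:    Integer;
    end;

  var
    r: TRule;
    i: Integer;

  begin
    r.nConditions   := 1;
    r.conditions[1] := 'weight exceeds the normal level';
    r.hypothesis    := 'a diet is required';
    r.nActions      := 1;
    r.actions[1]    := 'tell the person to go on a diet';

    writeln('if');
    for i := 1 to r.nConditions do writeln('  ', r.conditions[i]);
    writeln('then ', r.hypothesis);
    writeln('and do');
    for i := 1 to r.nActions do writeln('  ', r.actions[i]);
  end.

An inference engine such as the one sketched in section 2.2.3 needs only to evaluate the conditions, set the hypothesis, and perform the actions when the rule fires.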
2.3.2. Frames
Several schemes have been proposed to integrate the basic tools presented in the preceding sections. These include the work of Minsky [53] on "frames" and the work of Schank and Abelson [74] on "scripts". Both methods are based on the synthesis of declarative and procedural representations into a single package. Minsky defined a frame as a data structure for representing a stereotyped situation.
This definition introduces the crux of the representation issue: One can never cope with the many
details of a complex problem all at once. Problems should be solved by decomposition into several
stereotypical sub-problems.
Some differences in terminology have arisen since the advent of the frame concept. Although
they may be called "frames" in one text [100] and "schema" in another [17], Minsky's original
ideas and those of Kuipers [46] are usually acknowledged in frame implementations. The most
common approach is to define a frame as some sort of memory structure which holds data.
Information may be stored in this structure to support inheritance, instantiation, and specialization,
but the controller of the information retrieval network must provide the procedures for these tasks
[68]. Passive reliance upon a global controller defeats the potential declarative/procedural
synthesis offered by frame representations. Instead, frames can be installed into part of an
object-oriented class system which supports procedural attachment and object communication.
This allows frame specialization using domain-specific resources and subsequent construction of
modular systems with these specialist frames. A typical object-oriented knowledge framework is
shown in Fig. 10. Each frame has an arbitrary number of slots which may be filled with data, procedures, or links to other frames.
Fig. 10. A Typical Knowledge Framework [each frame carries a parent link and a set of slots].
Frame-based reasoning may be viewed as a process of description, instantiation, justification, and transformation [46]. Description of each problem is done by objects which are understood by the user (primitive objects). Instantiation is simply the process of description which is guided by predictions made using knowledge stored with the objects (defaults, rules-of-thumb, etc.). Justification of the terminal attachments (slot values) within specified variation limits is provided by the knowledge stored with each frame. Violations of these limits (conflicting values, unused data, etc.) may indicate that a frame is not correct and should be used to suggest an appropriate replacement. Once a frame has been rejected, the known facts are placed into the new frame and the process of transformation continues.
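The essential mechanics of a frame — named slots plus inheritance from a parent — can be approximated in a few lines of conventional code. The sketch below is illustrative only (the slot names and values are invented for the example) and is far simpler than the object-oriented machinery used later in this thesis: a slot that is not found locally is inherited by searching up the parent chain.

  program FrameDemo;

  const
    MaxSlots = 5;

  type
    PFrame = ^TFrame;
    TSlot = record
      name, value: string[20];
    end;
    TFrame = record
      parent: PFrame;                          { inheritance link }
      nSlots: Integer;
      slots: array[1..MaxSlots] of TSlot;
    end;

  { return the value of a slot, searching the parent chain if necessary }
  function GetSlot(f: PFrame; slotName: string): string;
  var i: Integer;
  begin
    GetSlot := '(unknown)';
    while f <> nil do
    begin
      for i := 1 to f^.nSlots do
        if f^.slots[i].name = slotName then
        begin
          GetSlot := f^.slots[i].value;
          exit;
        end;
      f := f^.parent;                          { not found locally: ask the parent }
    end;
  end;

  var
    element, beamElement: TFrame;

  begin
    { a generic frame with a default slot value }
    element.parent := nil;
    element.nSlots := 1;
    element.slots[1].name := 'material';   element.slots[1].value := 'steel';

    { a specialized frame that inherits from it }
    beamElement.parent := @element;
    beamElement.nSlots := 1;
    beamElement.slots[1].name := 'span';   beamElement.slots[1].value := '10 m';

    writeln(GetSlot(@beamElement, 'span'));       { found locally: 10 m  }
    writeln(GetSlot(@beamElement, 'material'));   { inherited:     steel }
  end.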
2.3.3. Scripts
An alternative perspective for structuring stereotypical information was proposed by Schank and
Abelson [74]. Rather than describing the data structures associated with a typical situation, as
done in the frame concept, scripts describe the activities performed on and by those data structures.
Scripts provide the most effective means for process description. A series of operations
mutually associated with a situation can be listed in an activity diagram for use later to match and
execute a process. Pattern matching of processes is useful for classification systems, where the
goal is to identify a trend and to make predictions about the future (e.g. stock market analysis). Process description has more bearing on engineering analysis, as the execution of expert procedures is precisely what an automated analysis system must reproduce.
Fig. 11. A Typical Knowledge Script [each activity lists its actors, props, and results, together with a Follows link: activities 2, 3, and 4 follow activity 1; activity 5 follows 2; activity 6 follows 3 and 4].
2.4. Object-Oriented Methods
This section introduces the concepts and terminology of object-oriented programming used
throughout this thesis. A high-level conceptual model is presented for structuring programs and
data. This model can be used to explain relationships between objects, classes, and methods
regardless of the implementation language. The expandable application concept is introduced along with the generic application framework used later in this thesis. Developers of object-oriented systems have made a concerted effort to make the functionality of their systems as similar to others as possible. Although the finite element analysis program described in the case study at the end of this thesis was developed using Object Pascal and C, the language-independent model described in this section applies to most object-oriented environments.
Objects
Objects are the central concept that distinguishes this approach from conventional (procedural) programming techniques, which require the developer to represent data and procedures separately. An object combines specific kinds of data with the specific procedures that operate on that data.
Higher-level objects can be created by assembling groups of objects. These new objects
have the collective functionality offered by their sub-objects. Abstraction of object concepts can be
extended to complete software applications. A program assembled from objects can itself be an object within some larger system.
Objects communicate with each other by sending messages. When an object receives a
message, it interprets that message and executes one of its procedures. That procedure operates on
the private data of the object, so the internal details of how it functions are hidden from the
program that uses the object. This also means that the internal operation of an object could be
changed without having to make any changes to the rest of the program. Behaviour of this sort is commonly called encapsulation.
Classes
Groups of objects that have the same kind of data and procedures are called a class. This is
analogous to the notion of structured data found in conventional languages like Pascal and C.
Pascal record and C struct declarations can be used to describe user data types. Objects are called
instances of a class in the same sense as standard Pascal or C variables are instances of a given data
type. The class concept can be viewed as an extension of the record concept to provide a set of
attached procedures.
Classes may also be viewed as templates which describe the organization of a given object
type. This template usually has a list of the instance variables (with their data types) and a list of
the messages that are recognized by the class called the protocol. When an instance is created,
storage is allocated for the variables described by the instance variable list. This new object retains
a link to the class which created it so that it has access to the attached procedures. Messages
passed to an object are directed to the class where the action is performed using the object's data as
input. Some systems provide an additional set of variables which belong to the class in the class
template. Storage and values for these variables are maintained with the class and are not copied out to each instance.
Methods
Procedures attached to a class are called methods. These methods operate on the values of instance
variables stored with each object. Given a message, a class searches its protocol for the
corresponding procedure. After locating the method, the class executes the operation using the
object's data as input. Some object-oriented languages distinguish between methods which operate
on instance variables and class variables. Languages that support class variables usually provide a
second protocol for messages sent directly to the class. These methods are called class methods or
metamethods. Object Pascal does not provide class methods, so they will not be discussed in this
context.
Inheritance
Classes inherit both data and procedures from their ancestors. This means that objects can be
customized to suit a given application. Common features can be declared in a general class from
which specific classes are generated. When a specialized object receives a message it first checks
its own protocol for a corresponding method. If no such method is found, the search continues
with the immediate ancestor class. Inheritance continues until the method is located somewhere in
the object's ancestry (or is not found anywhere). An additional feature is usually provided in class
systems to allow classes to conditionally override or redirect inheritance so that objects can use both inherited and specialized behaviour.
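The following sketch shows this lookup in the Turbo-Pascal-style object syntax, which differs slightly from the Apple Object Pascal used for the case study; the hypothetical TShape and TCircle types below are unrelated to the class hierarchies of chapter 4. The Area message sent to a TCircle instance is answered by the method declared in TCircle; had TCircle not declared one, the inherited TShape method would be used instead.

  program InheritanceDemo;

  type
    TShape = object
      fName: string[20];
      constructor Init(name: string);
      function Area: Real; virtual;
    end;

    TCircle = object(TShape)                { TCircle inherits the fName field from TShape }
      fRadius: Real;
      constructor Init(name: string; radius: Real);
      function Area: Real; virtual;         { overrides the inherited method }
    end;

  constructor TShape.Init(name: string);
  begin
    fName := name;
  end;

  function TShape.Area: Real;
  begin
    Area := 0.0;                            { a generic shape claims no area of its own }
  end;

  constructor TCircle.Init(name: string; radius: Real);
  begin
    TShape.Init(name);                      { reuse the ancestor's initialization }
    fRadius := radius;
  end;

  function TCircle.Area: Real;
  begin
    Area := Pi * fRadius * fRadius;
  end;

  var
    aCircle: TCircle;

  begin
    aCircle.Init('disc', 10.0);
    writeln(aCircle.fName, ' area = ', aCircle.Area:0:2);   { the TCircle method answers }
  end.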
A Conceptual Model
Relationships between objects, classes, and methods can be shown graphically [76]. Objects have
a name, a link to their class, and instance variable storage as shown in Fig. 12. The object name is usually held in a symbol table maintained by the class system (similar to the name of a variable in procedural languages). The class link is usually a pointer (or handle) to the location of the class structure (this
is used internally by the class system). The instance variable storage is where values associated
with this particular object will be stored (this may include references to other objects).
Fig. 12. Graphical Representation of an Object [object name, class link, and instance variable storage].
Classes have a name, a link to their ancestor class, an instance variable template, a message
and method dispatch table, and a set of attached procedures as shown in Fig. 13. The class name
is usually held in a symbol table maintained by the class system (similar to the declaration of a
structure or record in procedural languages). The ancestor link is usually a pointer (or handle) to
the location of a class structure (this is used internally by the class system). The instance variable
template contains descriptive parameters that are used during the creation of objects belonging to
this class. The message and method dispatch table describes the local protocol of this class and links each recognized message to its attached procedure.
Fig. 13. Graphical Representation of a Class [class name, immediate ancestor link, instance variable template, message and method dispatch table, and attached procedures].
Both objects and classes are represented internally as simple record structures. For this
reason, the same message-passing with data and procedural abstraction could be implemented in
conventional procedural languages. However, the tedious details associated with class declaration
and management (many additional data structures are maintained internally by the class system) make such an implementation impractical. In fact, most object-oriented programming environments do not actually pass messages to invoke methods. Rather, conventional procedures are dynamically bound based on the object whose method is being invoked.
Example
These ideas are best illustrated by an example that compares procedural and object-oriented approaches for a simple problem. Consider the
following scenario: a program is needed to compute the area and circumference for circles of
varying radii.
The procedural approach to this problem would likely start by defining functions to compute
the desired information. Using Pascal, the programmer could write code as follows:
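For example, assuming that each function receives the circle's radius as its argument, the two routines might take the form:

  function CircleArea(radius: Extended): Extended;
  begin
    CircleArea := Pi * radius * radius;
  end;

  function CircleCircumference(radius: Extended): Extended;
  begin
    CircleCircumference := 2.0 * Pi * radius;
  end;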
Given a record from a database for a particular circle, the programmer can compute the area and
circumference by writing:
area := CircleArea(recordValue);
circumference := CircleCircumference(recordValue);
The remaining task for the programmer is to obtain radius record-values for each circle in the
database, and to organize some procedure to consistently submit these values to the CircleArea
and CircleCircumference functions.
The object-oriented approach to the same problem starts with a class definition. It is standard
practice in Object Pascal to begin class-names with the letter "T" and instance-variable-names with the letter "f":
TCircle = object
  fRadius: Extended;                  {instance variable declaration}
  function Area: Extended;            {function prototype}
  function Circumference: Extended;   {function prototype}
end;
The corresponding method declarations appear to be the same as their procedural counterparts;
however, the instance variable fRadius is used in place of the function parameter radius since the radius is now stored with each object:
function TCircle.Area: Extended;
begin
  Area := Pi * fRadius * fRadius;
end;
Given an instance of the TCircle class, the programmer can compute its area and circumference by invoking its Area and Circumference methods. If the name used to designate this instance is aCircle, the programmer writes:
area := aCircle.Area;
circumference := aCircle.Circumference;
The programmer's task in this case is simply to create object instances based on the records found
in the database (this may be done when reading the data), and to invoke the methods Area and Circumference as required.
An important distinction can be made between the procedural approach and the
object-oriented approach — it is the programmer's responsibility to write code that binds data
records to procedures when using the procedural approach, whereas it is the compiler's
responsibility for the same task when using the object-oriented approach. Comparing two lines of
code from the two approaches indicates that object-oriented programming simplifies the
programmer's task.
area := CircleArea(recordValue);
area := aCircle.Area;
In addition to the higher degree of reliability implied by having more tasks performed by the
compiler, the object-oriented approach also assures that the knowledge about relationships between
data and procedures remains with the source code rather than with the programmer. This means
that object-oriented programs should be easier to maintain and to modify even in the absence of
their creators.
Using the graphical representation described earlier, a typical instance of the TCircle class (aCircle) would appear as shown in Fig. 14. This object has a link to its class (TCircle), and a value stored in its instance variable field. The TCircle class can also be represented as shown in Fig. 15. This class has a link to its ancestor (Object), an instance variable template (fRadius), a message and method dispatch table (the protocol includes fRadius, Area, and Circumference), and a set of attached procedures.
Fig. 14. A Typical Object [aCircle: class link to TCircle; fRadius = 10.0].
Fig. 15. A Typical Class [TCircle: ancestor link to the Object class; instance variable template fRadius; attached methods Area = π · fRadius² and Circumference = 2π · fRadius].
Object-oriented programming presents software designers with the opportunity to reuse significant amounts of existing code. Generic routines for the user-interface and for event-processing can be stored in a class-library to provide a foundation for custom software. This is known as an "expandable application framework", and is probably singularly responsible for the growing commercial interest in object-oriented development.
Overview
An expandable application framework of this kind simplifies the construction of programs [6]. This framework eliminates the inefficiency of having
to reengineer significant portions of the user-interface for each application. It also has the
beneficial side-effect of providing programs which are more user-friendly, as they are guaranteed to present a consistent user-interface.
Software developers start with the blank application (enclosed by the dotted line) of Fig. 16.
Custom subroutines are added to this framework by creating new class descriptions which have specialized behaviour. The standard user-interface machinery (drawing, menus, buttons, etc.) comes with the blank application, so developers can concentrate on the problem-specific portions of their own programs.
Fig. 16. An Expandable Application Framework [the supplied "main" routine and supplied subroutines call the developer's custom code; the portion enclosed by the dotted line is supplied as the expandable application framework].
Frameworks of this kind have been used to build many different application types. The Intermedia system, a large object-oriented hypertext/hypermedia system with document linkages [55], is one well-known example. The developers of this system transported MacApp to an Inheritance C environment.
Application
A generic application framework called "GenApp" was developed by the author for use with
simple graphical applications. It provides a basic framework for file interface, graphical editing,
alert and dialog boxes, and other standard features supplied by tools like MacApp. These features
are offered by a set of primitive classes for applications, documents, windows, and user-interface
media. GenApp is introduced with the intelligent finite element analysis program presented at the
end of this thesis, and additional details are given in the GenApp manual [29].
2.5. Knowledge-Based Expert Systems
Almost all existing applications of AI in engineering belong to a group of elementary products called knowledge-based expert systems — programs that mimic the decision-making processes of an expert in a particular domain. This section presents the classical format of an expert system for general problem-solving in contrast with the novel expert system employed by the engineering analysis framework proposed in this thesis.
2.5.1. Classification
As shown in Fig. 17, the field of artificial intelligence encompasses all programs that exhibit intelligent behaviour by skillful application of
heuristics. Within this general field, knowledge-based systems (KBS) are all programs that make
domain knowledge explicit and separate from the rest of the system. Almost all existing
applications of AI in engineering fall into the final group, which is a subset of knowledge-based
systems that applies expert knowledge to difficult real world problems (KBES).
Fig. 17. KBES Classification within AI Technology [artificial intelligence programs exhibit intelligent behaviour by skillful application of heuristics; knowledge-based systems and knowledge-based expert systems are successively smaller subsets].
2.5.2. Components
The internal structure of a typical KBES can be idealized as shown in Fig. 18. Three basic components are required: the knowledge base, the context, and the inference mechanism. The knowledge base may contain domain-specific axioms and facts (expertise). The context is a
short-term knowledge base which contains information about the current problem. The inference
mechanism is the engine of the KBES, and works with both the context and knowledge base to
provide solutions. Three other components that are not necessarily part of the KBES, but are commonly provided with it, are the knowledge acquisition facility, the explanation facility, and the user-interface.
The knowledge acquisition facility is a software package that may be used to describe and organize
heuristics used by the expert. The explanation facility is additional software used by the expert system
to explain and to justify logical computations performed during inference. The user-interface
mechanism is determined by the machine and environment used to develop the software.
Naturally, real applications may have slight variations from this generic expert system structure.
2.5.3. Applications
The applications of KBES technology best suited to engineering analysis are those that require manipulation of the domain knowledge. Unlike conventional engineering software, expert systems are designed to work with a constantly changing knowledge base. KBES technology has tremendous potential in the analysis field, as there are many analysis tasks that currently rely on heuristic expertise.
The first publicized knowledge-based expert system in the engineering analysis domain was
an automated consultant called SACON (Structural Analysis CONsultant) [8], which used the EMYCIN [92] shell as its framework. This system advises engineers in the use of a large, general-purpose finite element program that offers a large number of analysis methods, material properties, and geometries that may be used in the
modelling process. SACON's job is to help the analyst select the best options for a given problem
domain. Using a set of production rules and a backward-chaining, depth-first search strategy, this system infers facts which can guide an inexperienced analyst through the analysis process. However, as mentioned earlier, passive consultation is only a temporary solution to the problems of conventional analysis software.
The next generation of KBES tools in the analysis domain attempted to become active players
in the analysis process. Rather than providing passive consultation, and expecting the analyst to
perform all conventional analysis tasks, these expert systems were intended to be front-ends or
intelligent-interfaces that shield the engineer from the complexity of the analysis program.
Although intelligent interfaces can be developed to adequately control existing conventional
analysis software [96], this approach does nothing to improve the current state of poor reliability in the analysis programs themselves.
The concept proposed by this thesis is to reduce the complexity of analysis programs using
object-oriented programming techniques and to integrate these tools in a KBES framework using
the same object-oriented philosophy. The hybrid system for engineering analysis developed as
part of this thesis requires a simple expert system to perform backward chaining inference in
An idealization of the actual expert system used in this thesis is shown in Fig. 19. The
knowledge-base, context, and inference engine are similar to those found in a typical KBES;
however, there are no acquisition and user-interface modules. Instead of an explanation facility,
this system acts on its logical computations directly, communicating results to the event-driven analysis program through an event generator (details are given in chapter 4).
Fig. 19. The expert system used in this thesis (knowledge base, context, inference engine, and event generator).
2.6. Development Tools
A variety of software is available for creating artificial intelligence applications. The development
tools examined include: conventional AI languages (Lisp, Prolog, Smalltalk), hybrid languages (Object Pascal, C++), and higher-level approaches (NExpert Object). Emphasis is placed on the hybrid environment (Object Pascal and C) used later in this thesis for the development of an intelligent finite element analysis program.
2.6.1. Evolution of Development Tools

The evolution of programming tools described by Parsaye and Chignell is shown in Fig. 20, and Tables 1 and 2. A natural trend has taken place, starting with simple data and control management facilities and gradually changing towards more sophisticated tools.
In addition to the introduction of new languages which provide new features, existing
languages have undergone extensive internal surgery over time. The FORTRAN used by engineers today bears little resemblance to the equivalent language of the 1950's. Many features such as dynamic data structures [9] and recursion [56] are now part of the engineer's FORTRAN toolkit. Object-oriented class systems instigated by Smalltalk [37] are found in extensions to Pascal (Object Pascal) [88], C (C++) [82], and Lisp (Flavors and LOOPS) [10]. The result of this evolution is a wide range of development tools suitable for artificial intelligence applications.
Expert systems can be developed using a variety of programming paradigms, but the most
common implementations are rule-based systems. These systems are typically built using a KBES "shell" which simplifies user-interface and prototype development. Problem-solving is done using rules operated upon by an inference engine. More recent paradigms recognize the need to model tasks and reasoning rather than to solve problems exclusively by theorem-proving.
Frame-based systems are often mistakenly considered as alternatives to rule-based systems. Frames merely add structuring and inheritance capabilities to simple rule-based logic software. Specific object-oriented techniques used in this thesis will be introduced later.
Fig. 20. Evolution of programming tools (mid 1950s to late 1970s).
Table 1. Control Paradigms [62].
GoTo/Jump. This was the first control paradigm used in assembly language programming, and it
still exists in a number of languages.
For-Loop/Do-While. This was the next step away from GoTos, and was first used in FORTRAN.

Table 2. Data Paradigms [62].

Arrays. These are commonly used in assembly language, FORTRAN, and similar languages.
Records/Fields. These are used in Pascal, C, Ada, etc. Records are used to group information.
Dynamic Data Structures. These are used in symbolic languages such as Lisp and Prolog, which
provide a convenient way of dealing with an undetermined number of items of different types.
Built-in databases. Prolog's built-in database and OPS5's working memory are examples of such
structures. They provide the ability to store information in abstract form, as though it had been
placed in a relational database.
Frames. Frames (and objects) were first implemented in FRL, Smalltalk, and a number of other
recent languages. Frames extend record structures by providing means of inheriting information
between records and allowing active data elements.
2.6.2. Conventional AI Languages
Lisp, Prolog, and Smalltalk are often referred to as the languages of AI research. Lisp is the oldest of these three languages, and is also the most successful in the field of artificial intelligence. Lisp's success is probably due to its flexibility — a Lisp user can customize the language to suit his or her needs, and can deal very easily with a variety of List Processing applications [101]. The biggest obstacle to the use of Lisp in engineering is that most engineers are unfamiliar with the language.

Prolog is a language designed for Programming in Logic. It can deal most efficiently with problems that may be expressed using the predicate calculus described earlier. In fact, Prolog can be viewed as an automated theorem-prover operating on a database of clauses.
Smalltalk is a more recent product of AI research than Lisp or Prolog, and is focussed on a single unifying concept: the object. This uniform treatment of data and procedures accounts for both the simplicity and power of the language [37]. Object-oriented concepts which were formalized in the design of Smalltalk have been transported to many other programming languages, so although Smalltalk may not be a particularly useful development tool by itself, it does help to define the object-oriented programming paradigm.

2.6.3. Hybrid Languages

Object-oriented capabilities can be obtained by
designing a totally new programming language around the basic concepts (the pure object-oriented
approach) or grafting these concepts onto an existing language (the hybrid approach) [76].
Smalltalk is an example of a pure object-oriented language. Object Pascal [88] and C++ [82] are
examples of hybrid languages. The hybrid approach is particularly useful for engineering
applications, as most programmers of engineering applications are not familiar with Lisp and other
AI languages and would prefer to work within their conventional computing environments. This
was the case in the development of an intelligent finite element analysis program — Object Pascal
offered the best available medium for integrating existing analysis procedures written in C.
2.6.4. Higher-Level Approaches
Higher-level development tools package an inference engine, knowledge representation facilities, and interface utilities so that expert systems can be built with relatively little conventional programming. NExpert Object [58] is an example of this kind of tool. It provides: integrated forward and
backward chaining, automatic goal generation, multiple inheritance, pattern matching, direct calls
to external routines, etc. The NExpert concept is simple — provide a KBES tool which can call
and be called by conventional C programs. The resulting open architecture shown in Fig. 21
provides an impressive list of capabilities that are available and functioning on many computer
systems.
Fig. 21. The NExpert open architecture: interfaces to external devices and computers (networking, inter-process communication, real-time data input, multi-tasking), databases (dBase III, Lotus 1-2-3, ORACLE, SQL, RDB, Excel), data storage, graphics, and text, together with dynamic access to knowledge bases, explanations, reasoning traces, and active values.
2.7. Applications of AI Techniques in Engineering Analysis
Many techniques from the field of artificial intelligence can potentially be used to improve the
reliability and efficiency of engineering analysis software; however, only a select few will ever find
practical application in this domain. Some tools are not available to the general public, others are
not fully functional, and the majority exist solely on the drawing boards of imaginative developers.
Selection of techniques and development environments for use in this thesis was influenced by the
kinds of tools available at the time research was undertaken. This section describes the selections
that were made, the motivations that influenced these selections, and the problems that had to be overcome.

2.7.1. Object-Oriented Programming

Preliminary research done for the Object NAP project [32] indicates that object-oriented
programming offers significant improvements in program size, development time, and software
reusability. At the same time, it may require longer to perform numerical tasks. A combination of
conventional (C) programs for numerical tasks with object-oriented (Object Pascal) programs to
integrate and to organize the analysis results appears to offer the best overall product. The
Lightspeed C and Pascal compilers [87,88], which provide integrated environments for program
development, were selected for use in this research project. Selection of C and Object Pascal (as opposed to a single pure object-oriented language) also reflects the desire to reuse existing analysis procedures within a conventional computing environment.

2.7.2. Generic Application Frameworks

Developers of object-oriented programs have the option to start program construction at a very high
level by adopting a generic application framework. The most popular system on personal
computers is called MacApp [6], and can be accessed using a variety of languages including MPW Pascal, MPW C, and Assembler. Unfortunately, MacApp is provided as a "black box" library and
its use in non-MPW environments is not yet well-supported. Since the Lightspeed Pascal compiler
[88] provides an integrated environment for program development, it was quite easy to develop a
generic framework called GenApp that suited the SNAP project's needs. GenApp supports many
of the same features as MacApp, including a similar user-interface library and main event-loop structure.

2.7.3. Event-Driven Architectures

Event-driven architectures are widely used in interactive windowing and graphics applications. The basic concept behind this approach is that actions taken by the user (or by other applications) result in events. Applications can create and respond to these events as a standard programming paradigm — their operation is based on the continual extraction of events from an event queue.
GenApp uses this kind of paradigm to provide a consistent standard user-interface with
menus, windows, and drawing tools found in many professional software packages. SNAP is
constructed on top of the basic GenApp shell by providing customized event-handling methods in
its class structure. The most significant flaw in the event-driven approach is that context-dependent
conditional relationships must be embedded within the event-handling process (details are given in
chapter 4). The solution to this problem was obtained by providing an expert system that operates
within the analysis process to direct event-generation when potential problems arise.
2.7.4. Knowledge-Based Expert Systems

Traditional KBES technology has been used in many engineering analysis applications; however,
the form usually adopted by such systems is as a consultant or interface to the actual analysis
software, rather than as an integrated component of the analysis process. The expert system used
in the SNAP project, as described in the preceding section, had to be designed for operation in
conjunction with the object-oriented, event-driven environment of GenApp. The completed KBES
performs forward and backward chaining with backtracking to provide logical computations that drive event generation within the analysis process.
3. Engineering Analysis
The reliability and efficiency of methods for engineering analysis can be improved by automating engineering knowledge within a hybrid analysis system. This chapter describes such a system, including: the kinds of knowledge used in the analysis process, the programs that control this knowledge, and the resources that perform numerical computation.
3.1. Knowledge
Engineering analysis may be defined as the examination of a physical structure to reveal specified
facets of physical behaviour. In most cases, analysts want to predict behaviour so that they can provide a safe and efficient design for a proposed structure. This requires some knowledge of modelling, solution, and interpretation activities. The scenario shown in Fig. 22 is used to illustrate these activities.
Fig. 22. The engineering analysis scenario: from physical structure to physical behaviour.
3.1.1. Modelling
The modelling activity takes place on physical, analytical, and numerical abstraction levels. The physical abstraction level refers to the actual structure and the complex description of reality. A simplified mathematical description of the problem is located on the analytical abstraction level. Further simplification using numerical techniques to approximate the analytical model is carried out on the lowest abstraction level. Idealizations used in the modelling process are shown in Figs. 22 and 23.
Fig. 23. Modelling — A precise and unambiguous statement of the physical problem (shown on
the left) that captures the essential aspects of behaviour is the analytical model (shown
in the middle) [90]. Further simplification so that solution can be carried out using
numerical techniques yields the numerical model (shown on the right).
Starting from the physical level, the analyst must extract all pertinent spatial properties to
form an analytical model which approximates the real problem. Spatial descriptions usually
include geometry and topology that correspond with the information used by solids modelling programs. In addition to spatial descriptions, engineers like to attach functional descriptions to the components of a model (eg. Calling something a "beam" implies the expected analysis and result types). Such functional
descriptions are needed for pragmatic reasons only, because spatial knowledge is theoretically
sufficient for analysis purposes [90]; however, functional aspects of analytical modelling hold the
key to much of the knowledge used by analysts. Since engineers use descriptions such as "beam",
"plate", and "shell" in all aspects of their work — from formulation of theories to practical design
operations — it is likely that they also associate their heuristic knowledge (expertise) with these descriptions. In this light, heuristics provide an essential aspect of the modelling process for the engineering analyst.
Although analytical solution is possible for simple problems, most cases require numerical
solutions. Creation of a numerical model draws from another area of the analyst's knowledge.
Input to finite element analysis programs typically consists of a mesh and a set of control
parameters. The mesh specifies basic information about the problem (nodes, elements, materials,
and boundary conditions), whereas the control parameters tell the analysis program how to solve
the problem. In order to construct a finite element mesh, the analyst must be familiar with element
geometry and connectivity as well as material and boundary condition specifications. In addition to
a simple understanding of the mesh configuration, the analyst is responsible for the selection of
modelling options that coincide with the intended usage (eg. Elements should be chosen based on the kind of behaviour that is to be predicted).
3.1.2. Solution
Perhaps the most straightforward part of analysis is the "solution" activity. Solution can be viewed on the same three abstraction levels as modelling. On the physical level, nature performs its own form of solution to reveal physical behaviour. On the
analytical level, solution involves the symbolic manipulation of mathematical formulae to produce
analytical response (The word "response" is used rather than "behaviour", as the new information
pertains to only the specific problems addressed by the model). Finally, on the numerical level,
computation using approximation techniques leads to the numerical response. Idealizations used in
the solution process are shown in Figs. 22, 24, and 25.
Analytical solutions are usually not possible for complicated problems, as the number of
equations and the mathematical complexity increase dramatically for odd-shaped domains or
non-uniform boundary conditions. Although it may not be possible to obtain the exact analytical
response, solutions may be available for a simplified problem having essentially the same
characteristic response. Classical engineering solutions require the analyst to have extensive
theoretical and practical knowledge. Automation of this knowledge is probably most efficiently achieved using heuristic techniques.
Fig. 24. Analytical Solution — In some cases, problems may be solved by the symbolic manipulation of mathematical formulae associated with an analytical model (shown on the left) to produce analytical response (shown on the right).
Numerical solutions are used for most modern engineering analyses. Although solving
systems of equations defined by the numerical model may be computationally expensive, it is also
easily automated. The finite element method is a typical example of how a problem may be simplified by dividing it into many smaller problems that can be solved using generic methods. The
task of the finite element program designer is to devise efficient and practical solutions for each of
these small problems and to provide appropriate means for assembling the complete solution.
Fig. 25. Numerical Solution — In most cases, problems are solved by numerical analysis programs that compute numerical response (displacements, stresses, etc.) using a numerical model (nodes, elements, stiffness matrices, etc.).
Most activities in finite element analysis programs are automated using algorithms. This
indicates that the knowledge related to these activities is quite well-defined, and that the details of
their operation may often be hidden from the analyst's view. However, the analyst is usually still
responsible for the selection of control parameters that guide the analysis program to the desired solution.
3.1.3. Interpretation
The interpretation activity takes place on numerical, analytical, and physical abstraction levels. The numerical
abstraction level refers to the numbers output by an analysis program. Precise descriptions of the
predicted analytical response and an evaluation of the numerical response in analytical terms are
located on the analytical abstraction level. Further generalization of the predicted structural
behaviour is found on the physical abstraction level. Idealizations used in the interpretation process are shown in Figs. 22 and 26.
Fig. 26. Interpretation — Given discrete values at the numerical level (displacements, stresses, etc.), there is a need to interpret the meaning first at the analytical level to confirm that the numerical solution is correct, and second to confirm that the analytical response is consistent with observed or desired physical behaviour.
Numerical interpretation involves a comparison between the actual results and the results predicted by various analytical models. At this stage, the
validity of a solution can be tested (eg. Analyses that assume small displacements are only valid if
their results indicate small displacements). In order to perform this kind of testing, the analyst
must understand the foundations of numerical and analytical approximations. Usually, finite
element analysis programs have documentation that outlines the theory and implementation of their
operation to simplify the process of mapping solutions from the numerical domain to meaningful
analytical abstractions. Some programs even try to convert results into functional descriptions that
are easier for the analyst to understand (eg. Results can be expressed in terms of load-carrying
mechanisms). Since most theories are clearly described at the numerical abstraction level, it is not usually difficult for the analyst to relate numerical results to the underlying analytical model.
In many cases, the analyst is forced to investigate the foundations of a numerical solution to
assure proper application of the results in "post-processing" software. Although usually intended to simplify the presentation of results, such software often demands additional knowledge of the analyst. This additional effort is certainly beneficial, as it elevates the solution
response to a higher level of abstraction and simplifies the task of analytical interpretation.
Analytical response is assumed to reflect actual physical behaviour, yet only to the accuracy
of the original model. Physical behaviour must be predicted using analysis results, so analysts
must understand the principles behind the model, its solution, and the factors that determine its
validity. Analysts use "judgment" and "rules-of-thumb" that have been developed by years of
experience and by comparison with similar problem types to perform the interpretation activity.
Since analysts presently rely on such heuristics, a similar representation seems most practical for automating the interpretation activity.
In summary, the knowledge found in each of the modelling, solution, and interpretation
activities has some structured aspects that can be represented using algorithms and other
less-structured aspects that are more easily represented using heuristics. Controllers and resources
described in subsequent sections must be able to deal with both kinds of knowledge.
3.2. Controllers
Classical engineering analysis methods use a "divide-and-conquer" strategy — any problem may
be divided into several sub-problems. This approach is used in the development of a hybrid
system for engineering analysis to assemble a hierarchy of intelligent components into a knowledge
framework. This hierarchy deals with complex issues at its top and simple issues at its bottom.
Eventually, the low-level problems have simple solutions provided by conventional application
software. The complete solution is obtained by combining the results of all sub-problems into a
form which has meaning at higher levels of abstraction. Hybrid systems can be constructed using
two basic components: controllers (the managers of the knowledge hierarchy) and intelligent
resources (the conventional software used for low-level problems). This section deals with a typical controller, as shown in Fig. 27.
Fig. 27. A typical controller within the hierarchy of controllers and intelligent resources.
3.2.1. Controller Interface
The primary occupation of a controller is with the management of subordinate controllers and
intelligent resources, so the interface with these components is an important part of the controller design. A framework called KADBASE has been proposed by Rehak and Howard [69] for interfacing expert systems with design databases in integrated CAD systems.
Such an approach permits the integration of a heterogeneous cross-section of expert systems and
conventional application software. Rather than trying to use a diverse mixture of software, as
would be possible with systems like KADBASE, this thesis concentrates on the use of simple controller/resource communication. Both controllers and resources are implemented as objects; in each case, communication between these components is handled by the message facilities of the development tool (Object Pascal). Controllers can use event-driven, goal-driven, or mixed operations to process knowledge. Descriptions for a generic TController class could take many
forms; however, the simple class shown in Fig. 28 uses the event-driven approach. Interface to
instances of this class is provided by the Task method. A higher-level controller can send this
message to any of its subordinates at any time to let them perform tasks that have accumulated in an
event queue. Once a controller is activated, it takes events that belong to it from the queue using
GetNextEvent and submits any associated sub-events to other controllers using PostEvent.
Fig. 28. TController Class Description (methods: IController, Task, DoEvent, GetNextEvent, PostEvent).
3.2.2. Event-Driven Operation
The architecture of a typical controller used in this thesis follows the event-driven operation shown
in Fig. 29. When a message is received by this kind of controller, an event is created. Entering the Task method causes the controller to begin processing the events in its queue. The event loop is repeated until no events are found (events not handled by this controller are not included). Events may be processed directly by the controller (a simple calculation) or indirectly by one of its resources. Sub-events are treated in the same manner as the original event, so they may also be dispatched to other controllers. This paradigm is analogous with forward-chaining inference, since each event may trigger further events until no more can be generated.
Event-driven controllers can be used to create friendly interfaces that allow the user to work
without the constraint of a hierarchically structured set of commands. The generic application
framework (GenApp), developed by the author as a shell for graphical applications, provides
user-interface for graphical editing within its windows using this event-driven paradigm. Event-driven controllers are, however, better suited to well-defined algorithmic knowledge than to heuristical knowledge. Goal-driven inference, described later for the operation of a typical intelligent resource, is better suited to heuristic processing.
3.2.3. Example
The operation of the modelling controller shown in Fig. 30 can be illustrated using an analogy.
Consider this controller to be the shipping officer in a transportation company that moves
knowledge between the locations of the physical structure, the analytical model, and the numerical
model. The responsibilities of the shipping officer are focused on the preparation of shipping
orders, coordination of product movement, and quality assurance. Decisions must be made (often
heuristically) regarding the proper mode and route of transport, and about corrective action when a mistake has occurred.
If the controller in this example uses an event-driven approach then messages received from
its resources (the delivery vehicles) result in events. For example, when one vehicle reports that it
has broken down, then another vehicle will be dispatched in its place; or when a delivery has been completed, the next shipment can be scheduled.
Some events that take place in the engineering analysis modelling task include Create and Move. User actions that create components of the structural model (solids, boundaries, loads, etc.) post Create events to the modelling controller to tell it that something has been added to the model. This controller retrieves the event from a queue, interprets the meaning of the message, and tries to perform the requested action. Creation of a solid part may post many subsequent events to other controllers (a finite element mesh must be generated, stiffness computed, etc.). Addition of boundary parts and loads also results in subsequent event generation (load vector assembly, degree-of-freedom reduction, etc.).
Other events that take place in the engineering analysis modelling task include Refine, ReduceDOF, and AssignDOF. These events are not directly user-related; rather, they are subsequent or secondary events that are conditionally caused by events such as Create and Move. These
events are difficult to relate to user actions without examining the current problem context;
however, some constraints or rules can usually describe their application. For example, the
ReduceDOF event should be generated after Create events if the object was a solid and it was
created next to a boundary (see the example at the end of this thesis for a detailed description of the
user-interface). Also, the ReduceDOF event should be generated after Create events if the object
was a boundary and it was created next to a solid. It is apparent that many combinations can exist
for each kind of event, resulting in a tree of possible outcomes for any given model. For this
reason, the event-driven approach works nicely as long as all actions can be directly related to
events by a causal relationship (i.e. events must always take place given the action, and not be
dependent on additional conditions). Actions that result in multiple or conditional events are best
processed using alternative inference schemes such as goal-driven event-generation (explained later
in this thesis).
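The kind of conditional relationship described above can be written down as a small predicate that might be evaluated after each Create event. The following C sketch is hypothetical; the part representation, the adjacency test, and the function names are not taken from SNAP.

/* After a Create event: should a ReduceDOF event be generated?
 * The rule: a new solid adjacent to an existing boundary (or a new
 * boundary adjacent to an existing solid) requires ReduceDOF.        */
typedef enum { PART_SOLID, PART_BOUNDARY, PART_LOAD } PartKind;

typedef struct { PartKind kind; double x, y, w, h; } Part;

/* crude adjacency test on bounding rectangles */
static int adjacent(const Part *a, const Part *b) {
    return !(a->x > b->x + b->w || b->x > a->x + a->w ||
             a->y > b->y + b->h || b->y > a->y + a->h);
}

int NeedsReduceDOF(const Part *created, const Part *existing, int n) {
    for (int i = 0; i < n; i++) {
        int solidNextToBoundary =
            (created->kind == PART_SOLID    && existing[i].kind == PART_BOUNDARY) ||
            (created->kind == PART_BOUNDARY && existing[i].kind == PART_SOLID);
        if (solidNextToBoundary && adjacent(created, &existing[i]))
            return 1;   /* caller posts ReduceDOF to the modelling controller */
    }
    return 0;
}

Encoding such conditions as explicit rules, rather than burying them in event handlers, is precisely the motivation for the goal-driven event generation introduced later.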
3.3. Intelligent Resources
Numerical computations found in engineering analysis are usually represented using structured
algorithms. Controllers may perform some small calculations as part of management activities, but
the majority of structured computation is left to the workhorse "resources" at lower levels in the
knowledge hierarchy. Conventional analysis programs divide algorithms into modules that can
take advantage of existing software for general-purpose or highly specialized tasks. The hybrid
system described in this thesis uses intelligent interfaces to access conventional analysis software.
This section deals with a typical intelligent resource as shown in Fig. 31.
Fig. 31. A typical intelligent resource within the hierarchy of controllers and intelligent resources.
3.3.1. Resource Interface
Resources can be designed to provide information of any kind. The TResource class description
shown in Fig. 32 is an example of a simple resource with one basic method (DoCommand) to
control its operation. Controllers may dispatch events or goals to this resource simply by creating
command objects that this resource understands, and then submitting them to the DoCommand
method. This approach makes the resource independent of its controllers (commands may be submitted by any controller that can construct them).
Fig. 32. TResource Class Description (method: DoCommand).
3.3.2. Goal-Driven Operation

Intelligent resources used in this thesis have either goal-driven or event-driven architectures. The description of a typical controller operating under event-driven inference given earlier may also be applied to event-driven resources.

A typical goal-driven operation is shown in Fig. 33. When a message is received by this kind of resource, a goal is created. This goal may be proven (TRUE or FALSE) using a backward-chaining procedure with the following steps:

(1) Resolve Built-in Clauses: If the goal is a built-in clause, it is evaluated using the associated built-in procedure.
(2) Match with Facts: If the goal is found in the context, return TRUE. If the goal is something
that the user should answer, then a prompt is given for input.
(3) Match with Rules: Try to match the goal with the conclusion of a rule. If no matching rule is
found, return FALSE.
(4) Prove the Rule Premise: Try to prove each clause in the premise of a rule. If all clauses
succeed, return TRUE.
(5) Backtrack within a Premise: If a clause fails in a rule premise, go back to the previous clause
(i.e. backtrack) and try to find new bindings for the variables.
(6) If Necessary, Try Another Rule: If the rule under consideration is of no use in proving the
goal, then try the next rule with a conclusion that matches the goal.
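A propositional sketch of this proving procedure is given below in C. It covers steps (2) to (4) and (6) only; built-in clauses, user prompts, and backtracking over variable bindings are omitted, and the data structures are assumptions made for illustration rather than the representation used in this thesis.

/* Backward chaining over propositional facts and rules. */
#include <string.h>

#define MAX_PREMISES 4

typedef struct {
    const char *premise[MAX_PREMISES];   /* NULL-terminated premise clauses */
    const char *conclusion;
} Rule;

static int in_context(const char *goal, const char **facts, int nFacts) {
    for (int i = 0; i < nFacts; i++)
        if (strcmp(facts[i], goal) == 0) return 1;
    return 0;
}

/* returns 1 (TRUE) if the goal can be proven, 0 (FALSE) otherwise */
int Prove(const char *goal, const char **facts, int nFacts,
          const Rule *rules, int nRules) {
    if (in_context(goal, facts, nFacts))          /* step 2: match with facts */
        return 1;
    for (int r = 0; r < nRules; r++) {            /* step 3: match with rules */
        if (strcmp(rules[r].conclusion, goal) != 0) continue;
        int ok = 1;
        for (int p = 0; p < MAX_PREMISES && rules[r].premise[p]; p++)
            if (!Prove(rules[r].premise[p], facts, nFacts, rules, nRules)) {
                ok = 0;                           /* step 6: try another rule */
                break;
            }
        if (ok) return 1;                         /* step 4: premise proven   */
    }
    return 0;
}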
Goal-driven resources are best suited to deal with heuristical knowledge. Conversely,
event-driven resources are best suited to deal with well-defined algorithms. Aspects of the analysis
process such as analytical solution can be automated using goal-driven resources. Many problems in numerical analysis are easily represented using algorithms (most of the resources used in this thesis are therefore event-driven).
3.3.3. Example
The operation of the solution controller and its resources (Matrix Library, Shape Functions,
Numerical Integration, Finite Elements, etc.) shown in Fig. 34 can be illustrated by extending the
analogy used earlier for controllers. Consider this controller to be the shipping officer in a
transportation company that moves knowledge between the locations of the numerical model and
the numerical response. While the shipping officer deals with the preparation of shipping orders,
coordination of product movement, and quality assurance, resources are the workers that carry out
these tasks. After receiving their orders, each resource becomes responsible for collecting and
moving some information along the solution route using their own delivery vehicles. Along the
way, resources may be required to make some decisions on their own or with the help of their
controller.
Fig. 34. The solution controller and its resources (Matrix Library, Shape Functions, Numerical Integration, Finite Elements, etc.) operating between the numerical model and the numerical response.
If the resource in this example uses a goal-driven approach, then messages received from its
controller (the shipping officer) result in logical computation to obtain that goal. For example,
when one vehicle is dispatched to a location for a delivery, the driver of the vehicle may be
responsible for determining the route. In the case that the shipment was not waiting for pickup when the vehicle arrived, the driver may have to request new orders from the central shipping office.
In order to make decisions of this kind, resources must have a decision-handling system.
Algorithms can be used for this purpose — if a resource is asked for something, then it can check a
few facts according to a predetermined plan before responding. This works for simple cases, but it
breaks down as soon as insufficient information is available or when the problem does not fit
within the bounds of the algorithm. Experts make their decisions based on a combination of
algorithmic and heuristic approaches. A KBES can be used to duplicate the expert's approach and to couple heuristic and algorithmic software, translating the output from one program into something the second program understands. There are many examples in
recent literature of coupled or hybrid knowledge-based expert systems that have been used to solve engineering problems in this way.

The low-level resources used in the finite element analysis program SNAP would all be classified as simple or non-intelligent, since the KBES that controls SNAP's resources is located at
the controller level (see chapter 4). Since controllers can themselves be considered as resources,
SNAP has three intelligent resources (the modelling, solution, and interpretation controllers) each
with their own set of rules and an inference engine that shares a common problem context for the
analysis problem.
4. Intelligent Finite Element Analysis
The quality of solutions provided by a finite element analysis program can be related to its development and application environments — a program that is easy to develop and easy to use is more likely to produce reliable results. Unfortunately, developers of conventional analysis software are often faced with extremely complex data and control structures, so the resulting programs force the end-user to work within the bounds of strict and unforgiving algorithms. Artificial intelligence techniques can simplify the development task and can produce software that is more useful than would be possible with the conventional approach. This chapter describes the development of an intelligent finite element analysis program called SNAP. A step-by-step discussion is given for the design, implementation, and operation of the SNAP software to provide a clear understanding of the principles involved.
4.1. Design
The SNAP software is a product of three designs: a conventional analysis program called "NAP"
[24], an object-oriented version called "Object NAP" [25], and some new concepts pertaining to
artificial intelligence. A previous research paper has demonstrated that Object NAP is clearly superior to NAP in terms of development effort, program organization, and reusability of source code [32]. In addition to adopting the object-oriented approach, SNAP employs an event-driven architecture aided by an internal expert system that uses goal-driven event generation to control the analysis process.
4.1.1. Objectives
The primary goal of the SNAP project is to demonstrate the advantages/disadvantages offered to
software developers and users by AI techniques such as: object-oriented programming, generic application frameworks, event-driven architectures, and knowledge-based expert systems. Specific objectives for the software are:

(1) Modelling: The user describes the problem graphically at the analytical level, and the numerical model is generated automatically.

(2) Solution: The numerical response is computed automatically, with solution control guided by heuristics where required.

(3) Interpretation: Errors in the numerical solution are heuristically estimated and the results are refined or reported to the user.
These objectives are certainly bold for an academic project, as similar programs developed using
conventional techniques could take a team of software developers years to complete. However,
completion of such a program amplifies the claims of this thesis that artificial intelligence
techniques can offer significant improvements to simplify the development task and to produce
software that is more useful than would be possible using the conventional approach.
The key to designing a user-friendly application program is to provide responsiveness,
permissiveness, and consistency in the user interface [5]. Responsiveness means that the user can
spontaneously and intuitively perform actions to produce desired results. Permissiveness means
that the user is allowed to do anything reasonable. Consistency means that applications should
build on skills that the user already has rather than forcing the user to learn a new interface.
User-interface for SNAP (Macintosh Version) is provided by the simple graphical editor
shown in Fig. 35. Standard menus (Apple, File, Edit, and Windows) provide all conventional
operations for opening, closing, saving, and editing data files. Additional menu commands are
self-descriptive features that allow the user to customize the program to suit the problem. The
graphics palette (displayed in the left side of a drawing window) contains tools for: pointing and
selecting, drawing solids, attaching loads and constraints, and for typing text. With these tools,
the user can create arbitrarily shaped objects with multiple loads and boundary conditions. SNAP
takes care of the rest of the modelling, solution, and interpretation tasks.
User-interface, from the perspective of the computer, can be idealized as shown in Fig. 36.
Once SNAP receives an analytical description of the problem (the user provides a graphical
representation), it can create a numerical model, solve for the numerical response, and interpret the
results. Computation performed by SNAP at the numerical level is hidden — the user gives and receives information at the analytical level only.
Initial finite element meshes normally have to be refined to obtain accurate results. Analysts
using conventional programs are required to estimate error associated with the numerical solution
and to revise the mesh if necessary. Automatic error analysis and mesh refinement (based on
inter-element stress discrepancies) are provided in SNAP so that the user is not required to perform these tasks manually.
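One simple indicator of inter-element stress discrepancy is the deviation of each element's nodal stress from the average of all element stresses meeting at that node. The following C sketch implements such an indicator under stated assumptions (at most eight nodes per element, a single stress component, a global tolerance); it illustrates the idea only and is not the measure or code used by SNAP.

/* Flag elements whose nodal stresses differ from the smoothed
 * (nodally averaged) stresses by more than a tolerance.             */
#include <math.h>

#define MAX_NODES 100

/* conn[e][i]:       global node number of local node i of element e
 * elemStress[e][i]: stress at local node i of element e             */
int FlagForRefinement(int nElems, int nodesPerElem,
                      const int conn[][8], const double elemStress[][8],
                      double tol, int refine[])
{
    double sum[MAX_NODES] = {0.0};
    int    count[MAX_NODES] = {0};
    int    e, i, nFlagged = 0;

    for (e = 0; e < nElems; e++)                 /* accumulate nodal averages */
        for (i = 0; i < nodesPerElem; i++) {
            sum[conn[e][i]]   += elemStress[e][i];
            count[conn[e][i]] += 1;
        }

    for (e = 0; e < nElems; e++) {               /* compare each element      */
        refine[e] = 0;
        for (i = 0; i < nodesPerElem; i++) {
            double avg = sum[conn[e][i]] / count[conn[e][i]];
            if (fabs(elemStress[e][i] - avg) > tol) { refine[e] = 1; break; }
        }
        nFlagged += refine[e];
    }
    return nFlagged;                             /* number of elements to refine */
}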
Although algorithms are available for most of the modelling, solution, and interpretation
activities, initial guesses and expert judgment are often required at key locations in these
algorithms. Analysts develop expertise related to these "fuzzy" areas by repeated use of the
numerical analysis programs. Heuristics can represent some expert knowledge so that it may be
used by automated processes such as those found in SNAP (eg. Solution control can use a rule base to select appropriate analysis options).
Fig. 36. SNAP Abstraction Levels.
4.1.2. Methodology
Artificial intelligence techniques described earlier in this thesis can be used to simplify the
development of the SNAP finite element analysis program. Techniques originating in artificial intelligence research are applied as described below.

Object-Oriented Programming

Because the object-oriented approach reduces the effort required to construct large applications, more time can be devoted to the improvement of program quality rather than simply producing a working program. The resulting additional clarity and organization is apparent in both the structure and the operation of the software.
Many object-oriented programming environments could have been used in the development
of SNAP (Smalltalk, LOOPS, Object Lisp, Object Pascal, C++, etc.); however, since most
analysis programs are written using procedural languages like FORTRAN, Pascal, and C, it seems
logical to use a hybrid language (object-oriented concepts grafted onto an existing language) such
as Object Pascal or C++. The hybrid environment of Lightspeed Pascal [88] was chosen as it
offered the best combination of development environment and language features. Programs which
involve a lot of computation should use languages that take advantage of hardware developed for
this purpose. Pascal lags behind C in this area. A set of high-speed matrix routines was therefore written using the Lightspeed C compiler [87] and stored in a library used by SNAP and its predecessors (NAP and Object NAP).
Generic Application Frameworks
The proposed hybrid solution for engineering analysis, as described in preceding chapters, is used
to design the overall structure of the SNAP software. Control is provided by a hierarchy like that
shown in Fig. 37. The main controller is responsible for the management of three subordinate
controllers for modelling, solution, and interpretation. Each of those controllers are responsible
for dealing with resources that create, analyse, and display different kinds of data. Some of the
resources are generic math libraries whereas others are highly specialized tools for finite element
analysis. In addition to resources for analysis, there are several components that provide the graphical user-interface.

Controllers and resources are easily developed using the object-oriented approach — they are just instances of the TController and TResource classes given in the previous chapter.
Communication throughout the hybrid system is provided by the simple object message facilities of
Object Pascal.
Fig. 37. SNAP Control Hierarchy.
Event-Driven Architecture
SNAP has an event-driven architecture like that shown in Fig. 38. One main event loop is used to
monitor the graphical interface and all generic operations (GenApp). Additional event loops are
needed to monitor each of the modelling, solution, and interpretation processes. Events are
generated for the main event loop when the user clicks the mouse button or types some text.
Events are also created when anything happens in the analysis process (model changed, solution
calculated, error estimated, etc.). Events related to the main event loop are held in a global queue
that is managed by the system; however, analysis-related events are stored in a location that may be managed directly by SNAP.
Fig. 38. SNAP Event-Driven Architecture (GenApp, modelling, solution, and interpretation event loops).
Knowledge-Based Expert Systems for Goal-Driven Event-Generation
Event-driven architectures perform well if there is a direct relationship between user-actions and
program response (eg. when the user types a character in a word-processor, that character is
entered in the document at the current insertion point). However, if user-actions are only conditionally related to program response, then problems arise with the event-driven architecture (eg. upon completion of the global stiffness matrix, an event can be generated for
displacement computation, but only if the load vector is also completed). Many combinations can
result within the event-handling process, leading to the same confusion as found in conventional
algorithm-based software.
A novel approach developed as part of this thesis involves the use of selective goal-driven
inference at key locations within the primary event-driven structure. The idea is to generate events
based on the current problem to replace the context-dependent conditional relationships. Instead of
specifying a rigid dependence or sequence of events, a knowledge-based expert system can use a
set of simple rules to decide at any time what actions must take place in order to obtain a solution. For example, one rule states that displacements may be computed when both the global stiffness matrix and the load vector are available. When asked for displacements, SNAP's expert system can examine this rule and decide that it needs both the stiffness matrix and the load vector. Backward chaining inference can evaluate these conditions, and if they are proven TRUE, the rule will be executed. Rule execution posts an event to the solution process so that the required computation is carried out.
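The displacement rule can be sketched as follows. The flag names, event codes, and the choice to post preparatory events for premises that cannot yet be proven are assumptions made for this illustration; SNAP's actual rule base and event generator live inside its Object Pascal class structure.

/* Goal-driven event generation for the displacement goal:
 *   IF stiffness assembled AND load vector assembled
 *   THEN displacements may be computed.                              */
#include <stdio.h>

typedef struct {
    int stiffnessAssembled;   /* context facts about the current analysis */
    int loadVectorAssembled;
} AnalysisContext;

enum { EVT_ASSEMBLE_STIFFNESS, EVT_ASSEMBLE_LOADS, EVT_SOLVE_DISPLACEMENTS };

static void PostEvent(int evt) {           /* hand the event to the solution loop */
    printf("event posted: %d\n", evt);
}

void RequestDisplacements(const AnalysisContext *ctx) {
    if (!ctx->stiffnessAssembled)  PostEvent(EVT_ASSEMBLE_STIFFNESS);
    if (!ctx->loadVectorAssembled) PostEvent(EVT_ASSEMBLE_LOADS);
    if (ctx->stiffnessAssembled && ctx->loadVectorAssembled)
        PostEvent(EVT_SOLVE_DISPLACEMENTS);   /* rule fires: generate the event */
}

int main(void) {
    AnalysisContext ctx = { 1, 0 };        /* stiffness ready, loads not yet   */
    RequestDisplacements(&ctx);            /* posts the load-assembly event    */
    return 0;
}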
Logical computation to discover facts and to carry out actions is a common paradigm for
deductive and evocative inference. The actual implementation of such a paradigm by goal-driven
event-generation within an event-driven architecture is a new and useful product of this thesis.
4.2. Implementation
The SNAP software is presented using the object-oriented conceptual model described earlier.
Subsequent sections deal with basic class descriptions and class hierarchies for the major components of the SNAP software:

(1) Graphical Interface — windows, dialogs, menus, and applications.

(2) External Procedures — high-speed numerical routines written in C.

(3) Object Lists — generic storage and retrieval of objects.

(4) Analytical Modelling and Interpretation — tools for creating functional descriptions.

(5) Numerical Modelling, Solution, and Interpretation — nodes, elements, forces, etc.

(6) Program Control — controllers and the internal expert system.
4.2.1. Graphical Interface

The GenApp environment is a generic application framework written by the author for the development of simple graphics applications such as the interface to SNAP. This environment provides a set of primitive classes (windows, dialogs, menus, alerts, controls, etc.) that allow the developer to assemble a complete application with little additional effort.
Windows
Modern engineering applications typically provide user interface with windows similar to that
shown in Fig. 39. Some of the basic features of a drawing window include:
• Close Box — This control closes the window and deallocates it from memory. If
the contents of the window have been changed, a dialog box (described later) is
displayed to allow the user to save or discard the changes.
• Title Bar — This bar displays the name of the file associated with the window (or
"Untitled" if it has not been saved). To reposition the window on the screen, align
the mouse over this bar, then press and hold the mouse button while moving the
mouse. When the mouse button is released, the window moves to its new location.
This technique is called "dragging" and is used extensively in graphics applications.
Zoom Box — This control enlarges the window to fill the screen or reduces it back
to its original size.
Fig. 39. Drawing Window (close box, zoom box, title bar, palette, content area, horizontal and vertical scroll bars, and resize box).
• Horizontal Scroll Bar — This control scrolls the window contents from side to side
so that a large view may be displayed in a small window. To scroll a small distance,
click the scroll bar arrow that points in the desired direction. To scroll by the
windowful, click in the gray area of the scroll bar. To scroll quickly to any part of a
drawing, drag the scroll box to a place in the scroll bar that represents the relative
position in the drawing.
• Vertical Scroll Bar — This control operates exactly as for the horizontal scroll bar,
but it moves the contents the window contents vertically.
• Palette — This panel acts as a "toolbox" where the user can quickly change the
current drawing tool. Many graphics applications support tools for pointing,
drawing, erasing, and manipulating shapes. GenApp provides generic support for
an arbitrary number of rows and columns of tool types.
The generic TWindow class shown in Fig. 40 was developed to provide support for the
features just described. Instances of this class maintain a variety of instance variables. Some
variables refer to the screen data structures and display (fWPtr, fContentRect, fViewRect, etc.), whereas others deal with data files (fName, fVRefNum, fType, etc.). However, most variables are just Boolean flags that indicate whether certain options are in effect (fHasPalette, fHasGrid, fHasXYLoc, fHasScrollBar, etc.). Still others manage the objects related to the controls and the palette.
Fig. 40. TWindow Class Description (methods: IWindow, Close, Save, SaveAs, Print, PageSetup, DoRead, DoWrite, ReadData, WriteData, InitOrigin, DwgOrigin, GetDwgMouse, GetDwgEvtLoc, DwgToPort, PortToDwg, DwgToXY, XYToDwg, ScrollBitMap, Draw, Activate, Update, Idle, Click, Key, Grow, Zoom, Resize).
Methods associated with windows provide all actions that take place in response to events
such as a mouse click in the close box. Most of these operations are self-explanatory (Close,
Save, Print, etc.), but others have resulted from the practical development of reusable software and
may require some explanation to be fully understood. Details relating to the purpose of these methods are given where they are first used.
Dialogs
A special kind of window, known as the "Dialog", is used to obtain information from the user.
Typical commands (Open, Close, Save, Print, etc.) may require additional data (File names, page
numbers, etc.), and dialogs allow the user to enter this information.
Fig. 41. Drawing Size Dialog (width and height edit text items, inch/cm radio buttons, a page count, and a user item).
The Drawing Size Dialog shown in Fig. 41 is used by most applications that offer multi-page
drawing areas. Key features of this dialog include: buttons, static text, edit text, radio buttons,
and user items. An OK button is provided to terminate the dialog and to use the new values. The
Cancel button also terminates the dialog, but does not save the changes. Static text items are used
to enhance the dialog so that the user understands what to do and how to do it. Edit text items
allow the user to type information using the keyboard. In this case, the user may enter values for
the drawing width and height. These values are measured using the units selected using a radio
button. Clicking on either the inch or cm radio button toggles the units to the new selection (and
converts the values displayed in the edit text items). Alternatively, the user may click in the user
item to graphically indicate the size of the drawing (in page increments).
The generic T D i a l o g class shown in Fig. 42 was developed to provide support for the
features just described. Instances of this class maintain some instance variables that refer to the
screen drawing (fResID, fDPtr, fWindow), and others that refer to the event loop associated with modal operation. A dialog is initialized by the IDialog method, followed by the data-specific Setup method, and the screen Centre method. Once a dialog is primed by the Modal method, it enters an event loop which continuously executes the Event, Validate, and ErrorReport methods. If the OK button was selected, the dialog exits and the Save method is invoked to retain the new values; otherwise the changes are discarded.
Fig. 42. TDialog Class Description (methods: IDialog, Centre, Modal, Setup, Save, Dispose, Event, Validate, ErrorReport).
Specialized dialogs can be created by defining new classes that have the TDialog class as their parent. The drawing size dialog shown earlier is provided by the class shown in Fig. 43. Data for this dialog comes from the active drawing window. Instance variables describe the data changed by this dialog (fUnits, fDwgSize) and the values that are required for short-term display. A specialized Event method is needed to intercept mouse clicks in the user item; otherwise, all interface is
handled by the parent class. Each time the Validate method is invoked, additional methods may
be used to assure that valid data has been entered. Specific procedures are provided by the parent
that allow developers to specify the data type entered in an edit text item (text, numeric, Boolean).
Fig. 43. TDwgSizeDialog Class Description (descends from TDialog; instance variables: fUnits, fDwgSize, fChanged, fDwgPages, fPageSize, fMaxDwgPages; methods: Setup, Save, Dispose, Event, Validate, SetDwgSize, CalcDwgSize, SetDwgSizeItems, GetDwgSizeItems, CalcDwgPages, DrawItems).
Menus
Commands are used to perform operations on the data in a drawing window. The user may invoke commands by selecting menu items, by typing keyboard equivalents, or by certain kinds of mouse actions. A standard feature of most applications is the File menu shown in Fig. 44.
Fig. 44. Menu User-Interface (the File menu, showing its title, command groups, keyboard equivalents, and ellipses marking commands that open dialogs: New, Open…, Close, Save, Save As…, Page Setup…, Print…, and Quit).
Applications
All application programs (also known as "Applications") have a standard event-driven architecture
that may be described using the class structure of Fig. 45 and the flowchart in Fig. 46. When an application is launched, it is initialized by the IApplication method and then given the Run message. This results in the execution of the DoFinderRequest method, which opens any documents selected by the user when the application was started.

At this stage, the application enters a continuous loop that executes until a global variable is set to indicate that the application is done and that it is time to quit. Each time through the event loop, the DoMainEventLoop method is invoked to see if there are any events in the queue. If an event is found, it is submitted to the DoEvent method where it is processed. Mouse, key, activate, update, and idle events are delegated to their respective application and window handlers.

Mouse and key events are intercepted if they pertain to menu commands. Menu selections (and keyboard equivalents) are handled by the DoMenuCommand method. Some intrinsic commands (New, Open, Close, Quit, etc.) are handled by the generic application itself.

GenApp is completely expandable, as specialized applications can declare their own DoEvent
and DoMenuCommand methods to process events and commands not processed by the generic
application, and can let the generic events and commands be processed through inheritance.
Alternatively, specialized applications may decide to override some or all of the generic methods.
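The main event loop and dispatch can be sketched in C as shown below. The event codes, handler bodies, and the stand-in event queue are assumptions made for illustration; GenApp's actual loop works with the Macintosh system event queue and the class methods listed in Fig. 45.

/* Main event loop: extract events and delegate to handlers. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { EVT_MOUSE_DOWN, EVT_KEY_DOWN, EVT_UPDATE, EVT_QUIT } EventKind;
typedef struct { EventKind kind; } Event;

/* a tiny stand-in for the system event queue */
static Event gQueue[] = { {EVT_MOUSE_DOWN}, {EVT_KEY_DOWN}, {EVT_UPDATE}, {EVT_QUIT} };
static int   gNext = 0;
static bool  gDone = false;              /* set by the Quit command */

static bool GetNextEvent(Event *e) {
    if (gNext >= (int)(sizeof gQueue / sizeof gQueue[0])) return false;
    *e = gQueue[gNext++];
    return true;
}

/* stand-ins for DoMouseDown, DoKeyDown, DoUpdate, DoIdle */
static void DoMouseDown(void) { printf("mouse handled by window\n"); }
static void DoKeyDown(void)   { printf("key handled by window\n"); }
static void DoUpdate(void)    { printf("window redrawn\n"); }
static void DoIdle(void)      { /* background work */ }

static void DoEvent(const Event *e) {    /* delegate to the appropriate handler */
    switch (e->kind) {
        case EVT_MOUSE_DOWN: DoMouseDown(); break;
        case EVT_KEY_DOWN:   DoKeyDown();   break;
        case EVT_UPDATE:     DoUpdate();    break;
        case EVT_QUIT:       gDone = true;  break;
    }
}

int main(void) {                         /* the continuous loop until quit */
    Event e;
    while (!gDone) {
        if (GetNextEvent(&e)) DoEvent(&e);
        else                  DoIdle();
    }
    return 0;
}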
Fig. 45. TApplication Class Description (descends from TObject; methods: IApplication, Run, DoFinderRequest, DoMainEventLoop, DoEvent, DoMouseDown, DoMouseUp, DoKeyDown, DoActivate, DoUpdate, DoIdle, DoMenuCommand, DoUpdateMenus, DoAboutApp, DoNewFile, DoOpenFile, DoNewWindow, DoClose, DoCloseAll, OurWindow).
Fig. 46. Application Event-Handling Flowchart (IApplication and Run lead to DoFinderRequest and the main event loop, which dispatches to DoMouseDown, DoMouseUp, DoKeyDown, DoActivate, DoUpdate, DoIdle, and menu command handlers such as DoAboutApp and DoNewFile).
4.2.2. External Procedures
One of the most important features of hybrid object-oriented languages is the ability to support
existing software. This is especially pertinent in the field of finite element analysis, where
specialized routines are available for the high-speed manipulation of matrices, including:
multiplication, addition and inversion. A library that was developed for the NAP project [26] has been utilized without significant modification in Object NAP [27] and SNAP [31].

A few classes may be introduced to handle the basic matrix algebra, including the TVector and TMatrix classes shown in Fig. 47 and Fig. 48. A variety of addition, scaling, multiplication, and transposition operations are supported. Each of these methods invokes C library functions in
the same manner as a procedural program would call any of its routines.
Fig. 47. TVector Class Description (descends from TObject; instance variables: fn, fx; methods: alloc, dealloc, Free, Add, Scale, Dot, Max, Min).
Fig. 48. TMatrix Class Description (descends from TObject; instance variables: fm, fn, fx; methods: alloc, dealloc, Free, Transpose, Add, Scale, Multiply, MultVector, Max, Min).
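A routine of the kind wrapped by these methods might look like the following C sketch. Flat row-major storage and the function name are assumptions; the calling conventions of the actual library are not reproduced here.

/* Matrix-vector product of the kind invoked by MultVector:
 * y = A x, where A is an m-by-n matrix stored row by row.          */
void MatMultVector(int m, int n, const double *a,   /* m x n matrix    */
                   const double *x,                 /* n-vector        */
                   double *y)                       /* result, m-vector */
{
    for (int i = 0; i < m; i++) {
        double sum = 0.0;
        for (int j = 0; j < n; j++)
            sum += a[i * n + j] * x[j];
        y[i] = sum;
    }
}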
Matrices found in numerical analysis tend to be large and sparse. With a proper numbering
scheme, a sparse matrix may be organized so that its non-zero values are located near the matrix
diagonal in a banded strip. In this case, all values that are more than the bandwidth away from the
diagonal are zero and do not have to be stored in memory. The reduction in required memory is
substantial, permitting the solution of much larger problems. A banded matrix technique shown in Fig. 49 is used for the global stiffness matrix, whereas full matrices are used for the local stiffnesses. Elements in the banded matrix are accessed by computing an offset from the start of the array, based on the row and column numbers of the full matrix (this offset is a simple function of the row index, the column index, and the bandwidth).
Fig. 49. Banded Matrix Storage (only the entries of the global stiffness matrix that lie within the bandwidth of the diagonal are stored; entries outside the band are not used; local stiffness matrices are stored in full).
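One common packing scheme of this kind stores, for each row, only the entries within the bandwidth of the diagonal, so that entry (i, j) of the full symmetric matrix maps to a single offset. The following C sketch shows such a mapping; the exact packing and the function names used by SNAP's library are assumptions made here for illustration.

/* Banded storage addressing: row i keeps columns i .. i+bandwidth-1
 * of the upper symmetric band, packed row by row.                   */
long BandIndex(long i, long j, long bandwidth)
{
    if (j < i) { long t = i; i = j; j = t; }   /* symmetric: use upper half */
    if (j - i >= bandwidth) return -1;         /* outside the band: zero    */
    return i * bandwidth + (j - i);
}

double BandGet(const double *band, long bandwidth, long i, long j)
{
    long k = BandIndex(i, j, bandwidth);
    return (k < 0) ? 0.0 : band[k];
}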
The banded matrix class can be defined as a special kind of matrix. All data and methods
associated with a matrix are inherited by a banded matrix. The class description shown in Fig. 50
has no additional instance variables, and only three additional methods. A special method is
provided to permit the addition of a full matrix into the banded matrix (used during stiffness
assembly operations).
Two methods are provided to solve a system of equations using the Cholesky method of decomposition. The Solve method assumes that the solution is to be performed only once, so it discards the lower triangular matrix upon completion. Conversely, the ReSolve method saves the partial solution so that multiple vectors (load cases) can be submitted for a single structure. Interactive analysis provided by SNAP occurs faster using the ReSolve method, as this avoids unnecessary refactorization of the global stiffness matrix.
Fig. 50. Banded Matrix Class Description (descends from TMatrix; additional methods: AddMatrix, Solve, ReSolve).
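The distinction between Solve and ReSolve can be illustrated with a dense Cholesky sketch in C: the factor is computed once and retained, and each additional load case requires only forward and back substitution. The code below is illustrative (dense storage, fixed size N); SNAP applies the same idea to the banded global stiffness matrix.

/* Factor a symmetric positive-definite A into L (lower), A = L*L^T. */
#include <math.h>

#define N 3   /* illustrative system size */

int CholeskyFactor(const double A[N][N], double L[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j <= i; j++) {
            double sum = A[i][j];
            for (int k = 0; k < j; k++) sum -= L[i][k] * L[j][k];
            if (i == j) {
                if (sum <= 0.0) return 0;      /* not positive definite */
                L[i][i] = sqrt(sum);
            } else {
                L[i][j] = sum / L[j][j];
                L[j][i] = 0.0;
            }
        }
    return 1;
}

/* "ReSolve": reuse an existing factor for a new load case b, result x. */
void CholeskyReSolve(const double L[N][N], const double b[N], double x[N])
{
    double y[N];
    for (int i = 0; i < N; i++) {              /* forward substitution L*y = b */
        double sum = b[i];
        for (int k = 0; k < i; k++) sum -= L[i][k] * y[k];
        y[i] = sum / L[i][i];
    }
    for (int i = N - 1; i >= 0; i--) {         /* back substitution L^T*x = y  */
        double sum = y[i];
        for (int k = i + 1; k < N; k++) sum -= L[k][i] * x[k];
        x[i] = sum / L[i][i];
    }
}

A Solve operation amounts to calling CholeskyFactor followed by CholeskyReSolve and then discarding the factor, whereas ReSolve keeps the factor for the next load case.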
4.2.3. Object Lists

Engineering analysis programs make extensive use of lists. Although these lists hold arbitrary kinds of data, the procedures used to construct, search, and alter them are identical. The TLink and TList classes, displayed in Fig. 51 and Fig. 52, collectively provide generic support for list management.

A TLink object has three instance variables. The fObj field holds an object reference — this is the actual data to be stored in a list. The fNext and fPrevious fields hold references to the links before and after the given link. A chain of links can be constructed to form a list. Each link knows which links are before and after itself, so it is easy to insert and extract links from a chain. A variety of simple methods (shown in Table 3) support these operations.
Fig. 51. TLink Class Description (methods: ILink, Free, Clear, InsertAfter, InsertBefore, Extract, DownBy, DownToObj, UpBy, UpToObj).
Table 3. TLink Class Methods.
ILink initialize a link.
Free free a link and its data.
Clear free a link.
InsertAfter insert a new link in a list after a specified location.
InsertBefore insert a new link in a list before a specified location.
Extract extract a link from a list.
DownBy move a reference down a list by a given number of links.
DownToObj move a reference down a list until a given object is found.
UpBy move a reference up a list by a given number of links.
UpToObj move a reference up a list until a given object is found.
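The structure behind TLink is an ordinary doubly-linked chain. The following C sketch shows illustrative equivalents of the InsertAfter and Extract methods from Table 3; the field and function names mirror the text but are not the Object Pascal source.

/* Doubly-linked chain of links, each holding an object reference. */
#include <stdlib.h>

typedef struct Link Link;
struct Link {
    void *obj;        /* the data object held by this link (fObj)      */
    Link *next;       /* link after this one (fNext)                   */
    Link *previous;   /* link before this one (fPrevious)              */
};

/* InsertAfter: place a new link holding obj after the given link */
Link *InsertAfter(Link *at, void *obj)
{
    Link *l = malloc(sizeof *l);
    if (!l) return NULL;
    l->obj = obj;
    l->previous = at;
    l->next = at ? at->next : NULL;
    if (l->next) l->next->previous = l;
    if (at) at->next = l;
    return l;
}

/* Extract: remove a link from its chain (the caller frees it) */
void Extract(Link *l)
{
    if (l->previous) l->previous->next = l->next;
    if (l->next)     l->next->previous = l->previous;
    l->next = l->previous = NULL;
}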
A TList object also has three instance variables. The fListSize field keeps track of the number of objects currently in the list. The fTop and fBottom fields hold references to the links at the start and end of the list. A list is responsible for maintaining these fields in conjunction with operations performed by its links. A variety of simple and complex methods (shown in Table 4) are provided to build, search, and dismantle lists.
Fig. 52. TList Class Description (methods: IList, Free, Append, AppendList, AppendByPredicate, AppendListByPredicate, Insert, InsertList, InsertByPredicate, InsertListByPredicate, MoveToTop, MoveToBottom, Remove, RemoveNum, RemoveAll, RemoveByPredicate, Delete, DeleteNum, DeleteAll, DeleteList, First, Last, FindByObject, FindByIndex, FindByPredicate, FindLastByPredicate, FindBestObject, FindGoodObject, InList, ExistsOne, ExtractLink, DoToAll, DoToAllWithIndex, DoToSubset).
Table 4. TList Class Methods.
IList initialize a list.
AppendByPredicate append an object at the last place in a list at which a given predicate function is TRUE.
InsertByPredicate insert an object at the first place in a list at which a given predicate function is TRUE.
InsertListByPredicate insert a list of objects using InsertByPredicate (reverses order).
MoveToTop move an object to the top of a list.
MoveToBottom move an object to the bottom of a list.
FindByPredicate find the first object that returns TRUE to a given predicate function.
FindLastByPredicate find the last object that returns TRUE to a given predicate function.
FindBestObject find the best object in a list as ranked by a given predicate function.
4.2.4. Analytical Modelling and Interpretation
An analytical model may be constructed using a few simple building blocks. The fundamental
components used in this thesis are: solids, loads, and boundaries. Each of these components is called a model part, and may be directly created and operated upon by the user. The user may draw these parts in a model window using graphical tools. Once a model has been constructed, it must be displayed — this is done using a variety of shapes that display objects. In summary, the classes involved in analytical modelling and interpretation are:

(1) Model Part

(2) Shape

(3) Model Window
Model Parts
The components of an analytical model must be able to collectively represent spatial and functional
behaviour — the model parts must describe solid geometry and material, as well as force and
displacement boundary conditions. There are many alternatives available for classification of
model parts, but the hierarchy shown in Fig. 53 provides a simple and effective representation.
Model parts have at least two instance variables: fWindow and fShape. The fWindow field holds a reference to the part owner (described later in the context of ownership links). A shape is needed by all parts so that they may be drawn in the model window. The part kind determines which kind of object is referenced in the shape field (rectangle, oval, polygon, etc.).
When a part is created by the user, it is initialized by the IModelPart method within the Create operation. If the part is successfully completed, it is added to a list maintained by its window using the Append method. A model part may be drawn, moved, resized, selected, or have several other operations done to it until the user invokes the Delete method to remove it from the list of parts.
Solids are special model parts used to describe two-dimensional structural components. Additional instance variables are provided for solids to store thickness, material, finite element meshes, etc. These additional fields are initialized using the ISolid method (invoked by the specialized Create method). MakeMesh, ReduceDOF, and other specialized methods are used to invoke lower level numerical analysis routines.
Fig. 53. Model Part Class Descriptions.
The TLoad and TBoundary classes collectively provide force and displacement boundary conditions. The interaction between objects such as solids, loads, and boundaries must be specified. For simplicity, it is assumed that both loads and boundaries are attached to solids.
Shapes
All objects displayed in the content of the model window belong to the shape class hierarchy shown in Fig. 54. This class provides a variety of instance variables and methods to create, draw, and manipulate shapes.
Fig. 54. TShape Class Description.
Model Window
The fWindow instance variable held by each model part is a reference to an object of the TModelWindow class. This window class is a specialization of the generic window provided by GenApp. Amongst its instance variables are a list of model parts and a numerical model. The fModelParts field holds a list object that stores the solids, loads, and boundaries in the current model. The fNumericalModel field holds a reference to the current analysis methods. Linear elastic static analyses are used in this thesis; however, this could also be extended to include other kinds of analyses.
Specialized Close, Idle, Click, and Key methods incorporate a few additional functions with the basic inherited methods. A variety of other methods are provided to deal with model part creation and manipulation (see Fig. 55).
Fig. 55. TModelWindow Class Description.
4.2.5. Numerical Modelling, Solution, and Interpretation
A finite element analysis program can be constructed from a few simple classes. Some of these classes are needed for spatial and functional descriptions, whereas others are related to the way the numerical solution is obtained. For linear elastic analysis using isoparametric elements (see Appendices A and B), some of the most important classes are:
(1) Nodes
(2) Shape Functions
(3) Gauss Points
(4) Elements
(5) Mesh
Nodes
A concept built into conventional finite element theory is the notion of a node, a point of reference in space. Displacements, forces, boundary conditions, and even the domain geometry rely on nodes, so the node is a natural class with which to begin the description of finite elements. Since
nodes occupy a point in space, there must be a set of coordinates as part of this object. Fixity of a
node can also be included in this description along with the displacements associated with the
degrees of freedom. Loads are not included with the TNode class in this implementation, as is
explained later.
Methods associated with the TNode class include procedures for node initialization and
alteration. Some procedures are needed to assign degrees of freedom to a node based on a mesh
numbering sequence, and to assign displacements to nodes once a displacement vector has been
computed.
Once a group of nodes has been created (either by reading them from a file, or by generating
them using a graphical input device), they must be stored somewhere. An appropriate location to
store the node instances is in a TList object. A specialized TList class called TNodeList can be used for this purpose. This list class has a few special procedures for dealing with nodes, combined with the generic methods of all lists. A graphical representation of the resulting TNode and TNodeList classes is given in Fig. 56.
Fig. 56. Node List Management.
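The node bookkeeping described above can be condensed into a small C++ sketch. The two-degrees-of-freedom layout and the field types are assumptions made for illustration; the names loosely follow Fig. 56.

    #include <vector>

    // Illustrative node record: coordinates, fixity, equation numbers, displacements.
    struct Node {
        double x = 0.0, y = 0.0;              // coordinates
        bool fixedX = false, fixedY = false;  // fixity flags
        int dof[2] = {-1, -1};                // global equation numbers (-1 = fixed)
        double disp[2] = {0.0, 0.0};          // computed displacements
    };

    // A specialised list of nodes with the two procedures mentioned in the text.
    class NodeList {
    public:
        void Append(const Node& n) { nodes.push_back(n); }

        // Number the free degrees of freedom in list order (cf. AssignDOF);
        // the return value is the total number of equations.
        int AssignDOF() {
            int next = 0;
            for (Node& n : nodes) {
                n.dof[0] = n.fixedX ? -1 : next++;
                n.dof[1] = n.fixedY ? -1 : next++;
            }
            return next;
        }

        // Copy values from a computed displacement vector back to the nodes (cf. AssignDisp).
        void AssignDisp(const std::vector<double>& d) {
            for (Node& n : nodes)
                for (int i = 0; i < 2; ++i)
                    n.disp[i] = (n.dof[i] >= 0) ? d[n.dof[i]] : 0.0;
        }

        std::vector<Node> nodes;
    };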
Shape Functions
The shape function class appears as shown in Fig. 57. This class contains references to: the shape functions (N), the derivatives (Nr and Ns), and the determinant of the Jacobian. Methods leading up to the computation of the strain-displacement matrix (B) include procedures which calculate the product of the inverse Jacobian matrix with the linear operator, as well as a utility to store the derivative vectors into the Nrs matrix used in subsequent multiplications. An additional utility is provided (LocalToGlobal) to compute the global coordinates (x, y) of any point in an element given the natural coordinates (r, s) for subsequent use in stress plot output.
Fig. 57. TShapeFcns Class Description.
Gauss Points
The TGauss class appears as shown in Fig. 58. This class contains references to: the total number of points (used to decide how much storage to allocate), as well as the actual points and weights in vector form. The only method used is one to initialize the data stored in the Gauss vectors once it has been determined that an element needs that data. Since Gauss data is the same for all elements that use the same kind of integration, these vectors can be shared by an element class. This could be provided directly by a class system which supplies class variables, but it is also easily done by sharing references to a single set of Gauss vectors.
Fig. 58. TGauss Class Description.
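One way to realise the sharing of Gauss data among all elements of a class is sketched below in C++; SNAP itself initializes instance fields once and lets the elements reference them, so the use of function-local static data here is only an assumption for illustration.

    #include <cstddef>

    // Shared Gauss data for one family of elements (illustrative sketch).
    struct GaussData {
        std::size_t total;       // number of sample points (fTotal)
        const double* points;    // natural coordinates stored as (xi, eta) pairs (fPoints)
        const double* weights;   // weighting factors (fWeights)
    };

    // 2 x 2 rule for four-node rectangles; every element of the class uses the same data.
    const GaussData& Rect4NGauss() {
        static const double pts[] = { -0.5773502692, -0.5773502692,
                                       0.5773502692, -0.5773502692,
                                       0.5773502692,  0.5773502692,
                                      -0.5773502692,  0.5773502692 };
        static const double wts[] = { 1.0, 1.0, 1.0, 1.0 };
        static const GaussData data = { 4, pts, wts };
        return data;   // initialised on first use, shared thereafter
    }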
The TShapeFcns and TGauss classes employ similar class hierarchies. Although it may appear that in many cases the generic procedures are overridden, this is not necessarily the case. If a generic procedure can be used as part of a specialized procedure, the overridden procedure may still be inherited between supplementary actions performed by the special class. Initialization operations found in both the TShapeFcns and TGauss generic classes are invoked in this manner after certain key parameters have been set. This kind of class-interaction optimization can be beneficial if properly controlled, or can lead to unnecessarily complicated situations when used in excess. At all times it is important to retain a certain measure of independence between classes, as that is the underlying principle of the object-oriented approach.
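The pattern of a specialized initializer that sets its key parameters and then reuses the generic one can be sketched as follows; the class and method names are illustrative rather than SNAP's.

    #include <vector>

    class ShapeFcns {
    public:
        virtual ~ShapeFcns() = default;
        // Generic initialization: allocate storage once the node count is known.
        virtual void Init() { N.assign(numNodes, 0.0); }
    protected:
        int numNodes = 0;
        std::vector<double> N;
    };

    class Rect4NShapeFcns : public ShapeFcns {
    public:
        // The specialization sets the key parameter and then invokes the
        // inherited procedure instead of duplicating it.
        void Init() override {
            numNodes = 4;
            ShapeFcns::Init();
        }
    };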
Elements
Finite elements are represented using the TElement class shown in Fig. 59. Elements have an instance variable called fWindow that holds a reference to the owner (a TWindow instance). This provides a link back to generic methods needed in drawing operations. Otherwise, a TElement object has only a few simple instance variables to hold material, nodes, Gauss points, and shape functions.
Generic methods such as bandwidth computation, index assembly for adding the local stiffness to the global stiffness, and retrieval of nodal coordinates and displacements can be carried out
regardless of the element formulation. Stiffness and stress computation routines can use some
submethods from the shape function and Gauss classes. Specialized methods are required for each element class to perform operations such as drawing its nodes and boundaries.
Fig. 59. TElement Class Description.
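The generic stiffness computation can be sketched in C++ as a loop over Gauss points; the 8 x 8 stiffness and 3 x 3 constitutive matrix correspond to a two-dimensional four-node element, and the routine that supplies the B matrix and Jacobian determinant is deliberately left abstract.

    #include <array>
    #include <cstddef>
    #include <functional>

    using BMat = std::array<std::array<double, 8>, 3>;   // strain-displacement matrix
    using CMat = std::array<std::array<double, 3>, 3>;   // constitutive matrix
    using KMat = std::array<std::array<double, 8>, 8>;   // element stiffness

    // Accumulate K = sum_g  w_g * B^T C B * t * detJ over the Gauss points.
    KMat CalcStiffness(const CMat& C, double thickness,
                       const std::array<std::array<double, 2>, 4>& gaussPts,
                       const std::array<double, 4>& gaussWts,
                       const std::function<BMat(double, double, double&)>& BMatrix) {
        KMat K{};   // zero-initialised
        for (std::size_t g = 0; g < gaussPts.size(); ++g) {
            double detJ = 0.0;
            BMat B = BMatrix(gaussPts[g][0], gaussPts[g][1], detJ);
            double factor = gaussWts[g] * thickness * detJ;
            for (int i = 0; i < 8; ++i)
                for (int j = 0; j < 8; ++j) {
                    double kij = 0.0;
                    for (int a = 0; a < 3; ++a)
                        for (int b = 0; b < 3; ++b)
                            kij += B[a][i] * C[a][b] * B[b][j];
                    K[i][j] += factor * kij;
                }
        }
        return K;
    }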
Mesh
An ordered list of elements known as a TMesh is used to numerically model each solid part as shown in Fig. 60. A TMesh object has an instance variable called fWindow that holds a reference to the owner (a TWindow instance). This provides a link back to generic methods needed in drawing and mesh generation operations. Otherwise, a TMesh object has only a few simple instance variables that include: the shape of the mesh (and of the TSolid parent), the nodes, and the elements.
A TMesh object is initialized using the IMesh method after (or during) its creation by its solid owner. Methods provided for reducing and assigning degrees of freedom of its nodes act in parallel to methods that actually alter the fixities of the nodes based on interaction with TBoundary objects. A variety of other methods are needed to invoke bandwidth, stiffness, stress, and error computations.
Fig. 60. TMesh Class Description.
Numerical Model
The linear elastic static analyses used in SNAP are represented using the TNumericalModel class shown in Fig. 61. An object of this class is held by TModelWindow instances. This clarifies the previously mentioned advantage of maintaining a reference to the window with several low-level objects in the numerical model: high-level controllers (described later) can use this reference to invoke appropriate numerical solution methods. This way, solutions are not directly built into the analytical and numerical model framework; rather they may be attached as needed in response to events.
Fig. 61. TNumericalModel Class Description.
4.2.6. Program Control
This section describes the global architecture of the SNAP data and control structures. Ownership of the objects used in SNAP is structured as shown in Fig. 62. As mentioned earlier, each application program manages a list of window objects. Each of these windows is in turn responsible for a complete analysis problem. A hierarchy of descending ownership links is needed in order to delegate work to subordinate objects; commands originating from the window (or even the application) can be delegated through this framework. Analytical and numerical model data are referenced by the window through its fModelParts and fNumericalModel fields. The fModelParts field holds a list of solids, loads, and boundaries. Solids have links to low-level numerical analysis objects through their instance variable field fMesh. A TMesh object goes one step deeper by storing a list of TElement objects in its fElements field.
Fig. 62. Descending Ownership Links.
Alternatively, it is sometimes necessary for low-level objects to communicate with high-level objects through a hierarchy of ascending ownership links. TModelPart, TMesh, TElement, and TNumericalModel objects all have fields that hold references to their TModelWindow owner.
Fig. 63. Ascending Ownership Links.
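These mutual references amount to nothing more than pointers held in both directions; the following simplified C++ fragment is an illustration of the idea, not the SNAP class definitions.

    #include <vector>

    class ModelWindow;                      // forward declaration for the ascending links

    class Element {
    public:
        explicit Element(ModelWindow* owner) : window(owner) {}
        void CalcStiffness() { /* element-level computation would go here */ }
        ModelWindow* window;                // ascending link back to the owner
    };

    class Mesh {
    public:
        explicit Mesh(ModelWindow* owner) : window(owner) {}
        void CalcStiffnesses() {            // work delegated downward to the elements
            for (Element& e : elements) e.CalcStiffness();
        }
        ModelWindow* window;                // ascending link
        std::vector<Element> elements;      // descending link (fElements)
    };

    class ModelWindow {
    public:
        std::vector<Mesh> meshes;           // descending links (one mesh per solid)
        void CalcStiffnesses() {            // a command originating at the window...
            for (Mesh& m : meshes) m.CalcStiffnesses();   // ...descends the hierarchy
        }
    };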
The event-driven architecture of GenApp provides the primary control structure for SNAP.
Within this global framework, analytical modelling and interpretation activities take place due to
direct user involvement (mouse and keyboard input, etc.). Creation, movement, and alteration of
model parts (solids, loads, and boundaries) can be equated to the operations performed on their
graphical representations, so the user can easily understand how and when to perform actions.
However, the lower level operations that must take place in a numerical analysis program are considerably more complex. Mesh generation, equation assembly, solution, and subsequent response processing are activities that must be performed, yet if the user is allowed to randomly enter and ask for information at the analytical level, it is essential that the numerical analysis control structure be able to respond in whatever order the situation demands.
If SNAP is to perform numerical analysis at every available opportunity, it should place a procedure somewhere (or at several locations) in the GenApp event loop. Since the application DoIdle method is executed whenever the analytical level is in an idle mode (the user is doing nothing), this makes it an ideal location to support numerical analysis through controller task performance. Each time its DoIdle method is executed, SNAP simply invokes the Task methods for each of the modelling, solution, and interpretation controllers as shown in Fig. 64. Whether the Task method actually invokes any operations depends on the specific controller and the current problem state.
Task performance within a generic controller takes place as shown in Fig. 64. Each time the controller is asked to perform a task, it enters its own event loop. First the controller GetNextEvent method is called to see if there are any events that may be handled by this controller (it does this by checking parameters determined at initialization). If there are no events then it stops execution (returns control to the caller), otherwise it passes the event to the controller DoEvent method for processing. Each controller can use the same generic event loop, as long as it has its own specialized DoEvent method.
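The generic loop might be sketched in C++ as shown below. The event representation is an assumption made for illustration; the point is that Task drains only the events this controller recognises and then returns control to the caller.

    #include <deque>
    #include <string>

    struct Event {
        std::string kind;    // e.g. "create", "stiffness", "stresses"
        void* target;        // the object the event is directed at
    };

    class Controller {
    public:
        explicit Controller(std::deque<Event>* q) : queue(q) {}
        virtual ~Controller() = default;

        // Generic event loop: fetch events this controller handles, pass each to
        // the specialized DoEvent, and stop when none remain (cf. Task).
        void Task() {
            Event e;
            while (GetNextEvent(e))
                DoEvent(e);
        }

    protected:
        virtual bool Handles(const std::string& kind) const = 0;
        virtual void DoEvent(const Event& e) = 0;

        // Remove and return the first queued event that this controller handles.
        bool GetNextEvent(Event& out) {
            for (auto it = queue->begin(); it != queue->end(); ++it)
                if (Handles(it->kind)) {
                    out = *it;
                    queue->erase(it);
                    return true;
                }
            return false;
        }

        std::deque<Event>* queue;
    };

The modelling, solution, and interpretation controllers would each supply their own Handles and DoEvent methods, and SNAP would call their Task methods from the application DoIdle method.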
The modelling controller shown in Fig. 65 has a specialized DoEvent method that processes
events related to the creation, alteration, and refinement of solids, loads, and boundaries.
Additional events are supported to take care of specialized modelling activities such as degree of
freedom reduction and assignment (node numbering). Since the controller operates at a high level,
it can take advantage of both ascending and descending ownership links. For example, events that
are directed at solids can climb the hierarchy to find the numerical solution methods that are needed.
Fig. 65. Modelling Controller Events (create, move, refine, reduceDOF, assignDOF).
The solution controller shown in Fig. 66 has a specialized DoEvent method that processes
events related to the construction, assembly, and solution of equilibrium equations. Stiffness
matrix computation starts with the individual elements and then continues with the assembly of a
global stiffness matrix. Since elements (and a mesh of elements) can compute their stiffness
independently from the assembly process it is logical to have two kinds of events. For example,
when the solution controller receives a stiffness event for a mesh (or solid) it recognizes that
computation is required, whereas if the event is directed at the numerical model the activity
performed is the assembly of the global stiffness. Load and displacement events invoke the
remaining activities of load vector assembly and solution for the displacement vector (using the factorized global stiffness matrix).
The interpretation controller shown in Fig. 67 has a specialized DoEvent method that
processes events relating to the computation of stresses and errors associated with a given
displacement field. These events simply invoke low-level operations within all elements to compute and store these results.
Fig. 66. Solution Controller Events.
Fig. 67. Interpretation Controller Events.
Since many kinds of actions can be performed by the user, and the analysis program has to
decide when and how to process these actions, context-dependent conditional relationships must be
introduced in the event-handling mechanisms. This could be done by embedding if-then constructs
throughout the source code, but a much more practical method involves the use of an internal
expert system that determines and controls event-generation based on goal-driven inference.
The class structures required to implement inference in the object-oriented environment of the analysis program are shown in Figs. 68, 69, 70 and 71. The inference engine described by the TInferenceEngine class has instance variables that hold references to the rules in a knowledge base, the rules currently being examined, the facts in a specific problem, and the goals to be attained to find a solution. Instances of this class may be initialized to a specific knowledge base.
Fig. 68. TInferenceEngine Class Description.
A mixed rule format (described in chapter 2) is implemented using the TRule class. This class provides methods to test whether a rule is possible, to match its conditions against the current facts, and to execute its actions.
Fig. 69. TRule Class Description.
Logical statements are provided by the TStatement class. A statement is either TRUE or FALSE and either PROVEN or NOT-PROVEN. The fValue and fProved fields are just Boolean flags that record this status.
Fig. 70. TStatement Class Description.
The TAction class provides instance variables that determine what message should be relayed and to which object it should be sent.
Fig. 71. TAction Class Description.
The inference engine supports both forward and backward chaining. The algorithms for these two approaches are illustrated in Figs. 72 and 73. In the case of rule-based forward-chaining, each rule is tested to see if it can generate new information. If a rule is successfully proven, the new data it has generated is used to see if more rules can be proven, and the process continues until the fact base reaches a stable state. The algorithm used in the TInferenceEngine class is essentially the same as found in most rule-based systems that support mixed forward and backward chaining with backtracking.
4.3. Operation
This section uses an example to illustrate how SNAP performs numerical analysis. Descriptions are given for: the user-interface, the underlying flow of events caused by user-actions, and the logical computations performed by SNAP's expert system.
4.3.1. User-Interface
The example problem shown in Fig. 74 can be created in three steps:
(1) Draw a Boundary object — the gray blob to which the beam is attached.
(2) Draw a Solid object — the rectangular beam.
(3) Draw a Load object — the concentrated load shown at the beam tip.
Fig. 74. SNAP User-Interface.
Drawing is a familiar activity for persons who use CAD programs. This simply means to use
a graphical input device (mouse, pen, etc.) to create a shape in a computer model. Different kinds
of objects are drawn depending on the currently selected tool. In Fig. 74, the palette has eight
tools with the top left one selected. Once the user completes drawing, SNAP takes over and
automatically performs all numerical analysis tasks including: mesh generation, stiffness and load
computation, displacement solution, and stress and error output. Displaced shapes and stress
contours may be graphically overlaid on the original model (based on flags set using the menus).
4.3.2. Flow of Events
SNAP executes numerical analysis in response to drawing actions performed by a user. The exact
procedures followed by SNAP depend on the kind of problem being analysed and the status of the
knowledge-base in SNAP's expert system. This section describes a sample session with SNAP.
Before creating a model, the user can adjust the screen display to suit the problem. Grid lines
shown in Fig. 74 are evenly spaced starting from the top-left corner of the drawing window. The
grid spacing and the position of the axes can be set using the dialogs shown in Figs. 75 and 76.
These dialogs are displayed in response to appropriate selections in the Options menu.
Fig. 75. Grid Spacing Dialog.
Fig. 76. Rulers Dialog.
A TBoundary object can be created by selecting the corresponding palette tool and drawing an arbitrary shape as shown in Fig. 77. SNAP responds by posting a Create message in the numerical analysis event loop. Along with the message is a reference to the boundary object, so when the modelling controller extracts this event from the event queue, it has the object to be operated on. A ReduceDOF event may be posted to the window's numerical model if there are existing solids that overlap the new boundary.
A TSolid object can be attached to the newly created boundary. Before specifying its location, the user can adjust the default thickness and material for the solid using the dialogs shown in Figs. 78 and 79. In this case, a thickness of 100 mm and material properties of E = 200,000 MPa and v = 0.3 will be used by the linear analysis procedures in SNAP. All solids created use the
default values as set by the user. If the user wants to subsequently change any value, this may be
done by selecting the object (or objects) to be altered and recalling the same dialogs. This permits
the flexible definition of models with many solids having unique or shared properties.
Fig. 78. Thickness Dialog.
Fig. 79. Material Dialog.
The user can create a TSolid object using the rectangular palette tool as shown in Fig. 80. SNAP responds by posting a Create event with the new solid object. The modelling controller intercepts this event, and may cause events that: create a mesh, reduce the degrees of freedom, assign the overall node and degree of freedom numbers, and compute the stiffness of the elements in the new mesh. Some events are directly generated by other events (degrees of freedom reduction events are always generated when a solid is created), whereas others take place as a result of logical computations performed by SNAP's expert system (events such as stiffness computation). Exactly when and how events are generated is explained in the next section with reference to the rules in SNAP's knowledge base.
Fig. 80. Boundary and Solid Objects.
A TLoad object can be attached to the newly created solid. Before specifying its location, the user can adjust the default magnitude using the concentrated load dialog shown in Fig. 81. In this case, the load is a vertical load of 2 kN. As with the properties for solid objects, loads may be altered after creation by selecting them and recalling the same dialog.
Fig. 81. Concentrated Load Dialog.
When the user adds a TLoad object to the model, SNAP responds by posting a Create event with the load object. This results in subsequent events that: attach the load to a specific solid object, compute the contribution of the load object to the global load vector, and instigate the solution for displacements by factorization using the global banded stiffness matrix and the global load vector. Once a displacement vector is computed, the nodes are assigned the new displaced values and the displaced shape is drawn (Fig. 82).
Fig. 82. Boundary, Solid, and Load Objects.
The displaced shape is controlled by the Displacements Dialog of Fig. 83. In this case, the
displacements were automatically magnified 989.3 times to create an aesthetically pleasing display.
The user can override the default by typing a new value into this dialog, or can turn the display off altogether.
Fig. 83. Displacements Dialog.
To display stresses due to the load, the user can adjust the range used for stress-shading as
shown in Fig. 84. In this case, shades from white to black are uniformly distributed over the
range 0.0 to 10.0 MPa. Turning the stress display on results in the stress pattern (stress-norm) shown in Fig. 85.
Fig. 84. Stresses Dialog.
Fig. 85. Stress Display.
Numerically integrated finite element solutions yield stresses at the Gauss points. These
values can be extrapolated from the Gauss points out to the nodes for each element. For this
reason, nodes that are attached to more than one element have multiple values for stresses. The variance of these stresses is known as the inter-element discrepancy and gives a good measure of the solution error. To display the errors, the user can adjust the number of refinement iterations and the maximum inter-element percentage error permitted as shown in Fig. 86. In this case, 3 iterations were allowed, and a maximum error of 10%. Turning the error display on results in the error pattern shown in Fig. 87.
Fig. 86. Errors Dialog.
Fig. 87. Error Display.
If the error in any element exceeds the specified value for the maximum allowable percentage error (10% in this case), a Refine event will be posted for the solid object that has the error. The beam problem shown in Fig. 87 has a maximum inter-element discrepancy of only 9%, so no refinement is performed.
SNAP provides many other features not discussed here, such as: object relocation and resizing functions, user-directed calculation, and process interruption. Refer to the SNAP manual [30] for further details.
The majority of SNAP's operation is built around its event-driven architecture; however, many problems can be most efficiently solved using a combination of goal-driven and event-driven techniques. This section describes the sequence of events and the logical computations performed by SNAP's expert system for the example problem.
As already mentioned, some events are directly caused by user-actions whereas others result from logical operations performed by SNAP's expert system. In order to understand how goal-driven event-generation works it is necessary to introduce some of the rules used by SNAP (see Table 5). These rules are expressed using a mixed format (described in chapter 2).
Table 5. Rules.
(1) if HaveErrors then HaveSolution.
(2) if HaveStresses then HaveErrors and do CalcErrors.
(3) if HaveDisplacements then HaveStresses and do CalcStresses.
(4) if HaveStiffnessMatrix and HaveLoadVector then HaveDisplacements and do CalcDisplacements.
(5) if HaveLocalStiffness and HaveAssignedDOF then HaveStiffnessMatrix and do CalcStiffnessMatrix.
(6) if HaveLoads and HaveAssignedDOF then HaveLoadVector and do CalcLoadVector.
(7) if HaveReducedDOF then HaveAssignedDOF and do AssignDOF.
User-Action Events
Actions such as drawing the boundary, solid, and load objects result in events that are directly
handled by the modelling controller. These events may generate subsequent events that reduce
degrees of freedom or that set flags to tell SNAP what is happening. In this problem, when the
user draws the TBoundary object, a Create event is intercepted by the modelling controller. Since
no other parts exist at this time, nothing else takes place. When the user draws the TSolid object,
another Create event is intercepted by the modelling controller; however, this one sets the fact HaveLocalStiffness to TRUE & proven (after invoking the mesh creation procedure) and posts a subsequent ReduceDOF event to the solid object. The modelling controller also handles this event and sets the fact HaveReducedDOF to TRUE & proven if the solid overlaps the boundary area. At this time, the modelling controller asks SNAP's expert system to generate other events as needed in order to obtain a solution. It is immediately obvious to an expert that a solution is not yet possible as no loads have been applied to the structure. When the user draws the TLoad object, another Create event is intercepted by the modelling controller. This one sets the fact HaveLoads to TRUE & proven and again asks SNAP's expert system to generate other events as needed in order to obtain a solution.
Goal-Driven Events
The expert system used in SNAP provides simple forward and backward chaining inference
coupled with a backtracking feature. Consider the first time the modelling controller asked SNAP
to generate events for a solution. The inference engine begins by searching its fact-base to see if
the goal is already known (proven TRUE or FALSE). Since HaveSolution is not known, the rules are searched for hypotheses that contain this fact. Rule number 1 indicates that in order to get a solution, the errors must be obtained. Searching the facts and rules for this new goal leads to rule number 2, which indicates that in order to get the errors, the stresses must be obtained. The following sequence is obtained by searching for all conditions that must be satisfied in order to prove HaveSolution.
After creating the boundary and the solid, the conditions HaveLocalStiffness and HaveReducedDOF will be proven TRUE; however, HaveLoads will remain unproven. This means that backward chaining will stop at rule 6 since it is not possible to prove HaveLoadVector.
Once the load object is added, the fact HaveLoads is proven TRUE and backward chaining inference can be used to obtain a solution. All rules marked for execution will be invoked in logical order leading to the hypothesis HaveSolution. As each rule is proven, its hypothesis is
set to TRUE and its list of actions is added to the analysis event queue. Unlike many existing inference engines offering mixed backward and forward chaining, this expert system does not actually carry out the rule actions by itself. Rather, it merely lets the event-driven control of SNAP carry out the actions at some convenient subsequent time. Starting with rule 7, the marked rules are executed in sequence back up the chain until rule 1 proves HaveSolution.
When GenApp allows the modelling, solution, and interpretation controllers to perform their pending tasks (during idle time), these controllers draw the events from the queue and process them. Events posted during backward chaining are handled as follows: the modelling controller handles AssignDOF, the solution controller handles CalcStiffnessMatrix, CalcLoadVector, and CalcDisplacements, and the interpretation controller handles CalcStresses and CalcErrors. During error computation, a Refine event will be posted if the error is larger than a specified value and the entire process starts over (the logical computations are repeated for the refined mesh).
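The cooperation between goal-driven inference and event-driven control described in this section can be condensed into a short C++ sketch. The rule encoding follows Table 5, but the data structures are illustrative rather than SNAP's; the essential point is that a proven rule posts its action to the event queue instead of executing it immediately.

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Rule {
        std::vector<std::string> conditions;  // facts that must be proven TRUE
        std::string hypothesis;               // fact proven when the conditions hold
        std::string action;                   // event posted when the rule fires ("" = none)
    };

    // Backward chaining: try to prove `goal`; post the actions of fired rules to `events`.
    bool Prove(const std::string& goal, const std::vector<Rule>& rules,
               std::map<std::string, bool>& facts, std::vector<std::string>& events) {
        auto known = facts.find(goal);
        if (known != facts.end()) return known->second;
        for (const Rule& r : rules) {
            if (r.hypothesis != goal) continue;
            bool proven = true;
            for (const std::string& c : r.conditions)
                if (!Prove(c, rules, facts, events)) { proven = false; break; }
            if (proven) {
                if (!r.action.empty()) events.push_back(r.action);  // defer to the event loop
                facts[goal] = true;
                return true;
            }
        }
        return false;   // the goal remains unproven
    }

    int main() {
        std::vector<Rule> rules = {
            {{"HaveErrors"}, "HaveSolution", ""},
            {{"HaveStresses"}, "HaveErrors", "CalcErrors"},
            {{"HaveDisplacements"}, "HaveStresses", "CalcStresses"},
            {{"HaveStiffnessMatrix", "HaveLoadVector"}, "HaveDisplacements", "CalcDisplacements"},
            {{"HaveLocalStiffness", "HaveAssignedDOF"}, "HaveStiffnessMatrix", "CalcStiffnessMatrix"},
            {{"HaveLoads", "HaveAssignedDOF"}, "HaveLoadVector", "CalcLoadVector"},
            {{"HaveReducedDOF"}, "HaveAssignedDOF", "AssignDOF"}};
        // Facts established by the user drawing the boundary, solid, and load objects.
        std::map<std::string, bool> facts = {{"HaveLocalStiffness", true},
                                             {"HaveReducedDOF", true},
                                             {"HaveLoads", true}};
        std::vector<std::string> events;
        if (Prove("HaveSolution", rules, facts, events))
            for (const std::string& e : events) std::cout << e << '\n';
    }

Run with the facts shown, the sketch posts AssignDOF, CalcStiffnessMatrix, CalcLoadVector, CalcDisplacements, CalcStresses, and CalcErrors, which is the order in which the controllers subsequently perform the work.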
5. Conclusions and Recommendations
This thesis explores the application of some of the more practical artificial intelligence techniques to the field of engineering analysis.
5.1. Conclusions
The general conclusion of this thesis is that a variety of artificial intelligence techniques can be used to improve the engineering analysis process. Existing AI technology may not necessarily provide an immediate solution for the automation of knowledge, but it can help to simplify existing analysis software. Intelligent finite element analysis, as provided by the SNAP program, demonstrates the potential improvements offered to software developers and users by the hybrid approach to analysis software development.
Object-oriented programming, generic application frameworks, event-driven interactive analysis, and goal-driven event-generation using a knowledge-based expert system are selected from the many available artificial intelligence techniques to simplify the development of engineering analysis computer software and to help organize the related knowledge.
Object-oriented programming encourages small, modular descriptions of the analysis components, resulting in shorter development times and more reliable source code. Generic application frameworks provide a foundation
that may be used by developers of hybrid engineering software. Problems associated with event-driven interactive analysis, in dealing with conditional task processing, can be overcome by goal-driven event-generation under the control of a knowledge-based expert system. Hybrid systems of this kind can eventually replace the alternative conventional and intelligent interface systems. This approach promotes the development of small, self-contained packages that may be driven by high-level programs such as knowledge-based expert systems, rather than trying to develop complex AI software to run programs that are already too large, too complex, and out-of-date.
The SNAP program unifies the concepts presented in this thesis and hopefully draws attention to the importance of
research in this field. Emphasis is placed on: the kinds of objects used to represent the analysis
components; the kinds of events used to perform the modelling, solution, and interpretation tasks;
and the use of an expert system to control analysis within an event-driven environment.
The main difference between SNAP and conventional analysis programs is related to the control
architecture. Whereas conventional analysis programs use algorithms that perform a specified
architecture that processes analysis tasks in any order required by the current problem. This
flexibility is obtained by defining the relationships between analysis operations using heuristics and
by using a knowledge-based expert system to logically determine the required operations to obtain
a solution.
5.2. Recommendations for Further Research
This section describes a series of projects that are suitable for graduate students in the field of
engineering analysis. Some projects are direct extensions of this thesis, while others are related to the broader application of artificial intelligence techniques in engineering.
Engineering analysis is only part of the engineering design process. The research performed in
this thesis could be implemented in a hybrid system for engineering design. Developers of such a
system should investigate the control structures and resources used in the conceptual, preliminary,
and final design activities. An event-driven architecture, similar to that found in SNAP, could be used to coordinate these activities.
In addition to the basic framework of the analysis problem (modelling, solution, and
interpretation), a set of high-level classes should be developed for the organization of resources,
constraints, and objectives. The designer who uses this tool should be able to specify a problem in
terms of functional requirements and to receive a solution from the design system that is both feasible and economical.
Knowledge-based design for engineering involves the formalization of expertise using a tool that can represent
complex real-world problems. A development tool, such as NExpert Object [58], should be used
so that the research can start at a high level. Rather than dealing with details relating to numerical
computation, this project could concentrate on the acquisition, representation, and manipulation of design knowledge. Some of the key problems to be considered include: analytical representation of spatial and functional behaviour, and the organization of resources, constraints, and objectives.
Analysis programs designed and implemented by expert analysts and programmers may provide adequate solutions for the problems they were designed to handle; however, the rapid evolution of computing technology will soon make it impossible to provide optimal analysis. The proposed solution is to
automate the selection and application of low-level analysis components, so that the analysis
program can change itself to provide the best analysis based on the problem being solved.
One of the key components associated with this project is the development of a
knowledge-based system that can organize and solve equations in the same manner as an expert.
In the case of a numerical analysis program such as SNAP, this would involve the representation
of equations for stiffness, displacement, stress, and error computation. Using its knowledge base,
the equation-solving expert-system should be able to identify: the order in which the equations are
applied, where the data is stored, and what calculations can be used in multiple contexts (e.g. stiffness, stress, and error computations share certain intermediate matrices). The potential savings in numerical computation offered by a system of this kind can enhance present computing capabilities.
References
1. Adeli, H.; Paek, Y.J. Computer-Aided Analysis of Structures in InterLisp Environment. Computers & Structures 23, 393-407, 1986.
4. Allen, R.H.; Boarnet, M.G.; Culbert, C.J.; Savely, R.T. Using Hybrid Expert System Approaches for Engineering Applications. Engineering with Computers 2, 95-110, 1987.
5. Apple Computer, Inc. Inside Macintosh™: Volumes I, II, & III. Reading, Massachusetts, USA: Addison-Wesley Publishing Company, Inc., 1985.
6. Apple Computer, Inc. MacApp Programmer's Manual. Cupertino, California, USA, 1986.
7. Bathe, K.-J. Finite element procedures in engineering analysis. Englewood Cliffs, New Jersey, USA: Prentice-Hall, 1982.
10. Bollay, D.; McConnell, J.; Reali, R.; Ritz, D. ExperCommonLISP Documentation. Santa Barbara, California, USA: The ExperTelligence Press, 1986.
11. Boole, G. An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities. 1854.
12. Brachman, R.J.; Levesque, H.J. Readings in Knowledge Representation. Los Altos, California, USA: Morgan Kaufmann Publishers, Inc., 1985.
18. Dawe, D.J. Matrix and finite element displacement analysis of structures. Oxford, England: Clarendon Press, 1984.
20. Fenves, S.J. A Framework for Cooperative Development of a Finite Element Modeling Assistant. In: Reliability of Methods for Engineering Analysis: Proceedings of the First International Conference (Eds. K.-J. Bathe; D.R.J. Owen). University College, Swansea, U.K.: Pineridge Press, 475-486, 1986.
22. Forde, B.W.R. Iteration Procedures for Sudden Local Alteration of Structural Stiffness. Mitteilung Nr. 6, Institut für Baustatik der Universität Stuttgart (Ed. Professor Dr.-Ing. E. Ramm). Stuttgart, West Germany, 1986.
23. Forde, B.W.R.; Stiemer, S.F. Improved Arc-Length Orthogonality Methods for Nonlinear Finite Element Analysis. Computers & Structures 27, 625-630, 1987.
24. Forde, B.W.R. Inside NAP: Technical documentation for the NAP numerical analysis program. Department of Civil Engineering, University of British Columbia, Vancouver, B.C., Canada, 1988.
25. Forde, B.W.R. Inside Object NAP: Technical documentation for the Object NAP numerical analysis program. Department of Civil Engineering, University of British Columbia, Vancouver, B.C., Canada, 1988.
26. Forde, B.W.R. NAP: Source code for the NAP numerical analysis program. Department of Civil Engineering, University of British Columbia, Vancouver, B.C., Canada, 1988.
27. Forde, B.W.R. Object NAP: Source code for the Object NAP numerical analysis program. Department of Civil Engineering, University of British Columbia, Vancouver, B.C., Canada, 1988.
28. Forde, B.W.R.; Stiemer, S.F. ESA: Expert Structural Analysis for Engineers. Computers & Structures 29, 171-174, 1988.
30. Forde, B.W.R. Inside SNAP: Technical documentation for the SNAP numerical analysis program. Department of Civil Engineering, University of British Columbia, Vancouver, B.C., Canada, 1989.
31. Forde, B.W.R. SNAP: Source code for the SNAP numerical analysis program. Department of Civil Engineering, University of British Columbia, Vancouver, B.C., Canada, 1989.
32. Forde, B.W.R.; Foschi, R.O.; Stiemer, S.F. Object-Oriented Finite Element Analysis. In Print: Computers & Structures, 1989.
34. Furuta, H.; Tu, K.S.; Yao, J.T.P. Structural engineering applications of expert systems. Computer Aided Design 17, 410-419, 1985.
35. Giere, R.N. Understanding Scientific Reasoning. New York, N.Y., USA: Holt, Rinehart and Winston, 1979.
37. Goldberg, A.; Robson, D. Smalltalk-80: The Language and Its Implementation. Reading, Massachusetts, USA: Addison-Wesley, 1983.
38. Gregory, B.L.; Shephard, M.S. Design of a Knowledge Based System to Convert Airframe Geometric Models to Structural Models. Expert Systems in Civil Engineering (Eds. C.N. Kostem and M.L. Maher), ASCE, 1986.
39. Gregory, B.L.; Shephard, M.S. The Generation of Airframe Finite Element Models Using an Expert System. Engineering with Computers 2, 65-77, 1987.
42. Hullot, J.-M. ExperInterfaceBuilder Documentation. Santa Barbara, California, USA: The ExperTelligence Press, 1987.
43. Jones, M.S.; Saouma, V.E. Prototype Hybrid Expert System for R/C Design. Computing in Civil Engineering 2, 136-143, 1988.
45. Kochan, S.G. Programming in C. Hasbrouck Heights, New Jersey, USA: Hayden Book Company, Inc., 1983.
46. Kuipers, B.J. A Frame for Frames: Representing Knowledge for Recognition. In: Representation and Understanding: Studies in Cognitive Science (Eds. D.G. Bobrow; A. Collins). New York, NY, USA: Academic Press, 151-184, 1985.
47. LaLonde, W.R.; Thomas, D.A.; Pugh, J.R. An Exemplar Based Smalltalk. Proceedings of the First Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA-86), Portland, Oregon, USA, 322-330, September 1986.
49. MacNeal, R.H. Standards for Finite Element Test Problems. In: Reliability of Methods for Engineering Analysis: Proceedings of the First International Conference (Eds. K.-J. Bathe; D.R.J. Owen). University College, Swansea, U.K.: Pineridge Press, 335-351, 1986.
50. Maher, M.L. HI-RISE and beyond: directions for expert systems in design. Computer Aided Design 17, 420-427, 1985.
51. Maher, M.L. Expert Systems for Structural Design. Computing in Civil Engineering 1, 270-283, 1987.
52. Maher, M.L.; Fenves, S.J. HI-RISE: A Knowledge-Based Expert System For The Preliminary Structural Design Of High Rise Buildings. Report No. R-85-146, Department of Civil Engineering, Carnegie-Mellon University, Pittsburgh, Pennsylvania, USA, 1985.
53. Mair, W.M. Standards and Quality in Finite Element Practice. In: Reliability of Methods for Engineering Analysis: Proceedings of the First International Conference (Eds. K.-J. Bathe; D.R.J. Owen). University College, Swansea, U.K.: Pineridge Press, 353-362, 1986.
55. Meyrowitz, N. Intermedia: The Architecture and Construction of an Object-Oriented Hypermedia System and Applications Framework. Proceedings of the First Conference on Object-Oriented Programming Systems, Languages and Applications (OOPSLA-86), Portland, Oregon, USA, 186-201, September 1986.
56. Microsoft. Microsoft® FORTRAN Compiler for the Apple® Macintosh™ User's Guide. Bellevue, Washington, USA: Microsoft Corporation, 1985.
57. Minsky, M. A Framework for Representing Knowledge. In: The Psychology of Computer Vision (Ed. P.H. Winston). New York, NY, USA: McGraw-Hill, 1975.
58. Neuron Data. NExpert Object® Fundamentals: Macintosh™ Version 1.1. Palo Alto, California, USA: Neuron Data Inc., 1987.
59. Noor, A.K.; Babuska, I. Quality Assessment and Control of Finite Element Solutions. Finite Elements in Analysis and Design 3, 1-26, 1987.
60. Ortolano, L.; Perman, C.D. Software for Expert Systems Development. Computing in Civil Engineering 1, 225-240, 1987.
62. Parsaye, K.; Chignell, M. Expert Systems for Experts. New York, NY, USA: John Wiley & Sons Inc., 1988.
64. Rank, E.; Babuska, I. An Expert System for the Optimal Mesh Design in the hp-Version of the Finite Element Method. Numerical Methods in Engineering 24, 2087-2106, 1987.
65. Rasdorf, W.J.; Salley, G.C. Generative Engineering Databases - Towards Expert Systems. Computers & Structures 20, 11-15, 1985.
66. Rasdorf, W.J.; Wang, T.E. Generic Design Standards Processing in an Expert System Environment. Computing in Civil Engineering 2, 68-87, 1988.
67. Reddy, J.N. An introduction to the finite element method. New York, NY, USA: McGraw-Hill, 1984.
68. Rehak, D.R. Artificial Intelligence Techniques for Finite Element Program Development. In: Reliability of Methods for Engineering Analysis: Proceedings of the First International Conference (Eds. K.-J. Bathe; D.R.J. Owen). University College, Swansea, U.K.: Pineridge Press, 515-532, 1986.
69. Rehak, D.R.; Howard, H.C. Interfacing expert systems with design databases in integrated CAD systems. Computer Aided Design 17, 443-454, 1985.
70. Rivlin, J.M.; Hsu, M.B.; Marcal, P.V. Knowledge-based consultation for finite element analysis. Technical Report AFWAL-TR-80-3069, Flight Dynamics Laboratory (FIBRA), Wright-Patterson Airforce Base, May, 1980.
71. Rogers, J.L.; Barthelemy, J.-F.M. An Expert System for Choosing the Best Combination of Options in a General Purpose Program for Automated Design Synthesis. Engineering with Computers 1, 217-227, 1986.
72. Rooney, M.F.; Smith, S.E. Artificial Intelligence in Engineering Design. Computers & Structures 16, 279-288, 1983.
73. Ross, C.T.F. Finite element methods in structural mechanics. Chichester, West Sussex, England: Ellis Horwood Limited, 1985.
74. Schank, R.C.; Abelson, R.P. Scripts, Plans, Goals, and Understanding. Hillsdale, New Jersey, USA: Erlbaum, 1977.
75. Schank, R.C. Dynamic Memory: A theory of reminding and learning in computers and people. Cambridge, Massachusetts, USA: Cambridge University Press, 1982.
77. Schildt, H. Artificial Intelligence Using C. Berkeley, California, USA: Osborne McGraw-Hill, 1987.
78. Soh, C.-K.; Soh, A.-K. Example of Intelligent Structural Design System. Computing in Civil Engineering 2, 329-345, 1988.
79. Sriram, D.; Maher, M.L.; Fenves, S.J. Knowledge-Based Expert Systems in Structural Design. Computers & Structures 20, 1-9, 1985.
80. Sriram, D.; Maher, M.L. The Representation and Use of Constraints in Structural Design. In: Applications of Artificial Intelligence in Engineering Problems. Proceedings of the First International Conference (Eds. D. Sriram; R. Adey). Southampton University, U.K.: Springer-Verlag, 355-368, 1986.
81. Stefik, M.J.; Conway, L. The principled engineering of knowledge. AI Magazine 3(3), 4-16, 1982.
84. Taig, I.C. Expert Aids to Finite Element System Applications. In: Applications of Artificial Intelligence in Engineering Problems. Proceedings of the First International Conference (Eds. D. Sriram; R. Adey). Southampton University, U.K.: Springer-Verlag, 759-770, 1986.
86. THINK. MORE™ User's Manual. Bedford, Massachusetts, USA: Symantec Corporation, THINK Technologies Division, 1988.
87. THINK. THINK's LightspeedC™ User's Manual. Bedford, Massachusetts, USA: Symantec Corporation, THINK Technologies Division, 1988.
88. THINK. THINK's Lightspeed Pascal™ User's Manual. Bedford, Massachusetts, USA: Symantec Corporation, THINK Technologies Division, 1988.
89. Turing, A. Computing machinery and intelligence. In: Computers and Thought (Eds. E. Feigenbaum; J. Feldman). New York, NY, USA: McGraw-Hill, 1-35, 1963.
95. Weiss, S.; Kulikowski, C. EXPERT: A System for Developing Consultation Models. Proceedings of the Sixth International Joint Conference on Artificial Intelligence (IJCAI-79), Tokyo, Japan, 942-947, 1979.
96. Weiss, S.; Kulikowski, C.; Apte, C.; Uschold, M.; Patchett, J.; Brigham, R.; Spitzer, B. Building Expert Systems for Controlling Complex Programs. Proceedings of the Second National Conference on Artificial Intelligence (AAAI-82), Pittsburgh, Pennsylvania, USA, 322-326, August 1982.
97. Wigan, M.R. Engineering Tools for Building Knowledge-Based Systems on Microsystems. Microcomputers in Civil Engineering 1, 52-68, 1986.
98. Wilson, E.L.; Itoh, T. Expert SAP: A Computer Program for Adaptive Mesh Refinement in Finite Element Analysis. In: Reliability of Methods for Engineering Analysis: Proceedings of the First International Conference (Eds. K.-J. Bathe; D.R.J. Owen). University College, Swansea, U.K.: Pineridge Press, 85-102, 1986.
101. Winston, P.H.; Horn, B.K.P. LISP. Second Edition. Reading, Massachusetts, USA: Addison-Wesley, 1984.
102. Zienkiewicz, O.C. The finite element method. Third Edition. New York, NY, USA: McGraw-Hill Book Company, 1977.
103. Zienkiewicz, O.C.; Zhu, J.Z. A Simple Error Estimator and Adaptive Procedure for Practical Engineering Analysis. Numerical Methods in Engineering 24, 337-357, 1987.
Appendix A: The Finite Element Formulation
Minimization of the total potential energy functional for an elastic continuum, by invoking its stationarity, is considered the foundation of the finite element method [67]. An equivalent statement, the principle of virtual displacements, is used in the following formulation. Equilibrium equations are established for the linear solution of a general three-dimensional body subjected to body forces, surface tractions, and concentrated forces.
These loads are defined by their components:
\mathbf{f}^{B} = [\, f_{x}^{B} \;\; f_{y}^{B} \;\; f_{z}^{B} \,]^{T} ; \qquad \mathbf{f}^{S} = [\, f_{x}^{S} \;\; f_{y}^{S} \;\; f_{z}^{S} \,]^{T} ; \qquad \mathbf{F}^{i} = [\, F_{x}^{i} \;\; F_{y}^{i} \;\; F_{z}^{i} \,]^{T}
The strains corresponding to U are:
\boldsymbol{\varepsilon}^{T} = [\; \varepsilon_{xx} \;\; \varepsilon_{yy} \;\; \varepsilon_{zz} \;\; \gamma_{xy} \;\; \gamma_{yz} \;\; \gamma_{zx} \;]  (A 3)
It is assumed that all work done is stored as strain energy which is completely recoverable on the removal of the loads [18]. These conditions apply for problems in linear elasticity, and will be employed here.
The set of state variables for which the functional \Pi(U_1, \ldots, U_n) is a minimum defines a solution in this context [7]. The state variables are obtained from the conditions of stationarity of the total potential, which is the strain energy of the body less the work done by the applied loads:
\Pi = \mathcal{U} - \mathcal{W}  (A 6)
Assuming a linear elastic continuum, the total potential energy of the body in Fig. A.1 is:
\Pi = \tfrac{1}{2} \int_{V} \boldsymbol{\varepsilon}^{T} \boldsymbol{\tau} \, dV - \int_{V} \mathbf{U}^{T} \mathbf{f}^{B} \, dV - \int_{S} \mathbf{U}^{S\,T} \mathbf{f}^{S} \, dS - \sum_{i} \mathbf{U}^{i\,T} \mathbf{F}^{i}  (A 7)
The stresses may be related to the strains and to the initial stresses:
\boldsymbol{\tau} = \mathbf{C} \boldsymbol{\varepsilon} + \boldsymbol{\tau}^{I}  (A 8)
A.3. Principle of Virtual Displacements
Invoking the stationarity of \Pi and observing the symmetry of C, the principle of virtual displacements is obtained:
\int_{V} \bar{\boldsymbol{\varepsilon}}^{T} \mathbf{C} \boldsymbol{\varepsilon} \, dV = \int_{V} \bar{\mathbf{U}}^{T} \mathbf{f}^{B} \, dV + \int_{S} \bar{\mathbf{U}}^{S\,T} \mathbf{f}^{S} \, dS - \int_{V} \bar{\boldsymbol{\varepsilon}}^{T} \boldsymbol{\tau}^{I} \, dV + \sum_{i} \bar{\mathbf{U}}^{i\,T} \mathbf{F}^{i}  (A 9)
The displacements within each element are assumed to be related to the global displacement vector by a shape function matrix. This is shown here in terms of global coordinates; however, local coordinate systems may also be used with an appropriate transformation.
\mathbf{u}^{(e)}(x, y, z) = \mathbf{N}^{(e)}(x, y, z) \, \mathbf{U}  (A 10)
The strains within each element are obtained by application of a differential operator to the shape functions:
\boldsymbol{\varepsilon}^{(e)}(x, y, z) = \mathbf{B}^{(e)}(x, y, z) \, \mathbf{U}  (A 11)
The principle of virtual displacements can be written as a summation over all elements in the body. Substitution of the shape function and strain-displacement relationships for each element yields:
\bar{\mathbf{U}}^{T} \left[ \sum_{e} \int_{V^{(e)}} \mathbf{B}^{(e)T} \mathbf{C}^{(e)} \mathbf{B}^{(e)} \, dV \right] \mathbf{U} = \bar{\mathbf{U}}^{T} \left[ \sum_{e} \int_{V^{(e)}} \mathbf{N}^{(e)T} \mathbf{f}^{B(e)} \, dV + \sum_{e} \int_{S^{(e)}} \mathbf{N}^{S(e)T} \mathbf{f}^{S(e)} \, dS - \sum_{e} \int_{V^{(e)}} \mathbf{B}^{(e)T} \boldsymbol{\tau}^{I(e)} \, dV + \mathbf{F} \right]  (A 13)
A.4. Equilibrium Equations
Imposition of unit virtual displacements to all nodal components, and denoting the nodal point loads by R, yields the equilibrium equations:
\mathbf{K} \mathbf{U} = \mathbf{R}  (A 14)
where:
\mathbf{R} = \mathbf{R}_{B} + \mathbf{R}_{S} - \mathbf{R}_{I} + \mathbf{R}_{C}  (A 15)
\mathbf{K} = \sum_{e} \int_{V^{(e)}} \mathbf{B}^{(e)T} \mathbf{C}^{(e)} \mathbf{B}^{(e)} \, dV  (A 16)
\mathbf{R}_{B} = \sum_{e} \int_{V^{(e)}} \mathbf{N}^{(e)T} \mathbf{f}^{B(e)} \, dV  (A 17)
\mathbf{R}_{S} = \sum_{e} \int_{S^{(e)}} \mathbf{N}^{S(e)T} \mathbf{f}^{S(e)} \, dS  (A 18)
\mathbf{R}_{I} = \sum_{e} \int_{V^{(e)}} \mathbf{B}^{(e)T} \boldsymbol{\tau}^{I(e)} \, dV  (A 19)
\mathbf{R}_{C} = \mathbf{F}  (A 20)
Appendix B: Isoparametric Finite Elements
A group of finite elements for numerical analysis known as "isoparametric" are contained in the
S N A P library. This name signifies that the element geometry and the element displacements are
interpolated using the same parametric form [67]. The computational advantages of numerically
integrated finite elements [102] are demonstrated by formulation of the shape functions and
derivatives for several element types. Numerical integration formulae are given for general rectangular and triangular domains.
The geometrical transformation of a master element shown in Fig. B.1 is based on the description of the coordinates of any point (x, y) in that element in terms of the shape functions (N_i) and the values of the nodal coordinates (x_i, y_i). This means that one advantage of isoparametric elements is the ability to represent elements with curved or irregular boundaries.
Written as a summation, the geometrical transformation is:
x = \sum_{i=1}^{n} x_{i} N_{i}(\xi, \eta) , \qquad y = \sum_{i=1}^{n} y_{i} N_{i}(\xi, \eta)  (B 1)
If the same transformation is used to evaluate the displacements within the element, the same shape functions apply:
u = \sum_{i=1}^{n} u_{i} N_{i}(\xi, \eta) , \qquad v = \sum_{i=1}^{n} v_{i} N_{i}(\xi, \eta)  (B 2)
The Jacobian matrix J describes the relationship between the natural and real coordinate systems. Application of the chain rule for differentiation to the shape functions gives:
\begin{Bmatrix} \partial N_{i}/\partial \xi \\ \partial N_{i}/\partial \eta \end{Bmatrix} = \begin{bmatrix} \partial x/\partial \xi & \partial y/\partial \xi \\ \partial x/\partial \eta & \partial y/\partial \eta \end{bmatrix} \begin{Bmatrix} \partial N_{i}/\partial x \\ \partial N_{i}/\partial y \end{Bmatrix} = \mathbf{J} \begin{Bmatrix} \partial N_{i}/\partial x \\ \partial N_{i}/\partial y \end{Bmatrix}  (B 3)
The inverse relationship between these shape function derivatives is obtained with some
simple matrix algebra. If the determinant of the Jacobian is non-zero, inversion is performed as
follows:
J = \det[\mathbf{J}] = \frac{\partial x}{\partial \xi}\frac{\partial y}{\partial \eta} - \frac{\partial x}{\partial \eta}\frac{\partial y}{\partial \xi} \neq 0  (B 4)
\begin{Bmatrix} \partial N_{i}/\partial x \\ \partial N_{i}/\partial y \end{Bmatrix} = \mathbf{J}^{-1} \begin{Bmatrix} \partial N_{i}/\partial \xi \\ \partial N_{i}/\partial \eta \end{Bmatrix} = \begin{bmatrix} \Gamma_{11} & \Gamma_{12} \\ \Gamma_{21} & \Gamma_{22} \end{bmatrix} \begin{Bmatrix} \partial N_{i}/\partial \xi \\ \partial N_{i}/\partial \eta \end{Bmatrix}  (B 5)
B.3. Strain-Displacement Matrix Computation
The desired strain-displacement relationship may be derived from:
".x
10 0 0 u
0 0 0 1 (Z? 6)
V.x
0 1 1 0
v
•yJ
^2 0 0 " u
10 0 0
r0
0 0 u
e = 0 0 0 1 < (B 7 )
r
0 ^2 V
0 1 1 0 n
0 0 ^21 r
22.
134
Multiplying the first two matrices yields:
0 0 "
e = 0 0 (B 8)
J..
Finally, the B matrix may be computed by differentiating the shape functions for the given element type and substituting for the natural-coordinate derivatives of the displacements:
\boldsymbol{\varepsilon} = \begin{bmatrix} \Gamma_{11} & \Gamma_{12} & 0 & 0 \\ 0 & 0 & \Gamma_{21} & \Gamma_{22} \\ \Gamma_{21} & \Gamma_{22} & \Gamma_{11} & \Gamma_{12} \end{bmatrix} \begin{bmatrix} N_{1,\xi} & 0 & N_{2,\xi} & 0 & \cdots \\ N_{1,\eta} & 0 & N_{2,\eta} & 0 & \cdots \\ 0 & N_{1,\xi} & 0 & N_{2,\xi} & \cdots \\ 0 & N_{1,\eta} & 0 & N_{2,\eta} & \cdots \end{bmatrix} \begin{Bmatrix} u_{1} \\ v_{1} \\ u_{2} \\ v_{2} \\ \vdots \end{Bmatrix}  (B 9)
\boldsymbol{\varepsilon} = \mathbf{B} \mathbf{u}  (B 10)
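For a two-dimensional element, equation (B 9) can be assembled node by node once the inverse Jacobian terms and the natural-coordinate derivatives of the shape functions are known. The C++ fragment below is a sketch of that assembly for an element with n nodes; the row ordering (e_xx, e_yy, gamma_xy) and the (u_1, v_1, u_2, v_2, ...) column ordering follow equations (B 6) to (B 9).

    #include <cstddef>
    #include <vector>

    // Assemble the 3 x 2n strain-displacement matrix B of equation (B 9).
    // gamma is the 2 x 2 inverse Jacobian; dNxi and dNeta hold dN_i/dxi and dN_i/deta.
    std::vector<std::vector<double>> BMatrix(const double gamma[2][2],
                                             const std::vector<double>& dNxi,
                                             const std::vector<double>& dNeta) {
        const std::size_t n = dNxi.size();
        std::vector<std::vector<double>> B(3, std::vector<double>(2 * n, 0.0));
        for (std::size_t i = 0; i < n; ++i) {
            double dNdx = gamma[0][0] * dNxi[i] + gamma[0][1] * dNeta[i];  // dN_i/dx
            double dNdy = gamma[1][0] * dNxi[i] + gamma[1][1] * dNeta[i];  // dN_i/dy
            B[0][2 * i]     = dNdx;   // contributes to e_xx
            B[1][2 * i + 1] = dNdy;   // contributes to e_yy
            B[2][2 * i]     = dNdy;   // contributes to gamma_xy
            B[2][2 * i + 1] = dNdx;
        }
        return B;
    }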
Since B was not assembled as a function of the natural coordinates over the element domain,
stiffness matrix integration cannot be performed in advance. The following integral must be
evaluated using a numerical technique. Limits are shown for a rectangular domain; however, the same approach applies to triangular domains.
\mathbf{K} = \int\!\!\int \mathbf{B}^{T} \mathbf{C} \mathbf{B} \, t \, dx \, dy = \int_{-1}^{1}\!\!\int_{-1}^{1} \mathbf{B}^{T} \mathbf{C} \mathbf{B} \, t \, J \, d\xi \, d\eta \approx \sum_{i=1}^{n} \sum_{j=1}^{n} W_{i} W_{j} \, \boldsymbol{\Phi}(\xi_{i}, \eta_{j})  (B 11)
where \boldsymbol{\Phi} = \mathbf{B}^{T} \mathbf{C} \mathbf{B} \, t \, J is evaluated at specific (\xi_{i}, \eta_{j}) locations, and W_{i}, W_{j} are constant weighting factors.
The constitutive matrix C, which is a symmetrical contracted version of E, may be derived for
problems in two dimensions for either plane stress or plane strain conditions [44]. These two
matrices may be computed given values for Young's modulus and Poisson's ratio.
\mathbf{C} = \frac{E}{1-\nu^{2}} \begin{bmatrix} 1 & \nu & 0 \\ \nu & 1 & 0 \\ 0 & 0 & \frac{1-\nu}{2} \end{bmatrix} \quad \text{(plane stress)}  (B 12)
\mathbf{C} = \frac{E(1-\nu)}{(1+\nu)(1-2\nu)} \begin{bmatrix} 1 & \frac{\nu}{1-\nu} & 0 \\ \frac{\nu}{1-\nu} & 1 & 0 \\ 0 & 0 & \frac{1-2\nu}{2(1-\nu)} \end{bmatrix} \quad \text{(plane strain)}
B.5. Numerical Integration
Newton-Cotes and Gauss-Legendre numerical integration procedures are typically chosen for finite element applications. The primary difference between these two techniques is in the spacing of the integration points. Newton-Cotes procedures specify a set of predetermined points (usually at equal spacing), from which a set of weight factors are determined to minimise the integration error. Conversely, Gauss-Legendre procedures compute a set of points and weight factors which yield the value of the integral exactly for a given polynomial order. The Gauss-Legendre technique has certain advantages over the Newton-Cotes technique [67,7], so it is the technique adopted in SNAP.
Integration in two dimensions over a rectangular domain is performed by forming a product of the
one-dimensional formulae as shown in Table B.2. Integration over a triangular domain is slightly
more complicated, with the resulting sample points and weight factors shown in Table B.3.
Table B.2. Gauss-Legendre numerical integration over rectangular domains.
1 x 1: sample point at \xi = \eta = 0.0
2 x 2: sample points at \xi, \eta = \pm 0.577...
3 x 3: sample points at \xi, \eta = 0.0, \pm 0.774...
4 x 4: sample points at \xi, \eta = \pm 0.339..., \pm 0.861...
The accuracy of integration is specified by the order of polynomial which is integrated exactly
using a given set of points and weight factors. A typical example is the 2x2 rectangular pattern.
This has an accuracy of order 3, which means that it will correctly evaluate an integral with cubic
and lower order terms. The problem is then to determine the order of polynomial being integrated
for a given stiffness matrix, and to select the appropriate sample points and weight factors.
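The product construction for rectangular domains can be sketched in C++ as follows; the 2 x 2 example corresponds to the rule discussed above, with the abscissa computed as 1/sqrt(3) rather than typed as 0.577...

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Point2D { double xi, eta, weight; };

    // Form a two-dimensional product rule from a one-dimensional Gauss-Legendre rule.
    std::vector<Point2D> ProductRule(const std::vector<double>& abscissae,
                                     const std::vector<double>& weights) {
        std::vector<Point2D> rule;
        for (std::size_t i = 0; i < abscissae.size(); ++i)
            for (std::size_t j = 0; j < abscissae.size(); ++j)
                rule.push_back({abscissae[i], abscissae[j], weights[i] * weights[j]});
        return rule;
    }

    // Example: the 2 x 2 rule used for four-node rectangles (accuracy of order 3).
    std::vector<Point2D> Rule2x2() {
        const double a = 1.0 / std::sqrt(3.0);      // 0.577...
        return ProductRule({-a, a}, {1.0, 1.0});
    }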
Table B.3. Gauss-Legendre numerical integration over triangular domains.
1-point: \xi_{1} = \eta_{1} = 0.33333 33333 333, W_{1} = 0.50000 00000 000
3-point: (\xi, \eta) = (0.16666 66666 667, 0.16666 66666 667), (0.66666 66666 667, 0.16666 66666 667), (0.16666 66666 667, 0.66666 66666 667); W_{1} = W_{2} = W_{3} = 0.16666 66666 667
7-point: includes the centroid \xi = \eta = 0.33333 33333 333 with weight 0.11250 00000 000; the remaining six sample points occur in two symmetric groups of three.
B.6. Four-Node Rectangular Element
The elements of the shape function matrix N are given by:
N_{1} = \tfrac{1}{4}(1-\xi)(1-\eta)
N_{2} = \tfrac{1}{4}(1+\xi)(1-\eta)
N_{3} = \tfrac{1}{4}(1+\xi)(1+\eta)
N_{4} = \tfrac{1}{4}(1-\xi)(1+\eta)
Fig. B.3. Typical Shape Function for the Four-Node Rectangular Element [40].
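For reference, these shape functions and their natural-coordinate derivatives can be evaluated with a few lines of C++; the routine below is an illustrative evaluation, not the SNAP TShapeFcns code.

    // Shape functions and natural-coordinate derivatives for the four-node rectangle.
    struct Rect4N {
        double N[4];      // N_i(xi, eta)
        double dNxi[4];   // dN_i / dxi
        double dNeta[4];  // dN_i / deta
    };

    Rect4N EvalRect4N(double xi, double eta) {
        Rect4N s;
        s.N[0] = 0.25 * (1 - xi) * (1 - eta);   s.N[1] = 0.25 * (1 + xi) * (1 - eta);
        s.N[2] = 0.25 * (1 + xi) * (1 + eta);   s.N[3] = 0.25 * (1 - xi) * (1 + eta);
        s.dNxi[0]  = -0.25 * (1 - eta);  s.dNxi[1]  =  0.25 * (1 - eta);
        s.dNxi[2]  =  0.25 * (1 + eta);  s.dNxi[3]  = -0.25 * (1 + eta);
        s.dNeta[0] = -0.25 * (1 - xi);   s.dNeta[1] = -0.25 * (1 + xi);
        s.dNeta[2] =  0.25 * (1 + xi);   s.dNeta[3] =  0.25 * (1 - xi);
        return s;
    }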
B.7. Eight-Node Rectangular Element
The elements of the shape function matrix N are given by:
N_{1} = -\tfrac{1}{4}(1-\xi)(1-\eta)(1+\xi+\eta)
N_{2} = -\tfrac{1}{4}(1+\xi)(1-\eta)(1-\xi+\eta)
N_{3} = -\tfrac{1}{4}(1+\xi)(1+\eta)(1-\xi-\eta)
N_{4} = -\tfrac{1}{4}(1-\xi)(1+\eta)(1+\xi-\eta)
N_{5} = \tfrac{1}{2}(1-\xi^{2})(1-\eta)
N_{6} = \tfrac{1}{2}(1+\xi)(1-\eta^{2})
N_{7} = \tfrac{1}{2}(1-\xi^{2})(1+\eta)
N_{8} = \tfrac{1}{2}(1-\xi)(1-\eta^{2})
Derivatives of these shape functions with respect to the natural coordinates are:
N_{1,\xi} = \tfrac{1}{4}(1-\eta)(2\xi+\eta)    N_{1,\eta} = \tfrac{1}{4}(1-\xi)(2\eta+\xi)
N_{2,\xi} = \tfrac{1}{4}(1-\eta)(2\xi-\eta)    N_{2,\eta} = \tfrac{1}{4}(1+\xi)(2\eta-\xi)
N_{3,\xi} = \tfrac{1}{4}(1+\eta)(2\xi+\eta)    N_{3,\eta} = \tfrac{1}{4}(1+\xi)(2\eta+\xi)
N_{4,\xi} = \tfrac{1}{4}(1+\eta)(2\xi-\eta)    N_{4,\eta} = \tfrac{1}{4}(1-\xi)(2\eta-\xi)
N_{5,\xi} = -\xi(1-\eta)                       N_{5,\eta} = -\tfrac{1}{2}(1-\xi^{2})
N_{6,\xi} = \tfrac{1}{2}(1-\eta^{2})           N_{6,\eta} = -\eta(1+\xi)
N_{7,\xi} = -\xi(1+\eta)                       N_{7,\eta} = \tfrac{1}{2}(1-\xi^{2})
N_{8,\xi} = -\tfrac{1}{2}(1-\eta^{2})          N_{8,\eta} = -\eta(1-\xi)
Fig. B.5. Typical Corner Node Shape Function for the Eight-Node Rectangular Element [40].
Fig. B.6. Typical Midside Node Shape Function for the Eight-Node Rectangular Element [40].
B.8. Three-Node Triangular Element
Fig. B.7. Three-Node Triangular Element.
The elements of the shape function matrix N are given by:
N_{1} = 1 - \xi - \eta
N_{2} = \xi
N_{3} = \eta
Derivatives of these shape functions with respect to the natural coordinates are:
N_{1,\xi} = -1    N_{1,\eta} = -1
N_{2,\xi} = 1     N_{2,\eta} = 0
N_{3,\xi} = 0     N_{3,\eta} = 1
Fig. B.8. Typical Shape Function for the Three-Node Triangular Element [40].
B.9. Six-Node Triangular Element
The elements of the shape function matrix N are given by:
N_{1} = \zeta(2\zeta - 1)
N_{2} = \xi(2\xi - 1)
N_{3} = \eta(2\eta - 1)
N_{4} = 4\xi\zeta
N_{5} = 4\xi\eta
N_{6} = 4\zeta\eta
where: \zeta = 1 - \xi - \eta
Derivatives of these shape functions with respect to the natural coordinates are:
N_{1,\xi} = -3 + 4\xi + 4\eta    N_{1,\eta} = -3 + 4\eta + 4\xi
N_{2,\xi} = 4\xi - 1             N_{2,\eta} = 0
N_{3,\xi} = 0                    N_{3,\eta} = 4\eta - 1
N_{4,\xi} = 4(1 - 2\xi - \eta)   N_{4,\eta} = -4\xi
N_{5,\xi} = 4\eta                N_{5,\eta} = 4\xi
N_{6,\xi} = -4\eta               N_{6,\eta} = 4(1 - \xi - 2\eta)
Fig. B.10. Typical Corner Node Shape Function for the Six-Node Triangular Element [40].
Fig. B.11. Typical Midside Node Shape Function for the Six-Node Triangular Element [40].
Convergence of the solution as the mesh is refined is maintained if the numerical integration scheme used is sufficient to evaluate the volume of the element exactly [67]. This reasoning is linked to the energy functional definition [102]. There are dangers associated with low-order integration techniques since they can lead to zero-energy modes and singular stiffness matrices, so higher order techniques may be used in practice [73]. The minimum integration orders that satisfy these requirements are used for the elements in the SNAP library.
PUBLICATIONS:
Forde, B.W.R.; Stiemer, S.F.
Improved Arc-Length Orthogonality Methods for Nonlinear Finite Element Analysis
Computers & Structures 27, 625-630, 1987.
Forde, B.W.R.
Iteration Procedures for Sudden Local Alteration of Structural Stiffness
Mitteilung 6, Institut für Baustatik der Universität Stuttgart,
(Ed. Professor Dr.-Ing. E . Ramm), Stuttgart, 1986.
AWARDS:
Natural Sciences and Engineering Research Council of Canada (NSERC), 1983-1988.
Deutscher Akademischer Austauschdienst (DAAD) Stipendium, 1985-1986.
Eidgenössische Technische Hochschule Zürich (ETH) Stipendium, 1985-1986.
Alexander Lorraine Carruthers Scholarship in Engineering, 1982-1983.
MacKenzie Swan Memorial Scholarship, 1981-1982.