Practical Foundations for Programming Languages
This text develops a comprehensive theory of programming languages based on type sys-
tems and structural operational semantics. Language concepts are precisely defined by their
static and dynamic semantics, presenting the essential tools both intuitively and rigorously
while relying on only elementary mathematics. These tools are used to analyze and prove
properties of languages and provide the framework for combining and comparing language
features. The broad range of concepts includes fundamental data types such as sums and
products, polymorphic and abstract types, dynamic typing, dynamic dispatch, subtyping
and refinement types, symbols and dynamic classification, parallelism and cost semantics,
and concurrency and distribution. The methods are directly applicable to language implementation, to the development of logics for reasoning about programs, and to the formal verification of language properties such as type safety.
This thoroughly revised second edition includes exercises at the end of nearly every
chapter and a new chapter on type refinements.
Robert Harper
Carnegie Mellon University
32 Avenue of the Americas, New York, NY 10013
www.cambridge.org
Information on this title: www.cambridge.org/9781107150300
© Robert Harper 2016
A catalog record for this publication is available from the British Library.
1 Abstract Syntax 3
1.1 Abstract Syntax Trees 3
1.2 Abstract Binding Trees 6
1.3 Notes 10
2 Inductive Definitions 12
2.1 Judgments 12
2.2 Inference Rules 12
2.3 Derivations 14
2.4 Rule Induction 15
2.5 Iterated and Simultaneous Inductive Definitions 17
2.6 Defining Functions by Rules 18
2.7 Notes 19
4 Statics 33
4.1 Syntax 33
4.2 Type System 34
4.3 Structural Properties 35
4.4 Notes 37
5 Dynamics 39
5.1 Transition Systems 39
5.2 Structural Dynamics 40
5.3 Contextual Dynamics 42
5.4 Equational Dynamics 44
5.5 Notes 46
6 Type Safety 48
6.1 Preservation 48
6.2 Progress 49
6.3 Run-Time Errors 50
6.4 Notes 52
7 Evaluation Dynamics 53
7.1 Evaluation Dynamics 53
7.2 Relating Structural and Evaluation Dynamics 54
7.3 Type Safety, Revisited 55
7.4 Cost Dynamics 56
7.5 Notes 57
10 Product Types 79
10.1 Nullary and Binary Products 79
10.2 Finite Products 81
10.3 Primitive Mutual Recursion 82
10.4 Notes 83
11 Sum Types 85
11.1 Nullary and Binary Sums 85
11.2 Finite Sums 86
11.3 Applications of Sum Types 88
11.4 Notes 91
12 Constructive Logic 95
12.1 Constructive Semantics 95
12.2 Constructive Logic 96
12.3 Proof Dynamics 100
12.4 Propositions as Types 101
12.5 Notes 101
Part X Subtyping
27 Inheritance 245
27.1 Class and Method Extension 245
27.2 Class-Based Inheritance 246
27.3 Method-Based Inheritance 248
27.4 Notes 249
29 Exceptions 260
29.1 Failures 260
29.2 Exceptions 262
29.3 Exception Values 263
29.4 Notes 264
30 Continuations 266
30.1 Overview 266
30.2 Continuation Dynamics 268
30.3 Coroutines from Continuations 269
30.4 Notes 272
31 Symbols 277
31.1 Symbol Declaration 277
31.2 Symbol References 280
31.3 Notes 282
Part XV Parallelism
48 Parametricity 454
48.1 Overview 454
48.2 Observational Equivalence 455
48.3 Logical Equivalence 456
48.4 Parametricity Properties 461
48.5 Representation Independence, Revisited 464
48.6 Notes 465
Bibliography 479
Index 487
Preface to the Second Edition
Writing the second edition to a textbook incurs the same risk as building the second version
of a software system. It is difficult to make substantive improvements, while avoiding the
temptation to overburden and undermine the foundation on which one is building. With the
hope of avoiding the second system effect, I have sought to make corrections, revisions,
expansions, and deletions that improve the coherence of the development, remove some
topics that distract from the main themes, add new topics that were omitted from the first
edition, and include exercises for almost every chapter.
The revision removes a number of typographical errors, corrects a few material errors
(especially the formulation of the parallel abstract machine and of concurrency in Algol),
and improves the writing throughout. Some chapters have been deleted (general pattern
matching and polarization, restricted forms of polymorphism), some have been completely
rewritten (the chapter on higher kinds), some have been substantially revised (general
and parametric inductive definitions, concurrent and distributed Algol), several have been
reorganized (to better distinguish partial from total type theories), and a new chapter
has been added (on type refinements). Titular attributions on several chapters have been
removed, not to diminish credit, but to avoid confusion between the present and the original
formulations of several topics. A new system of (pronounceable!) language names has been
introduced throughout. The exercises generally seek to expand on the ideas in the main
text, and their solutions often involve significant technical ideas that merit study. Routine
exercises of the kind one might include in a homework assignment are deliberately few.
My purpose in writing this book is to establish a comprehensive framework for formu-
lating and analyzing a broad range of ideas in programming languages. If language design
and programming methodology are to advance from a trade-craft to a rigorous discipline,
it is essential that we first get the definitions right. Then, and only then, can there be mean-
ingful analysis and consolidation of ideas. My hope is that I have helped to build such a
foundation.
I am grateful to Stephen Brookes, Evan Cavallo, Karl Crary, Jon Sterling, James R.
Wilcox and Todd Wilson for their help in critiquing drafts of this edition and for their
suggestions for modification and revision. I thank my department head, Frank Pfenning,
for his support of my work on the completion of this edition. Thanks also to my editors, Ada
Brunstein and Lauren Cowles, for their guidance and assistance. And thanks to Andrew
Shulaev for corrections to the draft.
Neither the author nor the publisher make any warranty, express or implied, that the
definitions, theorems, and proofs contained in this volume are free of error, or are consistent
with any particular standard of merchantability, or that they will meet requirements for any
particular application. They should not be relied on for solving a problem whose incorrect
solution could result in injury to a person or loss of property. If you do use this material
in such a manner, it is at your own risk. The author and publisher disclaim all liability for
direct or consequential damage resulting from its use.
Pittsburgh
July 2015
Preface to the First Edition
Types are the central organizing principle of the theory of programming languages. Lan-
guage features are manifestations of type structure. The syntax of a language is governed
by the constructs that define its types, and its semantics is determined by the interactions
among those constructs. The soundness of a language design—the absence of ill-defined
programs—follows naturally.
The purpose of this book is to explain this remark. A variety of programming language
features are analyzed in the unifying framework of type theory. A language feature is defined
by its statics, the rules governing the use of the feature in a program, and its dynamics, the
rules defining how programs using this feature are to be executed. The concept of safety
emerges as the coherence of the statics and the dynamics of a language.
In this way, we establish a foundation for the study of programming languages. But
why these particular methods? The main justification is provided by the book itself. The
methods we use are both precise and intuitive, providing a uniform framework for explaining
programming language concepts. Importantly, these methods scale to a wide range of
programming language concepts, supporting rigorous analysis of their properties. Although
it would require another book in itself to justify this assertion, these methods are also
practical in that they are directly applicable to implementation and uniquely effective as a
basis for mechanized reasoning. No other framework offers as much.
Being a consolidation and distillation of decades of research, this book does not provide
an exhaustive account of the history of the ideas that inform it. Suffice it to say that much
of the development is not original but rather is largely a reformulation of what has gone
before. The notes at the end of each chapter signpost the major developments but are
not intended as a complete guide to the literature. For further information and alternative
perspectives, the reader is referred to such excellent sources as Constable (1986, 1998),
Girard (1989), Martin-Löf (1984), Mitchell (1996), Pierce (2002, 2004), and Reynolds
(1998).
The book is divided into parts that are, in the main, independent of one another. Parts
I and II, however, provide the foundation for the rest of the book and must therefore be
considered prior to all other parts. On first reading, it may be best to skim Part I, and begin
in earnest with Part II, returning to Part I for clarification of the logical framework in which
the rest of the book is cast.
Numerous people have read and commented on earlier editions of this book and have
suggested corrections and improvements to it. I am particularly grateful to Umut Acar,
Jesper Louis Andersen, Carlo Angiuli, Andrew Appel, Stephanie Balzer, Eric Bergstrom,
Guy E. Blelloch, Iliano Cervesato, Lin Chase, Karl Crary, Rowan Davies, Derek Dreyer,
Dan Licata, Zhong Shao, Rob Simmons, and Todd Wilson for their extensive efforts in
reading and criticizing the book. I also thank the following people for their suggestions:
Joseph Abrahamson, Arbob Ahmad, Zena Ariola, Eric Bergstrome, William Byrd, Alejan-
dro Cabrera, Luis Caires, Luca Cardelli, Manuel Chakravarty, Richard C. Cobbe, James
Cooper, Yi Dai, Daniel Dantas, Anupam Datta, Jake Donham, Bill Duff, Matthias Felleisen,
Kathleen Fisher, Dan Friedman, Peter Gammie, Maia Ginsburg, Byron Hawkins, Kevin
Hely, Kuen-Bang Hou (Favonia), Justin Hsu, Wojciech Jedynak, Cao Jing, Salil Joshi,
Gabriele Keller, Scott Kilpatrick, Danielle Kramer, Dan Kreysa, Akiva Leffert, Ruy Ley-
Wild, Karen Liu, Dave MacQueen, Chris Martens, Greg Morrisett, Stefan Muller, Tom
Murphy, Aleksandar Nanevski, Georg Neis, David Neville, Adrian Trejo Nuñez, Cyrus
Omar, Doug Perkins, Frank Pfenning, Jean Pichon, Benjamin Pierce, Andrew M. Pitts,
Gordon Plotkin, David Renshaw, John Reynolds, Andreas Rossberg, Carter Schonwald,
Dale Schumacher, Dana Scott, Shayak Sen, Pawel Sobocinski, Kristina Sojakova, Daniel
Spoonhower, Paulo Tanimoto, Joe Tassarotti, Peter Thiemann, Bernardo Toninho, Michael
Tschantz, Kami Vaniea, Carsten Varming, David Walker, Dan Wang, Jack Wileden, Sergei
Winitzki, Roger Wolff, Omer Zach, Luke Zarko, and Yu Zhang. I am very grateful to the
students of 15-312 and 15-814 at Carnegie Mellon who have provided the impetus for the
preparation of this book and who have endured the many revisions to it over the last ten
years.
I thank the Max Planck Institute for Software Systems for its hospitality and support.
I also thank Espresso a Mano in Pittsburgh, CB2 Cafe in Cambridge, and Thonet Cafe
in Saarbrücken for providing a steady supply of coffee and a conducive atmosphere for
writing.
This material is, in part, based on work supported by the National Science Foundation
under Grant Nos. 0702381 and 0716469. Any opinions, findings, and conclusions or rec-
ommendations expressed in this material are those of the author(s) and do not necessarily
reflect the views of the National Science Foundation.
Robert Harper
Pittsburgh
March 2012
PART I
An abstract syntax tree, or ast for short, is an ordered tree whose leaves are variables, and
whose interior nodes are operators whose arguments are its children. Ast’s are classified
into a variety of sorts corresponding to different forms of syntax. A variable stands for an
unspecified, or generic, piece of syntax of a specified sort. Ast’s can be combined by an
operator, which has an arity specifying the sort of the operator and the number and sorts
of its arguments. An operator of sort s and arity s1 , . . . , sn combines n ≥ 0 ast’s of sort
s1 , . . . , sn , respectively, into a compound ast of sort s.
The concept of a variable is central and therefore deserves special emphasis. A variable
is an unknown object drawn from some domain. The unknown can become known by
substitution of a particular object for all occurrences of a variable in a formula, thereby
specializing a general formula to a particular instance. For example, in school algebra
variables range over real numbers, and we may form polynomials, such as x² + 2x + 1,
that can be specialized by substitution of, say, 7 for x to obtain 7² + (2 × 7) + 1, which can
be simplified according to the laws of arithmetic to obtain 64, which is (7 + 1)².
Abstract syntax trees are classified by sorts that divide ast’s into syntactic categories.
For example, familiar programming languages often have a syntactic distinction between
expressions and commands; these are two sorts of abstract syntax trees. Variables in abstract
syntax trees range over sorts in the sense that only ast’s of the specified sort of the variable
can be plugged in for that variable. Thus, it would make no sense to replace an expression
variable by a command, nor a command variable by an expression, the two being different
sorts of things. But the core idea carries over from school mathematics, namely that a
variable is an unknown, or a place-holder, whose meaning is given by substitution.
As an example, consider a language of arithmetic expressions built from numbers,
addition, and multiplication. The abstract syntax of such a language consists of a single
sort Exp generated by these operators:
1. An operator num[n] of sort Exp for each n ∈ N.
2. Two operators, plus and times, of sort Exp, each taking two arguments of sort Exp.
The expression 2 + (3 × x), which involves a variable x, would be represented by the ast
plus(num[2]; times(num[3]; x))
of sort Exp, under the assumption that x is also of this sort. Because, say, num[4] is an ast
of sort Exp, we may plug it in for x in the above ast to obtain the ast
plus(num[2]; times(num[3]; num[4])),
which is written informally as 2 + (3 × 4). We may, of course, plug in more complex ast’s
of sort Exp for x to obtain other ast’s as result.
The tree structure of ast’s provides a very useful principle of reasoning, called structural
induction. Suppose that we wish to prove that some property P(a) holds for all ast’s a of a
given sort. To show this, it is enough to consider all the ways in which a can be generated
and show that the property holds in each case under the assumption that it holds for its
constituent ast’s (if any). So, in the case of the sort Exp just described, we must show
1. The property holds for any variable x of sort Exp: prove that P(x).
2. The property holds for any number, num[n]: for every n ∈ N, prove that P(num[n]).
3. Assuming that the property holds for a1 and a2 , prove that it holds for plus(a1 ; a2 ) and
times(a1 ; a2 ): if P(a1 ) and P(a2 ), then P(plus(a1 ; a2 )) and P(times(a1 ; a2 )).
Because these cases exhaust all possibilities for the formation of a, we are assured that
P(a) holds for any ast a of sort Exp.
It is common to apply the principle of structural induction in a form that takes account of
the interpretation of variables as place-holders for ast’s of the appropriate sort. Informally, it
is often useful to prove a property of an ast involving variables in a form that is conditional
on the property holding for the variables. Doing so anticipates that the variables will be
replaced with ast’s that ought to have the property assumed for them, so that the result of
the replacement will have the property as well. This amounts to applying the principle of
structural induction to properties P(a) of the form “if a involves variables x1 , . . . , xk , and
Q holds of each xi , then Q holds of a,” so that a proof of P(a) for all ast’s a by structural
induction is just a proof that Q(a) holds for all ast’s a under the assumption that Q holds
for its variables. When there are no variables, there are no assumptions, and the proof of P
is a proof that Q holds for all closed ast’s. On the other hand, if x is a variable in a, and we
replace it by an ast b for which Q holds, then Q will hold for the result of replacing x by b
in a.
For the sake of precision, we now give precise definitions of these concepts. Let S be
a finite set of sorts. For a given set S of sorts, an arity has the form (s1 , . . . , sn )s, which
specifies the sort s ∈ S of an operator taking n ≥ 0 arguments, each of sort si ∈ S. Let
O = { Oα } be an arity-indexed family of disjoint sets of operators Oα of arity α. If o is
an operator of arity (s1, . . . , sn)s, we say that o has sort s and has n arguments of sorts
s1, . . . , sn.
Fix a set S of sorts and an arity-indexed family O of sets of operators of each arity. Let
X = { Xs }s∈S be a sort-indexed family of disjoint finite sets Xs of variables x of sort s.
When X is clear from context, we say that a variable x is of sort s if x ∈ Xs , and we say
that x is fresh for X, or just fresh when X is understood, if x ∉ Xs for any sort s. If x is
fresh for X and s is a sort, then X , x is the family of sets of variables obtained by adding
x to Xs . The notation is ambiguous in that the sort s is not explicitly stated but determined
from context.
The family A[X ] = { A[X ]s }s∈S of abstract syntax trees, or ast’s, of sort s is the smallest
family satisfying the following conditions:
1. If x ∈ Xs, then x ∈ A[X]s.
2. If o is an operator of arity (s1, . . . , sn)s, and a1 ∈ A[X]s1, . . . , an ∈ A[X]sn, then o(a1; . . . ;an) ∈ A[X]s.
It follows from this definition that the principle of structural induction can be used to prove
that some property P holds of every ast. To show P(a) holds for every a ∈ A[X ], it is
enough to show:
1. If x ∈ Xs , then Ps (x).
2. If o has arity (s1 , . . . , sn )s and Ps1 (a1 ) and . . . and Psn (an ), then Ps (o(a1 ; . . . ;an )).
For example, it is easy to prove by structural induction that A[X ] ⊆ A[Y] whenever
X ⊆ Y.
Variables are given meaning by substitution. If a ∈ A[X , x]s , and b ∈ A[X ]s , then
[b/x]a ∈ A[X ]s is the result of substituting b for every occurrence of x in a. The ast a is
called the target, and x is called the subject, of the substitution. Substitution is defined by
the following equations:
1. [b/x]x = b.
2. [b/x]y = y, if x ≠ y.
3. [b/x]o(a1; . . . ;an) = o([b/x]a1; . . . ;[b/x]an).
Theorem 1.1. If a ∈ A[X , x], then for every b ∈ A[X ] there exists a unique c ∈ A[X ]
such that [b/x]a = c.
Abstract binding trees, or abt’s, enrich ast’s with the means to introduce new variables and
symbols, called a binding, with a specified range of significance, called its scope. The scope
of a binding is an abt within which the bound identifier can be used, either as a place-holder
(in the case of a variable declaration) or as the index of some operator (in the case of a
symbol declaration). Thus, the set of active identifiers can be larger within a subtree of
an abt than it is within the surrounding tree. Moreover, different subtrees may introduce
identifiers with disjoint scopes. The crucial principle is that any use of an identifier should
be understood as a reference, or abstract pointer, to its binding. One consequence is that
the choice of identifiers is immaterial, so long as we can always associate a unique binding
with each use of an identifier.
As a motivating example, consider the expression let x be a1 in a2 , which introduces
a variable x for use within the expression a2 to stand for the expression a1 . The variable
x is bound by the let expression for use within a2 ; any use of x within a1 refers to a
different variable that happens to have the same name. For example, in the expression
let x be 7 in x + x, occurrences of x in the addition refer to the variable introduced by the
let. On the other hand, in the expression let x be x ∗ x in x + x, occurrences of x within
the multiplication refer to a different variable than those occurring within the addition. The
latter occurrences refer to the binding introduced by the let, whereas the former refer to
some outer binding not displayed here.
The names of bound variables are immaterial insofar as they determine the same
binding. So, for example, let x be x ∗ x in x + x could just as well have been written
let y be x ∗ x in y + y, without changing its meaning. In the former case, the variable x
is bound within the addition, and in the latter, it is the variable y, but the “pointer structure”
remains the same. On the other hand, the expression let x be y ∗ y in x + x has a different
meaning to these two expressions, because now the variable y within the multiplication
refers to a different surrounding variable. Renaming of bound variables is constrained to
the extent that it must not alter the reference structure of the expression. For example, the
expression
let x be 2 in let y be 3 in x + x
cannot be renamed to the expression
let y be 2 in let y be 3 in y + y,
because the y in the expression y + y in the second case refers to the inner declaration, not
the outer one as before.
The concept of an ast can be enriched to account for binding and scope of a variable.
These enriched ast’s are called abstract binding trees, or abt’s for short. Abt’s generalize
ast’s by allowing an operator to bind any finite number (possibly zero) of variables in each
argument. An argument to an operator is called an abstractor and has the form x1 , . . . , xk .a.
The sequence of variables x1 , . . . , xk are bound within the abt a. (When k is zero, we elide
the distinction between .a and a itself.) Written in the form of an abt, the expression
let x be a1 in a2 has the form let(a1 ; x.a2 ), which more clearly specifies that the variable
x is bound within a2 , and not within a1 . We often write x to stand for a finite sequence
x1 , . . . , xn of distinct variables and write x .a to mean x1 , . . . , xn .a.
To account for binding, operators are assigned generalized arities of the form
(υ1 , . . . , υn )s, which specifies operators of sort s with n arguments of valence υ1 , . . . , υn .
In general a valence υ has the form s1 , . . . , sk .s, which specifies the sort of an argument as
well as the number and sorts of the variables bound within it. We say that a sequence
x1, . . . , xk of variables is of sort s1, . . . , sk to mean that the two sequences have the same
length k and that the variable xi is of sort si for each 1 ≤ i ≤ k.
Thus, to specify that the operator let has arity (Exp, Exp.Exp)Exp indicates that it is
of sort Exp whose first argument is of sort Exp and binds no variables and whose second
argument is also of sort Exp and within which is bound one variable of sort Exp. The
informal expression let x be 2 + 2 in x × x may then be written as the abt
let(plus(num[2]; num[2]); x.times(x; x))
in which the operator let has two arguments, the first of which is an expression, and the
second of which is an abstractor that binds one expression variable.
Fix a set S of sorts and a family O of disjoint sets of operators indexed by their generalized
arities. For a given family of disjoint sets of variables X , the family of abstract binding
trees, or abt’s B[X ], is defined similarly to A[X ], except that X is not fixed throughout the
definition but rather changes as we enter the scopes of abstractors.
This simple idea is surprisingly hard to make precise. A first attempt at the definition is
as the least family of sets closed under the following conditions:
1. If x ∈ Xs , then x ∈ B[X ]s .
2. For each operator o of arity (s1.s1, . . . , sn.sn)s, if a1 ∈ B[X, x1]s1, . . . , and an ∈ B[X, xn]sn, then o(x1.a1; . . . ;xn.an) ∈ B[X]s.
The bound variables are adjoined to the set of active variables within each argument, with
the sort of each variable determined by the valence of the operator.
This definition is almost correct but fails to properly account for renaming of bound vari-
ables. An abt of the form let(a1 ; x.let(a2 ; x.a3 )) is ill-formed according to this definition,
because the first binding adds x to X , which implies that the second cannot also add x to
X , x, because it is not fresh for X , x. The solution is to ensure that each of the arguments
is well-formed regardless of the choice of bound variable names, which is achieved using
fresh renamings, which are bijections between sequences of variables. Specifically, a fresh
renaming (relative to X) of a finite sequence of variables x is a bijection ρ : x ↔ x′
between x and x′, where x′ is fresh for X. We write ρ(a) for the result of replacing each
occurrence of xi in a by ρ(xi), its fresh counterpart.
This is achieved by altering the second clause of the definition of abt’s using fresh
renamings as follows:
For each operator o of arity (s1.s1, . . . , sn.sn)s, if for each 1 ≤ i ≤ n and each fresh
renaming ρi : xi ↔ xi′, we have ρi(ai) ∈ B[X, xi′]si, then o(x1.a1; . . . ;xn.an) ∈ B[X]s.
The renaming ρi (ai ) of each ai ensures that collisions cannot occur and that the abt is valid
for almost all renamings of any bound variables that occur within it.
The principle of structural induction extends to abt’s and is called structural induction
modulo fresh renaming. It states that to show that P[X ](a) holds for every a ∈ B[X ], it is
enough to show the following:
1. If x ∈ Xs, then P[X]s(x).
2. For each operator o of arity (s1.s1, . . . , sn.sn)s, if for each 1 ≤ i ≤ n and each fresh renaming ρi : xi ↔ xi′ we have P[X, xi′]si(ρi(ai)), then P[X]s(o(x1.a1; . . . ;xn.an)).
The second condition ensures that the inductive hypothesis holds for all fresh choices of
bound variable names, and not just the ones actually given in the abt.
As an example let us define the judgment x ∈ a, where a ∈ B[X , x], to mean that x
occurs free in a. Informally, this means that x is bound somewhere outside of a, rather
than within a itself. If x is bound within a, then those occurrences of x are different
from those occurring outside the binding. The following definition ensures that this is the
case:
1. x ∈ x.
2. x ∈ o(x1.a1; . . . ;xn.an) if there exists 1 ≤ i ≤ n such that for every fresh renaming ρi : xi ↔ zi we have x ∈ ρi(ai).
The first condition states that x is free in x but not free in y for any variable y other than x.
The second condition states that if x is free in some argument, independently of the choice
of bound variable names in that argument, then it is free in the overall abt.
The relation a =α b of α-equivalence (so-called for historical reasons) means that a and
b are identical up to the choice of bound variable names. The α-equivalence relation is the
strongest congruence containing the following two conditions:
1. x =α x.
2. o(x1.a1; . . . ;xn.an) =α o(x1′.a1′; . . . ;xn′.an′) if for every 1 ≤ i ≤ n, ρi(ai) =α ρi′(ai′) for
all fresh renamings ρi : xi ↔ zi and ρi′ : xi′ ↔ zi.
The idea is that we rename xi and xi′ consistently, avoiding confusion, and check that ai
and ai′ are α-equivalent. If a =α b, then a and b are α-variants of each other.
Some care is required in the definition of substitution of an abt b of sort s for free
occurrences of a variable x of sort s in some abt a of some sort, written [b/x]a. Substitution
is partially defined by the following conditions:
1. [b/x]x = b.
2. [b/x]y = y, if x ≠ y.
3. [b/x]o(x1.a1; . . . ;xn.an) = o(x1.[b/x]a1; . . . ;xn.[b/x]an), provided that, for each 1 ≤ i ≤ n, x does not occur in xi and no variable in xi occurs free in b.