Lecture Notes - SV
Ralph Sarkis
March 31, 2020
These are lecture notes taken during the Semantics and Verification classes taught by
Colin Riba in winter 2020.
Contents

Transition Systems
Linear Time Properties
    Invariants and Safety Properties
    Liveness Properties
Topological Spaces
    Preliminaries
    ω-words
Posets and Complete Lattices
    Preliminaries
    Prefixes and Closure
Observable Properties
    Continuous Functions
    Compactness
    Hausdorff Spaces
Linear Temporal Logic (LTL)
    Linear Modal Logic (LML)
    LML with Fixed Points
    Syntax and Semantics of LTL
    Fixed Points and Defined Modalities
    Fixed Points and Continuity
Question 1. What is Semantics and Verification?
Semantics is concerned with circumventing the useless details of the machine implementation by giving
formal mathematical meaning to programs. This lets us reason rigorously about
their execution or any interesting properties that they have.
Verification refers to methods that, given a program in an abstract language and a property, usually in another language, automatically verify whether the program satisfies the property. A common object used to describe programs is a transition system [1], and in this class we will be interested in the so-called linear time properties [2] that they have.

[1] Somewhat similar to a labeled graph. The nodes represent states of the program and each state has some properties associated to it.
[2] Roughly, they are properties of the infinite sequences of transitions that can occur in the system.

Transition Systems
Definition 2. A transition system T is a tuple (S, A, →, I, AP, L), where:

1. S is the set of states of the system (represented as nodes of the graph),
2. A is the set of actions (represented as labels for the edges of the graph),
3. → ⊆ S × A × S is the transition relation; we write s →α s′ when (s, α, s′) ∈ → (it translates to "when in state s, we can execute action α and end up in state s′"),
4. I ⊆ S is the set of initial states,
5. AP is the set of atomic propositions, and
6. L : S → 2^AP is the labeling function, assigning to each state the atomic propositions that hold in it.
[Figure: the transition system T_VM of a vending machine: a state pay, a state sel (labeled {paid}) reached by insert_coin, two states beer and soda (labeled {paid, available}) reached from sel by internal τ transitions, and actions get_beer and get_soda leading back to pay.]
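To fix intuitions, here is a minimal sketch of how such a tuple can be represented concretely; the encoding (field names included) is ours and not part of the course, and the system encoded is the vending machine above.

```python
from dataclasses import dataclass

# A minimal, illustrative encoding of a transition system (S, A, ->, I, AP, L).
@dataclass
class TS:
    states: set    # S
    actions: set   # A
    trans: set     # -> as a set of triples (s, a, s')
    init: set      # I
    ap: set        # AP
    label: dict    # L : S -> 2^AP

# The vending machine T_VM sketched above (state and action names assumed).
t_vm = TS(
    states={"pay", "sel", "beer", "soda"},
    actions={"insert_coin", "get_beer", "get_soda", "tau"},
    trans={("pay", "insert_coin", "sel"),
           ("sel", "tau", "beer"), ("sel", "tau", "soda"),
           ("beer", "get_beer", "pay"), ("soda", "get_soda", "pay")},
    init={"pay"},
    ap={"paid", "available"},
    label={"pay": frozenset(), "sel": frozenset({"paid"}),
           "beer": frozenset({"paid", "available"}),
           "soda": frozenset({"paid", "available"})},
)
```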
In the context of this course, and especially for this section, it is useful to have a
way to generate transition systems. Program graphs, although they are designed to
represent the evaluation of a program, can do exactly this.
Definition 5 (Program graph). Given a (finite) set Vars of variables together with, for each variable x ∈ Vars, a domain Dom(x) [5], an evaluation is an element of Eval(Vars) = ∏_{x∈Vars} Dom(x); that is, η ∈ Eval(Vars) assigns a value η(x) ∈ Dom(x) to each variable x ∈ Vars. [6] We write Eval when the set of variables is clear from the context.

[5] Examples of such domains are lists, machine integers, Z, R. Note that they can be infinite and even contain stuff that cannot be represented by a computer. This is because it is sometimes useful to abstract away these restrictions.
[6] In other words, an evaluation can be viewed as the state of the memory at a specific point in the program.

A condition is a propositional formula with atoms of the form x ∈ D, where x ∈ Vars and D ⊆ Dom(x), or ⊤ and ⊥ to represent true and false respectively. The set of such conditions, denoted Cond(Vars) (or simply Cond), is of course closed under conjunctions, disjunctions and negations. Given a condition g ∈ Cond and an evaluation η ∈ Eval, we write η ⊨ g if g is true under the evaluation η.
A program graph over Vars has the form PG = (Loc, A, Effect, ↪, Loc₀, g₀), where:

1. Loc is a set of locations (the control points of the program),
2. A is a set of actions,
3. Effect : A × Eval → Eval abstracts the effect that actions have on memory,
4. ↪ ⊆ Loc × Cond × A × Loc is the transition relation; we write ℓ ↪g:α ℓ′ when (ℓ, g, α, ℓ′) ∈ ↪,
5. Loc₀ ⊆ Loc is the set of initial locations, and
6. g₀ ∈ Cond is the initial condition.

The next example describes a
program graph for a vending machine with a similar behavior. The only difference is that there is now a set amount of beers and sodas in the machine that can be refilled. When the user inserts a coin but there are no items left, the coin is returned.

Fix the maximum number of items m ∈ N, and let the amounts of beers and sodas be variables n_b, n_s ∈ Vars with domain Dom(n_b) = Dom(n_s) = {0, …, m − 1}. There are two control points start, sel ∈ Loc with Loc₀ = {start}, and new actions to refill the machine and return the coin (A = {ic, gb, gs, refill, rc}). The initial condition is g₀ = (n_b = m − 1 ∧ n_s = m − 1).
[Figure: the program graph; an edge ⊤ : ic from start to sel, edges n_b > 0 : gb, n_s > 0 : gs and n_s = 0 ∧ n_b = 0 : rc from sel back to start, and a self-loop ⊤ : refill on start.]
The effects are not represented in the diagram, but a sensible Effect would satisfy, for any evaluation η ∈ Eval:

    Effect(ic, η) = Effect(rc, η) = η,
    Effect(gb, η) = η[n_b ↦ η(n_b) − 1],
    Effect(gs, η) = η[n_s ↦ η(n_s) − 1],
    Effect(refill, η) = η[n_b ↦ m − 1, n_s ↦ m − 1].
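A direct transcription of this Effect, under the same assumptions (a sketch; evaluations are encoded as dictionaries, and all names are ours):

```python
M = 2  # the fixed maximum number of items m

# A sensible Effect : A x Eval -> Eval for the vending machine above.
def effect(action, eta):
    eta = dict(eta)  # evaluations are mathematical objects; work on a copy
    if action == "gb":
        eta["nb"] -= 1                        # dispense a beer
    elif action == "gs":
        eta["ns"] -= 1                        # dispense a soda
    elif action == "refill":
        eta["nb"], eta["ns"] = M - 1, M - 1   # restock the machine
    # "ic" (insert coin) and "rc" (return coin) leave the memory unchanged
    return eta
```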
The crucial difference between program graphs and transition systems is that the former separate the control from the data. In other words, a program graph abstracts only the behavior of the program, while a transition system abstracts the behavior along with the memory of the program. This suggests that a transition system is more appropriate for observing the evolution of a program graph along with the evaluation. The following definition makes this formal.
Definition 7 (TS of a PG). Let PG be a program graph with the same notation as in Definition 5; the transition system of PG is TS(PG) = (Loc × Eval, A, →, I, AP, L) [9], where:

1. → is defined by the rule: if ℓ ↪g:α ℓ′ and η ⊨ g, then (ℓ, η) →α (ℓ′, Effect(α, η)),
2. I = {(ℓ, η) | ℓ ∈ Loc₀, η ⊨ g₀},
3. AP = Loc ∪ Cond and L(ℓ, η) = {ℓ} ∪ {g ∈ Cond | η ⊨ g}.

[9] Note that this can lead to a huge set of states because some variables can have huge domains.
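Continuing the hypothetical encoding above, the reachable part of TS(PG) can be computed by unfolding the rule; program-graph transitions are quadruples (ℓ, g, α, ℓ′) with guards given as Python predicates (again, only a sketch).

```python
# Unfold the reachable fragment of TS(PG): states are pairs (location, evaluation).
def unfold(pg_trans, effect, init):
    freeze = lambda eta: frozenset(eta.items())  # make evaluations hashable
    states, edges, todo = set(), set(), list(init)
    while todo:
        loc, eta = todo.pop()
        if (loc, freeze(eta)) in states:
            continue
        states.add((loc, freeze(eta)))
        for (l1, guard, action, l2) in pg_trans:
            # the rule: from l1 -g:a-> l2 and eta |= g,
            # derive (l1, eta) -a-> (l2, Effect(a, eta))
            if l1 == loc and guard(eta):
                eta2 = effect(action, eta)
                edges.add(((loc, freeze(eta)), action, (l2, freeze(eta2))))
                todo.append((l2, eta2))
    return states, edges

pg_trans = {("start", lambda e: True, "ic", "sel"),
            ("sel", lambda e: e["nb"] > 0, "gb", "start"),
            ("sel", lambda e: e["ns"] > 0, "gs", "start"),
            ("sel", lambda e: e["nb"] == 0 and e["ns"] == 0, "rc", "start"),
            ("start", lambda e: True, "refill", "start")}
states, edges = unfold(pg_trans, effect, [("start", {"nb": M - 1, "ns": M - 1})])
```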
[Figure: the transition system TS(PG) for m = 2. Its states are pairs of a location (start or sel) and a pair (n_b, n_s) ∈ {0, 1}²; ic edges go from start states to the corresponding sel states, gb and gs edges decrement a counter and return to start, rc returns to start from (sel, (0, 0)), and refill edges reset the counters to (1, 1).]
Notice that increasing m, even by only one, would make the transition graph way
more complex.
To end this section presenting the basics of transition systems, we describe three
ways of combining them.
The first one is similar to taking the product of two DFA [12], but we allow the case where one system can do an action that the other cannot. [13]

[12] Deterministic finite automata.
[13] In DFA terminology, it amounts to taking the product of two automata on different alphabets. When the new machine sees a letter that only one of the original DFA recognizes, it makes a transition only according to that DFA. When it sees a letter that both DFA recognize, it non-deterministically chooses what transition to make. There is a slight caveat because actions are not consumed by a transition system, so after doing a common action on one of the systems, the new machine can still do that action on the other system.

Definition 9 (Interleaving of TSs). Given two transition systems Tᵢ = (Sᵢ, Aᵢ, →ᵢ, Iᵢ, APᵢ, Lᵢ) for i = 1, 2, their interleaving composition, denoted T₁ ||| T₂, is (S₁ × S₂, A₁ ∪ A₂, →, I₁ × I₂, AP₁ ∪ AP₂, L), where → is defined by the two rules

    if s₁ →α₁ s₁′ then (s₁, s₂) →α (s₁′, s₂),    if s₂ →α₂ s₂′ then (s₁, s₂) →α (s₁, s₂′),

and L(s₁, s₂) = L₁(s₁) ∪ L₂(s₂).

Example 10 (Traffic lights). Suppose we have two traffic lights that can switch from red to green and vice-versa non-deterministically; they are represented by the following graphs.
[Figure: two traffic lights T₁ and T₂, each with states Rᵢ and Gᵢ and τ transitions back and forth, together with part of the interleaving T₁ ||| T₂ containing the states (G₁, R₂) and (G₁, G₂).]
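The two rules of Definition 9 transcribe directly; a sketch in the same hypothetical encoding, where each system is given by its state set and its transition triples:

```python
# Interleaving T1 ||| T2 on transitions: each system moves on its own
# while the other component stays put (the two rules of Definition 9).
def interleave(S1, trans1, S2, trans2):
    edges = set()
    for (s1, a, s1p) in trans1:
        for s2 in S2:
            edges.add(((s1, s2), a, (s1p, s2)))  # left rule
    for (s2, a, s2p) in trans2:
        for s1 in S1:
            edges.add(((s1, s2), a, (s1, s2p)))  # right rule
    return edges

# The two traffic lights of Example 10:
light = {("R", "tau", "G"), ("G", "tau", "R")}
edges = interleave({"R", "G"}, light, {"R", "G"}, light)
```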
Unfortunately, such a simple way of composing transition systems does not allow shared memory between the systems. For this reason, when we want to take this possibility into account, it is preferable to compose program graphs instead.
Definition 11 (Interleaving of PGs). Given two program graphs Gᵢ = (Locᵢ, Aᵢ, Effectᵢ, ↪ᵢ, Locᵢ,₀, gᵢ,₀) over Varsᵢ for i = 1, 2, their interleaving is the program graph [15] over Vars₁ ∪ Vars₂ denoted G₁ ||| G₂ = (Loc₁ × Loc₂, A₁ + A₂, Effect, ↪, Loc₁,₀ × Loc₂,₀, g₁,₀ ∧ g₂,₀), where ↪ is defined by the two rules

    if ℓ₁ ↪g:α₁ ℓ₁′ then (ℓ₁, ℓ₂) ↪g:α (ℓ₁′, ℓ₂),    if ℓ₂ ↪g:α₂ ℓ₂′ then (ℓ₁, ℓ₂) ↪g:α (ℓ₁, ℓ₂′),

and Effect(α, η) = Effectᵢ(α, η) for α ∈ Aᵢ. [16]

[15] Observe that the variables are not necessarily disjoint; hence we must consider the actions of A₁ and A₂ as disjoint, otherwise there would be an ambiguity in the choice of what effect to apply.
[16] The evaluation η is an element of Eval(Vars₁ ∪ Vars₂), so we implicitly adapt Effectᵢ in the obvious way (i.e. it does not modify variables outside Varsᵢ).
Example 12. Let us illustrate this construction on two simple program graphs manipulating the same variable x with Dom(x) = N.

[Figure: the two program graphs, with effects in blue; each Gᵢ has a single location and a single action αᵢ whose effect increments x.]
Interleaving the program graphs is simple enough (see Figure 1), but what is more interesting is comparing the transition systems we obtain when we do the operations TS(G₁) ||| TS(G₂) and TS(G₁ ||| G₂). Since the domain of x is infinite, both these transition systems have infinitely many states.

[Figure 1: Interleaving of the program graphs in Example 12.]
[Figure 2: Part of TS(G₁) and TS(G₂) from Example 12; each is a chain 0 → 1 → 2 → 3 → 4 → ⋯ (the label of a node is the value of x at that state).]

For the former, observe that interleaving TS(G₁) and TS(G₂) (represented in Figure 2) will lead to the dissociation of the variable x into two independent copies. It leads to a system (partially represented below) which is irrelevant for the purpose of analyzing the behavior of both programs when run concurrently.

[Figure: part of TS(G₁) ||| TS(G₂); states are pairs (i, j), where α₁ increments the first component and α₂ the second.]

For the latter, we actually obtain an interesting system (depicted below) because interleaving the program graphs first ensures that α₁ and α₂ act on the same x.

[Figure: TS(G₁ ||| G₂) is the single chain 0 → 1 → 2 → 3 → 4 → ⋯, where every step can be taken with either α₁ or α₂.]
This example shows the relevance of program graphs when we care about concurrent data. While there are many more possibilities to compose transition systems, we introduce one last definition that illustrates how we can deal with concurrent control without using program graphs.
Definition 13 (Handshaking). Given two transition systems T₁ and T₂ as above and a set H ⊆ A₁ ∩ A₂ of synchronized actions, their composition, denoted T₁ ||_H T₂, is (S₁ × S₂, A₁ ∪ A₂, →, I₁ × I₂, AP₁ ∪ AP₂, L), where → is defined by the rules

    if s₁ →α₁ s₁′ and α ∉ H, then (s₁, s₂) →α (s₁′, s₂),
    if s₂ →α₂ s₂′ and α ∉ H, then (s₁, s₂) →α (s₁, s₂′),
    if s₁ →α₁ s₁′, s₂ →α₂ s₂′ and α ∈ H, then (s₁, s₂) →α (s₁′, s₂′),
and L(s₁, s₂) = L₁(s₁) ∪ L₂(s₂). One should view the actions in H as synchronized actions that both systems have to do at the same time. [18]

[18] Note that this definition is a generalization of the interleaving composition, as T₁ ||_∅ T₂ = T₁ ||| T₂.

Example 14. Consider two transition systems T₁ and T₂ that each have a non-critical state, denoted ncᵢ, and a critical state cᵢ, and that can jump from one to the other using the actions req and rel, as depicted below.

[Figure: Tᵢ has a req transition from ncᵢ to cᵢ and a rel transition from cᵢ back to ncᵢ.]
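Definition 13 specializes the interleaving sketch above; in the same hypothetical encoding (and recalling that H = ∅ recovers |||):

```python
# Handshaking T1 ||_H T2: actions in H must be taken by both systems at once,
# all other actions interleave as before (Definition 13).
def handshake(S1, trans1, S2, trans2, H):
    edges = set()
    for (s1, a, s1p) in trans1:
        if a not in H:
            edges |= {((s1, s2), a, (s1p, s2)) for s2 in S2}
    for (s2, a, s2p) in trans2:
        if a not in H:
            edges |= {((s1, s2), a, (s1, s2p)) for s1 in S1}
    for (s1, a, s1p) in trans1:
        for (s2, b, s2p) in trans2:
            if a == b and a in H:
                edges.add(((s1, s2), a, (s1p, s2p)))  # synchronized step
    return edges

# Example 14, forcing the two processes to synchronize on req and rel:
proc = {("nc", "req", "c"), ("c", "rel", "nc")}
edges = handshake({"nc", "c"}, proc, {"nc", "c"}, proc, H={"req", "rel"})
```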
Before leaving this section, we show a simple result that illustrates how to deal with transition systems in a more theoretical fashion. Below, T || T′ abbreviates T ||_H T′ with H = A ∩ A′ the set of shared actions.

Proposition 15. The composition || is associative: T = (T₁ || T₂) || T₃ and T′ = T₁ || (T₂ || T₃) are the same transition system (up to the associativity of ×).

Proof. It is easy to see that the states, actions, initial states, atomic propositions and state labelings of T and T′ will be the same because they are constructed with × and ∪, which are associative operations. Let us write → and ⇒ for the transition relations of T and T′ respectively, and →₁₂ and →₂₃ for those of T₁ || T₂ and T₂ || T₃. We have to show that for any s₁, s₁′ ∈ S₁, s₂, s₂′ ∈ S₂, s₃, s₃′ ∈ S₃ and α ∈ A₁ ∪ A₂ ∪ A₃ (all transitions below are labeled with α, which we omit),

    (s₁, s₂, s₃) → (s₁′, s₂′, s₃′) ⇔ (s₁, s₂, s₃) ⇒ (s₁′, s₂′, s₃′).
Case 1: The action α belongs to exactly one of the systems, say α ∈ A₁. Then we have the following inferences:

    s₁ →₁ s₁′ and α ∉ A₁ ∩ A₂ give (s₁, s₂) →₁₂ (s₁′, s₂); with α ∉ (A₁ ∪ A₂) ∩ A₃, this gives (s₁, s₂, s₃) → (s₁′, s₂, s₃);
    s₁ →₁ s₁′ and α ∉ A₁ ∩ (A₂ ∪ A₃) give (s₁, s₂, s₃) ⇒ (s₁′, s₂, s₃).
Case 2: The action α belongs to exactly two of the systems, say α ∈ A₁ ∩ A₃. Then we have the following inferences:

    s₁ →₁ s₁′ and α ∉ A₁ ∩ A₂ give (s₁, s₂) →₁₂ (s₁′, s₂); combined with s₃ →₃ s₃′ and α ∈ (A₁ ∪ A₂) ∩ A₃, this gives (s₁, s₂, s₃) → (s₁′, s₂, s₃′);
    s₃ →₃ s₃′ and α ∉ A₂ ∩ A₃ give (s₂, s₃) →₂₃ (s₂, s₃′); combined with s₁ →₁ s₁′ and α ∈ A₁ ∩ (A₂ ∪ A₃), this gives (s₁, s₂, s₃) ⇒ (s₁′, s₂, s₃′).
Case 3: The action α belongs to all of the systems. Then we have the following inferences:

    s₁ →₁ s₁′ and s₂ →₂ s₂′ with α ∈ A₁ ∩ A₂ give (s₁, s₂) →₁₂ (s₁′, s₂′); combined with s₃ →₃ s₃′ and α ∈ (A₁ ∪ A₂) ∩ A₃, this gives (s₁, s₂, s₃) → (s₁′, s₂′, s₃′);
    s₂ →₂ s₂′ and s₃ →₃ s₃′ with α ∈ A₂ ∩ A₃ give (s₂, s₃) →₂₃ (s₂′, s₃′); combined with s₁ →₁ s₁′ and α ∈ A₁ ∩ (A₂ ∪ A₃), this gives (s₁, s₂, s₃) ⇒ (s₁′, s₂′, s₃′).

In each case, the same premises about the component systems support both derivations, which proves the equivalence.
Linear Time Properties

Definition 16 (Linear Time Property). A linear time property (LTP) over atomic propositions AP is a set of ω-words P ⊆ (2^AP)^ω. [22]

[22] We use ω as the cardinality of N; thus an ω-word on an alphabet Σ is an element of Σ^ω, i.e. an infinite sequence of symbols in Σ. Although some of the results about LTPs can be shown with general alphabets, we will remain in the case of Σ = 2^AP for clarity.

Example 17. Recall the transition system T_VM depicted in Example 4 (and shown again in Figure 4) with AP = {paid, available}. Here are four examples of LTPs in this context. First, the property P₁ that any state with an available drink is preceded by a state where the user has paid. Second, the property P₂ that any state with an available drink is itself a state where the user has paid.

[Figure 4: the vending machine T_VM; sel is labeled {paid}, while beer and soda are labeled {paid, available}.]
In general, LTPs similar to P₁ and P₂ are hard to work with because they are not finitary, in the sense that to verify them one has no choice but to look at an infinite amount of symbols.

We will see that some properties which might look infinitary are easier to automatically verify because they have a finite representation. For instance, the property that there is an infinite number of states where the user has paid is written: [24]

    P₃ = {σ ∈ (2^AP)^ω | ∃^∞ i, paid ∈ σ(i)}.

Dually, we will also consider P₄ = {σ ∈ (2^AP)^ω | ∃^∞ i, paid ∉ σ(i)}.

[24] The notation ∃^∞ i is a shorthand for ∀N, ∃i ≥ N. Its less intuitive dual, "always true after some point", is denoted ∀^∞ := ∃N, ∀i ≥ N.
Definition 19 (Trace). The trace of a path π = (sᵢ)ᵢ<ₙ is the sequence L(π) := (L(sᵢ))ᵢ<ₙ. The set of traces of a transition system T, denoted Tr(T), is the set of traces of its initial paths. Also, Tr^ω(T) denotes the set of infinite traces and Tr^fin(T) the set of finite traces.
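As a sketch of these notions in the encoding used so far (initial paths only, and up to a fixed length, since Tr^fin(T) is infinite in general):

```python
# Enumerate the finite traces of initial paths with at most n states.
def finite_traces(ts, n):
    traces = set()
    frontier = [(s, (ts.label[s],)) for s in ts.init]
    while frontier:
        s, tr = frontier.pop()
        traces.add(tr)
        if len(tr) < n:
            for (s1, a, s2) in ts.trans:
                if s1 == s:
                    frontier.append((s2, tr + (ts.label[s2],)))
    return traces

# finite_traces(t_vm, 3) contains, e.g., the trace of pay -> sel -> beer:
# (frozenset(), frozenset({'paid'}), frozenset({'paid', 'available'}))
```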
Definition 20. We say that a transition system T satisfies a linear time property P if Tr^ω(T) ⊆ P. We denote this by T ⊫ P. [25]

[25] We use this notation instead of the more usual ⊨ because this definition is not the perfect notion of satisfaction. Informally, this comes from the fact that branchings are a feature internal to transition systems but not to LTPs. When we cover modal logics, we will see how to fix this definition.

Example 21. Let us show that all the properties in Example 17 are satisfied by T_VM.

1. Clearly, T_VM ⊫ P₁ because any path in T_VM goes through sel before going through either beer or soda.

2. Since for any i such that available ∈ L(πᵢ), we also have paid ∈ L(πᵢ), it follows trivially that T_VM ⊫ P₂.

3. The structure of T_VM is very simple and we can observe that for any π and any N ∈ N, there is an i ∈ {N, N + 1, N + 2} with paid ∈ L(πᵢ), [26] thus T_VM ⊫ P₃.

[26] In words, from any state, at most two transitions are needed to reach a state where the user has paid.

4. Note that any infinite path in T_VM goes infinitely many times through pay, thus it is not possible that from some point on, every state in the path has paid in its labeling. We conclude that T_VM ⊫ P₄.
Proposition 22. Let T and T′ be two transition systems over AP, then

    Tr^ω(T) ⊆ Tr^ω(T′) ⇔ [∀P ⊆ (2^AP)^ω, (T′ ⊫ P ⟹ T ⊫ P)].

Proof. (⇒) Follows trivially from the definitions. Indeed, for any P ⊆ (2^AP)^ω, if Tr^ω(T′) ⊆ P, then Tr^ω(T) ⊆ Tr^ω(T′) ⊆ P. (⇐) Take P = Tr^ω(T′); then T′ ⊫ P trivially, hence T ⊫ P, which means exactly Tr^ω(T) ⊆ Tr^ω(T′).

The traces of T_VM are the same as the traces of T′_VM. In particular, we have Tr^ω(T_VM) = Tr^ω(T′_VM), so Proposition 22 says that each system satisfies exactly the LTPs that the other satisfies.
We have already mentioned that some LTPs are harder to verify than others; now we will introduce families of linear time properties that are nicer than most. There are many such families, but we chose three which are simple to define and have both historical and theoretical importance.

Invariants and Safety Properties

Recall that a safety property is an LTP induced by a set P_bad ⊆ (2^AP)^∗ of bad prefixes: the induced property is P = {σ ∈ (2^AP)^ω | no finite prefix of σ belongs to P_bad}. Before getting dirty with these families of properties, we show two very simple statements.
Proposition 28. An LTP P is a safety property if and only if for any σ ∈ P^c, [32] there exists i ∈ N such that σ(0)⋯σ(i)·(2^AP)^ω ∩ P = ∅.

[32] The complement of an LTP P on AP is P^c := (2^AP)^ω \ P.
Proposition 32. Let T be a transition system with no terminal states and P ⊆ (2^AP)^ω be a safety property induced by P_bad, then

    T ⊫ P ⇔ Tr^fin(T) ∩ P_bad = ∅.

Proof. (⇐) Let L(π) be an infinite trace of T. Since any of its finite prefixes is in Tr^fin(T), none of them can coincide with a word in P_bad. Hence, L(π) ∈ P.

(⇒) Suppose there exists σ̂ ∈ Tr^fin(T) ∩ P_bad; we have σ̂ = L(π) for a finite path π, but since there are no terminal states, we can always add states to π and obtain an infinite path π′ such that σ̂ ⊆ L(π′). This means L(π′) ∉ P, which contradicts our assumption that T ⊫ P.
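Proposition 32 is what makes safety checking effective: it suffices to look for a bad prefix among the finite traces. A bounded sketch, reusing finite_traces above, with P_bad given as a predicate on finite words (a real checker would explore the reachable states once instead of bounding the depth):

```python
# Search for a bad prefix among the finite traces (Proposition 32),
# up to a given exploration depth.
def violates_safety(ts, bad, depth):
    return any(bad(tr) for tr in finite_traces(ts, depth))

# Bad prefixes for the safety property P2 ("available implies paid"):
# a finite trace is bad when some letter contains available but not paid.
bad_p2 = lambda tr: any("available" in A and "paid" not in A for A in tr)
print(violates_safety(t_vm, bad_p2, 6))  # False: T_VM satisfies P2
```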
Corollary 33. [35] Let T and T′ be transition systems over AP with no terminal states, then

    Tr^fin(T) ⊆ Tr^fin(T′) ⇔ [∀ safety P ⊆ (2^AP)^ω, (T′ ⊫ P ⟹ T ⊫ P)].

[35] This result is a characterization similar to Proposition 22 that applies to safety properties. But now, instead of comparing all the traces, we only have to compare the finite traces.

Proof. (⇒) Follows trivially from the last proposition.
Since finite traces can be arbitrarily long, it is natural to ask whether comparing the finite traces of two systems suffices to compare all the LTPs that they satisfy. This is almost the right intuition but, as usual, infinity breaks it, as shown in the following example.
Example 34. Let T be the transition system depicted below, where the state labeling is written inside the states and the names of states and actions are omitted.

[Figure: a transition system with a-labeled and b-labeled states; for every i, some branch stays in a-labeled states for i − 1 steps and then reaches a b-labeled state.]
Proposition 36. Let T and T′ be transition systems over AP with no terminal states such that T′ is finite, then Tr^ω(T) ⊆ Tr^ω(T′) ⇔ Tr^fin(T) ⊆ Tr^fin(T′).

Proof. (⇒) Since the systems have no terminal states, any finite trace of T corresponds to a path π in T that can be extended to an infinite path π′, so that L(π′) is in Tr^ω(T) and thus in Tr^ω(T′). Now, L(π′) must correspond to a path π″ in T′, and truncating it to the size of π shows that L(π) = L(π″|ᵢ≤|π|) is also in Tr^fin(T′).

(⇐) [38] Let σ ∈ Tr^ω(T); for any n ∈ N, σₙ := σ(0)⋯σ(n) ⊆ σ is in Tr^fin(T) ⊆ Tr^fin(T′), so in particular it is the finite trace of an initial path, say πₙ, in T′.

[38] In class, this direction was proved as a corollary of the more general König's lemma, which states that any finitely branching infinite tree has an infinite path. I chose to integrate the proof of the lemma into the proof of the proposition to avoid introducing more definitions than needed.

To construct an initial path π in T′ that satisfies L(π) = σ, we will build (sᵢ)ᵢ∈N by induction on i, with s₀ ∈ I′ and the following invariant Pᵢ: there are infinitely many πₙ's such that ∀k ≤ i, πₙ(k) = sₖ.

First, since I′ is finite and all paths πₙ satisfy πₙ(0) ∈ I′, there is at least one s₀ ∈ I′ such that there are infinitely many πₙ with πₙ(0) = s₀.

Second, suppose s is defined up to i − 1 and there are infinitely many πₙ's satisfying Pᵢ₋₁ := ∀k ≤ i − 1, πₙ(k) = sₖ. Then, since there are finitely many s ∈ S′ such that sᵢ₋₁ →α s for some α ∈ A′, we can pick one such sᵢ such that there are still infinitely many πₙ's satisfying Pᵢ. The resulting sequence π := (sᵢ)ᵢ∈N:

1. is initial because s₀ ∈ I′,
2. is a path of T′ because each pair (sᵢ, sᵢ₊₁) is a transition of some πₙ,
3. satisfies L(π) = σ because, for every i, L(sᵢ) = L(πₙ(i)) = σ(i) for some n ≥ i.

We conclude that σ ∈ Tr^ω(T′).
Corollary 37. Two finite transition systems on AP with no terminal states satisfy the same LTPs if and only if they satisfy the same safety properties.

Proof. The proof follows from these equivalences, which use the previous results:

    [∀P ⊆ (2^AP)^ω, T ⊫ P ⇔ T′ ⊫ P] ⇔ Tr^ω(T) = Tr^ω(T′)
                                     ⇔ Tr^fin(T) = Tr^fin(T′)
                                     ⇔ [∀ safety P ⊆ (2^AP)^ω, T ⊫ P ⇔ T′ ⊫ P].
We will end this section with a bit more terminology and practice with results on invariants and safety properties.

The closure of P is the set of ω-words all of whose finite prefixes are prefixes of words in P, that is, cl(P) := {σ ∈ (2^AP)^ω | pref(σ) ⊆ pref(P)}, where pref(σ) denotes the set of finite prefixes of σ and pref(P) = ⋃_{σ∈P} pref(σ).

Proposition 39. An LTP P is a safety property if and only if P = cl(P).

Proof. (⇒) Note that P ⊆ cl(P) is trivially true for any P. [39] Now, suppose that σ ∈ cl(P); for any finite prefix σ̂ ⊆ σ, σ̂ ∈ pref(P), so there exists σ′ ∈ P with σ̂ ⊆ σ′. In other words, σ̂·(2^AP)^ω ∩ P ≠ ∅, and we conclude that σ ∈ P by the contrapositive of Proposition 28.

[39] One way to see this is: pref(P) = ⋃_{σ∈P} pref(σ).

(⇐) Let σ ∈ P^c; in particular σ ∉ cl(P), so there is a finite prefix σ̂ ⊆ σ that is not the prefix of any word in P. In mathematical terms, this means σ̂·(2^AP)^ω ∩ P = ∅. Since σ was arbitrary, P is a safety property by Proposition 28.
Proposition 40. Let P and Q be safety properties, then P ∪ Q and P ∩ Q are also safety properties.

Proof. For the union, since a word is in P ∪ Q if and only if it has no finite prefix in P_bad or no finite prefix in Q_bad, it follows that P ∪ Q is the safety property induced by (P ∪ Q)_bad := P_bad ∩ Q_bad (taking for P_bad and Q_bad the canonical bad prefixes given by Proposition 28, which are closed under extensions).

For the intersection, it follows from Proposition 39 and the monotonicity of cl:

    P ∩ Q ⊆ cl(P ∩ Q) ⊆ cl(P) ∩ cl(Q) = P ∩ Q,

so that cl(P ∩ Q) = P ∩ Q.
Liveness Properties
Definition 41 (Liveness). An LTP P ⊆ (2^AP)^ω is a liveness property if for any σ̂ ∈ (2^AP)^∗, there exists σ ∈ (2^AP)^ω such that σ̂ ⊆ σ and σ ∈ P.

Example 42. The system T_VM from Example 4 satisfies the property because both sides of the implication are true for any σ ∈ Tr^ω(T_VM); P is a liveness property.

Proposition 43. An LTP P is a liveness property if and only if pref(P) = (2^AP)^∗.

Proof. (⇒) Suppose there exists σ̂ ∈ (2^AP)^∗ \ pref(P); then σ̂ cannot be extended into a word of P, contradicting the liveness of P.

(⇐) Any finite word is in pref(P), thus it can be extended into a word of P. We conclude that P is a liveness property.
Corollary 44. Let P and Q be liveness properties on AP, then P ∪ Q is also a liveness property. [41]

[41] It follows from Proposition 43 because pref(P ∪ Q) = pref(P) ∪ pref(Q).

Example 45. Unlike for safety properties, the intersection of two liveness properties is not always a liveness property. Consider the following properties: [42]

    P = {σ ∈ {0, 1}^ω | ∀^∞ i, σ(i) = 1}
    Q = {σ ∈ {0, 1}^ω | ∀^∞ i, σ(i) = 0}.

[42] P and Q respectively contain all ω-words that are eventually all 1s and all 0s.

They are both clearly liveness properties, as any finite word can be completed with either 1^ω or 0^ω to belong to P or Q respectively. However, their intersection is clearly empty, and ∅ is not a liveness property.
Proposition 46. The property ⊤ := (2^AP)^ω is the only LTP that is both a liveness and a safety property.

Proof. Since any finite word can be completed into an ω-word, ⊤ is a liveness property. It is also the safety property induced by P_bad = ∅.

Let P be a safety property induced by P_bad; then any σ̂ ∈ P_bad cannot be extended into an infinite word of P, by definition. Therefore, if P is safety and liveness, P_bad must be empty and P = ⊤.
Theorem 47 (Decomposition). Any LTP P ⊆ (2^AP)^ω is the intersection of a safety property and a liveness property.

In order to prove this theorem, we will introduce two very different approaches that lead to very elegant proofs. The two next sections may feel ad hoc at first, but their tools are very much used in current research in semantics, so they are worth covering.
Topological Spaces
Preliminaries
Not much theory is needed, but we present it here for completeness.

Definition 48. A topological space is a pair (X, ΩX), where X is a set and ΩX ⊆ 2^X is a family of subsets, called the open sets, such that any union of open sets is open and, if I is finite, any intersection ⋂_{i∈I} Uᵢ of open sets Uᵢ ∈ ΩX is open. Complements of open sets are called closed.

In particular, the empty set and the whole space are open and closed (sometimes referred to as clopen) because

    ∅ = ⋃_{U∈∅} U and X = ⋂_{U∈∅} U.

All the following terminology and results are basic tools used in topology that will end up helping us prove the decomposition theorem. Fix a topological space (X, ΩX).
Lemma 49. Let (Cᵢ)ᵢ∈I be a family of closed sets of X, then ⋂_{i∈I} Cᵢ is closed and, if I is finite, ⋃_{i∈I} Cᵢ is also closed. [44]

[44] Observe that these are statements dual to the axioms of Definition 48. In fact, it is sometimes more convenient to define a topological space by giving its closed sets, and it is equivalent.

Proof. Both statements follow trivially from De Morgan's laws and the fact that the complement of a closed set is open and vice-versa. For the first one, De Morgan's laws yield

    ⋂_{i∈I} Cᵢ = (⋃_{i∈I} Cᵢ^c)^c,

and the LHS is the complement of a union of opens, so it is closed. For the second one, De Morgan's laws yield

    ⋃_{i∈I} Cᵢ = (⋂_{i∈I} Cᵢ^c)^c,

and the LHS is the complement of a finite intersection of opens, so it is closed.
Lemma 50. A subset A ⊆ X is open if and only if for any x ∈ A, there exists an open U ⊆ A such that x ∈ U.
The closure of a subset A ⊆ X is Ā := ⋂{C ⊆ X | C is closed and A ⊆ C}. It is very easy to show that Ā is the smallest closed set containing A. [47] Then, it follows that A is closed if and only if A = Ā.

[47] Ā is closed because it is an intersection of closed sets, and any closed set containing A also contains Ā by definition.

Here are more easy results on the closure of a subset.

Lemma 51. Let A ⊆ X and x ∈ X; then x ∈ Ā if and only if every open set containing x intersects A. Consequently, A is dense (i.e. Ā = X) if and only if every non-empty open set intersects A.
Theorem 55 (Decomposition). Let A ⊆ X, then A = Ā ∩ (A ∪ Ā^c), where Ā is closed and A ∪ Ā^c is dense. [48]

[48] This result says that any set can be decomposed into the intersection of a closed set and a dense set. Note the similarity with Theorem 47; we will see that the latter is a corollary of this basic result in topology.

Proof. The equality is trivial and Ā is closed by definition. It is left to show that A ∪ Ā^c is dense. Let U ≠ ∅ be an open set. If U intersects A, we are done. Otherwise, we have the following equivalences:

    U ∩ A = ∅ ⇔ A ⊆ U^c ⇔ Ā ⊆ U^c ⇔ U ⊆ Ā^c,

where the left-to-right direction of the second equivalence holds because U^c is closed. We conclude that U ∩ (A ∪ Ā^c) ≠ ∅.
Dually, the interior of A ⊆ X is

    A° := ⋃{U ∈ ΩX | U ⊆ A}.

It is obvious that A° is the largest open subset of A, and thus that A is open if and only if A = A°. [50]

[50] It also follows that A ⊆ B ⟹ A° ⊆ B° and that A°° = A°.
Finally, we end these preliminaries with a result on how to specify a topology.

Lemma 59. Let X be a set and B ⊆ 2^X be closed under finite intersections, and let ΩX be the set of all unions of sets in B; then (X, ΩX) is a topological space. We say that ΩX is the topology generated by B.

Proof. Unions of opens are clearly open, and finite intersections of sets in B are open. It remains to show that finite intersections of unions of sets in B are also open. Let U = ⋃_{i∈I} Uᵢ and V = ⋃_{j∈J} Vⱼ with Uᵢ ∈ B and Vⱼ ∈ B; then, by distributivity, we obtain

    U ∩ V = (⋃_{i∈I} Uᵢ) ∩ (⋃_{j∈J} Vⱼ) = ⋃_{i∈I, j∈J} Uᵢ ∩ Vⱼ,

which is a union of sets in B, hence open.
ω-words

Definition 60 (Extensions). Given a non-empty set A and a finite word u ∈ A^∗, we denote the extensions of u by [51]

    ext_A(u) := u · A^ω = {σ ∈ A^ω | u ⊆ σ},

and, for a set of finite words W ⊆ A^∗, we let ext(W) := ⋃_{u∈W} ext(u).

[51] We write ext(u) when the alphabet is clear from context. Note that A^ω = ext_A(ε).

Proposition 61. The family ΩA := {ext(W) | W ⊆ A^∗} is a topology for A^ω.
Proof. Let {Uᵢ}ᵢ∈I be a family of opens; for any i ∈ I, there exists Wᵢ ⊆ A^∗ such that Uᵢ = ext(Wᵢ). Then, [53]

    ⋃_{i∈I} Uᵢ = ⋃_{i∈I} ext(Wᵢ) = ext(⋃_{i∈I} Wᵢ),

so ΩA is closed under arbitrary unions.

[53] The last equality holds because an ω-word extends a word in one of the Wᵢ's if and only if it extends the same word in the union of the Wᵢ's.
Thus, let W₁, W₂ ⊆ A^∗; we have

    ext(W₁) ∩ ext(W₂) = (⋃_{u∈W₁} ext(u)) ∩ (⋃_{v∈W₂} ext(v))
                      = ⋃_{u∈W₁, v∈W₂} ext(u) ∩ ext(v)
                      = ⋃{ext(u) | u ∈ W₁, ∃v ∈ W₂, v ⊆ u} ∪ ⋃{ext(v) | v ∈ W₂, ∃u ∈ W₁, u ⊆ v}
                      = ext(W₁ ⋓ W₂),

where W₁ ⋓ W₂ is the set of words in one of the Wᵢ's that have a prefix in the other, that is,

    W₁ ⋓ W₂ = {u ∈ A^∗ | ∃i ≠ j, ∃v ∈ Wⱼ, u ∈ Wᵢ, v ⊆ u}.

We conclude that ΩA is also closed under finite intersections, so it is a topology for A^ω.
From now on, unless otherwise said, we assume that the topology on A^ω is the topology ΩA given above.

Remark 62. A set P ⊆ A^ω is open if and only if there exists W ⊆ A^∗ such that P = ext(W) = ⋃_{u∈W} ext(u). In particular, if σ ∈ P, then there exists σ̂ ⊆ σ such that ext(σ̂) ⊆ P. [55]

[55] In other words, we will know that σ ∈ P after observing it for a finite amount of time, because we know all extensions of σ̂ are in P.

If we stare at this remark long enough, we can recover the intuition behind safety LTPs and, in particular, the equivalent definitions seen in Proposition 28. Indeed, the tools we have developed lead to a nice characterization of safety and liveness properties.
Proposition 63. An LTP P ⊆ (2^AP)^ω is a safety property if and only if P is closed.

Proof. (⇐) We have just said that if σ is in an open set (in this case P^c), then there exists σ̂ ⊆ σ such that ext(σ̂) ⊆ P^c, i.e. ext(σ̂) ∩ P = ∅, as required for safety properties.

(⇒) Let P be induced by P_bad; any extension of a word in P_bad is not in P. In other words, P^c = ext(P_bad), which is open; thus P is closed.
Proposition 64. An LTP P ⊆ (2^AP)^ω is a liveness property if and only if P is dense.

Proof. (⇒) Let U ≠ ∅ be open; then U = ⋃_{u∈W} ext(u), where W cannot be empty. Hence, since for any u ∈ W, ext(u) ∩ P ≠ ∅, [57] we get P ∩ U ≠ ∅ and this direction follows.

(⇐) Let P be dense; then for any u, ext(u) is open, so P ∩ ext(u) ≠ ∅, which means that u ∈ pref(P). We conclude that P is a liveness property by Proposition 43.

[57] Because any finite word can be extended into a word of P (by liveness).
Posets and Complete Lattices
Preliminaries
Definition 66. A poset (short for partially ordered set) is a pair ( A, ≤) where A is
a set and ≤ ⊆ A × A is a reflexive, transitive and antisymmetric binary relation.
Definition 69. The dual of a poset (A, ≤) is denoted (A, ≤)^op := (A, ≥), where for any a, a′ ∈ A, a′ ≥ a ⇔ a ≤ a′. [59]

[59] This definition lets us avoid many symmetric arguments.
Definition 70. Let ( A, ≤) be a poset and S ⊆ A, then a ∈ A is an upper bound of
S if ∀s ∈ S, s ≤ a. Moreover, a ∈ A is the supremum of S, denoted ∨S, if it is the
least upper bound, that is, a is an upper bound of S and for any upper bound a0 of
S, a ≤ a0 .
Dually, a ∈ A is a lower bound (resp. infimum) of S if and only if it is an upper
bound (resp. supremum) of S in ( A, ≤)op .
Proposition 71. Infimums and supremums are unique when they exist. [60]

[60] By antisymmetry.
Theorem 72. Let (L, ≤) be a poset; the following are equivalent: (i) L is a complete lattice, that is, every subset of L has a supremum and an infimum; (ii) every subset of L has a supremum; (iii) every subset of L has an infimum.

Proof. (i) ⟹ (ii), (i) ⟹ (iii) and (ii) + (iii) ⟹ (i) are all trivial. Also, by using duality, we only need to prove (ii) ⟹ (iii). For that, it suffices to note that for any S ⊆ L, ⋀S := ⋁{a ∈ L | ∀s ∈ S, a ≤ s} is a suitable definition of the infimum.

Defined that way, ⋀S is a lower bound of S: every s ∈ S is an upper bound of the set of lower bounds of S, so ⋀S ≤ s. [62] Additionally, since we are taking the supremum over all lower bounds of S, no lower bound of S can be greater than ⋀S, so ⋀S is the greatest lower bound.

[62] Because ⋀S was defined as the least upper bound of the set of lower bounds of S.
Example 75. As a corollary, we obtain that the open sets of a topological space form a complete lattice. The supremums are given by unions, which are open for arbitrary families of open sets. However, while finite infimums are given by intersections and infinite infimums exist by the previous result, they are not necessarily intersections. [63]

[63] In fact, the formula given above states that the infimum of a family of opens is the interior of its intersection.

For instance, consider the topology on A^ω with A = {a, b} and P = ⋂_{n∈N} ext(aⁿ). All the sets in the intersection are open by definition, but P is not open: P = {a^ω}, a^ω ∈ P, and there is no prefix aⁿ ⊆ a^ω with ext(aⁿ) ⊆ P. However, the interior of P, which is ∅, is open.
Definition 79. Given two posets (A, ≤) and (B, ⪯), a Galois connection is a pair of functions g : A → B and f : B → A such that for any a ∈ A and b ∈ B,

    g(a) ⪯ b ⇔ a ≤ f(b).

Proposition 80. If (g, f) is a Galois connection, then f and g are monotone.

Proof. Let a ≤ a′ in A. From g(a′) ⪯ g(a′) and the ⇒ direction of the Galois connection, we get a′ ≤ f(g(a′)), hence a ≤ f(g(a′)) by transitivity. If we had g(a) ⋠ g(a′), this would contradict a ≤ f(g(a′)) (using the ⇐ of the Galois connection). We conclude that g is monotone. A symmetric argument works to show that f is monotone.
Example 81.
Proposition 82. Let (g, f) be a Galois connection; then f ∘ g is monotone, expansive and idempotent, i.e. a closure operator.

Proof. Because f and g are monotone, f ∘ g is clearly monotone. Also, for any a ∈ A, g(a) ⪯ g(a) implies a ≤ f(g(a)), so f ∘ g is expansive.

Now, in order to prove that f ∘ g is idempotent, it is enough to show that [67]

    f(g(a)) ≥ f(g(f(g(a)))).

[67] The ≤ inequality follows by expansiveness.

Observe that since f(b) ≤ f(b) for any b ∈ B, we have g(f(b)) ⪯ b; thus, in particular, with b = g(a), we have g(f(g(a))) ⪯ g(a). Applying f, which is monotone, yields the desired inequality.
An important example for us is the pair pref and cl, which form a Galois connection between LTPs and sets of finite words:

    pref(P) ⊆ W ⇔ P ⊆ cl(W).

Moreover, one can show that cl coincides with the closure operator of the topological space A^ω.
Second, we show that cl(P) ⊆ P̄. Suppose that there exists σ ∈ A^ω that is in cl(P) but not in P̄. By Lemma 51 again, we have an open set U containing σ and not intersecting P. Without loss of generality, U = ext(σ̂) for some prefix σ̂ ⊆ σ. [71] However, because σ ∈ cl(P), σ̂ is the prefix of some word in P, contradicting the fact that ext(σ̂) does not intersect P.

[71] Indeed, we already know open sets have the form ext(W) for W ⊆ A^∗. Thus, U = ext(W) = ⋃_{i∈I} ext(wᵢ), and if σ ∈ U we can choose one i with σ ∈ ext(wᵢ), which means wᵢ is the desired prefix.
Corollary 85. Let P ⊆ (2^AP)^ω, then

1. P is a safety property if and only if cl(P) = P;
2. P is a liveness property if and only if cl(P) = (2^AP)^ω, if and only if pref(P) = (2^AP)^∗.
Before ending this section, let us spend a bit more time expanding on the properties of Galois connections.

Proposition 87. Let A and B be complete lattices; if g : A → B preserves all supremums, then there exists f : B → A such that g ⊣ f is a Galois connection. [73]

[73] By duality (considering the opposite orders), if f : B → A preserves all infimums, then there is a Galois connection g ⊣ f.

Proof. We define f := b ↦ ⋁{a ∈ A | g(a) ⪯ b}, and we have to show that g(a) ⪯ b ⇔ a ≤ f(b). The ⇒ direction is trivial because a clearly belongs to the set whose supremum f(b) is, so a ≤ f(b).

For ⇐, note that g preserving supremums implies that g is monotone. [74] Thus, if a ≤ f(b), we have

    g(a) ⪯ g(f(b)) = ⋁{g(a′) | a′ ∈ A, g(a′) ⪯ b} ⪯ b.

[74] Assume a ≤ a′, then a ∨ a′ = a′, so g(a) ∨ g(a′) = g(a′) because g preserves supremums. Therefore, g(a) ⪯ g(a′).
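On finite powerset lattices the formula in this proof is directly computable. A small sketch (entirely ours): g is the direct image of a function h, which preserves unions, and the induced right adjoint f turns out to be exactly the inverse image h⁻¹ discussed in the next section.

```python
from itertools import chain, combinations

# Right adjoint f(b) = sup { a : g(a) <= b } of a union-preserving g
# between finite powerset lattices (the formula of Proposition 87).
def right_adjoint(g, universe):
    def subsets(s):
        s = list(s)
        return map(frozenset,
                   chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))
    return lambda b: frozenset().union(*(a for a in subsets(universe) if g(a) <= b))

h = lambda x: x % 3                       # a plain function on points
g = lambda A: frozenset(h(x) for x in A)  # its direct image preserves unions
f = right_adjoint(g, range(6))
print(f(frozenset({0, 1})))               # {0, 1, 3, 4}, i.e. the inverse image
```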
Observable Properties
Continuous Functions
Given a function f : Y → X, the inverse image function is

    f⁻¹ : 2^X → 2^Y := A ↦ {y ∈ Y | f(y) ∈ A}.

Lemma 88. For any f : Y → X, f⁻¹ preserves arbitrary unions and intersections; it also preserves complements.
Proof. Unwinding the definitions, y ∈ f⁻¹(⋃ᵢ Aᵢ) ⇔ f(y) ∈ ⋃ᵢ Aᵢ ⇔ ∃i, f(y) ∈ Aᵢ ⇔ y ∈ ⋃ᵢ f⁻¹(Aᵢ), and similarly for intersections and complements.

In this topology, continuity of a function f : A^ω → A^ω amounts to the following condition:

    ∀α ∈ A^ω, ∀n ∈ N, ∃k ∈ N, ∀β ∈ A^ω,
    α(0)⋯α(k) = β(0)⋯β(k) ⟹ f(α)(0)⋯f(α)(n) = f(β)(0)⋯f(β)(n).
Example 94. Let A = N and consider P = ⋃_{n>0} ext(n); it is open because each term in the union is open, and it is closed because it is the complement of ext(0). However, there is no finite set of prefixes W such that P = ext(W).
To finish this section, we will show that the only obstruction to obtaining the converse is the fact that A is not finite. In order to do this, we will need to linger a bit longer in the realm of topology.
Compactness
Definition 95. Let (X, ΩX) be a topological space and A ⊆ X. We say that a family {Uᵢ}ᵢ∈I is an open cover of A if all the Uᵢ's are open and they cover A, i.e. A ⊆ ⋃_{i∈I} Uᵢ. If J ⊆ I is such that A ⊆ ⋃_{j∈J} Uⱼ, we say that {Uⱼ}ⱼ∈J is a subcover. It is a finite subcover if J is finite. The subset A is compact if every open cover of A has a finite subcover.
Theorem 96. If A is finite, then A^ω is compact.

Proof. Let {Uᵢ}ᵢ∈I be an open cover of A^ω, say with Uᵢ = ext(Vᵢ), and let V = ⋃_{i∈I} Vᵢ, so that A^ω = ext(V). Let W be the set of words of V having no proper prefix in V; every word of V either belongs to W, or it was not added because one of its prefixes was already added. We conclude that A^ω = ext(W) because ext(W) ⊇ ext(V). Moreover, we claim that W is finite, and this finishes the proof. [81]

[81] Because we can pick one of the Vᵢ's containing w for each word w ∈ W, and this forms a finite subcover of A^ω.

Assume towards a contradiction that W is infinite; then T = {x ∈ A^∗ | ∃w ∈ W, x ⊆ w} is infinite, as it contains W. Moreover, T can be seen as a subtree of the tree on A^∗, where u is a parent of v if and only if v = u · a for some a ∈ A. Since that tree is finitely branching, T is too, and by König's lemma, T contains an infinite path. That is, there exists σ ∈ A^ω = ext(W) such that, for any n ∈ N, σ(0)⋯σ(n) is a prefix of some wₙ ∈ W. However, this contradicts the fact that W is prefix-free: since σ ∈ ext(W), some w ∈ W is a prefix of σ, and taking n = |w| makes w a proper prefix of wₙ ∈ W.
Lemma 99. A closed subset C of a compact space X is compact.

Proof. Let {Uᵢ}ᵢ∈I be an open cover of C; then adding C^c to this family yields an open cover of X. Thus it has a finite subcover which, after removing C^c, yields a finite subcover of {Uᵢ}ᵢ∈I.
Hausdorff Spaces
Definition 100. A topological space (X, ΩX) is Hausdorff (or T₂, or separated) if for any x ≠ y ∈ X, there exist U, V ∈ ΩX such that x ∈ U, y ∈ V and U ∩ V = ∅.

Example 101. The space A^ω with the usual topology is always Hausdorff: if α ≠ β ∈ A^ω, then there is a finite index i where they disagree. Therefore, with x = α(0)⋯α(i) and y = β(0)⋯β(i), ext(x) and ext(y) are the desired separating sets.
Proposition 102. In a Hausdorff space, any compact subset C is closed.

Proof. Let x ∉ C; for any y ∈ C, x and y are separated as in the Hausdorff definition by sets U_y and V_y, where y ∈ V_y. Note that {V_y}_{y∈C} is an open cover of C and, by compactness, there is a finite set I such that {V_{yᵢ}}ᵢ∈I still covers C. But now ⋂_{i∈I} U_{yᵢ} is a finite intersection of opens that contains x and cannot contain any point of C. [82] Thus, it is an open set disjoint from C that contains x. The proposition follows by Lemma 51.

[82] For each i ∈ I, U_{yᵢ} does not intersect V_{yᵢ}; hence the intersection of all the U_{yᵢ} cannot intersect any V_{yᵢ}. The claim follows since the latter cover C.
Corollary 103. For a finite alphabet A, the closed sets of A^ω are exactly the compact sets; thus the clopen sets are exactly the open and compact sets. [83]

[83] By Lemma 99 and Proposition 102.

Corollary 104. For a finite alphabet A, the clopen subsets of A^ω are exactly the sets of the form ext(W) with W ⊆ A^∗ finite.
Linear Temporal Logic (LTL)

Linear Modal Logic (LML)

Definition 105. The formulas of LML (over AP) are given by the following grammar: [84]

    φ, ψ ::= ⊤ | ⊥ | X ∈ X | a ∈ AP | φ ∧ ψ | φ ∨ ψ | ¬φ | ◯φ.

[84] All the usual connectives have the same semantics as before, and the new connective ◯ is the linear part of this logic. We read ◯φ as "next φ"; its semantics will become clearer when we define its interpretation.

Definition 106. 1. A valuation of a subset V ⊆ X is a function ρ : V → P((2^AP)^ω).

2. A formula with parameters is a pair (φ, ρ), where ρ is a valuation on V such that V contains all free variables of φ.
The interpretation of a formula φ with parameters ρ is an LTP JφKρ ∈ P((2^AP)^ω), defined by induction: J⊤Kρ = (2^AP)^ω, J⊥Kρ = ∅, JXKρ = ρ(X), JaKρ = {σ | a ∈ σ(0)}, Jφ ∧ ψKρ = JφKρ ∩ JψKρ, Jφ ∨ ψKρ = JφKρ ∪ JψKρ, J¬φKρ = (JφKρ)^c, and J◯φKρ = {σ | σ¹ ∈ JφKρ}, where σ^i denotes the suffix σ(i)σ(i+1)⋯. We write σ ⊨ φ when σ ∈ JφKρ.

We also define some syntactic sugar for implications and equivalences, namely φ → ψ := ¬φ ∨ ψ and φ ↔ ψ := (φ → ψ) ∧ (ψ → φ).

Let us give two lemmas that are proven with a simple structural induction and that will help us make other proofs clearer.

Lemma 108. Let ρ, ρ′ : V → P((2^AP)^ω) be such that ρ(X) = ρ′(X) for any free variable X of φ; then JφKρ = JφKρ′.
Definition 111. Given φ and ψ with all their variables in V, we say that φ and ψ are
logically equivalent, denoted φ ≡ ψ, if for any valuation ρ on V, JφKρ = JψKρ .
Example 112. We have the following equivalences for any formulas φ and ψ. [89]

    ◯(φ ∧ ψ) ≡ ◯φ ∧ ◯ψ    ◯⊤ ≡ ⊤
    ◯(φ ∨ ψ) ≡ ◯φ ∨ ◯ψ    ◯⊥ ≡ ⊥
    ◯¬φ ≡ ¬◯φ

[89] These equivalences can all be easily proven by looking at the definition of the interpretation, or with the intuition we have given above.

Remark 113. These formulas are only the important equivalences involving the connective ◯, but all the other equivalences proved in classical logic, such as De Morgan's laws, distributivity, etc., can also be shown.

Let us look at how LML relates to observable properties.
Let us look at how LML relates to observable properties.
Proof. We proceed by structural induction on φ. The > and ⊥ case are trivial and
since clopen sets are closed under binary union and intersection and under com-
plements, the connectives ∧, ∨ and ¬ are also taken care of.
Case a: We have JaK = {ext( A) | A ∈ 2AP , a ∈ A} If AP is finite, then we are
S
done because this is a finite union of clopen set. Otherwise, we need to show that
this union is closed. Let σ be such that a ∈
/ σ(0), ext(σ (0)) is an open set containing
σ that does not intersect JaK.90 90
The case then follows from Lemma 51.
Case ◯φ: By induction hypothesis, JφK is clopen, thus it is equal to ext(W) for a finite W ⊆ (2^AP)^∗. [91] Moreover, unrolling the definition of J·K, we find

    J◯φK = ⋃_{A∈2^AP} ⋃_{w∈W} ext(A · w).

Hence, if AP is finite, this is a finite union of clopen sets and we are done. Otherwise, we need to show this union is closed. Let σ ∉ J◯φK; we have σ¹ ∉ JφK. Since JφK is closed, there exists u ∈ (2^AP)^∗ such that σ¹ ∈ ext(u) and ext(u) ∩ JφK = ∅; [92] then ext(σ(0) · u) contains σ and does not intersect J◯φK.

[91] By Corollary 104.
[92] We can assume that the open set separating σ¹ from JφK is the extension of a single word u because, if it is ext(W) for some W ⊆ (2^AP)^∗, then we can pick one u ∈ W such that σ¹ ∈ ext(u).

Proposition 115. If AP is finite, then for any observable property (clopen) P, there is a closed LML formula φ such that JφK = P.
Proof. When AP is finite, we have seen that there is a finite U ⊆ (2^AP)^∗ such that P = ext(U). We will show that for any u ∈ (2^AP)^∗, ext(u) is definable by a formula of LML. [93] The result then follows from the fact that P is a finite union of such sets, and we can define finite unions with disjunctions of formulas.

[93] An LTP P is defined by φ if JφK = P.

First, for any A ∈ 2^AP, we define

    φ_A := (⋀_{a∈A} a) ∧ (⋀_{a∉A} ¬a).
Then, ext(ε) = J⊤K, so φ₀ := ⊤. Next, suppose we have Jφₖ K = ext(Aₖ ⋯ A₁); then we define φₖ₊₁ := ◯φₖ ∧ φ_{Aₖ₊₁}, since we can easily verify that [94]

    ext(Aₖ₊₁ · Aₖ ⋯ A₁) = J◯φₖ ∧ φ_{Aₖ₊₁}K.

[94] Indeed, an ω-word σ that satisfies this formula must have σ(0) = Aₖ₊₁, as argued above, and at the next step it must satisfy φₖ; namely, by induction hypothesis, σ¹ ∈ ext(Aₖ ⋯ A₁).
In the following example, we show that the proposition does not always hold when AP is infinite.

Example 116. Let AP = N and A = 2N ∈ 2^AP (the even numbers). We know that ext(A) is observable, but there is no formula φ with JφK = ext(A). Indeed, assume towards a contradiction that such a φ exists. Without loss of generality, φ has no ◯ connective, [95] hence we can write φ in DNF as

    φ = ⋁_{i∈I} ⋀_{j∈Jᵢ} λᵢⱼ,

where the λᵢⱼ's are atoms n ∈ N or negations ¬n. If σ ⊨ φ, then σ must satisfy one of the terms of the disjunction; wlog it is the first, so σ ⊨ ⋀_{j∈J₁} λ₁ⱼ. However, if we let n be the greatest odd number appearing in the λ₁ⱼ's, the ω-word (σ(0) ∪ {n + 2}) · σ¹ still satisfies the same term, and so φ as well. This contradicts the fact that JφK = ext(A), as n + 2 ∈ σ(0) ∪ {n + 2} is odd.

[95] Indeed, ext(A) only restricts the first step of the LTPs it contains, so looking at the next steps with ◯ is useless. More formally, σ ∈ ext(A) if and only if σ(0) · τ ∈ ext(A) for all τ ∈ (2^AP)^ω, and if φ contains a non-trivial ◯ψ (with ψ ≢ ⊤), then φ will not accept all extensions of σ(0). Note also that φ is a finite formula, so we cannot expect it to describe something infinitary such as having all even numbers in the first step.

LML with Fixed Points
LML’s expressiveness is poor as it only describes nice safety properties (the observ-
able ones). In particular, the only liveness property it describes is trivial.96 In order 96
Recall Proposition 46.
to remedy that, we will add two modalities: “eventually“ (♦φ) and “always“ (φ).
The interpretation of these new connectives is97 97
An ω-word σ satisfies ♦φ if it satisies φ at
some step.
J♦φKρ := {σ ∈ (2AP )ω | ∃i ∈ N, σi ∈ JφKρ } It satisfies φ if it satisfies φ at all steps.
and
JφKρ := {σ ∈ (2AP )ω | ∀i ∈ N, σi ∈ JφKρ }.
Example 117. We give a few simple formulas along with the intuition behind their interpretation. Fix a ∈ AP.

• The property defined by ♦a contains all ω-words for which a is true at some point in the word.
• The property defined by □a contains all ω-words for which a is true all the time.
• The property defined by □♦a contains all ω-words for which, at any step, a will be true at some later point; equivalently, a is true infinitely many times.
• The property defined by ♦□a contains all ω-words for which, at some step, a starts to be true forever.

One can show that J♦aK is an open liveness property, J□aK is a safety property, and both J□♦aK and J♦□aK are liveness properties that are neither open nor closed.
We also have the following equivalences:

    ♦φ ≡ ¬□¬φ    □φ ≡ ¬♦¬φ
    ♦φ ≡ φ ∨ ◯♦φ    □φ ≡ φ ∧ ◯□φ
Lemma 119. Let φ♦(X) := φ ∨ ◯X and φ□(X) := φ ∧ ◯X, where X is not free in φ. Then J♦φKρ is the least fixed point of P ↦ Jφ♦Kρ(P), and J□φKρ is the greatest fixed point of P ↦ Jφ□Kρ(P).

Proof. The second pair of equivalences above says precisely that J♦φKρ and J□φKρ are fixed points of the respective functions; it remains to show that they are the least and greatest ones.

For ♦, let P be another fixed point and σ ∈ (2^AP)^ω; we claim that if there exists i ∈ N such that σ^i ∈ P, then σ ∈ P. Indeed, because Jφ♦Kρ(P) ⊆ P, we infer that σ¹ ∈ P implies σ ∈ P. Our claim follows. [101] Moreover, for any σ ∈ J♦φKρ, there exists i ∈ N such that σ^i ∈ JφKρ ⊆ Jφ♦Kρ(P) ⊆ P, so σ ∈ P by the claim. Hence, we conclude that J♦φKρ ⊆ P.

[101] We have the following implications: σ^i ∈ P ⟹ σ^{i−1} ∈ P ⟹ ⋯ ⟹ σ⁰ = σ ∈ P.

For □, let P be another fixed point and σ ∈ P; we will show that σ^i ∈ JφKρ for any i ∈ N. [102] We proceed by induction on i. Since

    P = Jφ□Kρ(P) = JφKρ ∩ J◯XKρ[P/X],

any σ ∈ P satisfies σ ∈ JφKρ and σ¹ ∈ P. By induction, σ^i ∈ P for every i, hence σ^i ∈ JφKρ for every i, that is, σ ∈ J□φKρ.

[102] It then follows that P ⊆ J□φKρ.
This result shows how the modal connectives ♦ and □ can be described by fixed points of functions defined using only connectives of LML. However, we have skimped on the small detail that the greatest or least fixed points of a function between posets might not always exist. [103] We have to introduce a bit of terminology and two results in order to show that the fixed points in Lemma 119 actually exist.

[103] Consider the function f : (N, ≤) → (N, ≤) defined by f(n) = n if n ≡ 1 (mod 2) and f(n) = 0 if n ≡ 0 (mod 2). Every odd number is a fixed point, so f clearly has no greatest fixed point.

Definition 120. Let f : (L, ≤) → (L, ≤); a pre-fixpoint of f is an element a ∈ L such that f(a) ≤ a. A post-fixpoint is an element a ∈ L such that a ≤ f(a). A fixpoint (or fixed point) of f is a pre- and post-fixpoint.

Theorem 121. Let (L, ≤) be a complete lattice and f : L → L be monotone, then:

1. The least fixpoint of f is μf := ⋀{a ∈ L | f(a) ≤ a}.
2. The greatest fixpoint of f is νf := ⋁{a ∈ L | a ≤ f(a)}.

With this result, it is left to show that the functions Jφ♦Kρ(X) and Jφ□Kρ(X) are monotone (or antimonotone, as the greatest fixed point is the least fixed point in the dual poset).
Definition 122. Let φ be a formula of LML and X ∈ X; we define the relations X Pos φ and X Neg φ inductively as follows: [105]

• X Pos X; X Pos Y for Y ≠ X; X Pos a for a ∈ AP; X Pos ⊤; X Pos ⊥;
• if X Pos φ, then X Pos ◯φ; if X Pos φ and X Pos ψ, then X Pos φ ∨ ψ and X Pos φ ∧ ψ; if X Neg φ, then X Pos ¬φ;
• X Neg Y for Y ≠ X; X Neg a for a ∈ AP; X Neg ⊤; X Neg ⊥;
• if X Neg φ, then X Neg ◯φ; if X Neg φ and X Neg ψ, then X Neg φ ∨ ψ and X Neg φ ∧ ψ; if X Pos φ, then X Neg ¬φ.

[105] The relations are read as "X positive in φ" and "X negative in φ" respectively. They intuitively correspond to the fact that X always appears under an even (resp. odd) number of negations in φ. If X does not appear in φ, then both X Pos φ and X Neg φ.
Example 123. Both the formulas φ♦(X) and φ□(X) we defined in Lemma 119 have X positive in them.
Proposition 124. If X Pos φ, then P ↦ JφKρ[P/X] is monotone; if X Neg φ, then it is antimonotone.

Proof. By a straightforward structural induction on φ.
Lemma 125. Let φ be a formula with X Pos φ, and let ψ := ¬φ[¬X/X]; then

    (2^AP)^ω \ μJψKρ(X) = νJφKρ(X).

Proof. First, by definition, we have JψKρ(A) = (JφKρ(A^c))^c. Moreover, we can write the following derivation: [107]

    (2^AP)^ω \ μJψKρ(X) = (2^AP)^ω \ ⋂{A ⊆ (2^AP)^ω | JψKρ(A) ⊆ A}
                        = ⋃{A^c | A ⊆ (2^AP)^ω, JψKρ(A) ⊆ A}
                        = ⋃{A ⊆ (2^AP)^ω | JψKρ(A^c) ⊆ A^c}
                        = ⋃{A ⊆ (2^AP)^ω | (JφKρ(A))^c ⊆ A^c}
                        = ⋃{A ⊆ (2^AP)^ω | A ⊆ JφKρ(A)}
                        = νJφKρ(X).

[107] The first equality is the formula for least fixpoints in Theorem 121. The second equality is an application of one of De Morgan's laws. The third equality uses the first sentence of the proof, the fourth is just a property of inclusions and complements, and the last is the formula, now for greatest fixpoints.
where ψ and the φi ’s have no occurrence of X. If we assume that ni,j = 1 for all
i ∈ I, j ∈ Ji , then θ has a simpler form, that is,
_
θ ≡ ψ∨ (φi ∧ X ) ≡ ψ ∨ (φ ∧ X ),
i∈ I
W
where ψ and φ = i∈ I φi do not contain X. The idea behind LTL is to add fixed
points as we did to obtain and ♦ but only for formulas θ of this form.
Syntax and Semantics of LTL

The formulas of LTL are given by the grammar:

    φ, ψ ::= ⊤ | ⊥ | a ∈ AP | φ ∧ ψ | φ ∨ ψ | ¬φ | ◯φ | φUψ.

The interpretation of LTL formulas is exactly the same as for LML formulas, extended with the modal connective φUψ (read "φ until ψ"): [108]

    JφUψKρ = {σ ∈ (2^AP)^ω | ∃i ∈ N, σ^i ∈ JψKρ and ∀j < i, σ^j ∈ JφKρ}.

[108] Intuitively, there is a time i such that σ satisfies ψ at step i, and it satisfies φ at every step before i.

We also extend the ⊨ notation to LTL formulas and observe that σ ⊨ φUψ if and only if there exists i ∈ N such that σ^i ⊨ ψ and, for any j < i, σ^j ⊨ φ.
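The semantics of U is directly checkable on ultimately periodic words u·v^ω, because such a word only has |u| + |v| distinct suffixes. A sketch of an evaluator (the tuple-based formula syntax is ours, purely for illustration):

```python
# Evaluate an LTL formula on the ultimately periodic word u . v^omega, where
# u and v are lists of letters (sets of atomic propositions). Formulas:
#   ("true",) | ("ap", a) | ("not", f) | ("and", f, g) | ("or", f, g)
#   | ("X", f) | ("U", f, g)
def holds(phi, u, v, i=0):
    letter = lambda k: u[k] if k < len(u) else v[(k - len(u)) % len(v)]
    op = phi[0]
    if op == "true": return True
    if op == "ap":   return phi[1] in letter(i)
    if op == "not":  return not holds(phi[1], u, v, i)
    if op == "and":  return holds(phi[1], u, v, i) and holds(phi[2], u, v, i)
    if op == "or":   return holds(phi[1], u, v, i) or holds(phi[2], u, v, i)
    if op == "X":    return holds(phi[1], u, v, i + 1)
    if op == "U":    # all distinct suffixes occur within |u| + |v| steps
        for k in range(i, i + len(u) + len(v)):
            if holds(phi[2], u, v, k):
                return True
            if not holds(phi[1], u, v, k):
                return False
        return False

# "eventually a" is true U a; here sigma = {} . ({a} {})^omega satisfies it:
eventually_a = ("U", ("true",), ("ap", "a"))
print(holds(eventually_a, [set()], [{"a"}, set()]))  # True
```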
Fixed Points and Defined Modalities

Lemma 126. The property JφUψKρ is the least fixed point of JθKρ(X), where θ = ψ ∨ (φ ∧ ◯X).

Computing ¬θ[¬X/X] for such a θ, we find

    ¬θ[¬X/X] ≡ ¬(ψ ∨ (φ ∧ ◯¬X))
             ≡ ¬ψ ∧ ¬(φ ∧ ◯¬X)
             ≡ ¬ψ ∧ (¬φ ∨ ◯X)
             ≡ (¬ψ ∧ ¬φ) ∨ (¬ψ ∧ ◯X),

which has the same shape, so that, by Lemma 126,

    μJ¬θ[¬X/X]Kρ(X) = J¬ψ U ¬(ψ ∨ φ)Kρ.

From now on, we will use the notation φWψ (read "φ weak until ψ") as a shorthand:

    φWψ := ¬(¬ψ U ¬(φ ∨ ψ)).

Lemma 127. The property JφWψKρ is the greatest fixed point of JθKρ(X), where θ = ψ ∨ (φ ∧ ◯X).
Recall now that the formulas that lead to □ and ♦ as fixpoints fit in our simple case, and it is easy to see (intuitively and with the fixpoint definitions) that

    ♦φ ≡ ⊤Uφ and □φ ≡ φW⊥.

Therefore, by defining ♦ and □ with these equivalences within LTL, we recover their original semantics.
Lemma 128. For any LTL formulas φ and ψ, [111]

    φWψ ≡ (φUψ) ∨ □φ
    ¬(φWψ) ≡ ¬ψ U ¬(φ ∨ ψ)
    ¬(φUψ) ≡ ¬ψ W ¬(φ ∨ ψ)

[111] The proofs are essentially easy consequences of the fixed point characterizations above.

In order to motivate the restriction of fixed points to formulas where ◯ only appears once, at the leaves, we state the following fact without proof.

Proposition 129. There is no closed LTL formula φ such that JφK is the greatest fixed point of θ(X) = a ∧ ◯◯X. (That greatest fixed point would be {σ ∈ (2^AP)^ω | ∀i ∈ N, a ∈ σ(2i)}.)
Remark 131. Using the last two equivalences, we obtain a systematic way to push all the negations to the leaves. Unfortunately, this process leads to an exponential blow-up in the size of the formula. For this reason, there are variations of LTL that add a modal connective (called release) to make these De Morgan-style laws incur no blow-up.
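The negation-pushing procedure of Remark 131 can be sketched as follows, extending the toy syntax above with ("W", f, g); note how each negated U or W duplicates a subformula, which is exactly the source of the exponential blow-up.

```python
# Push negations to the leaves using the dualities of Lemma 128.
# nnf(phi, neg) computes a negation normal form of phi (of "not phi" if neg).
def nnf(phi, neg=False):
    op = phi[0]
    if op in ("true", "ap"):
        return ("not", phi) if neg else phi
    if op == "not":
        return nnf(phi[1], not neg)
    if op in ("and", "or"):
        dual = {"and": "or", "or": "and"}[op]
        return (dual if neg else op, nnf(phi[1], neg), nnf(phi[2], neg))
    if op == "X":  # next is self-dual: not X f == X not f
        return ("X", nnf(phi[1], neg))
    if op in ("U", "W"):
        f, g = phi[1], phi[2]
        if not neg:
            return (op, nnf(f), nnf(g))
        dual = {"U": "W", "W": "U"}[op]
        # not (f U g) == (not g) W (not (f or g)), and dually for W;
        # the subformulas f and g get duplicated here.
        return (dual, nnf(g, True), nnf(("or", f, g), True))
```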
Fixed Points and Continuity

Under additional continuity assumptions, made precise below, the fixed points of Theorem 121 can be computed by iterating f:

    μf = ⋁_{n∈N} f^n(⊥) and νf = ⋀_{n∈N} f^n(⊤).
Lemma 136. Let (D, ⊑) be a directed poset, (L, ≤) be a frame and f, g : D → L be monotone, then

    ⋁_{d∈D} (f(d) ∨ g(d)) = (⋁_{d∈D} f(d)) ∨ (⋁_{d∈D} g(d)),

and

    ⋁_{d∈D} (f(d) ∧ g(d)) = (⋁_{d∈D} f(d)) ∧ (⋁_{d∈D} g(d)).
Proof.

Say that a monotone f : L → L is continuous if it preserves suprema of directed subsets, and cocontinuous if it preserves infima of codirected subsets. Then:

• If f is continuous, then μf = ⋁_{n∈N} f^n(⊥).
• If f is cocontinuous, then νf = ⋀_{n∈N} f^n(⊤).

Proof.
In particular, J♦φKρ = ⋃_{n∈N} J◯^n φKρ and, dually,

    J□φKρ = ⋂_{n∈N} J◯^n φKρ.
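On a finite powerset lattice every monotone function is continuous (all ascending chains stabilize), so the iteration formula becomes an algorithm. A minimal sketch:

```python
# Kleene iteration: mu f = sup_n f^n(bottom) for a continuous f; on a finite
# lattice the chain bottom <= f(bottom) <= ... stabilizes after finitely many steps.
def lfp(f, bottom=frozenset()):
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# Least fixed point of X |-> {0} union {n + 1 : n in X, n < 5} is {0, ..., 5};
# iterating from the top element (a gfp variant) would compute nu f instead.
print(lfp(lambda X: frozenset({0}) | {n + 1 for n in X if n < 5}))
```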
ω-Regular Properties

An ω-regular expression over an alphabet Σ has the form G = E₁·F₁^ω + ⋯ + Eₖ·Fₖ^ω, where the Eᵢ's and Fᵢ's are regular expressions (with ε ∉ L(Fᵢ)); its language is

    Lω(G) := {σ ∈ Σ^ω | ∃i ≤ k, ∃u ∈ L(Eᵢ), ∃(vⱼ)ⱼ∈N ⊆ L(Fᵢ), σ = u·v₀·v₁⋯vₙ⋯}.
• The language L_Π that satisfies σ ∈ L_Π ⇔ ∃^∞ t, σ(t) = a is given by Lω((b∗a)^ω).
Remark 141. The set of ω-regular languages on a fixed alphabet is clearly closed under finite unions, because Lω(G) ∪ Lω(G′) = Lω(G + G′). We will also see that they are closed under finite intersections and complementation, and that any closed LTL formula defines an ω-regular language.