
ENS Lyon - Winter 2020

Lecture Notes - SV
Ralph Sarkis
March 31, 2020

These are lecture notes taken during the Semantics and Verification classes taught by
Colin Riba in winter 2020.

Contents

Transition Systems
Linear Time Properties
    Invariants and Safety Properties
    Liveness Properties
Topological Spaces
    Preliminaries
    ω-words
Posets and Complete Lattices
    Preliminaries
    Prefixes and Closure
Observable Properties
    Continuous Functions
    Compactness
    Hausdorff Spaces
Linear Temporal Logic (LTL)
    Linear Modal Logic (LML)
    LML with Fixed Points
    Syntax and Semantics of LTL
    Fixed Points and Defined Modalities
    Fixed Points and Continuity
Question 1. What is Semantics and Verification?

While it is easy enough to describe how a simple program is executed, it is virtually impossible to infer the precise behavior of a machine running a large piece of code. Even more so if the latter contains randomness, parallelism or other complex features now available in most devices. The field of semantics is concerned with circumventing the useless details of the machine implementation by giving formal mathematical meaning to programs. This lets us reason rigorously about their execution or any interesting properties that they have.

Verification is a terminology for methods that, given a program in an abstract language and a property usually in another language, automatically verify whether the program satisfies the property. A common object used to describe programs is a transition system¹, and in this class we will be interested in the so-called linear time properties² that they have.

¹ Somewhat similar to a labeled graph. The nodes represent states of the program and each state has some properties associated to it.
² Roughly, they are properties on the infinite sequences of transitions that can occur in the system.

Transition Systems
Definition 2. A transition system T is a tuple (S, A, →, I, AP, L), where:

1. S is the set of states of the system (represented as nodes of the graph),

2. A is the set of actions (represented as labels for the edges of the graph),

3. → ⊆ S × A × S is the transition relation; we denote s →^a s′ when (s, a, s′) ∈ → (it translates to "When in state s, we can execute action a and end up in state s′"),

4. I ⊆ S is the set of initial states (represented with an arrow pointing to them),

5. AP is the set of atomic propositions, and

6. L : S → 2^AP is the state labeling, telling which atomic propositions are true in each state of S (represented next to the states with a different color).

Remark 3. While this definition might look similar to that of a DFA, there are a couple of important distinctions we can already make. First, this definition is highly non-deterministic, as → is a relation and there might be several states related by the same transitions. Second, in general, a transition system is not necessarily finite.

We give an illustrative example that is not so relevant, but it shows how we usually represent these types of machines.

Example 4 (Vending machine). We will model a vending machine that waits for a user to pay by inserting a coin and then non-deterministically selects between giving beer or soda and waiting for another user to pay.³

³ We will abbreviate the actions insert_coin, get_beer and get_soda respectively as ic, gb and gs.

[Diagram: states pay (labeled ∅), sel ({paid}), beer and soda (both {paid, available}); transitions pay →^insert_coin sel, sel →^τ beer, sel →^τ soda, beer →^get_beer pay, soda →^get_soda pay.]

The τ transitions are conventionally assumed to be non-deterministic from the point of view of the observer. The transition system depicted here has states S = {pay, sel, beer, soda}, A = {ic, gs, gb}⁴, I = {pay}, AP = {paid, available}, and L assigns ∅ to pay, {paid} to sel and {paid, available} to soda and beer.

⁴ It is common practice to assume that τ is an action.

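To make the formalism concrete, here is a minimal Python sketch (our own, not part of the notes) of a transition system as a data structure, instantiated with the vending machine of Example 4; all encodings and names are illustrative.

```python
# A transition system (Definition 2) as a plain data structure.
from typing import NamedTuple

class TS(NamedTuple):
    S: frozenset      # states
    A: frozenset      # actions
    trans: frozenset  # transition relation: triples (s, a, s')
    I: frozenset      # initial states
    AP: frozenset     # atomic propositions
    L: dict           # state labeling: state -> set of atomic propositions

# The vending machine of Example 4.
vending = TS(
    S=frozenset({"pay", "sel", "beer", "soda"}),
    A=frozenset({"ic", "gb", "gs", "tau"}),
    trans=frozenset({
        ("pay", "ic", "sel"),
        ("sel", "tau", "beer"), ("sel", "tau", "soda"),
        ("beer", "gb", "pay"), ("soda", "gs", "pay"),
    }),
    I=frozenset({"pay"}),
    AP=frozenset({"paid", "available"}),
    L={"pay": set(), "sel": {"paid"},
       "beer": {"paid", "available"}, "soda": {"paid", "available"}},
)

def successors(ts: TS, s):
    """All (action, target) pairs one transition away from s."""
    return {(a, t) for (src, a, t) in ts.trans if src == s}
```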
In the context of this course, and especially for this section, it is useful to have a
way to generate transition systems. Program graphs, although they are designed to
represent the evaluation of a program, can do exactly this.

Definition 5 (Program graph). Given a (finite) set Vars of variables together with, for each variable x ∈ Vars, a domain Dom(x)⁵, an evaluation is an element of Eval(Vars) = ∏_{x∈Vars} Dom(x); that is, η ∈ Eval(Vars) assigns a value η(x) ∈ Dom(x) to each variable x ∈ Vars.⁶ We write Eval when the set of variables is clear from the context.

⁵ Examples of such domains are lists, machine integers, Z, R. Note that they can be infinite and even contain stuff that cannot be represented by a computer. This is because it is sometimes useful to abstract away these restrictions.
⁶ In other words, an evaluation can be viewed as the state of the memory at a specific point in the program.

A condition is a propositional formula with atoms of the form x ∈ D, where x ∈ Vars and D ⊆ Dom(x), or ⊤ and ⊥ to represent true and false values respectively. The set of such conditions, denoted Cond(Vars) (or simply Cond), is of course closed under conjunctions, disjunctions and negations. Given a condition g ∈ Cond and an evaluation η ∈ Eval, we write η ⊨ g if g is true under the evaluation η.

A program graph over Vars has the form PG = (Loc, A, Effect, ↪, Loc₀, g₀), where:

1. Loc is the (usually finite) set of locations⁷,

2. A is the set of actions,

3. Effect : A × Eval → Eval abstracts the effect that actions have on memory,

4. ↪ ⊆ Loc × Cond × A × Loc is a transition relation guarded by a condition⁸,

5. Loc₀ ⊆ Loc is the set of initial locations and g₀ ∈ Cond is the initial condition.

⁷ They are an abstraction of the line numbers in the code, or of labels in assembly; another terminology is control points.
⁸ We will denote ℓ ↪^{g:a} ℓ′ when (ℓ, g, a, ℓ′) ∈ ↪. It roughly translates to "If we are in location ℓ and condition g holds, then we can execute action a, apply its effects and go to location ℓ′."

Example 6 (Vending machine (continued)). Let us extend Example 4 by giving the program graph for a vending machine with a similar behavior. The only difference is that there is now a set amount of beers and sodas in the machine, which can be refilled. When the user inserts a coin but there are no items left, the coin is returned. Fix the maximum number of items m ∈ N and let the amounts of beers and sodas be variables n_b, n_s ∈ Vars with domain Dom(n_b) = Dom(n_s) = {0, . . . , m − 1}. There are two locations Loc = {start, sel} with Loc₀ = {start}, and new actions to refill and to return the coin (A = {ic, gb, gs, refill, rc}). The initial condition is g₀ = (n_b = m − 1 ∧ n_s = m − 1).

[Diagram: start ↪^{⊤ : ic} sel; sel ↪^{n_b > 0 : gb} start; sel ↪^{n_s > 0 : gs} start; sel ↪^{n_s = 0 ∧ n_b = 0 : rc} start; start ↪^{⊤ : refill} start.]

The effects are not represented in the diagram, but a sensible Effect would satisfy, for any evaluation η ∈ Eval:

    Effect(ic, η) = Effect(rc, η) = η
    Effect(gb, η) = η[n_b := n_b − 1]
    Effect(gs, η) = η[n_s := n_s − 1]
    Effect(refill, η) = η[n_b := m − 1, n_s := m − 1].

The crucial difference between program graphs and transition systems is that the former separate the control from the data. In other words, a program graph abstracts only the behavior of the program while a transition system abstracts the behavior along with the memory of the program. This suggests that a transition system might be more appropriate for observing the evolution of a program graph along with the evaluation. The following definition makes this formal.

Definition 7 (TS of a PG). Let us have a program graph PG with the same notation as in Definition 5; the transition system of PG is TS(PG) = (Loc × Eval, A, →, I, AP, L)⁹, where:

⁹ Note that this can lead to a huge set of states because some variables can have huge domains.

1. → is defined by the rule

       ℓ ↪^{g:α} ℓ′    η ⊨ g
       ─────────────────────────────
       (ℓ, η) →^α (ℓ′, Effect(α, η))

2. I = {(ℓ, η) | ℓ ∈ Loc₀, η ⊨ g₀},¹⁰

3. AP = Loc + Cond(Vars),¹¹

4. L(ℓ, η) = {ℓ} ∪ {g | η ⊨ g}.

¹⁰ We require that the initial condition is satisfied with η in memory.
¹¹ In words, the atomic properties can say whether a state is in a certain location or whether it satisfies a condition of Cond. In practice, we use a smaller subset of AP that contains only the properties relevant to the particular application.

Example 8 (Vending machine (still)). Assuming that m = 2, we can draw the transition graph for the vending machine in Example 6 (we omit the state labeling as it is clear what properties hold at each state). We denote the evaluations as pairs of values (n_s, n_b).

[Diagram: eight states (ℓ, (n_s, n_b)) with ℓ ∈ {start, sel}. Each (start, v) has an ic edge to (sel, v) and a refill edge to (start, (1, 1)); (sel, (1, 1)) has a gb edge to (start, (1, 0)) and a gs edge to (start, (0, 1)); (sel, (1, 0)) has a gs edge to (start, (0, 0)); (sel, (0, 1)) has a gb edge to (start, (0, 0)); (sel, (0, 0)) has an rc edge to (start, (0, 0)).]

Notice that increasing m, even by only one, would make the transition graph way more complex.
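Here is a small Python sketch (ours, with illustrative names) of the unfolding of Definition 7, applied to Example 6 with m = 2; guards and effects are encoded as Python functions and evaluations as frozen sets of (variable, value) pairs.

```python
from collections import deque

def ts_of_pg(init_states, edges, max_states=10_000):
    """Breadth-first unfolding of a program graph into TS(PG) (Definition 7).

    init_states: set of (loc, eval) pairs, eval being a frozenset of items;
    edges: iterable of (loc, guard, action, effect, loc') with
           guard(dict) -> bool and effect(dict) -> dict."""
    states, trans = set(init_states), set()
    queue = deque(init_states)
    while queue and len(states) < max_states:
        loc, eta = queue.popleft()
        for (l, guard, a, effect, l2) in edges:
            if l == loc and guard(dict(eta)):
                target = (l2, frozenset(effect(dict(eta)).items()))
                trans.add(((loc, eta), a, target))
                if target not in states:
                    states.add(target)
                    queue.append(target)
    return states, trans

m = 2  # Example 6 with two item slots per drink
edges = [
    ("start", lambda e: True, "ic", lambda e: e, "sel"),
    ("start", lambda e: True, "refill",
     lambda e: {"nb": m - 1, "ns": m - 1}, "start"),
    ("sel", lambda e: e["nb"] > 0, "gb",
     lambda e: {**e, "nb": e["nb"] - 1}, "start"),
    ("sel", lambda e: e["ns"] > 0, "gs",
     lambda e: {**e, "ns": e["ns"] - 1}, "start"),
    ("sel", lambda e: e["nb"] == 0 and e["ns"] == 0, "rc",
     lambda e: e, "start"),
]
init = {("start", frozenset({("nb", m - 1), ("ns", m - 1)}))}
states, trans = ts_of_pg(init, edges)
print(len(states))  # 8, matching the eight states drawn in Example 8
```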
To end this section presenting the basics of transition systems, we describe three
ways of combining them.
The first one is similar to taking the product of two DFA¹², but we allow the case where one system can do an action that the other cannot.¹³

¹² Deterministic finite automata.
¹³ In DFA terminology, it amounts to taking the product of two automata on different alphabets. When the new machine sees a letter that only one of the original DFA recognizes, it makes a transition only according to that DFA. When it sees a letter that both DFA recognize, it non-deterministically chooses what transition to make. There is a slight caveat because actions are not consumed by a transition system, so after doing a common action on one of the systems, the new machine can still do that action on the other system.

Definition 9 (Interleaving of TSs). Given two transition systems Tᵢ = (Sᵢ, Aᵢ, →ᵢ, Iᵢ, APᵢ, Lᵢ) for i = 1, 2, their interleaving composition, denoted T₁ ||| T₂, is (S₁ × S₂, A₁ ∪ A₂, →, I₁ × I₂, AP₁ ∪ AP₂, L), where → is defined by the rules

    s₁ →₁^α s₁′                          s₂ →₂^α s₂′
    ──────────────────────    and    ──────────────────────
    (s₁, s₂) →^α (s₁′, s₂)            (s₁, s₂) →^α (s₁, s₂′)

and L(s₁, s₂) = L₁(s₁) ∪ L₂(s₂).

Example 10 (Traffic lights). Suppose we have two traffic lights that can switch from red to green and vice-versa non-deterministically; they are represented by the following graphs.

[Diagram: T₁ has states R₁ and G₁ with τ transitions back and forth; T₂ likewise has states R₂ and G₂.]

Taking their interleaving yields the following transition system.¹⁴

¹⁴ We left out the state labeling as it is a trivial construction.

[Diagram: states (R₁, R₂), (R₁, G₂), (G₁, R₂), (G₁, G₂), with τ transitions between any two states that differ in exactly one component.]
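A sketch of the interleaving construction of Definition 9 in Python, reusing the TS tuple encoding from the earlier sketch (again our own illustration, not the notes' notation):

```python
def interleave(t1: TS, t2: TS) -> TS:
    """T1 ||| T2: each step moves exactly one component (Definition 9)."""
    trans = set()
    for (s, a, s2) in t1.trans:       # T1 moves, T2 stays put
        trans |= {((s, q), a, (s2, q)) for q in t2.S}
    for (q, a, q2) in t2.trans:       # T2 moves, T1 stays put
        trans |= {((s, q), a, (s, q2)) for s in t1.S}
    return TS(
        S=frozenset((s, q) for s in t1.S for q in t2.S),
        A=t1.A | t2.A,
        trans=frozenset(trans),
        I=frozenset((s, q) for s in t1.I for q in t2.I),
        AP=t1.AP | t2.AP,
        L={(s, q): t1.L[s] | t2.L[q] for s in t1.S for q in t2.S},
    )
```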

Unfortunately, such a simple way of composing transition systems does not allow shared memory between the systems. For this reason, when we want to take this possibility into account, it is preferred to compose program graphs.

Definition 11 (Interleaving of PGs). Given two program graphs Gᵢ = (Locᵢ, Aᵢ, Effectᵢ, ↪ᵢ, Loc_{i,0}, g_{i,0}) over Varsᵢ for i = 1, 2, their interleaving is the program graph¹⁵ over Vars₁ ∪ Vars₂ denoted G₁ ||| G₂ = (Loc₁ × Loc₂, A₁ + A₂, Effect, ↪, Loc_{1,0} × Loc_{2,0}, g_{1,0} ∧ g_{2,0}), where ↪ is defined by the rules

    ℓ₁ ↪₁^{g:α} ℓ₁′                              ℓ₂ ↪₂^{g:α} ℓ₂′
    ────────────────────────────    and    ────────────────────────────
    (ℓ₁, ℓ₂) ↪^{g:α} (ℓ₁′, ℓ₂)              (ℓ₁, ℓ₂) ↪^{g:α} (ℓ₁, ℓ₂′)

and Effect(α, η) = Effectᵢ(α, η) for α ∈ Aᵢ.¹⁶

¹⁵ Observe that the variables are not necessarily disjoint, hence we must consider the actions of A₁ and A₂ as disjoint; otherwise there would be an ambiguity in the choice of which effect to apply.
¹⁶ The evaluation η is an element of Eval(Vars₁ ∪ Vars₂), so we implicitly adapt Effectᵢ in the obvious way (i.e. it does not modify variables outside Varsᵢ).

Example 12. Let us illustrate this construction on two simple program graphs manipulating the same variable x with Dom(x) = N; here are their representations with effects in blue.

[Diagram: G₁ is a single location ℓ₁ with a ⊤ : α₁ self-loop whose effect is x := 2 · x; G₂ is a single location ℓ₂ with a ⊤ : α₂ self-loop whose effect is x := x + 1. Figure 1 shows their interleaving G₁ ||| G₂: the single location (ℓ₁, ℓ₂) carrying both self-loops.]

Interleaving the program graphs is simple enough (see Figure 1), but what is more interesting is comparing the transition systems we obtain when we perform the operations TS(G₁) ||| TS(G₂) and TS(G₁ ||| G₂). Since the domain of x is infinite, both these transition systems have infinitely many states.

For the former, observe that interleaving TS(G₁) and TS(G₂) (represented in Figure 2, where the label of a node is the value of x at that state) leads to the dissociation of the variable x into two independent copies. It leads to a system (partially represented below) which is irrelevant for the purpose of analyzing the behavior of both programs when run concurrently.

[Figure 2: part of TS(G₁), states 0, 1, 2, 3, ... with α₁ going from n to 2n, and of TS(G₂), states 0, 1, 2, 3, 4, ... with α₂ going from n to n + 1.]

[Diagram: part of TS(G₁) ||| TS(G₂): states are pairs (n, m); α₁ sends (n, m) to (2n, m) and α₂ sends (n, m) to (n, m + 1).]

For the latter, we actually obtain an interesting system (depicted below) because interleaving the program graphs first ensures that α₁ and α₂ act on the same x.

[Diagram: TS(G₁ ||| G₂): states 0, 1, 2, 3, 4, ...; from each state n, α₁ goes to 2n and α₂ goes to n + 1.]

This example shows the relevance of program graphs when we care about concurrent data. While there are many more possibilities to compose transition systems, we introduce one last definition that illustrates how we can deal with concurrent control without using program graphs.

Definition 13 (Parallel Composition of TSs). Let Tᵢ = (Sᵢ, Aᵢ, →ᵢ, Iᵢ, APᵢ, Lᵢ) for i = 1, 2 be two transition systems and H ⊆ A₁ ∩ A₂;¹⁷ their parallel composition (or handshaking) is T₁ ||_H T₂ = (S₁ × S₂, A₁ ∪ A₂, →, I₁ × I₂, AP₁ ∪ AP₂, L), where → is defined by the rules

    s₁ →₁^α s₁′    α ∉ H                   s₂ →₂^α s₂′    α ∉ H
    ───────────────────────    and    ───────────────────────
    (s₁, s₂) →^α (s₁′, s₂)             (s₁, s₂) →^α (s₁, s₂′)

    s₁ →₁^α s₁′    s₂ →₂^α s₂′    α ∈ H
    ────────────────────────────────────
    (s₁, s₂) →^α (s₁′, s₂′)

and L(s₁, s₂) = L₁(s₁) ∪ L₂(s₂). One should view the actions in H as synchronized actions that both systems have to do at the same time.¹⁸

¹⁷ In general, we suppose τ ∉ H and we write T₁ || T₂ when H = (A₁ ∩ A₂) \ {τ}.
¹⁸ Note that this definition is a generalization of the interleaving composition, as T₁ ||_∅ T₂ = T₁ ||| T₂.

Example 14. Consider two transition systems T₁ and T₂ that have a non-critical state denoted ncᵢ and a critical state cᵢ and can jump from one to the other using the actions req and rel, as depicted below.

[Diagram: ncᵢ →^req cᵢ and cᵢ →^rel ncᵢ, for i = 1, 2.]

We leave it as an exercise to show that in the plain interleaving composition of T₁ and T₂, both processes can reach their critical states at the same time. However, if we add an arbiter system A that can unlock or lock the critical section, we can disallow this behavior: with A as in Figure 3, A ||_{req,rel} (T₁ ||| T₂) is as follows.

Figure 3: Arbiter system in Example 14. [A has two states ul (unlocked) and l (locked), with ul →^req l and l →^rel ul.]

[Diagram: reachable part of A ||_{req,rel} (T₁ ||| T₂): from the initial state (ul, nc₁, nc₂), a req step leads to (l, c₁, nc₂) or (l, nc₁, c₂), and a rel step leads back; no state of the form (·, c₁, c₂) appears.]

We have constructed a system where (c₁, c₂) cannot be reached. This kind of property is called a safety property and it is finitary because it does not mention an infinite execution of the program. In the next section, we will talk about linear time properties, which are infinitary. In the context of this example, a reasonable linear time property could require that in any execution, both c₁ and c₂ are visited infinitely many times, ensuring fairness of the arbiter.
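The handshaking construction and the reachability claim of Example 14 can likewise be sketched in Python (our encoding, reusing the TS tuple and interleave from the sketches above):

```python
def handshake(t1: TS, t2: TS, H) -> TS:
    """T1 ||_H T2: actions in H are taken jointly (Definition 13)."""
    trans = set()
    for (s, a, s2) in t1.trans:
        if a not in H:                 # T1 moves alone
            trans |= {((s, q), a, (s2, q)) for q in t2.S}
        else:                          # synchronized step with T2
            trans |= {((s, q), a, (s2, q2))
                      for (q, b, q2) in t2.trans if b == a}
    for (q, a, q2) in t2.trans:
        if a not in H:                 # T2 moves alone
            trans |= {((s, q), a, (s, q2)) for s in t1.S}
    return TS(frozenset((s, q) for s in t1.S for q in t2.S),
              t1.A | t2.A, frozenset(trans),
              frozenset((s, q) for s in t1.I for q in t2.I),
              t1.AP | t2.AP,
              {(s, q): t1.L[s] | t2.L[q] for s in t1.S for q in t2.S})

def reachable(ts: TS):
    """States reachable from the initial states."""
    seen, todo = set(ts.I), list(ts.I)
    while todo:
        s = todo.pop()
        for (src, a, t) in ts.trans:
            if src == s and t not in seen:
                seen.add(t)
                todo.append(t)
    return seen
```

Encoding T₁, T₂ and the arbiter as TS values, one can then check that no state returned by reachable(handshake(arbiter, interleave(t1, t2), {"req", "rel"})) has both components in their critical state.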

Before leaving this section, we show a simple result that illustrates how to deal
with transition systems in a more theoretical fashion.

Proposition 15 (Associativity of ||). Let Tᵢ = (Sᵢ, Aᵢ, →ᵢ, Iᵢ, APᵢ, Lᵢ) for i = 1, 2, 3 be transition systems; then²⁰

    T := (T₁ || T₂) || T₃ = T₁ || (T₂ || T₃) =: T′.

²⁰ Recall that || with no subscript is the parallel composition synchronizing all common actions (except τ).

Proof. It is easy to see that the states, actions, initial states, atomic propositions and state labelings of T and T′ are the same because they are constructed with × and ∪, which are associative operations. Let us denote by → and ⇒ the transition relations of T and T′ respectively. We have to show that for any s₁, s₁′ ∈ S₁, s₂, s₂′ ∈ S₂, s₃, s₃′ ∈ S₃ and α ∈ A₁ ∪ A₂ ∪ A₃,

    (s₁, s₂, s₃) →^α (s₁′, s₂′, s₃′) ⇔ (s₁, s₂, s₃) ⇒^α (s₁′, s₂′, s₃′).

We proceed by case analysis on the nature of α.

Case 1: The action α belongs to exactly one of the systems, say α ∈ A₁;²¹ then we have the following inferences:

²¹ The other cases are similar.

    s₁ →₁^α s₁′    α ∉ A₁ ∩ A₂
    ──────────────────────────────
    (s₁, s₂) →_{1||2}^α (s₁′, s₂)    α ∉ (A₁ ∪ A₂) ∩ A₃
    ───────────────────────────────────────────────────
    (s₁, s₂, s₃) →^α (s₁′, s₂, s₃)

    s₁ →₁^α s₁′    α ∉ A₁ ∩ (A₂ ∪ A₃)
    ──────────────────────────────────
    (s₁, s₂, s₃) ⇒^α (s₁′, s₂, s₃)

Case 2: The action α belongs to exactly two of the systems, say α ∈ A₁ ∩ A₃; then we have the following inferences:

    s₁ →₁^α s₁′    α ∉ A₁ ∩ A₂
    ──────────────────────────────
    (s₁, s₂) →_{1||2}^α (s₁′, s₂)    s₃ →₃^α s₃′    α ∈ (A₁ ∪ A₂) ∩ A₃
    ───────────────────────────────────────────────────────────────────
    (s₁, s₂, s₃) →^α (s₁′, s₂, s₃′)

    s₃ →₃^α s₃′    α ∉ A₂ ∩ A₃
    ──────────────────────────────
    s₁ →₁^α s₁′    (s₂, s₃) →_{2||3}^α (s₂, s₃′)    α ∈ A₁ ∩ (A₂ ∪ A₃)
    ───────────────────────────────────────────────────────────────────
    (s₁, s₂, s₃) ⇒^α (s₁′, s₂, s₃′)

Case 3: The action α belongs to all of the systems; then we have the following inferences:

    s₁ →₁^α s₁′    s₂ →₂^α s₂′    α ∈ A₁ ∩ A₂
    ──────────────────────────────────────────
    (s₁, s₂) →_{1||2}^α (s₁′, s₂′)    s₃ →₃^α s₃′    α ∈ (A₁ ∪ A₂) ∩ A₃
    ────────────────────────────────────────────────────────────────────
    (s₁, s₂, s₃) →^α (s₁′, s₂′, s₃′)

    s₂ →₂^α s₂′    s₃ →₃^α s₃′    α ∈ A₂ ∩ A₃
    ──────────────────────────────────────────
    s₁ →₁^α s₁′    (s₂, s₃) →_{2||3}^α (s₂′, s₃′)    α ∈ A₁ ∩ (A₂ ∪ A₃)
    ────────────────────────────────────────────────────────────────────
    (s₁, s₂, s₃) ⇒^α (s₁′, s₂′, s₃′)

Linear Time Properties

Definition 16 (Linear Time Property). A linear time property (LTP) over atomic propositions AP is a set of ω-words P ⊆ (2^AP)^ω.²²

²² We use ω as the cardinality of N; thus an ω-word on an alphabet Σ is an element of Σ^ω, i.e. an infinite sequence of symbols in Σ. Although some of the results about LTPs can be shown with general alphabets, we will remain in the case Σ = 2^AP for clarity.

Example 17. Recall the transition system T_VM depicted in Example 4 (and shown again in Figure 4) with AP = {paid, available}. Here are four examples of LTPs in this context.

Figure 4: Representation of T_VM. [Same diagram as in Example 4: pay (∅), sel ({paid}), beer and soda (both {paid, available}).]

First, the property that any state with an available drink is preceded by a state where the user has paid can be written formally as

    P₁ = {σ ∈ (2^AP)^ω : ∀i, available ∈ σ(i) ⟹ i > 0 ∧ ∃j < i, paid ∈ σ(j)}.

Intuitively, the system seems to behave according to P₁, so one might expect that T_VM "satisfies" this property in some sense. Definition 20 will formalize this intuition.

The property that, at any point of the execution, the number of states seen so far where a drink was available is at most the number of states where the user had paid is written:

    P₂ = {σ ∈ (2^AP)^ω : ∀n ∈ N, |{i ≤ n | available ∈ σ(i)}| ≤ |{i ≤ n | paid ∈ σ(i)}|}.

In general, LTPs similar to P₁ and P₂ are hard to work with because they are not finitary in the sense that, to verify them, one has no choice but to look at an infinite amount of symbols.

We will see that some properties which might look infinitary are easier to automatically verify because they have a finite representation. For instance, the property that there is an infinite number of states where the user has paid is written:²⁴

    P₃ = {σ ∈ (2^AP)^ω | ∃^∞ i, paid ∈ σ(i)}.

²⁴ The notation ∃^∞ i is a shorthand for ∀N, ∃i ≥ N. Its less intuitive dual, "always true after some point", is denoted ∀^∞ i := ∃N, ∀i ≥ N.

This last LTP illustrates how ∀^∞ can be used:

    P₄ = {σ ∈ (2^AP)^ω | [∀^∞ i, paid ∈ σ(i)] ⟹ [∃^∞ i, available ∈ σ(i)]}.
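Although membership in P₁ requires inspecting a whole ω-word, its violations are witnessed by finite prefixes; this anticipates the safety properties studied below. A small illustrative Python checker (our own sketch, not from the notes):

```python
def violates_p1(prefix):
    """True iff the finite prefix already escapes P1: some position offers
    'available' with no strictly earlier 'paid'."""
    paid_seen = False
    for letter in prefix:             # letter: set of atomic propositions
        if "available" in letter and not paid_seen:
            return True               # bad prefix: no extension lies in P1
        if "paid" in letter:
            paid_seen = True
    return False

assert violates_p1([set(), {"available"}])
assert not violates_p1([set(), {"paid"}, {"paid", "available"}])
```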

Definition 18 (Path). A (finite or infinite) path in a transition system T = (S, A, →, I, AP, L) is a (finite or infinite) sequence of states π = (sᵢ)_{i<n} ⊆ S with n ≤ ω and such that ∀i, i + 1 < n ⟹ ∃α ∈ A, sᵢ →^α sᵢ₊₁. We say that a path π is initial if s₀ ∈ I.

Definition 19 (Trace). The trace of a path π = (si )i<n is the sequence L(π ) :=
( L(si ))i<n . The set of traces of a transition system T, denoted Tr( T ), is

Tr( T ) = { L(π ) | π is an initial path in T } .

Also, Trω ( T ) denotes the set of infinite traces and Trfin ( T ) the set of finite traces.
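As a sketch of Definitions 18 and 19 (reusing the TS encoding from the Transition Systems section; the bounded enumeration is our own simplification), one can compute the finite traces up to a given length:

```python
def traces_fin(ts: TS, n: int):
    """All traces of initial paths with at most n states (Definition 19)."""
    out = set()
    frontier = {(s, (frozenset(ts.L[s]),)) for s in ts.I}
    for _ in range(n):
        out |= {tr for (_, tr) in frontier}
        frontier = {(t, tr + (frozenset(ts.L[t]),))
                    for (s, tr) in frontier
                    for (src, a, t) in ts.trans if src == s}
    return out

# For the vending machine, traces_fin(vending, 4) contains, e.g., the trace
# of pay -> sel -> beer -> pay: (set(), {paid}, {paid, available}, set()).
```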

Definition 20. We say that a transition system T satisfies a linear time property P if Trω(T) ⊆ P. We denote this by T |≈ P.²⁵

²⁵ We use this notation instead of the more usual ⊨ because this definition is not the perfect notion of satisfaction. Informally, this comes from the fact that branchings are a feature internal to transition systems but not to LTPs. When we cover modal logics, we will see how to fix this definition.

Example 21. Let us show that all the properties in Example 17 are satisfied by T_VM.

1. Clearly, T_VM |≈ P₁ because any path in T_VM goes through sel before going through either beer or soda.

2. Since for any i such that available ∈ L(πᵢ) we also have paid ∈ L(πᵢ), it follows trivially that T_VM |≈ P₂.

3. The structure of T_VM is very simple and we can observe that any infinite path goes through sel at least once every three transitions,²⁶ thus T_VM |≈ P₃.

²⁶ In words, in any execution the user pays infinitely often.

4. Note that any infinite path in T_VM goes infinitely many times through pay, thus it is not possible that, from some point on, every state in the path has paid in its labeling: the antecedent of the implication in P₄ always fails. We conclude that T_VM |≈ P₄.

Proposition 22. Let T and T′ be two transition systems over AP; then

    Trω(T) ⊆ Trω(T′) ⇔ [∀P ⊆ (2^AP)^ω, (T′ |≈ P ⟹ T |≈ P)].

Proof. (⇒) Follows trivially from the definitions. Indeed, for any P ⊆ (2^AP)^ω,

    T′ |≈ P ⇔ Trω(T′) ⊆ P ⟹ Trω(T) ⊆ P ⇔ T |≈ P,

where the two equivalences hold by definition and the middle implication by hypothesis.

(⇐) Consider P = Trω(T′). We know that T′ |≈ P, so, by our hypothesis, T |≈ P, that is, Trω(T) ⊆ Trω(T′).²⁷

²⁷ Although this proof is quite trivial, it illustrates the importance of the fact that the infinite traces of a transition system form an LTP.

Example 23. Let T′_VM be the transition system depicted below (we omit the actions as they are irrelevant for this example).

[Diagram: initial state start with two branches: start → sel_b ({paid}) → beer ({paid, available}) → start, and start → sel_s ({paid}) → soda ({paid, available}) → start.]

The traces of T′_VM are the same as the traces of T_VM. In particular, we have Trω(T_VM) = Trω(T′_VM), so Proposition 22 says that each system satisfies the LTPs satisfied by the other.

We have already mentioned that some LTPs are harder to verify than others; now we will introduce families of linear time properties that are nicer than most. There are many such families, but we chose three which are simple to define and have both historical and theoretical importance.

Invariants and Safety Properties

Definition 24 (Invariant). A linear time property P ⊆ (2^AP)^ω is an invariant if there exists a propositional formula φ over AP such that P = {σ | ∀i, σ(i) ⊨ φ}.²⁹

²⁹ In words, all the paths of a system satisfying P go through states satisfying some property φ.

Example 25. Recall Example 14 where we ensured mutual exclusion. In the last system T = A ||_{req,rel} (T₁ ||| T₂), the states where both T₁ and T₂ are in their critical sections are not reachable. Thus, the formula φ = ¬(c₁ ∧ c₂) is satisfied at any state of any path in T. The property of mutual exclusion is thus an invariant.

Definition 26 (Safety Property). A linear time property P ⊆ (2^AP)^ω is a safety property if there is a set of finite words P_bad ⊆ (2^AP)* such that³⁰

    P = {σ ∈ (2^AP)^ω | ∀i ∈ N, σ(0) · · · σ(i) ∉ P_bad}.

³⁰ Intuitively, a safety property is one that always fails in finite time. That is, if L(π) ∉ P, then there exists a finite point in the execution of π where we can decide that the trace of π is not in P.

Example 27. In Example 17, P₁ and P₂ for T_VM are safety properties. For P₁, we know that σ fails to be in P₁ when available appears before paid, thus we can write³¹

    P₁,bad = ∅* · ({available} + {paid, available}).

³¹ We use regular expression notation, where a* means any finite sequence of a's, a · b means a followed by b, and a + b means an a or a b.

Before getting dirty with these families of properties, we show two very simple statements.

Proposition 28. An LTP P is a safety property if and only if for any σ ∈ P^c,³² there exists i ∈ N such that σ(0) · · · σ(i) · (2^AP)^ω ∩ P = ∅.

³² The complement of an LTP P on AP is P^c := (2^AP)^ω \ P.

Proof. (⇒) Since P is a safety property, it is induced by some P_bad. If σ ∈ P^c, then we infer from the definition that there exists i ∈ N with σ(0) · · · σ(i) ∈ P_bad. Now, any word in σ(0) · · · σ(i) · (2^AP)^ω has a finite prefix in P_bad, namely σ(0) · · · σ(i), so it cannot be in P. This direction follows.

(⇐) Using the axiom of choice, for any σ ∈ P^c, we can choose i_σ such that σ(0) · · · σ(i_σ) · (2^AP)^ω ∩ P = ∅. Thus, if we let P_bad = {σ(0) · · · σ(i_σ) | σ ∈ P^c}, we can easily see that no word in P has a finite prefix in P_bad³³ and any word in P^c has a finite prefix in P_bad. Therefore, P is the safety property induced by P_bad.

³³ If σ(0) · · · σ(i) ∈ P_bad for some i ∈ N and σ ∈ P, then σ(0) · · · σ(i) = σ′(0) · · · σ′(i_{σ′}) for some σ′ ∈ P^c, contradicting the fact that σ′(0) · · · σ′(i_{σ′}) · (2^AP)^ω ∩ P = ∅.

Proposition 29. Any invariant LTP is a safety property.

Proof. It suffices to let P_bad be the set of finite words containing at least one letter that does not satisfy the invariant formula. We leave the details as an exercise.
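For concreteness, one possible choice (a sketch of the exercise, in our notation) is

    P_bad = {σ̂ ∈ (2^AP)* | ∃i < |σ̂|, σ̂(i) ⊭ φ}:

a word σ has no prefix in this P_bad exactly when every letter of σ satisfies φ, that is, when σ ∈ P.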

Definition 30. Given σ ∈ (2^AP)^ω, a finite prefix of σ is a word σ̂ ∈ (2^AP)* such that σ̂ = σ(0) · · · σ(n) for some n ∈ N. We write ⊆ for the relation "is a finite prefix of".

Definition 31. A state of a transition system is terminal if it has no outgoing transition.³⁴

³⁴ Formally, s ∈ S is terminal if for any α ∈ A and any s′ ∈ S, s ↛^α s′.

Proposition 32. Let T be a transition system with no terminal states and P ⊆ (2^AP)^ω be a safety property induced by P_bad; then

    T |≈ P ⇔ Trfin(T) ∩ P_bad = ∅.

Proof. (⇐) Let L(π) be an infinite trace of T. Since any of its finite prefixes is in Trfin(T), it cannot coincide with a word in P_bad. Hence, L(π) ∈ P.

(⇒) Suppose there exists σ̂ ∈ Trfin(T) ∩ P_bad; we have σ̂ = L(π) for a finite path π, but since there are no terminal states, we can always add states to π and obtain an infinite path π′ such that σ̂ ⊆ L(π′). This means L(π′) ∉ P, which contradicts our assumption that T |≈ P.
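Proposition 32 suggests a (naive) verification routine: search the finite traces for a bad prefix. A bounded Python sketch, reusing the earlier illustrative encodings (vending, violates_p1); the cutoff max_len is our own pragmatic bound, not part of the proposition:

```python
from collections import deque

def find_bad_prefix(ts, is_bad, max_len=20):
    """Search Trfin(T) for a trace in P_bad; None if none up to max_len."""
    queue = deque((s, (frozenset(ts.L[s]),)) for s in ts.I)
    while queue:
        s, trace = queue.popleft()
        if is_bad(trace):
            return trace              # witness: T does not satisfy P
        if len(trace) < max_len:
            for (src, a, t) in ts.trans:
                if src == s:
                    queue.append((t, trace + (frozenset(ts.L[t]),)))
    return None

# With the sketches above, the following returns None: the vending machine
# has no bad prefix for P1 of length <= 20.
# find_bad_prefix(vending, lambda tr: violates_p1([set(l) for l in tr]))
```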

Corollary 33.³⁵ Let T and T′ be transition systems over AP with no terminal states; then

    Trfin(T) ⊆ Trfin(T′) ⇔ [∀ safety P ⊆ (2^AP)^ω, (T′ |≈ P ⟹ T |≈ P)].

³⁵ This result is a characterization similar to Proposition 22 that applies to safety properties. But now, instead of comparing all the traces, we only have to compare the finite traces.

Proof. (⇒) Follows trivially from the last proposition.

(⇐) Consider the safety property P induced by P_bad = (2^AP)* \ Trfin(T′). It is clear that T′ |≈ P by the last proposition, thus our assumption gives us T |≈ P, or equivalently Trfin(T) ∩ ((2^AP)* \ Trfin(T′)) = ∅; it follows that Trfin(T) ⊆ Trfin(T′).

Since finite traces can be arbitrarily long, it is natural to ask whether comparing the finite traces of two systems suffices to compare all the LTPs that they satisfy. This is almost the right intuition but, as usual, infinity breaks it, as shown in the following example.

Example 34. Let T be the transition system depicted below, where the state labeling is written inside the states and the states' and actions' names are omitted.

[Diagram: a state labeled a with a self-loop and a transition to a state labeled b, which has a self-loop.]

We have Trfin(T) = a*b* and Trω(T) = a*b^ω + a^ω. Now consider, for any i ∈ N, the system Tᵢ as depicted below (with the same conventions as for T).

[Diagram: a chain of i states labeled a, followed by a state labeled b with a self-loop.]

The finite traces of Tᵢ are the words recognized by (a + · · · + a^i) ∪ a^i b* and its infinite traces are recognized by a^i b^ω.

Now, let T′ be the union³⁶ over i ∈ N of the Tᵢ's; then we have Trfin(T′) = a*b* = Trfin(T), but Trω(T′) = a*b^ω ≠ Trω(T).

³⁶ Informally, it is like putting all the systems next to each other with no interaction between them. All initial states of the Tᵢ's are still initial states, so the paths in T′ are just the union of the paths in the Tᵢ's.

The following definition introduces a sufficient condition to get rid of such counterexamples. As expected, it is a finiteness property.

Definition 35 (Finitely Branching). A transition system T is said to be finitely branching if I is finite and, for any s ∈ S, the set {s′ ∈ S | ∃α ∈ A, s →^α s′} is finite.³⁷

³⁷ In later parts of the course, we will study logics that care about which actions are used. In these cases, finite branching will also require that only finitely many distinct actions can be done at s.

Proposition 36. Let T and T′ be finitely branching transition systems over AP with no terminal states; then

    Trω(T) ⊆ Trω(T′) ⇔ Trfin(T) ⊆ Trfin(T′).

Proof. (⇒) Since the systems have no terminal states, any finite trace of T corresponds to a path π in T that can be extended to an infinite path π′, so that L(π′) is in Trω(T) and thus in Trω(T′). Now, L(π′) must correspond to a path π″ in T′, and truncating it to the size of π shows that L(π) = L(π″|_{i≤|π|}) is also in Trfin(T′).

(⇐)³⁸ Let σ ∈ Trω(T). For any n ∈ N, σₙ := σ(0) · · · σ(n) ⊆ σ is in Trfin(T) ⊆ Trfin(T′), so in particular it is the finite trace of an initial path, say πₙ, in T′.

³⁸ In class, this direction was proved as a corollary of the more general König's lemma, which states that any finitely branching infinite tree has an infinite path. I chose to integrate the proof of the lemma into the proof of the proposition to avoid introducing more definitions than needed.

To construct an initial path π in T′ that satisfies L(π) = σ, we will build (sᵢ)_{i∈N} by induction on i with s₀ ∈ I′ and the following invariant: there are infinitely many πₙ's such that ∀k ≤ i, πₙ(k) = sₖ.

First, since I′ is finite and all paths πₙ satisfy πₙ(0) ∈ I′, there is at least one s₀ ∈ I′ such that there are infinitely many πₙ with πₙ(0) = s₀.

Second, suppose s is defined up to i − 1 and there are infinitely many πₙ's satisfying P_{i−1} := ∀k ≤ i − 1, πₙ(k) = sₖ. Then, since there are finitely many s ∈ S′ such that s_{i−1} →^α s for some α ∈ A′, we can pick one such sᵢ such that there are still infinitely many of the πₙ satisfying P_{i−1} that also satisfy Pᵢ := ∀k ≤ i, πₙ(k) = sₖ.

By the induction principle, this defines a path π = (sᵢ)_{i∈N} that

1. is initial because s₀ ∈ I′,

2. is in T′ because every finite subpath is in T′, and

3. satisfies L(π) = σ because L(πₙ) = σₙ for all πₙ.

We conclude that σ ∈ Trω(T′).

Corollary 37. Two finitely branching transition systems over AP with no terminal states satisfy the same LTPs if and only if they satisfy the same safety properties.

Proof. The proof follows from these equivalences, which use the previous results:

    [∀P ⊆ (2^AP)^ω, (T |≈ P ⇔ T′ |≈ P)] ⇔ Trω(T) = Trω(T′)
                                          ⇔ Trfin(T) = Trfin(T′)   (Proposition 36)
                                          ⇔ [∀ safety P ⊆ (2^AP)^ω, (T |≈ P ⇔ T′ |≈ P)]   (Corollary 33)

We will end this section with a bit more terminology and practice with results on invariants and safety properties.

Definition 38 (Closure). Let P be an LTP; we denote the set of finite prefixes of P by

    pref(P) = {σ̂ ∈ (2^AP)* | ∃σ ∈ P, σ̂ ⊆ σ}.

The closure of P is the set of ω-words whose finite prefixes are all prefixes of P, that is,

    cl(P) = {σ ∈ (2^AP)^ω | pref(σ) ⊆ pref(P)}.

Proposition 39. An LTP P is a safety property if and only if cl(P) = P.

Proof. (⇒) Note that P ⊆ cl(P) is trivially true for any P.³⁹ Now, suppose that σ ∈ cl(P); for any finite prefix σ̂ ⊆ σ, σ̂ ∈ pref(P), so there exists σ′ ∈ P with σ̂ ⊆ σ′. In other words, σ̂ · (2^AP)^ω ∩ P ≠ ∅, and we conclude that σ ∈ P by the contrapositive of Proposition 28.

³⁹ One way to see this is: pref(P) = ⋃_{σ∈P} pref(σ).

(⇐) Let σ ∈ P^c; in particular σ ∉ cl(P), so there is a finite prefix σ̂ ⊆ σ that is not the prefix of any word in P. In mathematical terms, this means σ̂ · (2^AP)^ω ∩ P = ∅. Since σ was arbitrary, P is a safety property by Proposition 28.

Proposition 40. Let P and Q be safety properties; then P ∪ Q and P ∩ Q are also safety properties.

Proof. For the union, we may assume that P_bad and Q_bad are closed under extension: replacing P_bad by P_bad · (2^AP)* does not change the induced property. A word then fails to be in P ∪ Q if and only if it has a finite prefix in P_bad and one in Q_bad, if and only if it has a finite prefix in P_bad ∩ Q_bad (the longer of two such prefixes lies in both sets by closure under extension). It follows that P ∪ Q is the safety property induced by (P ∪ Q)_bad := P_bad ∩ Q_bad.

For the intersection, it follows from Proposition 39 and the monotonicity of cl:

    P ∩ Q ⊆ cl(P ∩ Q) ⊆ cl(P) ∩ cl(Q) = P ∩ Q.

Liveness Properties

Definition 41 (Liveness). An LTP P ⊆ (2^AP)^ω is a liveness property if for any σ̂ ∈ (2^AP)*, there exists σ ∈ (2^AP)^ω such that σ̂ ⊆ σ and σ ∈ P.

Example 42. The system T_VM from Example 4 satisfies the property

    P = {σ | ∃^∞ i, available ∈ σ(i) ⟹ ∃^∞ i, paid ∈ σ(i)},

because both sides of the implication are true for any σ ∈ Trω(T_VM). P is a liveness property because any finite word can be completed with ({available}{paid})^ω.

Proposition 43. An LTP P is a liveness property if and only if pref(P) = (2^AP)*.

Proof. (⇒) Suppose there exists σ̂ ∈ (2^AP)* \ pref(P); then σ̂ cannot be extended into a word of P, contradicting the liveness of P.

(⇐) Any finite word is in pref(P), thus it can be extended into a word of P. We conclude that P is a liveness property.

Corollary 44. Let P and Q be liveness properties on AP; then P ∪ Q is also a liveness property.⁴¹

⁴¹ It follows from Proposition 43 because pref(P ∪ Q) = pref(P) ∪ pref(Q).

Example 45. Unlike for safety properties, the intersection of two liveness properties is not always a liveness property. Consider the following properties:⁴²

    P = {σ ∈ {0, 1}^ω | ∀^∞ i, σ(i) = 1}
    Q = {σ ∈ {0, 1}^ω | ∀^∞ i, σ(i) = 0}.

⁴² P and Q respectively contain all ω-words that are eventually all 1s and eventually all 0s.

They are both clearly liveness properties, as any finite word can be completed with 1^ω or 0^ω to belong to P or Q respectively. However, their intersection is clearly empty and ∅ is not a liveness property.

Proposition 46. The property ⊤ := (2^AP)^ω is the only LTP that is both a liveness and a safety property.

Proof. Since any finite word can be completed into an ω-word, ⊤ is a liveness property. It is also the safety property induced by P_bad = ∅.

Let P be a safety property induced by P_bad; then, by definition, no word of P_bad can be extended into an infinite word of P. Therefore, if P is both safety and liveness, P_bad must be empty and P = ⊤.

Theorem 47 (Decomposition). Any LTP P can be decomposed into the intersection of a safety property and a liveness property: P = P_safe ∩ P_live.

In order to prove this theorem, we will introduce two very different approaches that lead to very elegant proofs. Thus, the next two sections will feel ad hoc at first, but they are very much used in current research in semantics, so they are worth covering.

Topological Spaces

Preliminaries
Not much theory is needed, but we present it here for completeness.

Definition 48. A topological space is a pair (X, ΩX), where X is a set and ΩX ⊆ 2^X is a set of subsets, closed under arbitrary unions and finite intersections,⁴³ whose elements are called the open sets of X.

⁴³ For any family of open sets {Uᵢ}_{i∈I}, ⋃_{i∈I} Uᵢ ∈ ΩX, and if I is finite, ⋂_{i∈I} Uᵢ ∈ ΩX.

Complements of open sets (denoted U^c) are called closed sets. Observe that both the empty set and the whole space are open and closed (sometimes referred to as clopen) because

    ∅ = ⋃_{U∈∅} U  and  X = ⋂_{U∈∅} U.

All the following terminology and results are basic tools used in topology that will end up helping us prove the decomposition theorem. Fix a topological space (X, ΩX).

Lemma 49. Let (Cᵢ)_{i∈I} be a family of closed sets of X; then ⋂_{i∈I} Cᵢ is closed and, if I is finite, ⋃_{i∈I} Cᵢ is also closed.⁴⁴

⁴⁴ Observe that these are statements dual to the axioms of Definition 48. In fact, it is sometimes more convenient (and equivalent) to define a topological space by giving its closed sets.

Proof. Both statements follow trivially from De Morgan's laws and the fact that the complement of a closed set is open and vice-versa. For the first one, De Morgan's laws yield

    ⋂_{i∈I} Cᵢ = (⋃_{i∈I} Cᵢ^c)^c,

and the right-hand side is the complement of a union of opens, so it is closed. For the second one, De Morgan's laws yield

    ⋃_{i∈I} Cᵢ = (⋂_{i∈I} Cᵢ^c)^c,

and the right-hand side is the complement of a finite intersection of opens, so it is closed.

Lemma 50. A subset A ⊆ X is open if and only if for any x ∈ A, there exists an open U ⊆ A such that x ∈ U.

Proof. (⇒) For any x ∈ A, set U = A.

(⇐) For each x ∈ A, pick an open Uₓ ⊆ A such that x ∈ Uₓ; then we claim A = ⋃_{x∈A} Uₓ, which is open.⁴⁵ The ⊆ inclusion follows because each x ∈ A has a set Uₓ in the union that contains it. The ⊇ inclusion follows because each term of the union is a subset of A by assumption.

⁴⁵ Arbitrary unions of opens are open.

Lemma 51. A subset A ⊆ X is closed if and only if for any x ∉ A, there exists an open U such that x ∈ U and U ∩ A = ∅.⁴⁶

⁴⁶ This result is simply a restatement of the last one, applied to A^c.

Definition 52. Given A ⊆ X, the closure of A is

    Ā := ⋂ {C closed | C ⊇ A}.

It is very easy to show that Ā is the smallest closed set containing A.⁴⁷ Then, it follows that A is closed if and only if A = Ā.

⁴⁷ Ā is closed because it is an intersection of closed sets, and any closed set containing A also contains Ā by definition.

Here are more easy results on the closure of a subset.

Lemma 53. Given A, B ⊆ X, the following statements hold:

1. A ⊆ B ⟹ Ā ⊆ B̄.

2. A ⊆ Ā.

3. The closure of Ā is Ā itself.

4. The closure of ∅ is ∅.

5. The closure of A ∪ B is Ā ∪ B̄.

Proof of Lemma 53.
1. By definition, B̄ contains B, thus A; but B̄ is closed, so it must contain Ā.
2. By definition.
3. Ā is closed, so its closure is itself.
4. Item 3 applied to the closed set ∅.
5. (⊆) follows because the left-hand side is the smallest closed set containing A ∪ B while the right-hand side is closed and contains A ∪ B. (⊇) follows because the left-hand side is a closed set containing A and B, so it contains Ā and B̄.

Definition 54. A subset A ⊆ X is said to be dense (in X) if any non-empty open set intersects A non-trivially, that is, ∀U ∈ ΩX, U ≠ ∅ ⟹ A ∩ U ≠ ∅.

Theorem 55 (Decomposition). Let A ⊆ X; then A = Ā ∩ (A ∪ Ā^c), where Ā is closed and A ∪ Ā^c is dense.⁴⁸

⁴⁸ This result says that any set can be decomposed into the intersection of a closed set and a dense set. Note the similarity with Theorem 47; we will see that the latter is a corollary of this basic result in topology.

Proof. The equality is trivial and Ā is closed by definition. It is left to show that A ∪ Ā^c is dense.

Let U ≠ ∅ be an open set. If U intersects A, we are done. Otherwise, we have the following equivalences:

    U ∩ A = ∅ ⇔ A ⊆ U^c ⇔ Ā ⊆ U^c ⇔ U ⊆ Ā^c,

where the second ⇔ holds because U^c is closed and Ā is the smallest closed set containing A. We conclude that U ∩ (A ∪ Ā^c) ≠ ∅.

Lemma 56. A subset A ⊆ X is dense if and only if Ā = X.

Proof. (⇒) Ā^c is open and it intersects the dense set A trivially, so it must be empty; thus Ā is the whole space.

(⇐) Let U be an open set such that U ∩ A = ∅; then A is contained in the closed set U^c, but this implies Ā ⊆ U^c,⁴⁹ thus U ⊆ Ā^c = ∅ and U is empty.

⁴⁹ Recall that the closure of A is the smallest closed set containing A.
Definition 57. Let A ⊆ X; the interior of A is

    A° := ⋃ {U ∈ ΩX | U ⊆ A}.

It is obvious that A° is the largest open subset of A, and thus that A is open if and only if A = A°.⁵⁰

⁵⁰ It also follows that A ⊆ B ⟹ A° ⊆ B° and that (A°)° = A°.

Finally, we end these preliminaries with a result on how to specify a topology.

Definition 58 (Base). Let X be a set; a base is a set B ⊆ 2^X such that X = ⋃_{U∈B} U and any finite intersection of sets in B can be written as a union of sets in B.

Lemma 59. Let X be a set, B ⊆ 2^X a base, and let ΩX be the set of all unions of sets in B; then (X, ΩX) is a topological space. We say that ΩX is the topology generated by B.

Proof. Unions of opens are clearly open, and finite intersections of sets in B are open (being unions of sets in B). It remains to show that finite intersections of unions of sets in B are also open. Let U = ⋃_{i∈I} Uᵢ and V = ⋃_{j∈J} Vⱼ with Uᵢ, Vⱼ ∈ B; then, by distributivity, we obtain

    U ∩ V = (⋃_{i∈I} Uᵢ) ∩ (⋃_{j∈J} Vⱼ) = ⋃_{i∈I, j∈J} (Uᵢ ∩ Vⱼ),

so U ∩ V is open (each Uᵢ ∩ Vⱼ being a union of sets in B). The lemma then follows by induction.

In practice, instead of generating a topology from a base B, we often start with an arbitrary family B₀ ⊆ 2^X and consider its closure B under finite intersections, so that it satisfies the axioms of a base. Such a B₀ is called a subbase for the topology generated by B.

Although we could extend this digression in topology indefinitely, we stop now that we have the essentials and see how all this relates to our investigation of linear time properties.

ω-words

Definition 60 (Extensions). Given a non-empty set A and a finite word u ∈ A*, we denote the extensions of u by⁵¹

    ext_A(u) := u · A^ω = {σ ∈ A^ω | u ⊆ σ}.

⁵¹ We write ext(u) when the alphabet is clear from context. Note that A^ω = ext_A(ε).

We generalize this notation to sets of finite words W ⊆ A* in the natural way:

    ext(W) = ⋃_{u∈W} ext(u).

Proposition 61. Let A be a non-empty set; the set⁵²

    ΩA = {ext(W) | W ⊆ A*}

is a topology on A^ω.

⁵² Notice that this topology is generated by the subbase containing ext(u) for each u ∈ A*, as ext(W) is the union of such sets when W ⊆ A*.

Proof. Let {Uᵢ}_{i∈I} be a family of opens; for any i ∈ I, there exists Wᵢ ⊆ A* such that Uᵢ = ext(Wᵢ). Then⁵³

    ⋃_{i∈I} Uᵢ = ⋃_{i∈I} ext(Wᵢ) = ext(⋃_{i∈I} Wᵢ),

so we conclude that ΩA is closed under unions.

⁵³ The last equality holds because an ω-word extends a word in one of the Wᵢ's if and only if it extends the same word in the union of the Wᵢ's.

Furthermore, we observe that⁵⁴

    ext(u) ∩ ext(v) = ext(u) if v ⊆ u;  ext(v) if u ⊆ v;  ∅ otherwise.

⁵⁴ Indeed, if u ⊆ v, then any ω-word that extends v also extends u, so ext(v) ⊆ ext(u). The argument is symmetric for v ⊆ u, and if v and u are not comparable, then they disagree at some index and no ω-word can have two distinct symbols at this index.

Thus, let W₁, W₂ ⊆ A*; we have

    ext(W₁) ∩ ext(W₂) = (⋃_{u∈W₁} ext(u)) ∩ (⋃_{v∈W₂} ext(v))
                      = ⋃_{u∈W₁, v∈W₂} (ext(u) ∩ ext(v))
                      = ⋃ {ext(u) | u ∈ W₁, ∃v ∈ W₂, v ⊆ u}
                        ∪ ⋃ {ext(v) | v ∈ W₂, ∃u ∈ W₁, u ⊆ v}
                      = ext(W₁ ⋓ W₂),

where W₁ ⋓ W₂ is the set of words in one of the Wᵢ's that have a prefix in the other, that is,

    W₁ ⋓ W₂ = {u ∈ A* | ∃i ≠ j, ∃v ∈ Wⱼ, u ∈ Wᵢ, v ⊆ u}.

We conclude that ΩA is also closed under finite intersections, so it is a topology on A^ω.

From now on, unless otherwise said, we assume that A^ω carries the topology ΩA given above.

Remark 62. A set P ⊆ A^ω is open if and only if there exists W ⊆ A* such that P = ext(W) = ⋃_{u∈W} ext(u). In particular, if σ ∈ P, then there exists σ̂ ⊆ σ such that ext(σ̂) ⊆ P.⁵⁵

⁵⁵ In other words, we will know that σ ∈ P after observing it for a finite amount of time, because we know all extensions of σ̂ are in P.

If we stare at this remark long enough, we can recover the intuition behind safety LTPs and, in particular, the equivalent definitions seen in Proposition 28. Indeed, the tools we have developed lead to a nice characterization of safety and liveness properties.

Lemma 63. An LTP P ⊆ (2^AP)^ω is a safety property if and only if it is closed.⁵⁶

⁵⁶ In the usual topology on ω-words.

Proof. (⇐) We have just said that if σ is in an open set (in this case P^c), then there exists σ̂ ⊆ σ such that ext(σ̂) ⊆ P^c, i.e. ext(σ̂) ∩ P = ∅, as required for safety properties (Proposition 28).

(⇒) Let P be induced by P_bad; any extension of a word in P_bad is not in P. In other words, P^c = ext(P_bad), which is open; thus P is closed.

Lemma 64. An LTP P ⊆ (2^AP)^ω is a liveness property if and only if it is dense.

Proof. (⇒) Let U ≠ ∅ be open; then U = ⋃_{u∈W} ext(u), where W cannot be empty. Hence, since for any u ∈ W, ext(u) ∩ P ≠ ∅,⁵⁷ we get P ∩ U ≠ ∅ and this direction follows.

⁵⁷ Because any finite word can be extended into a word of P (by liveness).

(⇐) Let P be dense; then for any u, ext(u) is open and non-empty, so P ∩ ext(u) ≠ ∅, which means we can extend u into a word of P. This direction follows.

Corollary 65. Theorem 47 holds.⁵⁸

⁵⁸ Indeed, any set can be written as the intersection of a closed set and a dense set by Theorem 55, and these are respectively safety and liveness properties by Lemmas 63 and 64.

As we mentioned before, we have not gone through all this theory only to prove this theorem; these concepts will come back in this class as well as in a more advanced study of semantics. However, before going back to the main point of this class, we keep our promise of giving two proofs of the decomposition theorem. Therefore, in the next section, we will revisit this characterization through the point of view of lattice theory.

Posets and Complete Lattices

Preliminaries

Definition 66. A poset (short for partially ordered set) is a pair (A, ≤) where A is a set and ≤ ⊆ A × A is a reflexive, transitive and antisymmetric binary relation.

A prototypical example of a poset, which is paradigmatic in this course, is the powerset ordered by inclusion: (2^X, ⊆). The restriction of this poset to the opens ΩX of a topological space also yields an important poset: (ΩX, ⊆).

Definition 67. A function f : (A, ≤_A) → (B, ≤_B) between posets is monotone (or order-preserving) if for any a, a′ ∈ A, a ≤_A a′ ⟹ f(a) ≤_B f(a′).

Example 68. The closure in a topological space X is a monotone function from (2^X, ⊆) to itself because A ⊆ B implies Ā ⊆ B̄.

Definition 69. The dual of a poset (A, ≤) is denoted (A, ≤)^op := (A, ≥), where for any a, a′ ∈ A, a′ ≥ a ⇔ a ≤ a′.⁵⁹

⁵⁹ This definition lets us avoid many symmetric arguments.

Definition 70. Let (A, ≤) be a poset and S ⊆ A; then a ∈ A is an upper bound of S if ∀s ∈ S, s ≤ a. Moreover, a ∈ A is the supremum of S, denoted ∨S, if it is the least upper bound, that is, a is an upper bound of S and for any upper bound a′ of S, a ≤ a′.

Dually, a ∈ A is a lower bound (resp. the infimum) of S if and only if it is an upper bound (resp. the supremum) of S in (A, ≤)^op.

Proposition 71. Infimums and supremums are unique when they exist.⁶⁰

⁶⁰ By antisymmetry.

Definition 72. A complete lattice is the data (L, ∧, ∨, ≤) where (L, ≤) is a poset and ∧, ∨ : (2^L, ⊆) → (L, ≤) are respectively the infimum (or meet) and the supremum (or join) as defined above.⁶¹ Observe that L has a smallest element, which we denote ⊥ := ∨∅, and a largest element ⊤ := ∧∅.

⁶¹ Thus, all supremums and infimums exist in (L, ≤).

Example 73. Again, the powerset with the inclusion order is a good example: the join of a family of subsets is their union and the meet is their intersection.

Lemma 74. Let (L, ≤) be a poset; then the following are equivalent:

(i) (L, ∧, ∨, ≤) is a complete lattice.

(ii) Any S ⊆ L has a supremum.

(iii) Any S ⊆ L has an infimum.

Proof. (i) ⟹ (ii), (i) ⟹ (iii) and (ii) + (iii) ⟹ (i) are all trivial. Also, by duality, we only need to prove (ii) ⟹ (iii). For that, it suffices to note that for any S ⊆ L,

    ∧S := ∨{a ∈ L | ∀s ∈ S, a ≤ s}

is a suitable definition of the infimum.

Defined that way, ∧S is a lower bound of S: every s ∈ S is an upper bound of the set of lower bounds of S, so its least upper bound ∧S satisfies ∧S ≤ s.⁶² Additionally, since we are taking the supremum over all lower bounds of S, no lower bound of S can be greater than ∧S, and we conclude that ∧S is indeed the infimum of S.

⁶² Because ∧S is defined as the least upper bound of the set of lower bounds of S.

Example 75. As a corollary, we obtain that the open sets of a topological space form a complete lattice. The supremums are given by unions, which are open for arbitrary families of open sets. However, while finite infimums are given by intersections and infinite infimums exist by the previous lemma, the latter are not necessarily intersections.⁶³

⁶³ In fact, the formula given above states that the infimum of a family of opens is the interior of its intersection.

For instance, consider the topology on A^ω with A = {a, b} and P = ⋂_{n∈N} ext(aⁿ). All the sets in the intersection are open by definition, but P = {a^ω} is not open: there is no aⁿ ⊆ a^ω with ext(aⁿ) ⊆ P. However, the interior of P, which is ∅, is open.

Definition 76. Let (A, ≤) be a poset; a closure operator on A is a map c : A → A that is monotone, expansive and idempotent.⁶⁴ We say that a ∈ A is closed if a = c(a); the set of closed elements of A is denoted A^c.

⁶⁴ That is, for all a, a′ ∈ A:
    a ≤ a′ ⟹ c(a) ≤ c(a′) (monotone),
    a ≤ c(a) (expansive),
    c(a) = c(c(a)) (idempotent).

A typical example is the closure operator in topological spaces, as we have said, but it satisfies more properties that are not usually satisfied by closure operators. These are properties 4 and 5 in Lemma 53; a closure operator satisfying them is called a Kuratowski closure operator.⁶⁵

⁶⁵ In fact, any Kuratowski closure operator c : 2^X → 2^X yields a topology on X (by defining the closed sets instead of the opens).

Lemma 77. Let (L, ∧, ∨, ≤) be a complete lattice and c : L → L a closure operator on L; then (L^c, ≤) is also a complete lattice, where infimums are taken as in L and supremums are the closures of the supremums taken in L.⁶⁶

⁶⁶ In particular, c has greatest and least fixed points. In fact, monotonicity of c alone is enough to show the existence of the greatest and least fixed points. We will see that they play an important role in our study of transition systems.

Proof. The fact that ≤ is also a partial order on L^c is trivial. Also, if S only has closed elements, then ∧S is also closed: c(∧S) ≤ c(s) = s for any s ∈ S by monotonicity, so c(∧S) is a lower bound of S, while ∧S ≤ c(∧S) by expansiveness; since c(∧S) is a lower bound greater than the infimum ∧S, they must be equal.

Now, we show that for S ⊆ L^c, c(∨S) is the supremum of S in L^c. It is clearly an upper bound because ∨S ≤ c(∨S). It is the least one because if u is a closed upper bound of S, then u ≥ ∨S implies u = c(u) ≥ c(∨S).

Lemma 78. Let (L, ∧, ∨, ≤) be a complete lattice and c : L → L a closure operator; then for any a ∈ L,

    c(a) = ∧{c(b) ∈ L^c | a ≤ c(b)}.

Proof. Since a ≤ c(b) implies c(a) ≤ c(c(b)) = c(b), c(a) is a lower bound of this set. It is the infimum because it belongs to the set itself (a ≤ c(a) by expansiveness), so nothing strictly above c(a) can be a lower bound.

Definition 79. Given two posets (A, ≤) and (B, ≼), a Galois connection is a pair of functions g : A → B and f : B → A such that for any a ∈ A and b ∈ B,

    g(a) ≼ b ⇔ a ≤ f(b).

For such a pair, we write g ⊣ f : A → B.

Lemma 80. Let g ⊣ f : A → B be a Galois connection; then g and f are monotone.

Proof. Assume towards a contradiction that a ≤ a′ and g(a) ⋠ g(a′). Because g(a′) ≼ g(a′), we infer that a′ ≤ f(g(a′)) and thus, by transitivity, a ≤ f(g(a′)). However, this contradicts the fact that g(a) ⋠ g(a′) (using the ⇐ of the Galois connection). We conclude that g is monotone.

A symmetric argument works to show that f is monotone.

Example 81. We will see shortly (Lemma 83) that the maps pref and cl on ω-words form a Galois connection pref ⊣ cl.

Lemma 82. Let g ⊣ f : A → B be a Galois connection; then f ∘ g : A → A is a closure operator.

Proof. Because f and g are monotone, f ∘ g is clearly monotone. Also, for any a ∈ A, g(a) ≼ g(a), implying a ≤ f(g(a)), so f ∘ g is expansive.

Now, in order to prove that f ∘ g is idempotent, it is enough to show that⁶⁷

    f(g(a)) ≥ f(g(f(g(a)))).

⁶⁷ The ≤ inequality follows by expansiveness.

Observe that since f(b) ≤ f(b) for any b ∈ B, we have g(f(b)) ≼ b; thus, in particular, with b = g(a), we have g(f(g(a))) ≼ g(a). Applying f, which is monotone, yields the desired inequality.

Prefixes and Closure

For this section, we fix a non-empty set A. Recall the operations of prefixes and closure defined in Definition 38; we will study them from the viewpoint of lattice theory.⁶⁸ First, let us generalize the definition of pref and give a slightly different definition of closure:

    pref : P(A^ω) → P(A*),  P ↦ {σ̂ ∈ A* | ∃σ ∈ P, σ̂ ⊆ σ}
    cl : P(A*) → P(A^ω),  W ↦ {σ ∈ A^ω | pref(σ) ⊆ W}.

⁶⁸ The motivation behind this is the results of Propositions 39 and 43 that characterize safety and liveness properties in terms of these operations.

Observe that the closure of Definition 38 is cl ∘ pref : P(A^ω) → P(A^ω)⁶⁹ and, in fact, we can prove that it is a closure operator with the following lemma, which states that pref ⊣ cl is a Galois connection.

⁶⁹ cl(pref(P)) = {σ ∈ A^ω | pref(σ) ⊆ pref(P)}.

Lemma 83. For any P ∈ P(A^ω) and W ∈ P(A*),

    pref(P) ⊆ W ⇔ P ⊆ cl(W).

Proof. (⇒) For any σ ∈ P, we have pref(σ) ⊆ pref(P) ⊆ W, thus σ ∈ cl(W).

(⇐) Because any σ ∈ P is also in cl(W), we infer that pref(σ) ⊆ W. Now, since pref(P) = ⋃_{σ∈P} pref(σ) and all terms are subsets of W, we conclude pref(P) ⊆ W.

Moreover, one can show that cl coincides with the closure operator of the topological space A^ω.

Proposition 84. For any P ⊆ A^ω, P̄ = cl(P).

Proof. First, we show that cl(P) is closed.⁷⁰ Indeed, if σ ∉ cl(P), then there exists σ̂ ⊆ σ such that σ̂ is not a prefix of any word in P. Therefore, ext(σ̂) is an open set containing σ that does not intersect cl(P). By Lemma 51, cl(P) is closed.

⁷⁰ It is obvious that it contains P.

Second, we show that cl(P) ⊆ P̄. Suppose that there exists σ ∈ A^ω that is in cl(P) but not in P̄. By Lemma 51 again, we have an open set U containing σ and not intersecting P̄. Without loss of generality, U = ext(σ̂) for some prefix σ̂ ⊆ σ.⁷¹ However, because σ ∈ cl(P), σ̂ is the prefix of some word in P, contradicting the fact that ext(σ̂) does not intersect P.

⁷¹ Indeed, we already know open sets have the form ext(W) for W ⊆ A*. Thus, U = ext(W) = ⋃_{i∈I} ext(wᵢ), and if σ ∈ U we can choose one i with σ ∈ ext(wᵢ), which means wᵢ is the desired prefix.

Since cl(P) is a closed set containing P and P̄ is the smallest such set, the two inclusions combine into P̄ = cl(P).

Corollary 85. Let P ⊆ (2^AP)^ω; then

1. P is a safety property if and only if cl(P) = P;

2. P is a liveness property if and only if cl(P) = (2^AP)^ω, if and only if pref(P) = (2^AP)*.

Before ending this section, let us spend a bit more time expanding on the properties of Galois connections.

Proposition 86. Let g ⊣ f : A → B be a Galois connection where A and B are complete lattices; then g preserves supremums and f preserves infimums.⁷²

⁷² For any S ⊆ A, g(∨S) = ∨g(S), and for any T ⊆ B, f(∧T) = ∧f(T).

Proof. Let S ⊆ A; we claim that g(∨S) is the supremum of g(S). By monotonicity, it is an upper bound. Moreover, for any upper bound b of g(S), we have g(s) ≼ b, hence s ≤ f(b), for every s ∈ S; thus ∨S ≤ f(b) and therefore g(∨S) ≼ b. The claim follows.

The statement for f is dual: for T ⊆ B, f(∧T) is a lower bound of f(T) by monotonicity, and for any lower bound a of f(T) we have a ≤ f(t), hence g(a) ≼ t, for every t ∈ T; thus g(a) ≼ ∧T and a ≤ f(∧T).

In fact, a kind of converse of this result holds.

Proposition 87. Let A and B be complete lattices; if g : A → B preserves all supremums, then there exists f : B → A such that g ⊣ f is a Galois connection.⁷³

⁷³ By duality (considering the opposite orders), if f : B → A preserves all infimums, then there is a Galois connection g ⊣ f.

Proof. We define f : b ↦ ∨{a ∈ A | g(a) ≼ b} and we have to show that g(a) ≼ b ⇔ a ≤ f(b). The ⇒ direction is trivial because a clearly belongs to the set of which f(b) is the supremum, so a ≤ f(b).

For ⇐, note that g preserving supremums implies that g is monotone.⁷⁴ Thus, if a ≤ f(b), we have

    g(a) ≼ g(f(b)) = ∨{g(a′) | g(a′) ≼ b} ≼ b.

⁷⁴ Assume a ≤ a′; then a ∨ a′ = a′, so g(a) ∨ g(a′) = g(a′), because g preserves supremums. Therefore, g(a) ≼ g(a′).

Observable Properties

Continuous Functions

Let f : Y → X; the inverse image function is

    f⁻¹ : 2^X → 2^Y,  A ↦ {y ∈ Y | f(y) ∈ A}.

Let us show a basic but fundamental result.

Lemma 88. For any f : Y → X, f⁻¹ preserves arbitrary unions and intersections; it also preserves complements.

Proof. For unions, y ∈ f⁻¹(⋃_{i∈I} Aᵢ) if and only if f(y) ∈ Aᵢ for some i, if and only if y ∈ ⋃_{i∈I} f⁻¹(Aᵢ); the case of intersections is identical with "for all" in place of "for some". For complements, y ∈ f⁻¹(A^c) if and only if f(y) ∉ A, if and only if y ∉ f⁻¹(A).

Definition 89. Let (X, ΩX) and (Y, ΩY) be topological spaces; a function f : Y → X is continuous if f⁻¹(ΩX) ⊆ ΩY, that is, the preimage of any open set of X is open in Y.

Lemma 90. A function f : A^ω → B^ω is continuous if and only if

    ∀α ∈ A^ω, ∀n ∈ N, ∃k ∈ N, ∀β ∈ A^ω,
    α(0) · · · α(k) = β(0) · · · β(k) ⟹ f(α)(0) · · · f(α)(n) = f(β)(0) · · · f(β)(n).

Proof. (⇒) Another formulation of the implication is that f⁻¹(ext(x)) ⊇ ext(y) with y := α(0) · · · α(k) and x := f(α)(0) · · · f(α)(n). Now, note that f⁻¹(ext(x)) contains α, and since it is open, Remark 62 tells us that there exists a prefix σ̂ ⊆ α with ext(σ̂) ⊆ f⁻¹(ext(x)). Letting k = |σ̂| yields the desired y.

(⇐) Since f⁻¹ preserves unions, open sets are all of the form ext(W) = ⋃_{x∈W} ext(x), and arbitrary unions of opens are open, it is enough to show that f⁻¹(ext(x)) is open for any x ∈ B*. Let α ∈ f⁻¹(ext(x)) and n = |x|; the hypothesis tells us that there exists y = α(0) · · · α(k) such that f⁻¹(ext(x)) ⊇ ext(y). Since α ∈ ext(y) and ext(y) is open, we conclude by Lemma 50 that f⁻¹(ext(x)) is open.

In other words, in order to determine a finite prefix of the image of α, we only


need to observe a finite prefix of α.
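As a concrete illustration (a sketch, not from the notes), consider the hypothetical stream function f(α)(n) = α(n) ∧ α(n + 1) on bit streams: it is continuous, and in the statement of Lemma 90 one can take k = n + 1, since the first n + 1 output letters only depend on the first n + 2 input letters.

```python
from itertools import count, islice

def f(alpha):
    """f(α)(n) = α(n) and α(n+1): each output letter needs one letter of
    lookahead, so f is continuous with modulus k = n + 1 in Lemma 90."""
    prev = next(alpha)
    for cur in alpha:
        yield prev & cur
        prev = cur

# A hypothetical input stream α = 1 1 0 1 1 0 ...
alpha = (1 if n % 3 != 2 else 0 for n in count())

# Observing 6 input letters determines the first 5 output letters.
print(list(islice(f(alpha), 5)))  # [1, 0, 0, 1, 0]
```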
Remark 91. This property is also important when we talk about “computable” functions on streams: they are always continuous. Consequently, a “decidable” property must have a continuous characteristic function (in the codomain, the booleans are equipped with the discrete topology ({0, 1}, {∅, {0}, {1}, {0, 1}})).

In light of this remark, given P ⊆ Aω with a continuous characteristic function χ_P, we infer that χ_P⁻¹(1) = P and χ_P⁻¹(0) = P^c are open, or equivalently, P is clopen. (Clopens are important because they form a Boolean algebra, as we will see formally later: ∅, X, A ∪ B, A ∩ B and A^c are clopen whenever A and B are clopen.) Notice now that the subbase we used to form the topology on ω-words, namely all the extensions of finite words, is very nice as it only contains clopen sets.

Proposition 92. Let A be a non-empty set and u ∈ A∗; then ext(u) is clopen.
Proof. We proceed by induction on the length of u. If u = ε, then ext(ε) = Aω, which is the whole space, so it is clopen.
Now, let u ∈ A∗ with ext(u) clopen; then for any a ∈ A, we have already seen that ext(u · a) is open. To see that it is closed, observe that

Aω \ ext(u · a) = (Aω \ ext(u)) ∪ ⋃_{b∈A, b≠a} ext(u · b),

which is open because the first term is open by the induction hypothesis and arbitrary unions of opens are open.

Corollary 93. If W ⊆ A∗ is finite, then ext(W) is clopen. (This follows because finite unions of closed sets are closed.)
However, the converse is not always true as shown in this example.

Example 94. Let A = N and consider P = ∪n>0 ext(n), it is open because each
term in the union is open, and it is closed because it is the complement of ext(0).
However, there is no finite set of prefixes W such that P = ext(W ).

To finish this section, we will show that the only obstruction to the converse is that A may be infinite. In order to do this, we will need to linger a little longer in the realm of topology.

Compactness
Definition 95. Let (X, ΩX) be a topological space and A ⊆ X. We say that a family {Ui}i∈I is an open cover of A if all the Ui's are open and they cover A, i.e.: A ⊆ ⋃_{i∈I} Ui. If J ⊆ I is such that A ⊆ ⋃_{j∈J} Uj, we say that {Uj}j∈J is a subcover. It is a finite subcover if J is finite.

Definition 96. Let (X, ΩX) be a topological space; then A ⊆ X is said to be compact if any open cover of A has a finite subcover. (We say that X is a compact topological space if it is a compact subset of itself.)
Fact 97. If A is infinite, then Aω is not compact.

Proof. Consider the open cover Aω = ⋃_{a∈A} ext(a). Any finite subfamily {ext(a)}_{a∈F} misses the words whose first letter is not in F, so there is no finite subcover.

Proposition 98. Let A ≠ ∅ be finite; then Aω is compact. (The general Tychonoff theorem, of which this is a very special case, is equivalent to the axiom of choice.)
Proof. Let {Ui}i∈I be an open cover of Aω. For any i ∈ I, let Vi ⊆ A∗ be such that Ui = ext(Vi) and define V = ⋃_{i∈I} Vi ⊆ A∗. Then, for n ∈ N, we inductively define Wn ⊆ A^n as follows.
For n = 0, we let W0 = {ε} if ε ∈ V, otherwise W0 is empty. Supposing all the Wi's are defined up to n, we let Wn+1 contain u ∈ A^{n+1} if and only if u ∈ V and u has no prefix in the previously defined Wi's.
Let W = ⋃_{n∈N} Wn; it satisfies two properties. Clearly, it is prefix-free (if u ∈ W, then no proper prefix of u is in W), and any word in V has a prefix in W: either this word was itself added to one of the Wn's, or it was not added because one of its prefixes already was. We conclude that Aω = ext(W): on the one hand Aω = ⋃_{i∈I} ext(Vi) = ext(V), and on the other hand ext(V) ⊆ ext(W). Moreover, we claim that W is finite, which finishes the proof: picking for each w ∈ W one Vi with w ∈ Vi yields finitely many Ui's covering ext(W) = Aω, that is, a finite subcover.
Assume towards a contradiction that W is infinite. Then T = {x ∈ A∗ | ∃w ∈ W, x ⊆ w} is infinite, as it contains W. Moreover, T is a subtree of the tree on A∗ in which u is the parent of v if and only if v = u · a for some a ∈ A. Since that tree is finitely branching (A is finite), T is too, and by König's lemma, T contains an infinite path. That is, there exists σ ∈ Aω such that for any n ∈ N, σ(0) · · · σ(n) is a prefix of some wn ∈ W. However, this contradicts the fact that W is prefix-free: since Aω = ext(W), some w ∈ W is a prefix of σ, and taking n ≥ |w|, w is then a proper prefix of wn ∈ W.
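The extraction of the prefix-free W from V in this proof is easy to run on small finite examples. Below is a Python sketch (with a made-up alphabet and a made-up V such that ext(V) = Aω): W consists of the words of V having no proper prefix in V, and the final assertion checks that ext(W) is everything by inspecting all words of the maximal relevant length.

```python
from itertools import product

def minimal_prefixes(V):
    """The prefix-free W ⊆ V: words of V with no proper prefix in V.
    Every word of V then has a prefix in W, so ext(V) ⊆ ext(W)."""
    return {v for v in V if not any(v[:k] in V for k in range(len(v)))}

A = "ab"                      # hypothetical finite alphabet
V = {"a", "ba", "bb", "bab"}  # hypothetical, chosen so that ext(V) = A^ω
W = minimal_prefixes(V)       # {"a", "ba", "bb"}

# ext(W) = A^ω iff every word of length max|w| has a prefix in W.
n = max(map(len, W))
assert all(any("".join(u)[:k] in W for k in range(n + 1))
           for u in product(A, repeat=n))
print(sorted(W))
```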

Lemma 99. If ( X, ΩX ) is a compact topological space and C ⊆ X is closed, then C is


compact.

Proof. Let {Ui}i∈I be an open cover of C; then adding C^c to this family yields an open cover of X. Thus it has a finite subcover which, after removing C^c, yields a finite subcover of {Ui}i∈I covering C.

Hausdorff Spaces
Definition 100. A topological space (X, ΩX) is Hausdorff (or T2, or separated) if for any x ≠ y ∈ X, there exist U, V ∈ ΩX such that x ∈ U, y ∈ V and U ∩ V = ∅.

Example 101. The space Aω with the usual topology is always Hausdorff: if α ≠ β ∈ Aω, then there is a finite index i where they disagree. Therefore, with x = α(0) · · · α(i) and y = β(0) · · · β(i), ext(x) and ext(y) are the desired separating sets.

Proposition 102. Let (X, ΩX) be a Hausdorff space and C ⊆ X be compact; then C is closed.

Proof. Let x ∉ C. For any y ∈ C, x and y are separated as in the Hausdorff definition by open sets Uy and Vy, with x ∈ Uy and y ∈ Vy. Note that {Vy}y∈C is an open cover of C, so by compactness there is a finite set I such that {Vyi}i∈I still covers C. But now ⋂_{i∈I} Uyi is a finite intersection of opens that contains x and cannot contain any point of C (for each i ∈ I, Uyi ∩ Vyi = ∅, hence the intersection of all the Uyi meets no Vyi, and the latter cover C). Thus, it is an open set disjoint from C that contains x. The proposition follows by Lemma 51.
Corollary 103. If A ≠ ∅ is finite, then in Aω the closed sets are exactly the compact sets; thus the clopen sets are exactly the open and compact sets. (By Proposition 98, Lemma 99 and Proposition 102.)

Corollary 104. If A ≠ ∅ is finite, then P ⊆ Aω is clopen if and only if P = ext(W) with W ⊆ A∗ finite.

Proof. (⇐) Corollary 93.
(⇒) Since P is open, P = ext(W) for some W ⊆ A∗; since P is also closed, it is compact by Corollary 103. Hence the open cover {ext(w)}w∈W of P has a finite subcover, i.e. there is a finite Wf ⊆ W with P = ⋃_{w∈Wf} ext(w) = ext(Wf).
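This gives the promised finite-observation reading of observable properties: to decide σ ∈ P = ext(W) with W finite, it suffices to read the first max_{w∈W} |w| letters of σ. A tiny Python sketch with a made-up W:

```python
W = {"ab", "b"}          # hypothetical finite W ⊆ A*
n = max(map(len, W))     # how many letters we ever need to observe

def member(prefix, W):
    """Decides σ ∈ ext(W) from the first n letters of σ (W finite)."""
    return any(prefix[:len(w)] == w for w in W)

print(member("abbb"[:n], W), member("aabb"[:n], W))  # True False
```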

Linear Temporal Logic (LTL)

Linear Modal Logic (LML)


This logic is really simple and based on the concept of observable properties we
have recently characterized through topological concepts. We will first describe the
syntax and semantics of LML.
In this section, we fix an infinite countable set of variables X = { X, Y, Z, . . . } and
a set of atomic propositions AP.

Definition 105. The formulas in LML (over AP) are given by the following grammar (all the usual connectives have the same semantics as before; the new connective ◯ is the linear part of this logic, read ◯φ as “next φ”, and its semantics will become clearer when we define its interpretation):

φ, ψ ::= > | ⊥ | X ∈ X | a ∈ AP | φ ∧ ψ | φ ∨ ψ | ¬φ | ◯φ.

Definition 106. 1. A valuation of a subset V ⊆ X is a function ρ : V → P((2AP)ω).

2. A formula with parameters is a pair (φ, ρ), where ρ is a valuation on V such that V contains all free variables of φ.

The interpretation of a formula φ with parameters ρ is an LTP JφKρ ∈ P((2AP)ω) defined inductively as follows (the intuition behind these clauses will be made clearer when we talk about satisfaction of a formula; we use the notation σ↾i as a shorthand for σ ◦ succ^i, that is, σ↾i(k) = σ(k + i) for any k ∈ N):

J>Kρ = (2AP)ω                     J⊥Kρ = ∅
JXKρ = ρ(X)                       JaKρ = {σ ∈ (2AP)ω | a ∈ σ(0)}
Jφ ∧ ψKρ = JφKρ ∩ JψKρ            Jφ ∨ ψKρ = JφKρ ∪ JψKρ
J¬φKρ = (2AP)ω \ JφKρ             J◯φKρ = {σ ∈ (2AP)ω | σ↾1 ∈ JφKρ}.

We also define some syntactic sugar for implications and equivalences, namely,

φ → ψ := ¬φ ∨ ψ   and   φ ↔ ψ := (φ → ψ) ∧ (ψ → φ).

Definition 107. We say that σ ∈ (2AP )ω satisfies (φ, ρ) if σ ∈ JφKρ .

Let us give two lemmas that are proven with a simple structural induction and
that will help us make other proofs more clear.

Lemma 108. Let ρ, ρ′ : V → P((2AP)ω) be such that ρ(X) = ρ′(X) for any free variable X in φ; then JφKρ = JφKρ′. (Proved by a simple structural induction on formulas.)
Lemma 109. Let φ and ψ be formulas with parameters ρ and X ∈ X not in ψ; then

JφKρ[JψKρ/X] = Jφ[ψ/X]Kρ.

(For parameters ρ, X ∈ X and A ⊆ (2AP)ω, the valuation ρ[A/X] acts as ρ on any variable except for X, where ρ[A/X](X) = A. For formulas φ and ψ and X ∈ X, φ[ψ/X] is the formula φ where all free occurrences of X have been replaced by ψ.)

If φ has no free variable, we say that it is closed. Moreover, we will then write JφK = JφKρ because, by the previous lemma, the interpretation of φ is independent of the choice of a valuation. For satisfaction, we write σ ⊨ φ when σ ∈ JφK.
The notion of satisfaction lets us give a nice intuition for the interpretation of formulas. For instance, any word should satisfy the formula >, and indeed, for any word σ ∈ (2AP)ω we have σ ∈ J>K, that is, σ ⊨ >. Similarly, no word can satisfy ⊥. The following list gives the reasoning for the other connectives (recall that we do not have to deal with variables X ∈ X, as we assumed the formula is closed). For any word σ:

• An ω-word satisfies an atomic proposition whenever at the first step, the proposition is true, i.e.: σ ⊨ a ⇔ a ∈ σ(0).

• An ω-word satisfies a conjunction of formulas if it satisfies both formulas, i.e.: σ ⊨ φ ∧ ψ ⇔ σ ⊨ φ and σ ⊨ ψ.

• An ω-word satisfies a disjunction of formulas if it satisfies either formula, i.e.: σ ⊨ φ ∨ ψ ⇔ σ ⊨ φ or σ ⊨ ψ.

• An ω-word satisfies the negation of a formula if it does not satisfy that formula, i.e.: σ ⊨ ¬φ ⇔ σ ⊭ φ.

• An ω-word satisfies the next of a formula if it satisfies that formula at the next time step, i.e.: σ ⊨ ◯φ ⇔ σ↾1 ⊨ φ.

Remark 110. The intuition for → and ↔ is missing from this list, but it can easily be recovered from their definitions. However, we note that we can use the classical definition of implication (as opposed to the intuitionistic one) because the interpretation of a formula is an element of the Boolean algebra on P((2AP)ω), where classical logic can always be done.

Definition 111. Given φ and ψ with all their variables in V, we say that φ and ψ are
logically equivalent, denoted φ ≡ ψ, if for any valuation ρ on V, JφKρ = JψKρ .

Example 112. We have the following for any formulas φ and ψ (these equivalences can all be easily proven by looking at the definition of the interpretation, or at the intuition given above):

◯(φ ∧ ψ) ≡ ◯φ ∧ ◯ψ        ◯> ≡ >
◯(φ ∨ ψ) ≡ ◯φ ∨ ◯ψ        ◯⊥ ≡ ⊥
◯¬φ ≡ ¬◯φ

Remark 113. These formulas are only the important ones involving the connective ◯; all the other equivalences of classical logic, such as De Morgan's laws, distributivity, etc., can also be shown.
Let us look at how LML relates to observable properties.

Proposition 114. If φ is a closed LML formula, then JφK ∈ P((2AP)ω) is clopen.

Proof. We proceed by structural induction on φ. The > and ⊥ cases are trivial, and since clopen sets are closed under binary unions and intersections and under complements, the connectives ∧, ∨ and ¬ are also taken care of.

Case a: We have JaK = ⋃{ext(A) | A ∈ 2AP, a ∈ A}. If AP is finite, then we are done because this is a finite union of clopen sets. Otherwise, we need to show that this union is closed. Let σ be such that a ∉ σ(0); then ext(σ(0)) is an open set containing σ that does not intersect JaK, and the case follows from Lemma 51.

Case ◯φ: By induction hypothesis, JφK is clopen; in particular it is open, so JφK = ext(W) for some W ⊆ (2AP)∗, which is finite when AP is finite (by Corollary 104). Moreover, unrolling the definition of J·K, we find

J◯φK = ⋃_{A∈2AP} ⋃_{w∈W} ext(A · w).

Hence, if AP is finite, this is a finite union of clopen sets and we are done. Otherwise, we need to show this union is closed. Let σ ∉ J◯φK; we have σ↾1 ∉ JφK. Since JφK is closed, there exists u ∈ (2AP)∗ such that σ↾1 ∈ ext(u) and ext(u) ∩ JφK = ∅ (we can assume that the open set separating σ↾1 from JφK is the extension of a single word u, because if it is ext(W′) for W′ ⊆ (2AP)∗, then we can pick one u ∈ W′ such that σ↾1 ∈ ext(u)); then ext(σ(0) · u) contains σ and does not intersect J◯φK.

Proposition 115. If AP is finite, then for any observable property P (clopen), there is a closed LML formula φ such that JφK = P.

Proof. When AP is finite, we have seen that there is a finite U ⊆ (2AP)∗ such that P = ext(U). We will show that for any u ∈ (2AP)∗, ext(u) is definable by a formula in LML (an LTP P is defined by φ if JφK = P). The result then follows from the fact that P is a finite union of such sets, and we can define finite unions with disjunctions of formulas.
First, for any A ∈ 2AP, we define

φ_A = (⋀_{a∈A} a) ∧ (⋀_{a∉A} ¬a).

We can see that σ ∈ Jφ_AK must satisfy a ∈ σ(0) ⇔ a ∈ A, that is, σ(0) = A. We conclude that ext(A) = Jφ_AK, so the extensions of single-letter words are definable.
Second, let u = An · · · A1 ∈ (2AP)∗; we proceed by induction to show that for any k ≤ n, ext(Ak · · · A1) is definable by a formula φk. The base case k = 0 is trivial because ext(ε) = J>K, so φ0 = >. Next, suppose we have JφkK = ext(Ak · · · A1); then we define φk+1 = φ_{Ak+1} ∧ ◯φk, since we can easily verify that

ext(Ak+1 · Ak · · · A1) = Jφ_{Ak+1} ∧ ◯φkK.

(Indeed, an ω-word σ satisfying this formula must have σ(0) = Ak+1, as argued above, and at the next step it must satisfy φk; namely, by induction hypothesis, σ↾1 ∈ ext(Ak · · · A1).)
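This inductive construction is easy to run mechanically. Here is a Python sketch (an illustration; formulas are plain strings just for display, and AP is a made-up finite set) building φ_u back to front, exactly as in the proof:

```python
AP = ["a", "b"]  # a hypothetical finite set of atomic propositions

def phi_letter(A):
    """φ_A: forces the first letter of the word to be exactly A ⊆ AP."""
    lits = [a if a in A else f"¬{a}" for a in AP]
    return "(" + " ∧ ".join(lits) + ")"

def phi_word(u):
    """φ_u with Jφ_uK = ext(u), via φ_{k+1} = φ_A ∧ ◯φ_k."""
    phi = "⊤"
    for A in reversed(u):  # build from the last letter of u outwards
        phi = f"({phi_letter(A)} ∧ ◯{phi})"
    return phi

print(phi_word([{"a"}, {"a", "b"}]))
# ((a ∧ ¬b) ∧ ◯((a ∧ b) ∧ ◯⊤))
```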

In the following example, we show that the proposition does not always hold when AP is infinite.

Example 116. Let AP = N and A = 2N ∈ 2AP (the even numbers). We know that ext(A) is observable, but there is no formula φ with JφK = ext(A). Indeed, assume towards a contradiction that such a φ exists. Without loss of generality, φ has no ◯ connective (indeed, ext(A) only restricts the first step of the LTPs it contains, so looking at the next steps with ◯ is useless: σ ∈ ext(A) if and only if σ(0) · τ ∈ ext(A) for all τ ∈ (2AP)ω, while a φ containing a non-trivial ◯ would not accept all such extensions of σ(0)). Hence we can write φ in DNF as

φ = ⋁_{i∈I} ⋀_{j∈Ji} λ_{i,j},

where the λ_{i,j}'s are atomic propositions n ∈ N or their negations ¬n. If σ ⊨ φ, then σ must satisfy one of the terms of the disjunction, wlog the first one, so σ ⊨ ⋀_{j∈J1} λ_{1,j}. However, if we let n be the greatest odd number appearing in the λ_{1,j}'s, the LTP (σ(0) ∪ {n + 2}) · σ↾1 still satisfies the same term, and hence φ as well. This contradicts the fact that JφK = ext(A): the first letter σ(0) ∪ {n + 2} contains the odd number n + 2, so it differs from A. (Note that φ is a finite formula, so we can expect it not to be effective at describing something infinitary, such as having all even numbers in the first step.)

LML with Fixed Points

LML's expressiveness is poor, as it only describes nice safety properties (the observable ones). In particular, the only liveness property it describes is the trivial one (recall Proposition 46). In order to remedy that, we will add two modalities: “eventually” (♦φ) and “always” (□φ). The interpretation of these new connectives is (an ω-word σ satisfies ♦φ if it satisfies φ at some step; it satisfies □φ if it satisfies φ at all steps)

J♦φKρ := {σ ∈ (2AP)ω | ∃i ∈ N, σ↾i ∈ JφKρ}

and

J□φKρ := {σ ∈ (2AP)ω | ∀i ∈ N, σ↾i ∈ JφKρ}.
Example 117. We give a few simple formulas and the intuition behind their interpretations. Fix a ∈ AP:

• The property defined by ♦a contains all ω-words for which a is true at some point in the word:

J♦aK = {σ ∈ (2AP)ω | ∃i ∈ N, a ∈ σ(i)}.

• The property defined by □a contains all ω-words for which a is true all the time:

J□aK = {σ ∈ (2AP)ω | ∀i ∈ N, a ∈ σ(i)}.

• The property defined by □♦a contains all ω-words for which, at any step, a will be true at some later point, or equivalently, a is true infinitely many times:

J□♦aK = {σ ∈ (2AP)ω | ∃^∞ i, a ∈ σ(i)}.

• The property defined by ♦□a contains all ω-words for which, at some step, a starts to be true forever:

J♦□aK = {σ ∈ (2AP)ω | ∀^∞ i, a ∈ σ(i)}.

One can show that J♦aK is an open liveness property, J□aK is a safety property, and both J□♦aK and J♦□aK are liveness properties that are neither open nor closed.
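On an ultimately periodic word u · v^ω these four properties are decidable by inspecting u and v. A small Python sketch (a hypothetical encoding where letters are sets of atomic propositions):

```python
def ev(u, v, a):          # ♦a : a holds at some position
    return any(a in A for A in u + v)

def alw(u, v, a):         # □a : a holds at every position
    return all(a in A for A in u + v)

def inf_often(u, v, a):   # □♦a : a holds infinitely often
    return any(a in A for A in v)

def ev_alw(u, v, a):      # ♦□a : a holds from some point on
    return all(a in A for A in v)

u, v = [set()], [{"a"}, {"a", "b"}]
print(ev(u, v, "a"), alw(u, v, "a"), inf_often(u, v, "b"), ev_alw(u, v, "a"))
# True False True True
```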

Lemma 118. We have the following logical equivalences:

♦φ ≡ ¬□¬φ        □φ ≡ ¬♦¬φ
♦φ ≡ φ ∨ ◯♦φ     □φ ≡ φ ∧ ◯□φ

Proof. Immediate from the definitions: ∃i is the negation of ∀i for the first line, and for the second line the witnessing index is either 0 or is found in the tail σ↾1.

In light of this, one could think of ♦φ as the infinitary property ⋁_{n∈N} ◯^n φ. However, syntactically, it is more complex to deal with infinite formulas. In order to use this intuition while avoiding such difficulties, we need to extend the logic with fixed points. (Recall that a fixed point x of a function f satisfies x = f(x).)
Lemma 119. Let φ be a formula with parameters ρ, φ♦(X) = φ ∨ ◯X and φ□(X) = φ ∧ ◯X, where X ∈ X does not occur in φ. Then (for an LML formula φ with parameters ρ and X ∈ X, we write JφKρ(X) for the function A ↦ JφKρ[A/X]):

1. J♦φKρ is the least fixed point of the function

Jφ♦Kρ(X) : P((2AP)ω) → P((2AP)ω) = A ↦ Jφ♦(X)Kρ[A/X].

2. J□φKρ is the greatest fixed point of the function

Jφ□Kρ(X) : P((2AP)ω) → P((2AP)ω) = A ↦ Jφ□(X)Kρ[A/X].

Proof. 1. First, we can show that J♦φKρ is a fixed point because (the first equality is Lemma 109, adapted to work with the ♦ connective, and the second is Lemma 118)

Jφ♦Kρ(J♦φKρ) = Jφ ∨ ◯♦φKρ = J♦φKρ.

Let P be another fixed point and σ ∈ (2AP)ω; we claim that if there exists i ∈ N such that σ↾i ∈ P, then σ ∈ P. Indeed, because Jφ♦Kρ(P) ⊆ P, we infer that

{σ ∈ (2AP)ω | σ↾1 ∈ P} = J◯XKρ[P/X] ⊆ P.

Our claim follows by the chain of implications σ↾i ∈ P ⇒ σ↾(i − 1) ∈ P ⇒ · · · ⇒ σ ∈ P. Moreover, for any σ ∈ J♦φKρ, there exists i ∈ N such that σ↾i ∈ JφKρ ⊆ Jφ♦Kρ(P) ⊆ P. Hence, we conclude that J♦φKρ ⊆ P.

2. First, similarly to above, J□φKρ is a fixed point because

Jφ□Kρ(J□φKρ) = Jφ ∧ ◯□φKρ = J□φKρ.

Let P be another fixed point and σ ∈ P; we will show that σ↾i ∈ JφKρ for any i ∈ N (it then follows that P ⊆ J□φKρ). We proceed by induction on i. Since

JφKρ ⊇ JφKρ[P/X] ∩ J◯XKρ[P/X] = Jφ□Kρ(P) = P,

we have σ ∈ JφKρ, covering the base case. Also, we have

P ⊆ J◯XKρ[P/X] = {σ ∈ (2AP)ω | σ↾1 ∈ P}.

In particular, σ↾i ∈ P ⇒ σ↾(i + 1) ∈ P, finishing the induction because we have just proven that any word in P is in JφKρ.

This result shows how the modal connectives ♦ and □ can be described by fixed points of functions defined using only connectives of LML. However, we have skimped over the small detail that the greatest or least fixed points of a function between posets might not always exist. (Consider the function f : (N, ≤) → (N, ≤) mapping odd n to n and even n to 0: every odd number is a fixed point, so f clearly has no greatest fixed point.) We have to introduce a bit of terminology and two results in order to show that the fixed points in Lemma 119 actually exist.

Definition 120. Let f : (L, ≤) → (L, ≤). A pre-fixpoint of f is an element a ∈ L such that f(a) ≤ a. A post-fixpoint is an element a ∈ L such that a ≤ f(a). A fixpoint (or fixed point) of f is a pre- and post-fixpoint.

Theorem 121 (Knaster-Tarski). Let (L, ∧, ∨, ≤) be a complete lattice and f : L → L be monotone; then (this is actually a weaker version of the Knaster-Tarski theorem, which states that the fixed points of f form a complete lattice):

1. The least fixpoint of f is µf := ⋀{a ∈ L | f(a) ≤ a}.

2. The greatest fixpoint of f is νf := ⋁{a ∈ L | a ≤ f(a)}.

Proof. 1. Any fixpoint of f is in particular a pre-fixpoint; thus µf, being a lower bound of the pre-fixpoints, is smaller than all fixpoints. Moreover, because f(µf) ≤ f(a) ≤ a for any pre-fixpoint a ∈ L, f(µf) is also a lower bound of the pre-fixpoints, so f(µf) ≤ µf. Then f(f(µf)) ≤ f(µf), so f(µf) is a pre-fixpoint and µf ≤ f(µf). We conclude that µf is a fixpoint.

2. Any fixpoint of f is in particular a post-fixpoint; thus νf, being an upper bound of the post-fixpoints, is bigger than all fixpoints. Moreover, because a ≤ f(a) ≤ f(νf) for any post-fixpoint a ∈ L, f(νf) is an upper bound of the post-fixpoints, so νf ≤ f(νf). Then f(νf) ≤ f(f(νf)), so f(νf) is a post-fixpoint and f(νf) ≤ νf. We conclude that νf is a fixpoint.
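On a finite powerset lattice, the two formulas of Theorem 121 can be evaluated directly by enumerating all pre- and post-fixpoints. Here is a Python sketch (the monotone map f is a made-up example):

```python
from itertools import combinations

BASE = frozenset(range(4))
L = [frozenset(c) for r in range(len(BASE) + 1)
     for c in combinations(BASE, r)]        # the complete lattice (P(BASE), ⊆)

def f(a):
    """A hypothetical monotone self-map of L."""
    return frozenset({0}) | (a & frozenset({1, 2}))

# Knaster-Tarski: µf = ∧{a | f(a) ≤ a} and νf = ∨{a | a ≤ f(a)}.
pre  = [a for a in L if f(a) <= a]           # pre-fixpoints
post = [a for a in L if a <= f(a)]           # post-fixpoints
mu = BASE.intersection(*pre)                 # meets in P(X) are intersections
nu = frozenset().union(*post)                # joins are unions

assert f(mu) == mu and f(nu) == nu           # both are indeed fixpoints
print(sorted(mu), sorted(nu))                # [0] [0, 1, 2]
```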

With this result, it is left to show that the functions Jφ♦Kρ(X) and Jφ□Kρ(X) are monotone (or antimonotone, as the greatest fixed point is the least fixed point in the dual poset).
Definition 122. Let φ be a formula in LML and X ∈ X. We define the relations X Pos φ and X Neg φ (read “X positive in φ” and “X negative in φ”; they intuitively correspond to the fact that X always appears under an even, resp. odd, number of negations in φ, and if X does not appear in φ, then both X Pos φ and X Neg φ hold) inductively by the following rules:

X Pos X;   X Pos Y (X ≠ Y);   X Pos a (a ∈ AP);   X Pos >;   X Pos ⊥;
if X Pos φ, then X Pos ◯φ;
if X Pos φ and X Pos ψ, then X Pos φ ∨ ψ and X Pos φ ∧ ψ;
if X Neg φ, then X Pos ¬φ;

X Neg Y (X ≠ Y);   X Neg a (a ∈ AP);   X Neg >;   X Neg ⊥;
if X Neg φ, then X Neg ◯φ;
if X Neg φ and X Neg ψ, then X Neg φ ∨ ψ and X Neg φ ∧ ψ;
if X Pos φ, then X Neg ¬φ.

Example 123. Both the formulas φ♦(X) and φ□(X) defined in Lemma 119 have X positive in them.

Lemma 124. Let (φ, ρ) be a formula with parameters and X ∈ X.

1. If X is positive in φ, then JφKρ(X) is monotone.

2. If X is negative in φ, then JφKρ(X) is antimonotone.

Proof. By a simple structural induction on φ, proving both items at once; the only interesting case is ¬φ, where the two items exchange roles.

Lemma 125. Let φ be a formula with parameters ρ, X ∈ X be positive in φ, and ψ = ¬φ[¬X/X]; then (recall that µf and νf are respectively the least and greatest fixpoints of the function f)

νJφKρ(X) = (µJψKρ(X))^c   and dually   µJφKρ(X) = (νJψKρ(X))^c.

Proof. First, by definition, we have JψKρ(A) = (JφKρ(A^c))^c. Moreover, we can write the following derivation (the first equality is the formula for least fixpoints in Theorem 121; the second is one of De Morgan's laws; the third re-indexes A as A^c; the fourth uses the first sentence of the proof; the fifth is a property of inclusions and complements; and the last is the formula for greatest fixpoints):

(2AP)ω \ µJψKρ(X) = (2AP)ω \ ⋂{A ⊆ (2AP)ω | JψKρ(A) ⊆ A}
                  = ⋃{A^c | A ⊆ (2AP)ω, JψKρ(A) ⊆ A}
                  = ⋃{A ⊆ (2AP)ω | JψKρ(A^c) ⊆ A^c}
                  = ⋃{A ⊆ (2AP)ω | (JφKρ(A))^c ⊆ A^c}
                  = ⋃{A ⊆ (2AP)ω | A ⊆ JφKρ(A)}
                  = νJφKρ(X).

Syntax and Semantics of LTL

Consider a formula θ with X Pos θ; we can write it (modulo logical equivalence) as

θ ≡ ψ ∨ ⋁_{i∈I} (φi ∧ ⋀_{j∈Ji} ◯^{n_{i,j}} X),

where ψ and the φi's have no occurrence of X. If we assume that n_{i,j} = 1 for all i ∈ I, j ∈ Ji, then θ has a simpler form, that is,

θ ≡ ψ ∨ ⋁_{i∈I} (φi ∧ ◯X) ≡ ψ ∨ (φ ∧ ◯X),

where ψ and φ = ⋁_{i∈I} φi do not contain X. The idea behind LTL is to add fixed points, as we did to obtain □ and ♦, but only for formulas θ of this form.

Formulas of LTL are given by the grammar:

φ, ψ ::= > | ⊥ | a ∈ AP | X ∈ X | φ ∧ ψ | φ ∨ ψ | ¬φ | ◯φ | φUψ.

The interpretation of LTL formulas is exactly the same as for LML formulas, extended with the modal connective φUψ (read φ until ψ; intuitively, there is a time i such that σ satisfies ψ at step i and φ at every step before i):

JφUψKρ = {σ ∈ (2AP)ω | ∃i ∈ N, σ↾i ∈ JψKρ and ∀j < i, σ↾j ∈ JφKρ}.

We also extend the ⊨ notation to LTL formulas, and we observe that σ ⊨ φUψ if and only if there exists i ∈ N such that σ↾i ⊨ ψ and, for any j < i, σ↾j ⊨ φ.
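This clause is directly executable on ultimately periodic words σ = u · v^ω (with v non-empty), which have only finitely many distinct suffixes. The following Python sketch is an illustration under that assumption (the tuple encoding of formulas is made up for the example); it relies on the observation that if σ↾i ⊨ φUψ, then a minimal witness for ψ occurs among the next |u| + |v| positions, since suffixes repeat with period |v| after position |u|.

```python
def sat(phi, u, v, i=0):
    """Decide σ↾i ⊨ φ for the ultimately periodic word σ = u · v^ω,
    where the letters of u and v are sets of atomic propositions."""
    def sigma(k):  # the k-th letter of u · v^ω
        return u[k] if k < len(u) else v[(k - len(u)) % len(v)]
    op = phi[0]
    if op == "true":  return True
    if op == "false": return False
    if op == "atom":  return phi[1] in sigma(i)
    if op == "not":   return not sat(phi[1], u, v, i)
    if op == "and":   return sat(phi[1], u, v, i) and sat(phi[2], u, v, i)
    if op == "or":    return sat(phi[1], u, v, i) or sat(phi[2], u, v, i)
    if op == "next":  return sat(phi[1], u, v, i + 1)
    if op == "until":  # a minimal witness lies within one period window
        return any(sat(phi[2], u, v, p)
                   and all(sat(phi[1], u, v, j) for j in range(i, p))
                   for p in range(i, i + len(u) + len(v)))
    raise ValueError(f"unknown connective {op}")

# σ = {a}{a} · ({a,b} ∅)^ω : here a U b holds (b first true at step 2).
u = [{"a"}, {"a"}]
v = [{"a", "b"}, set()]
print(sat(("until", ("atom", "a"), ("atom", "b")), u, v))  # True
print(sat(("atom", "b"), u, v))                            # False
```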

Fixed Points and Defined Modalities

Lemma 126. Let φ and ψ be formulas with parameters ρ and X ∈ X not in φ and ψ; then JφUψKρ is the least fixed point of JθKρ(X), where θ = ψ ∨ (φ ∧ ◯X).

Proof. First, it is a fixed point: if σ ⊨ ψ ∨ (φ ∧ ◯(φUψ)), then either σ ⊨ ψ, and i = 0 is a witness for σ ⊨ φUψ, or σ ⊨ φ ∧ ◯(φUψ), and i + 1 is the desired witness, where i is the witness of σ↾1 ⊨ φUψ. (The formal details, including leastness, are left to the reader.)
Thus, we have JφUψKρ = µJθKρ(X). Moreover, to express greatest fixed points, we have seen that

νJθKρ(X) = (µJ¬θ[¬X/X]Kρ(X))^c.

After some simplifications (¬θ[¬X/X] ≡ ¬(ψ ∨ (φ ∧ ◯¬X)) ≡ ¬ψ ∧ ¬(φ ∧ ◯¬X) ≡ ¬ψ ∧ (¬φ ∨ ◯X) ≡ (¬ψ ∧ ¬φ) ∨ (¬ψ ∧ ◯X)), we can write

µJ¬θ[¬X/X]Kρ(X) = J¬ψ U ¬(ψ ∨ φ)Kρ.

From now on, we will use the notation φWψ (read φ weak until ψ) as a shorthand:

φWψ := ¬(¬ψ U ¬(ψ ∨ φ)),

and the next lemma follows from our derivations.

Lemma 127. The property JφWψKρ is the greatest fixed point of JθKρ(X), where θ = ψ ∨ (φ ∧ ◯X).

Recall now that the formulas that lead to □ and ♦ as fixpoints fit in our simple case, and it is easy to see (intuitively, and with the fixpoint definitions) that

♦φ ≡ >Uφ   and   □φ ≡ φW⊥.

Therefore, by defining ♦ and □ with these equivalences within LTL, we recover their original semantics.

Lemma 128. For any LTL formula φ (the proofs are essentially an easy unrolling of the definitions of until and weak until),

J♦φKρ := J>UφKρ = {σ ∈ (2AP)ω | ∃i ∈ N, σ↾i ∈ JφKρ},

J□φKρ := JφW⊥Kρ = {σ ∈ (2AP)ω | ∀i ∈ N, σ↾i ∈ JφKρ}.

In order to motivate the restriction of fixed points to formulas where ◯ only appears once, at the leaves, we state the following fact without proof (the greatest fixpoint in question would be {σ ∈ (2AP)ω | ∀i ∈ N, a ∈ σ(2i)}).

Proposition 129. There is no closed LTL formula φ such that JφK is the greatest fixed point of θ(X) = a ∧ ◯◯X.

Lemma 130. For any φ and ψ, we have

φWψ ≡ (φUψ) ∨ □φ
¬(φWψ) ≡ ¬ψ U ¬(φ ∨ ψ)
¬(φUψ) ≡ ¬ψ W ¬(φ ∨ ψ)

Remark 131. Using the last two equivalences, we obtain a systematic way to push all the negations to the leaves. Unfortunately, this process leads to an exponential blow-up in the size of the formula. For this reason, there are variations of LTL that add a modal connective, called release, which makes these De Morgan-style laws blow-up-free.

Fixed Points and Continuity

Let us go back to studying LML formulas. We would like better ways to express fixed points than what is given by Knaster-Tarski. More precisely, let f : L → L be monotone on a complete lattice (L, ∧, ∨, ≤). Under certain conditions on f, we can show that

µf = ⋁_{n∈N} f^n(⊥)   and   νf = ⋀_{n∈N} f^n(>).
We introduce a bit more terminology before getting to this point.

Definition 132. Let (A, ≤) be a poset. A subset D ⊆ A is directed if it is non-empty and for any a, b ∈ D, there exists c ∈ D such that a, b ≤ c. A subset D ⊆ A is codirected if it is directed in the opposite order (namely, it is non-empty and for any a, b ∈ D, there exists c ∈ D such that c ≤ a, b).

Lemma 133. Let f : (A, ≤) → (B, ⪯) be a monotone function of posets. If D ⊆ A is (co)directed, then f(D) is also (co)directed. (It follows trivially that when f is antimonotone, the image of a directed set is codirected, and vice-versa.)

Proof. Let d, d′ ∈ f(D); there exist a, b ∈ D such that f(a) = d and f(b) = d′. Hence, if D is directed (resp. codirected), then there exists c ∈ D such that a, b ≤ c (resp. c ≤ a, b), and by monotonicity, f(a), f(b) ⪯ f(c) (resp. f(c) ⪯ f(a), f(b)) with f(c) ∈ f(D), so f(D) is directed (resp. codirected).

Definition 134. Let L and L′ be two complete lattices (we will overload the notations ≤, ∧, ∨, > and ⊥ to mean the usual things in both L and L′, making it clear in which lattice elements live). A function f : L → L′ is continuous (also Scott-continuous) if it is monotone and, for any directed D ⊆ L, f(⋁D) = ⋁f(D). It is cocontinuous if it is monotone and, for any codirected C ⊆ L, f(⋀C) = ⋀f(C).

Definition 135. A complete lattice L is a frame if for any a ∈ L and S ⊆ L,

a ∧ ⋁S = ⋁{a ∧ s | s ∈ S}.
Lemma 136. Let (D, ⊆) be a directed poset, (L, ≤) be a frame, and f, g : D → L be monotone; then

(⋁_{d∈D} f(d)) ∨ (⋁_{d∈D} g(d)) = ⋁_{d∈D} (f(d) ∨ g(d)),

and

(⋁_{d∈D} f(d)) ∧ (⋁_{d∈D} g(d)) = ⋁_{d∈D} (f(d) ∧ g(d)).

Proof. The first equality holds in any complete lattice: both sides equal ⋁(f(D) ∪ g(D)). For the second, ≥ holds by monotonicity of ∧; for ≤, applying the frame law twice gives (⋁_d f(d)) ∧ (⋁_e g(e)) = ⋁_{d,e} (f(d) ∧ g(e)), and for any d, e ∈ D, directedness yields c ∈ D with d, e ⊆ c, so f(d) ∧ g(e) ≤ f(c) ∧ g(c).

Corollary 137. Let X be a set and (D, ≤) be a poset. If (D, ≤) is directed and f, g : D → P(X) are monotone, then the two equalities of Lemma 136 hold with ∪ and ∩; dually, if (D, ≤) is codirected, they hold with the roles of unions and intersections (and of directed suprema and codirected infima) exchanged.

Proof. The powerset lattice P(X) and its opposite are both frames, so Lemma 136 and its dual apply.

Proposition 138. Let f : L → L be monotone.

• If f is continuous, then µf = ⋁_{n∈N} f^n(⊥).

• If f is cocontinuous, then νf = ⋀_{n∈N} f^n(>).

Proof. The chain {f^n(⊥)}_{n∈N} is increasing (by induction, since ⊥ ≤ f(⊥) and f is monotone), hence directed, so by continuity f(⋁_n f^n(⊥)) = ⋁_n f^{n+1}(⊥) = ⋁_n f^n(⊥); thus ⋁_n f^n(⊥) is a fixpoint. It is below any fixpoint a: from ⊥ ≤ a we get f^n(⊥) ≤ f^n(a) = a by induction, so it is the least one. The second item is dual.
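When L = P(X) for a finite set X, every monotone f is automatically continuous and cocontinuous, and Proposition 138 becomes the familiar iterative computation of fixpoints. A minimal Python sketch (reusing the same hypothetical monotone map as in the Knaster-Tarski example above):

```python
def lfp(f, bottom):
    """Kleene iteration: µf = ∨_n f^n(⊥), reached here in finitely many steps."""
    a = bottom
    while f(a) != a:
        a = f(a)
    return a

def gfp(f, top):
    """Dually: νf = ∧_n f^n(>)."""
    a = top
    while f(a) != a:
        a = f(a)
    return a

f = lambda a: frozenset({0}) | (a & frozenset({1, 2}))
print(sorted(lfp(f, frozenset())))           # [0]
print(sorted(gfp(f, frozenset(range(4)))))   # [0, 1, 2]
```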

Let φ be an LML formula with parameters ρ and X ∈ X such that X Pos φ; we can show that JφKρ(X) is continuous and cocontinuous. Therefore, we have

J♦φKρ = ⋃_{n∈N} J◯^n φKρ,

and dually,

J□φKρ = ⋂_{n∈N} J◯^n φKρ.

ω-Regular Properties

We will introduce a notion similar to regular languages (the ones recognized by a


DFA) that applies to linear time properties, namely, sets of ω-words.

Definition 139. Let Σ be a finite alphabet. An ω-regular expression on Σ has the form

G = E1 · F1^ω + · · · + Ek · Fk^ω,

where the Ei's and Fi's are regular expressions on Σ such that ε ∉ L(Fi) for any i. The language recognized by G is

Lω(G) := {σ ∈ Σω | ∃i ≤ k, ∃u ∈ L(Ei), ∃{vj}j∈N ⊆ L(Fi), σ = u v0 v1 · · · vn · · ·}.

A language L ⊆ Σω is ω-regular if L = Lω(G) for some ω-regular expression G.

Example 140. Let Σ = {a, b}.

• The language LΠ that satisfies σ ∈ LΠ ⇔ ∃^∞ t, σ(t) = a is given by Lω((b*a)^ω).

• The language LΣ that satisfies σ ∈ LΣ ⇔ ∀^∞ t, σ(t) = b is given by Lω(Σ* · b^ω).

Remark 141. The set of ω-regular languages on a fixed alphabet is clearly closed under finite union, because Lω(G) ∪ Lω(G′) = Lω(G + G′). We will also see that ω-regular languages are closed under finite intersection and complementation, and that any closed LTL formula defines an ω-regular language.
