PP Unit 5
13.1 Introduction
13.2 Introduction to Subprogram-Level Concurrency
13.3 Semaphores
13.4 Monitors
13.5 Message Passing
13.6 Ada Support for Concurrency
13.7 Java Threads
13.8 C# Threads
13.9 Concurrency in Functional Languages
13.10 Statement-Level Concurrency
Introduction
• Involves a different way of designing software that can be very useful—many real-world
situations involve concurrency
• Multiprocessor computers capable of physical concurrency are now widely used
Introduction to Subprogram-Level Concurrency
• A task or process is a program unit that can be in concurrent execution with other
program units
• Tasks differ from ordinary subprograms in that:
– A task may be implicitly started
– When a program unit starts the execution of a task, it is not necessarily suspended
– When a task’s execution is completed, control may not return to the caller
• Tasks usually work together
Two General Categories of Tasks
• Heavyweight tasks execute in their own address space
• Lightweight tasks all run in the same address space – more efficient
• A task is disjoint if it does not communicate with or affect the execution of any other task
in the program in any way
Task Synchronization
Cooperation synchronization
• Task B must wait for task A to complete some specific activity before task B can
continue its execution, e.g., the producer-consumer problem
Competition synchronization
• Two or more tasks must use some resource that cannot be simultaneously used, e.g., a
shared counter
– Competition synchronization is usually provided by mutually exclusive access (approaches are
discussed later)
Need for Competition Synchronization
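To see why, consider two tasks that both increment a shared counter: the read-increment-write sequence is not atomic, so concurrent updates can be lost. A minimal Java sketch of the race (our own illustration, not from the original notes):

public class RaceDemo {
    static int counter = 0;  // shared resource, no synchronization

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100000; i++) {
                counter++;  // read-modify-write is not atomic: updates can be lost
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Expected 200000, but typically prints less because of the race
        System.out.println(counter);
    }
}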
Scheduler
• Providing synchronization requires a mechanism for delaying task execution
• Task execution control is maintained by a program called the scheduler, which maps task
execution onto available processors
Task Execution States
• New - created but not yet started
• Ready - ready to run but not currently running (no available processor)
• Running - currently executing on a processor
• Blocked - has been running, but cannot now continue (usually waiting for some event to
occur)
• Dead - no longer active in any sense
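Semaphores
• A semaphore is a data structure consisting of a counter and a queue for storing task descriptors; its only operations are wait and release
• In the same pseudocode style as the release operation below, the wait operation (reconstructed here from the standard textbook treatment, since only release survives in these notes):
wait(aSemaphore)
if aSemaphore's counter > 0 then
decrement aSemaphore's counter
else
put the caller in aSemaphore's queue
attempt to transfer control to some ready task
(if the task-ready queue is empty, deadlock occurs)
end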
release(aSemaphore)
if aSemaphore’s queue is empty then
increment aSemaphore’s counter
else
put the calling task in the task ready queue
transfer control to a task from aSemaphore’s queue
end
Consumer Code
task consumer;
loop
wait(fullspots); {wait till not empty}
FETCH(VALUE);
release(emptyspots); {increase empty}
-- consume VALUE --
end loop;
end consumer;
Producer and Consumer Code (with competition synchronization; the producer, missing from these notes, follows the standard textbook version)
task producer;
loop
-- produce VALUE --
wait(emptyspots); {wait for a space}
wait(access); {wait for access}
DEPOSIT(VALUE);
release(access); {relinquish access}
release(fullspots); {increase filled spaces}
end loop;
end producer;

task consumer;
loop
wait(fullspots); {wait till not empty}
wait(access); {wait for access}
FETCH(VALUE);
release(access); {relinquish access}
release(emptyspots); {increase empty}
-- consume VALUE --
end loop;
end consumer;
Evaluation of Semaphores
• Misuse of semaphores can cause failures in cooperation synchronization, e.g., the buffer
will overflow if the producer's wait of emptyspots is left out
• Misuse of semaphores can cause failures in competition synchronization, e.g., the
program will deadlock if the release of access is left out
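As a concrete illustration, the producer-consumer discipline above can be written with java.util.concurrent.Semaphore (a sketch of our own, not part of the original notes; the buffer and its size are assumptions):

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class BoundedBuffer {
    static final int BUFLEN = 10;
    static final Queue<Integer> buffer = new LinkedList<>();
    static final Semaphore emptyspots = new Semaphore(BUFLEN); // cooperation
    static final Semaphore fullspots  = new Semaphore(0);      // cooperation
    static final Semaphore access     = new Semaphore(1);      // competition (mutex)

    static void deposit(int value) throws InterruptedException {
        emptyspots.acquire();   // wait(emptyspots)
        access.acquire();       // wait(access)
        buffer.add(value);
        access.release();       // release(access)
        fullspots.release();    // release(fullspots)
    }

    static int fetch() throws InterruptedException {
        fullspots.acquire();    // wait(fullspots)
        access.acquire();       // wait(access)
        int value = buffer.remove();
        access.release();       // release(access)
        emptyspots.release();   // release(emptyspots)
        return value;
    }
}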
Monitors
One solution to some of the problems of semaphores in a concurrent environment is to encapsulate
shared data structures with their operations and hide their representations, that is, to make shared
data structures abstract data types with some special restrictions.
• The idea: encapsulate the shared data and its operations to restrict access
• A monitor is an abstract data type for shared data
Competition Synchronization
• Shared data is resident in the monitor (rather than in the client units)
• All access is resident in the monitor
– The monitor implementation guarantees synchronized access by allowing only
one access at a time
– Calls to monitor procedures are implicitly queued if the monitor is busy at
the time of the call
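In Java, for example, a monitor can be approximated by a class whose data is private and whose methods are all synchronized, so at most one thread is active in the object at a time (a minimal sketch of our own):

public class CounterMonitor {
    private int count = 0;   // shared data lives inside the monitor

    // Only one thread at a time may be executing either method on a given
    // object; other callers are implicitly queued, as the monitor concept requires.
    public synchronized void increment() { count++; }
    public synchronized int value() { return count; }
}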
Cooperation Synchronization
• Cooperation between tasks is still the programmer's responsibility; for example, the programmer must guarantee that a shared buffer does not experience underflow or overflow
Evaluation of Monitors
• A better way to provide competition synchronization than semaphores
• Semaphores can be used to implement monitors, and monitors can be used to implement semaphores
• Support for cooperation synchronization is very similar to that with semaphores, so it has the same potential for error
Message Passing
• Message passing is a general model for concurrency
– It can model both semaphores and monitors
– It is not just for competition synchronization
• Central idea: task communication is like seeing a doctor--most of the time she
waits for you or you wait for her, but when you are both ready, you get together,
or rendezvous
Message Passing Rendezvous
• To support concurrent tasks with message passing, a language needs:
– A mechanism to allow a task to indicate when it is willing to accept messages
– A way to remember who is waiting to have its message accepted, and some fair way of choosing the next message
Task Body
• The task body describes the action that takes place when a rendezvous occurs
• A task that sends a message is suspended while waiting for the message to be
accepted and during the rendezvous
• Entry points in the spec are described with accept clauses in the body
The task executes to the top of the accept clause and waits for a message
During execution of the accept clause, the sender is suspended
The basic concept of synchronous message passing is that tasks are often busy, and when busy, they
cannot be interrupted by other units. Suppose task A and task B are both in execution, and A wishes to
send a message to B. Clearly, if B is busy, it is not desirable to allow another task to interrupt it; that
would disturb B's current processing. The alternative is to provide a linguistic mechanism that allows a
task to specify to other tasks when it is ready to receive messages.
If task A is waiting for a message at the time task B sends that message, the message can be transmitted.
This actual transmission of the message is called a rendezvous. Note that a rendezvous can occur only if
both the sender and receiver want it to happen. During a rendezvous, the information of the message can
be transmitted in either or both directions.
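As a rough modern analogy (our own sketch, not from the original notes), a rendezvous-style handoff can be written in Java with java.util.concurrent.SynchronousQueue, where put blocks until another task is ready to take:

import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> channel = new SynchronousQueue<>();

        Thread receiver = new Thread(() -> {
            try {
                // take() blocks until a sender arrives: the rendezvous point
                System.out.println("received: " + channel.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        channel.put("hello");   // blocks until the receiver is ready to accept
        receiver.join();
    }
}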
Both cooperation and competition synchronization of tasks can be conveniently handled with the
message-passing model, as described in the following section.
- Example (the body of the accept clause is reconstructed; only its end survived in these notes):
task body buf_task is
begin
loop
accept DEPOSIT(ITEM : in INTEGER) do
-- put ITEM in the buffer
end DEPOSIT;
end loop;
end buf_task;
In an Ada select construct, accept alternatives may be preceded by guards (when conditions):
Def: A clause whose guard is true is called open.
Def: A clause whose guard is false is called closed.
• A method that includes the synchronized modifier disallows any other method
from running on the object while it is in execution
…
public synchronized void deposit( int i) {…}
public synchronized int fetch() {…}
…
• The above two methods are synchronized, which prevents them from interfering
with each other
• If only a part of a method must be run without interference, it can be synchronized
through a synchronized statement
synchronized (expression)
statement
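For instance (a small sketch of our own; buf is an assumed shared list):

import java.util.List;

class BufferUser {
    private final List<Integer> buf;          // assumed shared structure
    BufferUser(List<Integer> buf) { this.buf = buf; }

    public void addToBuffer(int value) {
        // non-critical work may proceed without holding the lock
        synchronized (buf) {                  // lock only the critical section
            buf.add(value);
        }
    }
}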
Cooperation Synchronization with Java Threads
• Cooperation synchronization in Java is achieved with the wait, notify, and notifyAll methods, which are defined in Object and inherited by all classes
• These methods may be called only from within a synchronized method or statement
• wait should always be called in a loop that re-tests the condition, because a thread may be awakened for a different reason
• notify wakes one waiting thread; notifyAll awakens all threads waiting on the object
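A minimal one-slot buffer using these methods (our own sketch, not from the original notes):

public class SharedCell {
    private int value;
    private boolean full = false;

    public synchronized void deposit(int v) throws InterruptedException {
        while (full)        // wait in a loop, re-testing the condition
            wait();
        value = v;
        full = true;
        notifyAll();        // wake any consumers waiting for a value
    }

    public synchronized int fetch() throws InterruptedException {
        while (!full)
            wait();
        full = false;
        notifyAll();        // wake any producers waiting for space
        return value;
    }
}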
High-Performance Fortran
• Number of processors
!HPF$ PROCESSORS procs (n)
• Distribution of data
!HPF$ DISTRIBUTE (kind) ONTO procs :: identifier_list
– kind can be BLOCK (distribute data to processors in blocks) or CYCLIC
(distribute data to processors one element at a time)
• Relate the distribution of one array with that of another
ALIGN array1_element WITH array2_element
Exception Handling in Ada
• exception_choice form: exception_name | others
• Handlers are placed at the end of the block or unit in which they occur
Predefined Exceptions (Ada)
• CONSTRAINT_ERROR, NUMERIC_ERROR, PROGRAM_ERROR, STORAGE_ERROR, and TASKING_ERROR
Evaluation
• Ada was the only widely used language with exception handling until it was added
to C++
#include <iostream>
using namespace std;

int main()
{
    int a, b;
    cout << "enter a,b values: ";
    cin >> a >> b;
    try {
        if (b != 0)
            cout << "result is: " << (a / b);
        else
            throw b;
    }
    catch (int e)
    {
        cout << "divide by zero error occurred due to b = " << e;
    }
    return 0;
}
Exception Handling in Java
• Java exception handlers are like those of C++, except every catch requires a named
parameter and all parameters must be descendants of Throwable
• Syntax of try clause is exactly that of C++, except for the finally clause
• Exceptions are thrown with throw, as in C++, but often the throw includes the new
operator to create the object, as in: throw new MyException();
• The Java throws clause is quite different from the throw clause of C++
• Exceptions of class Error and RuntimeException and all of their descendants are
called unchecked exceptions; all other exceptions are called checked exceptions
• Checked exceptions that may be thrown by a method must be either:
– Listed in the throws clause, or
– Handled in the method
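For instance (a small sketch of our own; FileReader.read can throw the checked IOException):

import java.io.FileReader;
import java.io.IOException;

public class CheckedDemo {
    // Option 1: list the checked exception in the throws clause
    static int firstChar(String path) throws IOException {
        try (FileReader r = new FileReader(path)) {
            return r.read();
        }
    }

    // Option 2: handle the checked exception inside the method
    static int firstCharOrMinusOne(String path) {
        try (FileReader r = new FileReader(path)) {
            return r.read();
        } catch (IOException e) {
            return -1;
        }
    }
}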
finally clause
import java.io.*;
class Test
{
    public static void main(String args[]) throws IOException
    {
        int a[];
        DataInputStream dis = new DataInputStream(System.in);
        a = new int[5];
        for(int i = 0; i < 5; i++)
        {
            a[i] = Integer.parseInt(dis.readLine());
        }
        // displaying the values from the array; the loop bound 7 deliberately
        // runs past the end of the array, raising ArrayIndexOutOfBoundsException
        try {
            for(int i = 0; i < 7; i++)
            {
                System.out.println(a[i]);
            }
        }
        catch(Exception e)
        {
            System.out.println("The runtime error is : " + e);
        }
        finally
        {
            // executes whether or not the exception is raised
            System.out.println("100% will be executed");
        }
    }
}
O/P: 1 2 3 4 5 (values entered)
1 2 3 4 5 (array values printed, one per line)
The runtime error is : ArrayIndexOutOfBoundsException: 5
100% will be executed
Assertions
• An assertion is a statement declaring a boolean condition on the current state of the computation; if it evaluates to false, an AssertionError is thrown (syntax: assert condition; or assert condition : expression;)
Evaluation
• The types of exceptions make more sense than in the case of C++
• The throws clause is better than that of C++ (The throw clause in C++ says little to
the programmer)
• The finally clause is often useful
• The Java interpreter throws a variety of exceptions that can be handled by user
programs
Functional Languages
The imperative and functional models grew out of work undertaken by
mathematicians Alan Turing, Alonzo Church, Stephen Kleene, Emil Post, and others.
These individuals developed several very different formalizations of the notion of an
algorithm, or effective procedure, based on automata, symbolic manipulation,
recursive function definitions, and combinatorics.
Turing’s model of computing was the Turing machine, an automaton reminiscent of
a finite or pushdown automaton, but with the ability to access arbitrary cells of an
unbounded storage “tape.”
Church’s model of computing is called the lambda calculus.
It is based on the notion of parameterized expressions, with each parameter
introduced by an occurrence of the letter λ (hence the notation's name).
Lambda calculus was the inspiration for functional programming: one uses it to
compute by substituting parameters into expressions, just as one computes in a
high-level functional program by passing arguments to functions.
A constructive proof is one that shows how to obtain a mathematical object with
some desired property, and a nonconstructive proof is one that merely shows that
such an object must exist, perhaps by contradiction, or counting arguments, or
reduction to some other theorem whose proof is nonconstructive.
The logic programmer writes a set of axioms that allow the computer to discover a
constructive proof for each particular set of inputs.
Lambda Calculus
Lambda calculus is a constructive notation for function definitions. Any computable
function can be written as a lambda expression.
Computation amounts to macro substitution of arguments into the function
definition, followed by reduction to simplest form via simple and mechanical rewrite
rules.
The order in which these rules are applied captures the distinction between
applicative and normal-order evaluation.
Conventions on the use of certain simple functions (e.g., the identity function) allow
selection, structures, and even arithmetic to be captured as lambda expressions.
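As a tiny worked example of such rewriting, applying a squaring function to 3 proceeds by substituting the argument for the parameter (β-reduction):

\[ (\lambda x.\ x \times x)\ 3 \;\rightarrow\; 3 \times 3 \;\rightarrow\; 9 \]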
An Overview of Scheme
Bindings
Names can be bound to values by introducing a nested scope:
(let ((a 3)
      (b 4)
      (square (lambda (x) (* x x)))
      (plus +))
  (sqrt (plus (square a) (square b)))) ⇒ 5.0
The special form let takes two or more arguments; the first is a list of (name value) pairs, and the remaining arguments are expressions evaluated in a scope containing those bindings.
Scheme provides a special form called define that has the side effect of creating a
global binding for a name:
(define hypot
  (lambda (a b)
    (sqrt (+ (* a a) (* b b)))))
(hypot 3 4) ⇒ 5
Lisp included a self-definition of the language: code for a Lisp interpreter, based on
the functions eval and apply.
apply takes two arguments: a function and a list. It achieves the effect of calling the
function, with the elements of the list as arguments.
For primitive special forms that are built into the language implementation (lambda,
if, define, set!, quote, etc.), eval provides a direct implementation.
Formalizing Self-Definition: "self-definition" means that for all expressions E, we get
the same result by evaluating E under the interpreter I that we get by evaluating E
directly. If M is the function that maps each expression to its meaning, this can be written
M(I) = M
Now let H(F) = F(I), where F can be any function that takes a Scheme expression
as its argument. Clearly
H(M) = M(I) = M
Function M is said to be a fixed point of H. Since H is well defined, we can use it to
obtain a rigorous definition of M.
In the stream-based model of I/O, a purely functional program maps an input stream
to an output stream. When it needs an input value, function my_prog forces evaluation
of the car of input, and passes the cdr on to the rest of the program.
The language implementation repeatedly forces evaluation of the car of output, prints
it, and repeats:

(define driver
  (lambda (s)
    (if (null? s)
        '()   ; nothing left
        (begin
          (display (car s))
          (driver (cdr s))))))
(driver output)
Lazy evaluation would force things to happen in the proper order. The car of output
is the first prompt.
The cadr of output is the first square, a value that requires evaluation of the car of
input. The caddr of output is the second prompt.
Recent versions of Haskell employ a more general concept known as monads.
Monads are drawn from a branch of mathematics known as category theory, but one
doesn’t need to understand the theory to appreciate their usefulness in practice.
In Haskell, monads are essentially a clever use of higher-order functions, coupled
with a bit of syntactic sugar, that allow the programmer to chain together a sequence
of actions (function calls) that have to happen in order.
The following code calls random twice to illustrate its interface:

twoRandomInts gen =
    let (rand1, gen2) = random gen
        (rand2, gen3) = random gen2
    in ([rand1, rand2], gen3)
gen2, one of the return values from the first call to random, has been passed as an
argument to the second call.
Then gen3, one of the return values from the second call, is returned to main, where
it could, if we wished, be passed to another function.
This is particularly complicated for deeply nested functions.
Monads provide a more general solution to the problem of threading mutable state
through a functional program.
We use Haskell’s standard IO monad, which includes a random number generator:
twoMoreRandomInts = do
    rand1 <- randomIO
    rand2 <- randomIO
    return [rand1, rand2]
Assume fold combines the elements of a list with a binary function and an initial value (the definition is missing from these notes; a standard Scheme version is):

(define fold
  (lambda (f i l)
    (if (null? l)
        i
        (f (car l) (fold f i (cdr l))))))

Now (fold + 0 '(1 2 3 4 5)) gives us the sum of the first five natural numbers.
One of the most common uses of higher-order functions is to build new functions
from existing ones:
(define total (lambda (l) (fold + 0 l)))
(total '(1 2 3 4 5)) ⇒ 15

(define total-all (lambda (l) (map total l)))
(total-all '((1 2 3 4 5)
             (2 4 6 8 10)
             (3 6 9 12 15))) ⇒ (15 30 45)
Currying
A common operation, named for logician Haskell Curry, is to replace a
multiargument function with a function that takes a single argument and returns a
function that expects the remaining arguments:
(define curried-plus (lambda (a) (lambda (b) (+ a b))))
((curried-plus 3) 4) ⇒ 7

(define plus-3 (curried-plus 3))
(plus-3 4) ⇒ 7
Among other things, currying gives us the ability to pass a “partially applied”
function to a higher-order function.
We can write a general-purpose function that “curries” its (binary) function
argument:
(define curry (lambda (f) (lambda (a) (lambda (b) (f a b)))))
(((curry +) 3) 4) ⇒ 7

(define curried-plus (curry +))
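For comparison, the same idea can be written in a statically typed language; a sketch of our own using Java's java.util.function.Function:

import java.util.function.Function;

public class CurryDemo {
    public static void main(String[] args) {
        // curried-plus: takes a, returns a function expecting b
        Function<Integer, Function<Integer, Integer>> curriedPlus =
            a -> b -> a + b;

        Function<Integer, Integer> plus3 = curriedPlus.apply(3); // partial application
        System.out.println(plus3.apply(4));                 // 7
        System.out.println(curriedPlus.apply(3).apply(4));  // 7
    }
}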
Logic Languages
Prolog and other logic languages are based on first-order predicate calculus
Logic Programming Concepts
Logic programming systems allow the programmer to state a collection of axioms
from which theorems can be proven.
The user of a logic program states a theorem, or goal, and the language
implementation attempts to find a collection of axioms and inference steps (including
choices of values for variables) that together imply the goal.
In almost all logic languages, axioms are written in a standard form known as a Horn
clause.
A Horn clause consists of a head, or consequent term H, and a body consisting of
terms Bi :
H ← B1, B2, . . . , Bn
The semantics of this statement are that when the Bi are all true, we can deduce that
H is true as well.
We say “H, if B1, B2, . . . , and Bn.”
A logic programming system combines existing statements, canceling like terms,
through a process known as resolution.
If we know that A and B imply C, for example, and that C implies D, we can deduce
that A and B imply D:

C ← A, B
D ← C
D ← A, B
In general, terms like A, B, C, and D may consist not only of constants ("Rochester
is rainy") but also of predicates applied to atoms or to variables:
rainy(Rochester), rainy(Seattle), rainy(X).
During resolution, free variables may acquire values through unification with
expressions in matching terms, much as variables acquire types in ML:

flowery(X) ← rainy(X)
rainy(Rochester)
flowery(Rochester)
Prolog
A Prolog interpreter runs in the context of a database of clauses (Horn clauses) that
are assumed to be true.
Each clause is composed of terms, which may be constants, variables, or structures.
?- a = a.
Yes % constant unifies with itself
?- a = b.
No % but not with another constant
?- foo(a, b) = foo(a, b).
Yes % structures are recursively identical
It is possible for two variables to be unified without instantiating them. If we type
?- A = B.
the interpreter will simply respond A = B
Suppose we are given the following rules:
takes_lab(S) :- takes(S, C), has_lab(C).
has_lab(D) :- meets_in(D, R), is_lab(R).
S takes a lab class if S takes C and C is a lab class. Moreover, D is a lab class if D
meets in room R and R is a lab.

Lists
List manipulation is a sufficiently common operation in Prolog to warrant its own
notation.
The construct [a, b, c] is syntactic sugar for the structure .(a, .(b, .(c, []))), where []
is the empty list and . is a built-in cons-like predicate.
[a, b, c] could be expressed as [a | [b,c]], [a, b | [c]], or [a, b, c | []].
The vertical-bar notation is particularly handy when the tail of the list is a variable:
member(X, [X | _]).
member(X, [_ | T]) :- member(X, T).

sorted([]). % empty list is sorted
sorted([A, B | T]) :- A =< B, sorted([B | T]).
% a compound list is sorted if its first two elements are in order and
% the remainder of the list, after the first element, is sorted
Here =< is a built-in predicate that operates on numbers.
The underscore is a placeholder for a variable that is not needed anywhere else in the
clause. Note that [a, b | c] is the improper list .(a, .(b, c)).
Given,
append([], A, A).
append([H | T], A, [H | L]) :- append(T, A, L).

we can type

?- append([a, b, c], [d, e], L).
L = [a, b, c, d, e]

?- append(X, [d, e], [a, b, c, d, e]).
X = [a, b, c]
Arithmetic
The usual arithmetic operators are available in Prolog, but they play the role of
predicates, not of functions.
Thus +(2, 3), which may also be written 2 + 3, is a two argument structure. It will not
unify with 5:
?- (2 + 3) = 5.
No
Prolog provides a built-in predicate, is, that unifies its first argument with the
arithmetic value of its second argument:
?- is(X, 1+2).
X = 3
?- X is 1+2.
X = 3 % infix is also ok
?- 1+2 is 4-1.
No % first argument (1+2) is already instantiated.
Search/Execution Order
We can imagine two principal search strategies:
Start with existing clauses and work forward, attempting to derive the goal. This
strategy is known as forward chaining.
Start with the goal and work backward, attempting to “unresolve” it into a set of
preexisting clauses. This strategy is known as backward chaining.
The Prolog interpreter (or program) explores this tree depth first, from left to right.
It starts at the beginning of the database, searching for a rule R whose head can be
unified with the top-level goal. It then considers the terms in the body of R as
subgoals, and attempts to satisfy them, recursively, left to right.
The process of returning to previous goals is known as backtracking. It strongly
resembles the control flow of generators in Icon.

rainy(seattle).
rainy(rochester).
cold(rochester).
snowy(X) :- rainy(X), cold(X).
Fig: Backtracking search in Prolog. An OR level consists of alternative database
clauses whose head will unify with the subgoal above; one of these must be satisfied.
The notation _C = _X is meant to indicate that while both C and X are
uninstantiated, they have been associated with one another in such a way that if
either receives a value in the future it will be shared by both.
The binding of X to seattle is broken when we backtrack to the rainy(X) subgoal.
The effect is similar to the breaking of bindings between actual and formal
parameters in an imperative programming language.
The interpreter pushes a frame onto its stack every time it begins to pursue a new
subgoal G.
If G succeeds, control returns to the “caller” (the parent in the search tree), but G’s
frame remains on the stack.
Later subgoals will be given space above this dormant frame.
Suppose for example that we have a database describing a directed acyclic graph:
edge(a, b). edge(b, c). edge(c, d).
edge(d, e). edge(b, e). edge(d, f).

path(X, X).
path(X, Y) :- edge(Z, Y), path(X, Z).
The last two clauses tell us how to determine whether there is a path from node X to
node Y.
If we were to reverse the order of the terms on the right-hand side of the final clause,
then the Prolog interpreter would search for a node Z that is reachable from X before
checking to see whether there is an edge from Z to Y.
path(X, Y) :- path(X, Z), edge(Z, Y).
path(X, X).
The interpreter first unifies path(a, a) with the left-hand side of path(X, Y):- path(X,
Z), edge(Z, Y).
It then considers the goals on the right-hand side, the first of which (path(X, Z)),
unifies with the left-hand side of the very same rule, leading to an infinite regression.
Let us use the Prolog fact x(n) to indicate that player X has placed a marker in square
n, and o(m) to indicate that player O has placed a marker in square m.
Issue a query ?- move(A) that will cause the Prolog interpreter to choose a good
square A for the computer to occupy next.
Clearly we need to be able to tell whether three given squares lie in a row. One way
to express this is:
ordered_line(1, 2, 3). ordered_line(4, 5, 6).
ordered_line(7, 8, 9). ordered_line(1, 4, 7).
ordered_line(2, 5, 8). ordered_line(3, 6, 9).
ordered_line(1, 5, 9). ordered_line(3, 5, 7).
line(A, B, C) :- ordered_line(A, B, C).
line(A, B, C) :- ordered_line(A, C, B).
(similar clauses cover the remaining orderings of A, B, and C)
The following rules work well:

move(A) :- good(A), empty(A).

full(A) :- x(A).
full(A) :- o(A).
empty(A) :- \+(full(A)).

% strategy:
good(A) :- win(A).
good(A) :- block_win(A).
good(A) :- split(A).
good(A) :- strong_build(A).
good(A) :- weak_build(A).
The initial rule indicates that we can satisfy the goal move(A) by choosing a good,
empty square. The \+ is a built-in predicate that succeeds if its argument (a goal)
cannot be proven.
If none of these goals can be satisfied, our final, default choice is to pick an
unoccupied square, giving priority to the center, the corners, and the sides in that
order:
good(5).
good(1). good(3). good(7). good(9).
good(2). good(4). good(6). good(8).
A collection of Horn clauses, such as the facts and rules of a Prolog database,
constitutes a list of things assumed to be true.
It does not include any things assumed to be false.
This reliance on purely “positive” logic implies that Prolog’s \+ predicate is different
from logical negation.
Unless the database is assumed to contain everything that is true (this is the closed
world assumption), the goal \+(T) can succeed simply because our current
knowledge is insufficient to prove T.
Negation in Prolog occurs outside any implicit existential quantifiers on the right-
hand side of a rule. Thus
?- \+(takes(X, his201)).
asks whether there is no X who takes his201, not whether there is some X who does
not take his201.
Logical limitations of Prolog
Prolog can do many things, but it has some fundamental logical weaknesses; two of
them are illustrated here.
Prolog doesn't allow "or"ed (disjunctive) facts or conclusions, that is, statements that
one of several things is true, but you don't know which.
For instance, if a light does not come on when we turn on its switch, we can
conclude that either the bulb is burned out or the power is off or the light is
disconnected.
Prolog doesn't allow "not" (negative) facts or conclusions, that is, direct statements
that something is false.
For instance, if a light does not come on when we turn on its switch, but another light
in the same room comes on when we turn on its switch, we can conclude that it is
false that there is a power failure.