PPL Unit 3-1
Abstraction:
- Nearly all programming languages designed since 1980 have supported data abstraction with some
kind of module
Encapsulation:
- Original motivation:
1. Some means of organization, other than simply division into subprograms
2. Some means of partial compilation (compilation units that are smaller than the whole program)
- Obvious solution: a grouping of subprograms that are logically related into a unit that can be
separately compiled
- Examples:
1. Nested subprograms in some ALGOL-like languages (e.g., Pascal)
2. FORTRAN 77 and C - Files containing one or more subprograms can be independently compiled
3. FORTRAN 90, C++, Ada (and other contemporary languages) - separately compilable modules
Definitions: An abstract data type is a user-defined data type that satisfies the following two conditions:
Definition 1: The representation of, and operations on, objects of the type are defined in a single
syntactic unit; also, other units can create objects of the type.
Definition 2: The representation of objects of the type is hidden from the program units that use
these objects, so the only operations possible are those provided in the type's definition.
Advantage of Definition 1:
- Program organization, modifiability (everything associated with a data structure is together),
and separate compilation
Advantage of Definition 2:
- Reliability--by hiding the data representations, user code cannot directly access objects
of the type or depend on the representation, allowing the representation to be changed
without affecting user code.
- User-defined abstract data types must have the same characteristics as built-in abstract data
types. Language requirements:
1. A syntactic unit in which to encapsulate the type definition
2. A method of making type names and subprogram headers visible to clients, while hiding
actual definitions
3. Some primitive operations must be built into the language processor (usually just assignment
and comparisons for equality and inequality)
- Some operations are commonly needed, but must be defined by the type designer
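These requirements can be sketched in Java (the Counter class and its operations are illustrative names, not from the text): the representation and operations sit in one syntactic unit, and the representation is hidden from clients.

```java
// Illustrative sketch of an ADT in Java: the representation (value) is
// hidden, and the operations are defined in the same syntactic unit.
class Counter {
    private int value = 0;            // hidden representation

    void increment() { value++; }     // operation provided by the type
    int current()    { return value; }
}
```

Client code can create Counter objects and call increment and current, but cannot touch value directly, so the representation could later change without affecting user code.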
Language Examples:
1. Simula 67
2. Ada
- Information Hiding
- Representation of an exported hidden type is specified in a special invisible (to clients) part of
the spec package (the private clause), as in:
package … is
type NODE_TYPE is private;
…
private
type NODE_TYPE is
record
…
end record;
end …;
- A spec package can also define an unhidden type simply by providing the representation outside a
private clause
1. The compiler must be able to see the representation after seeing only the spec package (the
compiler can see the private clause)
2. Clients must see the type name, but not the representation (clients cannot see the private clause)
• C++, Ada, Java 5.0, and C# 2005 provide support for parameterized ADTs
Parameterized ADTs in Ada
• Ada Generic Packages
–Make the stack type more flexible by making the element type and the size of the stack generic
generic
Max_Size: Positive;
type Elem_Type is private;
package Generic_Stack is
type Stack_Type is limited private;
function Top(Stk: in out Stack_Type) return Elem_Type;
…
end Generic_Stack;
package Integer_Stack is new Generic_Stack(100, Integer);
package Float_Stack is new Generic_Stack(100, Float);
Parameterized ADTs in C++
• Classes can be somewhat generic by writing parameterized constructor functions
class stack {
…
stack(int size) {
stk_ptr = new int[size];
max_len = size - 1;
top = -1;
}
…
};
stack stk(100);
• The stack element type can be parameterized by making the class a template class
template <class Type>
class stack {
private:
Type *stackPtr;
const int maxLen;
int topPtr;
public:
stack() : maxLen(99) {   // const member must be set in the initializer list
stackPtr = new Type[100];
topPtr = -1;
}
…
};
Parameterized Classes in Java 5.0
• Generic parameters must be classes
• Most common generic types are the collection types, such as LinkedList and ArrayList
• Eliminate the need to cast objects that are removed
• Eliminate the problem of having multiple types in a structure
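A hedged sketch of such a parameterized class (GenStack is an invented name, backed here by an ArrayList rather than any particular textbook implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative generic stack: the element type T is a class-valued generic
// parameter, so values come back fully typed and no cast is needed.
class GenStack<T> {
    private final List<T> items = new ArrayList<>();

    void push(T item) { items.add(item); }

    T pop() { return items.remove(items.size() - 1); } // typed result, no cast

    boolean empty() { return items.isEmpty(); }
}
```

With GenStack&lt;Integer&gt;, pushing a String is a compile-time error, which is how generics eliminate the multiple-types-in-a-structure problem.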
Parameterized Classes in C# 2005
• Similar to those of Java 5.0
• Elements of parameterized structures can be accessed through indexing
• The concept of ADTs and their use in program design was a milestone in the development of
languages
• Two primary features of ADTs are the packaging of data with their associated operations and
information hiding
• Ada provides packages that simulate ADTs
• C++ data abstraction is provided by classes
• Java's data abstraction is similar to C++'s
• Ada, C++, Java 5.0, and C# 2005 support parameterized ADTs
Object-Oriented Programming
• Abstract data types
• Inheritance
–Inheritance is the central theme in OOP and languages that support it
• Polymorphism
Inheritance
• Productivity increases can come from reuse
–ADTs are difficult to reuse—always need changes
–All ADTs are independent and at the same level
• Inheritance allows new classes defined in terms of existing ones, i.e., by allowing them to inherit
common parts
• Inheritance addresses both of the above concerns--reuse ADTs after minor changes and define classes
in a hierarchy
Object-Oriented Concepts
• ADTs are usually called classes
• Class instances are called objects
• A class that inherits is a derived class or a subclass
• The class from which another class inherits is a parent class or super class
• Subprograms that define operations on objects are called methods
• Calls to methods are called messages
• The entire collection of methods of an object is called its message protocol or message interface
• Messages have two parts--a method name and the destination object
• In the simplest case, a class inherits all of the entities of its parent
• An abstract method is one that does not include a definition (it only defines a protocol)
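The terms above can be illustrated with a small Java sketch (Shape and Square are illustrative names): Shape is the parent (super) class, Square a derived class (subclass), and area an abstract method that only defines a protocol until the subclass supplies a definition.

```java
// Parent (super) class with an abstract method: protocol only, no definition.
abstract class Shape {
    abstract double area();
}

// Derived class (subclass): inherits from Shape and defines the method.
class Square extends Shape {
    private final double side;   // hidden representation

    Square(double side) { this.side = side; }

    @Override
    double area() { return side * side; }
}
```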
Categories of Concurrency:
1. Physical concurrency - Multiple independent processors ( multiple threads of control)
2. Logical concurrency - The appearance of physical concurrency is presented by time sharing
one processor (software can be designed as if there were multiple threads of control)
- Coroutines provide only quasi-concurrency
Kinds of synchronization:
1. Cooperation
- Task A must wait for task B to complete some specific activity before task A can continue its
execution
e.g., the producer-consumer problem
2. Competition
- When two or more tasks must use some resource that cannot be simultaneously used
e.g., a shared counter
- A problem because operations are not atomic
- Competition synchronization is usually provided by mutually exclusive access (methods are discussed later)
- Providing synchronization requires a mechanism for delaying task execution
- Task execution control is maintained by a program called the scheduler, which maps task
execution onto available processors
- Tasks can be in one of several different execution states:
1. New - created but not yet started
2. Runnable or ready - ready to run but not currently running (no available processor)
3. Running
4. Blocked - has been running, but cannot continue (usually waiting for some event to
occur)
5. Dead - no longer active in any sense
Liveness is a characteristic that a program unit may or may not have
- In sequential code, it means the unit will eventually complete its execution
- In a concurrent environment, a task can easily lose its liveness
- If all tasks in a concurrent environment lose their liveness, it is called deadlock
- Design Issues for Concurrency:
1. How is cooperation synchronization provided?
2. How is competition synchronization provided?
3. How and when do tasks begin and end execution?
4. Are tasks statically or dynamically created?
Example: A buffer and some producers and some consumers
Technique: Attach two SIGNAL objects to the buffer, one for full spots and one for empty spots
Methods of Providing Synchronization:
1. Semaphores
2. Monitors
3. Message Passing
1. Semaphores (Dijkstra - 1965)
- A semaphore is a data structure consisting of a counter and a queue for storing task descriptors
- Semaphores can be used to implement guards on the code that accesses shared data structures
- Semaphores have only two operations, wait and release (originally called P and V by Dijkstra)
- Semaphores can be used to provide both competition and cooperation synchronization
Cooperation Synchronization with Semaphores:
Example: A shared buffer
- The buffer is implemented as an ADT with the operations DEPOSIT and FETCH as the only
ways to access the buffer.
Use two semaphores for cooperation:
Empty spots and full spots
- The semaphore counters are used to store the numbers of empty spots and full spots in the buffer
- DEPOSIT must first check empty spots to see if there is room in the buffer
- If there is room, the counter of empty spots is decremented and the value is inserted
- If there is no room, the caller is stored in the queue of empty spots
- When DEPOSIT is finished, it must increment the counter of full spots
- FETCH must first check full spots to see if there is a value
- If there is a full spot, the counter of full spots is decremented and the value is removed
- If there are no values in the buffer, the caller must be placed in the queue of full spots
- When FETCH is finished, it increments the counter of empty spots
- The operations of FETCH and DEPOSIT on the semaphores are accomplished through two
semaphore operations named wait and release:
wait(aSemaphore)
if aSemaphore's counter > 0 then
Decrement aSemaphore's counter
else
Put the caller in aSemaphore's queue
Attempt to transfer control to some ready task
(If the task ready queue is empty, deadlock occurs)
end
release(aSemaphore)
if aSemaphore's queue is empty then
Increment aSemaphore's counter
else
Put the calling task in the task ready queue
Transfer control to a task from aSemaphore's queue
end
- Competition Synchronization with Semaphores:
- A third semaphore, named access, is used to control access (competition synchronization)
- The counter of access will only have the values 0 and 1
- Such a semaphore is called a binary semaphore
- Note that wait and release must be atomic!
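As a sketch of the complete shared-buffer example, here is one possible rendering in Java using java.util.concurrent.Semaphore (class and field names are illustrative; acquire plays the role of wait in the pseudocode above):

```java
import java.util.concurrent.Semaphore;

// Shared buffer with two cooperation semaphores (emptySpots, fullSpots)
// and one binary competition semaphore (access), as described above.
class SharedBuffer {
    private final int[] buf;
    private int in = 0, out = 0;
    private final Semaphore emptySpots;                 // counts empty spots
    private final Semaphore fullSpots;                  // counts full spots
    private final Semaphore access = new Semaphore(1);  // binary semaphore

    SharedBuffer(int size) {
        buf = new int[size];
        emptySpots = new Semaphore(size);
        fullSpots = new Semaphore(0);
    }

    void deposit(int value) {
        emptySpots.acquireUninterruptibly();  // wait(emptyspots)
        access.acquireUninterruptibly();      // wait(access)
        buf[in] = value;
        in = (in + 1) % buf.length;
        access.release();                     // release(access)
        fullSpots.release();                  // release(fullspots)
    }

    int fetch() {
        fullSpots.acquireUninterruptibly();   // wait(fullspots)
        access.acquireUninterruptibly();      // wait(access)
        int value = buf[out];
        out = (out + 1) % buf.length;
        access.release();                     // release(access)
        emptySpots.release();                 // release(emptyspots)
        return value;
    }
}
```

Leaving out fullSpots.release() in deposit, or access.release() in either method, reproduces exactly the cooperation and competition failures that misuse of semaphores can cause.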
Evaluation of Semaphores:
1. Misuse of semaphores can cause failures in cooperation synchronization
e.g., the buffer will overflow if the wait of full spots is left out
2. Misuse of semaphores can cause failures in competition synchronization
e.g., The program will deadlock if the release of access is left out
2. Monitors (Concurrent Pascal, Modula, Mesa)
- The idea: encapsulate the shared data and its operations to restrict access
- A monitor is an abstract data type for shared data
- Example language: Concurrent Pascal
- Concurrent Pascal is Pascal + classes, processes (tasks), monitors, and the queue data type (for
semaphores)
- Processes are types; instances are statically created by declarations
- An instance is "started" by init, which allocates its local data and begins its execution
- Monitors are also types. Form:
type some_name = monitor (formal parameters)
shared variables
local procedures
exported procedures (have entry in definition)
initialization code
- delay takes a queue type parameter; it puts the process that calls it in the specified queue and
removes its exclusive access rights to the monitor’s data structure
- Differs from a semaphore wait because delay always blocks the caller
- continue takes a queue type parameter; it disconnects the caller from the monitor, thus freeing the
monitor for use by another process.
- It also takes a process from the parameter queue (if the queue isn't empty) and starts it. Differs from
release because it always has some effect (release does nothing if the queue is empty)
Java Threads
• The concurrent units in Java are methods named run
– A run method code can be in concurrent execution with other such methods
– The process in which the run methods execute is called a thread
class MyThread extends Thread {
public void run() {…}
}
…
Thread myTh = new MyThread();
myTh.start();
Controlling Thread Execution
• The Thread class has several methods to control the execution of threads
– The yield method is a request from the running thread to voluntarily surrender the processor
– The sleep method can be used by the caller of the method to block the thread
– The join method is used to force a method to delay its execution until the run method of another thread
has completed its execution
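A small illustrative sketch (the Summer class is invented for this example): start launches concurrent execution of run, and join makes the caller wait until that run method completes.

```java
// Thread whose run method sums 1..n; start() runs it concurrently and
// join() lets the caller wait until run has finished.
class Summer extends Thread {
    private final int n;
    long sum;

    Summer(int n) { this.n = n; }

    @Override
    public void run() {
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
    }
}
```

After s.start() and s.join(), reading s.sum is safe because join guarantees run has completed.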
Thread Priorities
• A thread's default priority is the same as that of the thread that created it
– If main creates a thread, its default priority is NORM_PRIORITY
• The Thread class defines two other priority constants, MAX_PRIORITY and MIN_PRIORITY
• The priority of a thread can be changed with the method setPriority
Cooperation Synchronization with Java Threads
• Cooperation synchronization in Java is achieved via wait, notify, and notifyAll methods
– All methods are defined in Object, which is the root class in Java, so all objects inherit them
• The wait method must be called in a loop
• The notify method is called to tell one waiting thread that the event it was waiting for has happened
• The notifyAll method awakens all of the threads on the object's wait list
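A minimal sketch of this pattern (EventFlag is an invented name): wait is called in a loop that rechecks the condition, and notifyAll awakens the waiters.

```java
// One-shot event: waiters loop on the condition; the signaler sets the
// flag and calls notifyAll to awaken every thread on the wait list.
class EventFlag {
    private boolean happened = false;

    synchronized void waitForEvent() throws InterruptedException {
        while (!happened) {   // wait must be called in a loop
            wait();
        }
    }

    synchronized void signalEvent() {
        happened = true;
        notifyAll();
    }
}
```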
Java’s Thread Evaluation
• Java's support for concurrency is relatively simple but effective
• Not as powerful as Ada's tasks
C# Threads
Synchronizing Threads
Ada Exception Handling:
- Continuation
- The block or unit that raises an exception but does not handle it is always terminated (also any block
or unit to which it is propagated that does not handle it)
- User-defined Exceptions:
exception_name_list : exception;
raise [exception_name]
(the exception name is not required if it is in a handler--in this case, it propagates the same exception)
- Exception conditions can be disabled with:
pragma SUPPRESS(exception_list)
- Predefined Exceptions:
CONSTRAINT_ERROR - index constraints, range constraints, etc.
NUMERIC_ERROR - numeric operation cannot return a correct value, etc.
PROGRAM_ERROR - call to a subprogram whose body has not been elaborated
STORAGE_ERROR - system runs out of heap
TASKING_ERROR - an error associated with tasks
- Evaluation
- The Ada design for exception handling embodies the state-of-the-art in language design in 1980
- A significant advance over PL/I
- Ada was the only widely used language with exception handling until it was added to C++
C++ Exception Handling :
- Added to C++ in 1990
- Design is based on that of CLU, Ada, and ML
Java Exception Handling:
• Like those of C++, except every catch requires a named parameter and all parameters must be
descendants of Throwable
•Syntax of try clause is exactly that of C++
•Exceptions are thrown with throw, as in C++, but often the throw includes the new operator to create the
object, as in: throw new MyException ();
Binding Exceptions to Handlers
•Binding an exception to a handler is simpler in Java than it is in C++
–An exception is bound to the first handler with a parameter of the same class as the thrown object or an
ancestor of it
•An exception can be handled and rethrown by including a throw in the handler (a handler could also
throw a different exception)
Continuation
• If no handler is found in the method, the exception is propagated to the method's caller
• If no handler is found (all the way to main), the program is terminated
•To ensure that all exceptions are caught, a handler can be included in any try construct that catches all
exceptions
–Simply use an Exception class parameter
–Of course, it must be the last in the try construct
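A short sketch of handler binding (ParseDemo and classify are illustrative names): the thrown object is bound to the first matching catch, and the catch-all Exception handler appears last.

```java
// The thrown NumberFormatException binds to the first matching handler;
// the Exception catch-all must come after the more specific one.
class ParseDemo {
    static String classify(String s) {
        try {
            int v = Integer.parseInt(s);
            return "number:" + v;
        } catch (NumberFormatException e) {  // specific handler first
            return "not-a-number";
        } catch (Exception e) {              // catch-all, last in the construct
            return "other";
        }
    }
}
```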
Checked and Unchecked Exceptions
• The Java throws clause is quite different from the throw clause of C++
•Exceptions of class Error and RuntimeException and all of their descendants are called unchecked
exceptions; all other exceptions are called checked exceptions
•Checked exceptions that may be thrown by a method must be either:
–Listed in the throws clause, or
–Handled in the method
Other Design Choices
•A method cannot declare more exceptions in its throws clause than the method it overrides
•A method that calls a method that lists a particular checked exception in its throws clause has three
alternatives for dealing with that exception:
–Catch and handle the exception
–Catch the exception and throw an exception that is listed in its own throws clause
–Declare it in its throws clause and do not handle it
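A sketch of the first alternative (Config, read, and safeRead are invented names): read lists the checked IOException in its throws clause, and its caller catches and handles it.

```java
import java.io.IOException;

// read lists a checked exception in its throws clause; safeRead takes
// the first alternative above: catch and handle the exception itself.
class Config {
    static String read(String path) throws IOException {
        if (path.isEmpty()) {
            throw new IOException("no path given");
        }
        return "ok";
    }

    static String safeRead(String path) {
        try {
            return read(path);
        } catch (IOException e) {   // handled here, so no throws clause needed
            return "default";
        }
    }
}
```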
The finally Clause
•Can appear at the end of a try construct
•Form:
finally {
...
}
•Purpose: To specify code that is to be executed, regardless of what happens in the try construct
Example
•A try construct with a finally clause can be used outside exception handling
try {
for (index = 0; index < 100; index++) {
…
if (…) {
return;
} //** end of if
} //** end of for
} //** end of try clause
finally {
…
} //** end of try construct
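A runnable variant of the example above (FinallyDemo and search are illustrative names): the finally clause executes even when the loop returns early.

```java
// finally runs whether search leaves via return or by falling through.
class FinallyDemo {
    static final StringBuilder log = new StringBuilder();

    static int search(int[] a, int key) {
        try {
            for (int i = 0; i < a.length; i++) {
                if (a[i] == key) {
                    return i;           // early return out of the try
                }
            }
            return -1;
        } finally {
            log.append("cleanup;");     // executed regardless
        }
    }
}
```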
LOGIC PROGRAMMING PARADIGM:
Based on logic and declarative programming of the 60's and early 70's; Prolog (PROgramming
in LOGic, 1972) is the best-known representative of the paradigm.
Prolog is based on Horn clauses and SLD resolution
Mostly developed in the Fifth Generation Computer Systems project
Specially designed for theorem proving and artificial intelligence, but allows general
purpose computation.
Some other languages in the paradigm: ALF, Fril, Gödel, Mercury, Oz, Ciao, λProlog, Datalog, and
CLP languages
Proof: by refutation; try to unsatisfy the clauses together with a goal clause G, finding a
substitution for the variables of G.
SLD resolution: Linear resolution for Definite programs with a Selected atom.
Unification: bidirectional (matching works in both directions).
Prolog terms:
Atoms:
1. Strings starting with a lowercase letter and consisting of
o [a-zA-Z0-9_]*
o a aDAM a1_2
2. Strings consisting of only punctuation
o *** .+. .<.>.
3. Any string enclosed in single quotes (like an arbitrary string)
o 'ADAM' 'Onur Sehitoglu' '2 * 4 < 6'
Numbers:
o 1234 12.32 12.23e-10
Variables: strings starting with an uppercase letter or an underscore
o X Adam _var
Structures: an atom head with one or more arguments (any term) enclosed in parentheses, separated
by commas; the structure head cannot be a variable or anything other than an atom.
o is(X, +(Y,1)) ≡ X is Y + 1
Syntactic sugar:
The Prolog interpreter automatically maps some easy-to-read syntax into its actual structure.
String: "ABC" ≡ [65,66,67] (ASCII integer values). Use display(Term). to see the actual structure of a term.
Unification:
S = T unifies:
- a variable with any term (the variable is bound to that term)
- two atoms or numbers only if they are identical
- two structures only if head of S = head of T, the arities match, and the arguments unify pairwise
Unification Example:
X = a → succeeds with X = a
a(X,3) = a(X,3,2) → fails (arity mismatch)
a(X,3) = b(X,3) → fails (different heads)
a(X,3) = a(3,X) → succeeds with X = 3
Declarations:
p1(arg1, arg2, ...) :- p2(args,...), p3(args,...). means: if p2 and p3 are true, then p1 is true. There can be
an arbitrary number of (a conjunction of) predicates on the right hand side.
Multiple clauses with the same head form a disjunction:
p(args) :- q(args).
p(args) :- s(args).
means p is true if q is true or s is true.
Lists Example:
% list membership
memb(X, [X|_Rest]).
memb(X, [_|Rest]) :- memb(X, Rest).
% concatenation
conc([], L, L).
conc([X|Rest], L, [X|Rest1]) :- conc(Rest, L, Rest1).
Procedural Interpretation:
For a goal clause, all matching head clauses (LHS of clauses) are kept as backtracking points (like a
junction in a maze search). Prolog starts from the first match. To prove the head predicate, the RHS
predicates need to be proved recursively. If all RHS predicates are proven, the head predicate is
proven. When a match fails, Prolog goes back to the last backtracking point and tries the next choice.
When no backtracking point is left, the goal clause fails. All predicate matches go through
unification, so goal clause variables can be instantiated.
The is operator forces its right-hand-side arithmetic expression to be evaluated; all variables of the
expression need to be instantiated. In X is Y+3*Y, Y needs to have a numerical value when the search
hits this expression. Note that X is X+1 is never successful in Prolog: variables are instantiated only
once.
Example: gcd by repeated subtraction
gcd(X, X, X).
gcd(X, Y, Z) :- X > Y, X1 is X - Y, gcd(X1, Y, Z).
gcd(X, Y, Z) :- Y > X, Y1 is Y - X, gcd(X, Y1, Z).
Deficiencies of Prolog:
• Intrinsic limitations
Applications of Logic Programming:
• Expert systems