Concepts in Programming Languages, 1st Edition
John C. Mitchell
Author(s): John C. Mitchell, Krzysztof Apt
ISBN(s): 9780521780988, 0521780985
Edition: 1st
File Details: PDF, 7.59 MB
Year: 2001
Language: english
This book provides a better understanding of the issues and trade-offs that arise in
programming language design and a better appreciation of the advantages and pitfalls of the
programming languages used.
Table of Contents
Chapter 1 - Introduction
Chapter 2 - Computability
Chapter 3 - Lisp—Functions, Recursion, and Lists
Chapter 4 - Fundamentals
Part 2 - Procedures, Types, Memory Management, and Control
Back Cover
This textbook for undergraduate and beginning graduate students explains and examines the central concepts used in
modern programming languages, such as functions, types, memory management, and control. This book is unique in its
comprehensive presentation and comparison of major object-oriented programming languages. Separate chapters examine
the history of objects, Simula and Smalltalk, and the prominent languages C++ and Java.
The author presents foundational topics, such as lambda calculus and denotational semantics, in an easy-to-read, informal
style, focusing on the main insights provided by these theories. Advanced topics include concurrency and concurrent
object-oriented programming. A chapter on logic programming illustrates the importance of specialized programming
methods for certain kinds of problems.
This book will give the reader a better understanding of the issues and trade-offs that arise in programming language design
and a better appreciation of the advantages and pitfalls of the programming languages they use.
John C. Mitchell is Professor of Computer Science at Stanford University, where he has been a popular teacher for more
than a decade. Many of his former students are successful in research and private industry. He received his Ph.D. from MIT
in 1984 and was a Member of Technical Staff at AT&T Bell Laboratories before joining the faculty at Stanford. Over the past
twenty years, Mitchell has been a featured speaker at international conferences; has led research projects on a variety of
topics, including programming language design and analysis, computer security, and applications of mathematical logic to
computer science; and has written more than 100 research articles. His previous textbook, Foundations for Programming
Languages (MIT Press, 1996), covers lambda calculus, type systems, logic for program verification, and mathematical
semantics of programming languages. Professor Mitchell was a member of the programming language subcommittee of the
ACM/IEEE Curriculum 2001 standardization effort and the 2002 Program Chair of the ACM Principles of Programming
Languages conference.
Stanford University
http://www.cambridge.org
This book is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing
agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
Typefaces: Times Ten 10/12.5 pt., ITC Franklin Gothic, and Officina Serif. System: LaTeX 2ε [TB]
A catalog record for this book is available from the British Library.
Preface
A good programming language is a conceptual universe for thinking about programming.
Programming languages provide the abstractions, organizing principles, and control structures that programmers use
to write good programs. This book is about the concepts that appear in programming languages, issues that arise in
their implementation, and the way that language design affects program development. The text is divided into four
parts:
Part 1 contains a short study of Lisp as a worked example of programming language analysis and covers compiler
structure, parsing, lambda calculus, and denotational semantics. A short Computability chapter provides information
about the limits of compile-time program analysis and optimization.
Part 2 uses procedural Algol family languages and ML to study types, memory management, and control structures.
In Part 3 we look at program organization using abstract data types, modules, and objects. Because object-oriented
programming is the most prominent paradigm in current practice, several different object-oriented languages are
compared. Separate chapters explore and compare Simula, Smalltalk, C++, and Java.
Part 4 contains chapters on language mechanisms for concurrency and on logic programming.
The book is intended for upper-level undergraduate students and beginning graduate students with some knowledge
of basic programming. Students are expected to have some knowledge of C or some other procedural language and
some acquaintance with C++ or some form of object-oriented language. Some experience with Lisp, Scheme, or ML
is helpful in Parts 1 and 2, although many students have successfully completed the course based on this book without
this background. It is also helpful if students have some experience with simple analysis of algorithms and data
structures. For example, in comparing implementations of certain constructs, it will be useful to distinguish between
algorithms of constant-, polynomial-, and exponential-time complexity.
After reading this book, students will have a better understanding of the range of programming languages that have
been used over the past 40 years, a better understanding of the issues and trade-offs that arise in programming
language design, and a better appreciation of the advantages and pitfalls of the programming languages they use.
Because different languages present different programming concepts, students will be able to improve their
programming by importing ideas from other languages into the programs they write.
Acknowledgments
This book developed as a set of notes for Stanford CS 242, a course in programming languages that I have taught
since 1993. Each year, energetic teaching assistants have helped debug example programs for lectures, formulate
homework problems, and prepare model solutions. The organization and content of the course have been improved
greatly by their suggestions. Special thanks go to Kathleen Fisher, who was a teaching assistant in 1993 and 1994
and taught the course in my absence in 1995. Kathleen helped me organize the material in the early years and, in
1995, transcribed my handwritten notes into online form. Thanks to Amit Patel for his initiative in organizing homework
assignments and solutions and to Vitaly Shmatikov for persevering with the glossary of programming language terms.
Anne Bracy, Dan Bentley, and Stephen Freund thoughtfully proofread many chapters.
Lauren Cowles, Alan Harvey, and David Tranah of Cambridge University Press were encouraging and helpful. I
particularly appreciate Lauren's careful reading of, and detailed comments on, twelve full chapters in draft form. Thanks
also are due to the reviewers they enlisted, who made a number of helpful suggestions on early versions of the book.
Zena Ariola taught from book drafts at the University of Oregon several years in a row and sent many helpful
suggestions; other test instructors also provided helpful feedback.
Finally, special thanks to Krzysztof Apt for contributing a chapter on logic programming.
John Mitchell
Chapter 1: Introduction
"The Medium Is the Message"
--Marshall McLuhan
There are many difficult trade-offs in programming language design. Some language features make it easy for us to
write programs quickly, but may make it harder for us to design testing tools or methods. Some language constructs
make it easier for a compiler to optimize programs, but may make programming cumbersome. Because different
computing environments and applications require different program characteristics, different programming language
designers have chosen different trade-offs. In fact, virtually all successful programming languages were originally
designed for one specific use. This is not to say that each language is good for only one purpose. However, focusing
on a single application helps language designers make consistent, purposeful decisions. A single application also
helps with one of the most difficult parts of language design: leaving good ideas out.
I hope you enjoy using this book. At the beginning of each chapter, I have included pictures of people involved
in the development or analysis of programming languages. Some of these people are famous, with major
awards and published biographies. Others are less widely recognized. When possible, I have tried to include
some personal information based on my encounters with these people. This is to emphasize that programming
languages are developed by real human beings. Like most human artifacts, a programming language inevitably
reflects some of the personality of its designers.
As a disclaimer, let me point out that I have not made an attempt to be comprehensive in my brief biographical
comments. I have tried to liven up the text with a bit of humor when possible, leaving serious biography to more
serious biographers. There simply is not space to mention all of the people who have played important roles in
the history of programming languages.
Historical and biographical texts on computer science and computer scientists have become increasingly
available in recent years. If you like reading about computer pioneers, you might enjoy paging through Out of
Their Minds: The Lives and Discoveries of 15 Great Computer Scientists by Dennis Shasha and Cathy Lazere
or other books on the history of computer science.
John Mitchell
Even if you do not use many of the programming languages in this book, you may still be able to put the conceptual
framework presented in these languages to good use. When I was a student in the mid-1970s, all "serious"
programmers (at my university, anyway) used Fortran. Fortran did not allow recursion, and recursion was generally
regarded as too inefficient to be practical for "real programming." However, the instructor of one course I took argued
that recursion was still an important idea and explained how recursive techniques could be used in Fortran by
managing data in an array. I am glad I took that course and not one that dismissed recursion as an impractical idea. In
the 1980s, many people considered object-oriented programming too inefficient and clumsy for real programming.
However, students who learned about object-oriented programming in the 1980s were certainly happy to know about
these "futuristic" languages in the 1990s, as object-oriented programming became more widely accepted and used.
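The Fortran-era technique mentioned above, using an array as an explicit stack to get the effect of recursion, can be sketched in modern terms. The following is a Python sketch of the idea; the factorial example and all names are ours, not from the course described:

```python
# Hypothetical illustration: simulating recursion with an explicit stack,
# as one had to in early Fortran, which did not allow recursive calls.

def factorial_recursive(n):
    # The natural recursive definition, for comparison.
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_with_stack(n):
    stack = []              # plays the role of the Fortran array
    while n > 0:            # "descent" phase: push each pending argument
        stack.append(n)
        n -= 1
    result = 1
    while stack:            # "return" phase: pop and combine results
        result *= stack.pop()
    return result
```

Both functions compute the same values; the second merely manages the call stack by hand, which is exactly what the array trick made possible in a language without recursive procedures.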
Although this is not a book about the history of programming languages, there is some attention to history throughout
the book. One reason for discussing historical languages is that this gives us a realistic way to understand
programming language trade-offs. For example, programs were different when machines were slow and memory was
scarce. The concerns of programming language designers were therefore different in the 1960s from the current
concerns. By imagining the state of the art in some bygone era, we can give more serious thought to why language
designers made certain decisions. This way of thinking about languages and computing may help us in the future,
when computing conditions may change to resemble some past situation. For example, the recent rise in popularity of
handheld computing devices and embedded processors has led to renewed interest in programming for devices with
limited memory and limited computing power.
When we discuss specific languages in this book, we generally refer to the original or historically important form of a
language. For example, "Fortran" means the Fortran of the 1960s and early 1970s. These early languages were called
Fortran I, Fortran II, Fortran III, and so on. In recent years, Fortran has evolved to include more modern features, and
the distinction between Fortran and other languages has blurred to some extent. Similarly, Lisp generally refers to the
Lisps of the 1960s, Smalltalk to the language of the late 1970s and 1980s, and so on.
1.2 GOALS
In this book we are concerned with the basic concepts that appear in modern programming languages, their
interaction, and the relationship between programming languages and methods for program development. A recurring
theme is the trade-off between language expressiveness and simplicity of implementation. For each programming
language feature we consider, we examine the ways that it can be used in programming and the kinds of
implementation techniques that may be used to compile and execute it efficiently.
Our goals are these:
To understand the design space of programming languages. This includes concepts and constructs
from past programming languages as well as those that may be used more widely in the future. We
also try to understand some of the major conflicts and trade-offs between language features, including
implementation costs.
To develop a better understanding of the languages we currently use by comparing them with other
languages.
To understand the programming techniques associated with various language features. The study of
programming languages is, in part, the study of conceptual frameworks for problem solving, software
construction, and development.
Many of the ideas in this book are common knowledge among professional programmers. The material and ways of
thinking presented in this book should be useful to you in future programming and in talking to experienced
programmers if you work for a software company or have an interview for a job. By the end of the course, you will be
able to evaluate language features, their costs, and how they fit together.
Here are some specific themes that are addressed repeatedly in the text:
Computability: Some problems cannot be solved by computer. The undecidability of the halting
problem implies that programming language compilers and interpreters cannot do everything that we
might wish they could do.
Static analysis: There is a difference between compile time and run time. At compile time, the
program is known but the input is not. At run time, the program and the input are both available to the
run-time system. Although a program designer or implementer would like to find errors at compile
time, many will not surface until run time. Methods that detect program errors at compile time are
usually conservative, which means that when they say a program does not have a certain kind of
error this statement is correct. However, compile-time error-detection methods will usually say that
some programs contain errors even if errors may not actually occur when the program is run.
Expressiveness versus efficiency: There are many situations in which it would be convenient to have
a programming language implementation do something automatically. An example discussed in
Chapter 3 is memory management: The Lisp run-time system uses garbage collection to detect
memory locations no longer needed by the program. When something is done automatically, there is
a cost. Although an automatic method may save the programmer from thinking about something, the
implementation of the language may run more slowly. In some cases, the automatic method may
make it easier to write programs and make programming less prone to error. In other cases, the
resulting slowdown in program execution may make the automatic method infeasible.
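The conservatism of compile-time error detection can be made concrete with a deliberately crude checker (entirely hypothetical, our own illustration): it reports that a program may divide by zero whenever the program text contains a division, so it never misses a real division error, but it also flags programs that can never fail at run time.

```python
def may_divide_by_zero(program_text: str) -> bool:
    """Conservative toy analysis: flag any program containing division.

    If this returns False, the program certainly cannot divide by zero.
    If it returns True, the program *might* -- including many programs
    that are in fact perfectly safe.
    """
    return "/" in program_text

# A genuinely risky program is flagged ...
assert may_divide_by_zero("print(3 / x)")
# ... but so is a program whose division can never fail:
assert may_divide_by_zero("print(4 / 2)")
# Only division-free programs are certified error-free:
assert not may_divide_by_zero("print(4 * 2)")
```

Real static analyses are far more precise, but they share this shape: when they certify a program error-free, the certification is sound, at the price of rejecting some correct programs.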
The history of modern programming languages begins around 1958-1960 with the development of Algol, Cobol,
Fortran, and Lisp. The main body of this book covers Lisp, with a shorter discussion of Algol and subsequent related
languages. A brief account of some earlier languages is given here for those who may be curious about programming
language prehistory.
In the 1950s, a number of languages were developed to simplify the process of writing sequences of computer
instructions. In this decade, computers were very primitive by modern standards. Most programming was done with
the native machine language of the underlying hardware. This was acceptable because programs were small and
efficiency was extremely important. The two most important programming language developments of the 1950s were
Fortran and Cobol.
Fortran was developed at IBM around 1954-1956 by a team led by John Backus. The main innovation of Fortran (a
contraction of formula translator) was that it became possible to use ordinary mathematical notation in expressions.
For example, the Fortran expression for adding the value of i to twice the value of j is i + 2*j. Before the development
of Fortran, it might have been necessary to place i in a register, place j in a register, multiply j times 2 and then add the
result to i. Fortran allowed programmers to think more naturally about numerical calculation by using symbolic names
for variables and leaving some details of evaluation order to the compiler. Fortran also had subroutines (a form of
procedure or function), arrays, formatted input and output, and declarations that gave programmers explicit control
over the placement of variables and arrays in memory. However, that was about it. To give you some idea of the
limitations of Fortran, many early Fortran compilers stored numbers 1, 2, 3 … in memory locations, and programmers
could change the values of numbers if they were not careful! In addition, it was not possible for a Fortran subroutine to
call itself, as this required memory management techniques that had not been invented yet (see Chapter 7).
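The register-level steps described above can be written out explicitly. The following is a hypothetical Python sketch of the hand-translation a pre-Fortran programmer performed; the register names are ours:

```python
# Hand-compiling the Fortran expression i + 2*j into register-style steps,
# the way it had to be done before compilers accepted ordinary notation.
def registers_for(i, j):
    r1 = i          # place i in a register
    r2 = j          # place j in a register
    r2 = r2 * 2     # multiply j by 2
    r1 = r1 + r2    # add the result to i
    return r1       # r1 now holds i + 2*j

assert registers_for(3, 5) == 3 + 2 * 5
```

Fortran's contribution was precisely that the programmer could write `i + 2*j` and leave this sequencing of loads, multiplies, and adds to the compiler.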
Cobol is a programming language designed for business applications. Like Fortran programs, many Cobol programs
are still in use today, although current versions of Fortran and Cobol differ substantially from forms of these languages
of the 1950s. The primary designer of Cobol was Grace Murray Hopper, an important computer pioneer. The syntax of
Cobol was intended to resemble that of common English. It has been suggested in jest that if object-oriented Cobol
were a standard today, we would use "add 1 to Cobol giving Cobol" instead of "C++".
The earliest languages covered in any detail in this book are Lisp and Algol, which both came out around 1960. These
languages have stack memory management and recursive functions or procedures. Lisp provides higher-order
functions (still not available in many current languages) and garbage collection, whereas the Algol family of languages
provides better type systems and data structuring. The main innovations of the 1970s were methods for organizing
data, such as records (or structs), abstract data types, and early forms of objects. Objects became mainstream in the
1980s, and the 1990s brought increasing interest in network-centric computing, interoperability, and security and
correctness issues associated with active content on the Internet. The 21st century promises greater diversity of
computing devices, cheaper and more powerful hardware, and increasing interest in correctness, security, and
interoperability.
[Table: a matrix of languages (Lisp, C, Algol 60, Algol 68, Pascal, Modula-2, Modula-3, ML, Simula, Smalltalk, C++, Objective C, and Java) against the language concepts each exhibits; the column headings naming the concepts are not recoverable here.]
Although this matrix lists only a fraction of the languages and concepts that might be covered in a basic text or course
on the programming languages, one general characteristic should be clear. There are some basic language concepts,
such as expressions, functions, local variables, and stack storage allocation that are present in many languages. For
these concepts, it makes more sense to discuss the concept in general than to go through a long list of similar
languages. On the other hand, for concepts such as objects and threads, there are relatively few languages that
exhibit these concepts in interesting ways. Therefore, we can study most of the interesting aspects of objects by
comparing a few languages. Another factor that is not clear from the matrix is that, for some concepts, there is
considerable variation from language to language. For example, it is more interesting to compare the way objects have
been integrated into languages than it is to compare integer expressions. This is another reason why competing
object-oriented languages are compared, but basic concepts related to expressions, statements, functions, and so on,
are covered only once, in a concept-oriented way.
Most courses and texts on programming languages use some combination of language-based and concept-based
presentation. In this book a concept-oriented organization is followed for most concepts, with a language-based
organization used to compare object-oriented features.
In Part 1 a short study of Lisp is presented, followed by a discussion of compiler structure, parsing, lambda calculus,
and denotational semantics. A short chapter provides a brief discussion of computability and the limits of compile-time
program analysis and optimization. For C programmers, the discussion of Lisp should provide a good chance to think
differently about programming and programming languages.
In Part 2, we progress through the main concepts associated with the conventional languages that are descended in
some way from the Algol family. These concepts include type systems and type checking, functions and stack storage
allocation, and control mechanisms such as exceptions and continuations. After some of the history of the Algol family
of languages is summarized, the ML programming language is used as the main example, with some discussion and
comparisons using C syntax.
Part 3 is an investigation of program-structuring mechanisms. The important language advances of the 1970s were
abstract data types and program modules. In the late 1980s, object-oriented concepts attained widespread
acceptance. Because object-oriented programming is currently the most prominent programming paradigm, in most of
Part 3 we focus on object-oriented concepts and languages, comparing Smalltalk, C++, and Java.
Part 4 contains chapters on language mechanisms for concurrent and distributed programs and on logic programming.
Because of space limitations, a number of interesting topics are not covered. Although scripting languages and other
"special-purpose" languages are not covered explicitly in detail, an attempt has been made to integrate some relevant
language concepts into the exercises.
Chapter 2: Computability
Some mathematical functions are computable and some are not. In all general-purpose programming languages, it is
possible to write a program for each function that is computable in principle. However, the limits of computability also
limit the kinds of things that programming language implementations can do. This chapter contains a brief overview of
computability so that we can discuss limitations that involve computability in other chapters of the book.
The fact that not all functions are computable has important ramifications for programming language tools and
implementations. Some kinds of programming constructs, however useful they might be, cannot be added to real
programming languages because they cannot be implemented on real computers.
In mathematics, an expression may have a defined value or it may not. For example, the expression 3 + 2 has a
defined value, but the expression 3/0 does not. The reason that 3/0 does not have a value is that division by zero is
not defined: division is defined to be the inverse of multiplication, but multiplication by zero cannot be inverted. There is
nothing to try to do when we see the expression 3/0; a mathematician would just say that this operation is undefined,
and that would be the end of the discussion.
In computation, there are two different reasons why an expression might not have a value:
Alan Turing was a British mathematician. He is known for his early work on computability and his work for British
Intelligence on code breaking during the Second World War. Among computer scientists, he is best known for
the invention of the Turing machine. This is not a piece of hardware, but an idealized computing device. A Turing
machine consists of an infinite tape, a tape read-write head, and a finite-state controller. In each computation
step, the machine reads a tape symbol and the finite-state controller decides whether to write a different symbol
on the current tape square and then whether to move the read-write head one square left or right. The
importance of this idealized computer is that it is both very simple and very powerful.
Turing was a broad-minded individual with interests ranging from relativity theory and mathematical logic to
number theory and the engineering design of mechanical computers. There are numerous published
biographies of Alan Turing, some emphasizing his wartime work and others calling attention to his sexuality and
its impact on his professional career.
The ACM Turing Award is the highest scientific honor in computer science, equivalent to a Nobel Prize in other
fields.
Error termination: Evaluation of the expression cannot proceed because of a conflict between
operator and operand.
Nontermination: Evaluation of the expression proceeds, but the computation runs forever without producing a value.
An example of the first kind is division by zero. There is nothing to compute in this case, except possibly to stop the
computation in a way that indicates that it could not proceed any further. This may halt execution of the entire
program, abort one thread of a concurrent program, or raise an exception if the programming language provides
exceptions.
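In a language with exceptions, error termination looks like this. This is a Python sketch; the `safe_div` wrapper is our own illustration, not from the text:

```python
# Error termination illustrated in Python: evaluating 3/0 cannot proceed,
# so the run-time system raises an exception rather than producing a value.
def safe_div(x, y):
    try:
        return x / y
    except ZeroDivisionError:
        return None   # signal "no value" instead of halting the whole program

assert safe_div(3, 2) == 1.5
assert safe_div(3, 0) is None
```

The exception gives the program the choice described above: it can let execution halt, or it can catch the error and continue.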
The second case is different: There is a specific computation to perform, but the computation may not terminate and
therefore may not yield a value. For example, consider the recursive function defined by
f(x) = if x = 0 then 0 else x + f(x − 2)
This is a perfectly meaningful definition of a partial function, a function that has a value on some arguments but not on
all arguments. The expression f(4) calling the function f above has value 4 + 2 + 0 = 6, but the expression f(5) does not
have a value because the computation specified by this expression does not terminate.
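The function just described, zero at x = 0 and otherwise x + f(x − 2), as the computation f(4) = 4 + 2 + 0 indicates, runs directly in Python; f(5) is deliberately not evaluated below, since that call would never return:

```python
def f(x):
    # Diverges (infinite recursion) when x is odd or negative:
    # the base case x == 0 is never reached.
    if x == 0:
        return 0
    return x + f(x - 2)

assert f(4) == 4 + 2 + 0 == 6
# f(5) is not called: the computation 5 + f(3) -> 3 + f(1) -> 1 + f(-1) -> ...
# never terminates, so f denotes only a partial function.
```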
The distinction can be made clearer by a look at the mathematical definitions. A reasonable definition of the word
function is this: A function f : A → B from set A to set B is a rule associating a unique value y = f(x) in B with every x in
A. This is almost a mathematical definition, except that the word rule does not have a precise mathematical meaning.
The notation f : A → B means that, given arguments in the set A, the function f produces values from set B. The set A
is called the domain of f, and the set B is called the range or the codomain of f.
The usual mathematical definition of function replaces the idea of rule with a set of argument-result pairs called the
graph of a function. This is the mathematical definition:
A function f : A → B is a set f ⊆ A × B of ordered pairs such that (1) if ⟨x, y⟩ ∈ f and ⟨x, z⟩ ∈ f, then y = z, and (2) for every x ∈ A, there is some y ∈ B with ⟨x, y⟩ ∈ f.
When we associate a set of ordered pairs with a function, the ordered pair ⟨x, y⟩ is used to indicate that y is the value
of the function on argument x. In words, the preceding two conditions can be stated as (1) a function has at most one
value for every argument in its domain, and (2) a function has at least one value for every argument in its domain.
A partial function is similar, except that a partial function may not have a value for every argument in its domain. This is
the mathematical definition:
A partial function f : A → B is a set f ⊆ A × B of ordered pairs such that if ⟨x, y⟩ ∈ f and ⟨x, z⟩ ∈ f, then y = z.
In words, a partial function is single valued, but need not be defined on all elements of its domain.
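Both conditions can be checked mechanically when a function is represented by its graph, a set of ⟨argument, result⟩ pairs. This is a Python sketch; the helper names and the example graph are ours:

```python
def is_single_valued(graph):
    """Condition (1): at most one result per argument."""
    results = {}
    for x, y in graph:
        if x in results and results[x] != y:
            return False
        results[x] = y
    return True

def is_total(graph, domain):
    """Condition (2): at least one result for every domain element."""
    defined = {x for x, _ in graph}
    return all(x in defined for x in domain)

square = {(0, 0), (1, 1), (2, 4)}
assert is_single_valued(square)
assert is_total(square, {0, 1, 2})          # a (total) function on {0, 1, 2}
assert not is_total(square, {0, 1, 2, 3})   # only a partial function on {0, 1, 2, 3}
```

A partial function must satisfy `is_single_valued`; a total function must satisfy both checks on its domain.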
In most programming languages, it is possible to define functions recursively. For example, here is a function f defined
in terms of itself:
f(x) = if x = 0 then 0 else x + f(x − 2)
If this were written as a program in some programming language, the declaration would associate the function name f
with an algorithm that terminates on every even x ≥ 0, but diverges (does not halt and return a value) if x is odd or
negative. The algorithm for f defines the following mathematical function f, expressed here as a set of ordered pairs:
f = {⟨0, 0⟩, ⟨2, 2⟩, ⟨4, 6⟩, ⟨6, 12⟩, ⟨8, 20⟩, …}
This is a partial function on the integers. For every integer x, there is at most one y with f (x) = y. However, if x is an odd
number, then there is no y with f (x) = y. Where the algorithm does not terminate, the value of the function is undefined.
Because a function call may not terminate, this program defines a partial function.
2.1.3 Computability
Computability theory gives us a precise characterization of the functions that are computable in principle. The class of
functions on the natural numbers that are computable in principle is often called the class of partial recursive functions,
as recursion is an essential part of computation and computable functions are, in general, partial rather than total. The
reason why we say "computable in principle" instead of "computable in practice" is that some computable functions
might take an extremely long time to compute. If a function call will not return for an amount of time equal to the length
of the entire history of the universe, then in practice we will not be able to wait for the computation to finish.
Nonetheless, computability in principle is an important benchmark for programming languages.
Computable Functions
Intuitively, a function is computable if there is some program that computes it. More specifically, a function f : A → B is
computable if there is an algorithm that, given any x ∈ A as input, halts with y = f(x) as output.
One problem with this intuitive definition of computable is that a program has to be written out in some programming
language, and we need to have some implementation to execute the program. It might very well be that, in one
programming language, there is a program to compute some mathematical function and in another language there is
not.
In the 1930s, Alonzo Church of Princeton University proposed an important principle, called Church's thesis. Church's
thesis, which is a widely held belief about the relation between mathematical definitions and the real world of
computing, states that the same class of functions on the integers can be computed by any general computing
device. This is the class of partial recursive functions, sometimes called the class of computable functions. There is a
mathematical definition of this class of functions that does not refer to programming languages, a second definition
that uses a kind of idealized computing device called a Turing machine, and a third (equivalent) definition that uses
lambda calculus (see Section 4.2). As mentioned in the biographical sketch on Alan Turing, a Turing machine consists
of an infinite tape, a tape read-write head, and a finite-state controller. The tape is divided into contiguous cells, each
containing a single symbol. In each computation step, the machine reads a tape symbol and the finite-state controller
decides whether to write a different symbol on the current tape square and then whether to move the read-write head
one square left or right. Part of the evidence that Church cited in formulating this thesis was the proof that Turing
machines and lambda calculus are equivalent. The fact that all standard programming languages express precisely
the class of partial recursive functions is often summarized by the statement that all programming languages are
Turing complete. Although it is comforting to know that all programming languages are universal in a mathematical
sense, the fact that all programming languages are Turing complete also means that computability theory does not
help us distinguish among the expressive powers of different programming languages.
Noncomputable Functions
It is useful to know that some specific functions are not computable. An important example is commonly referred to as
the halting problem. To simplify the discussion and focus on the central ideas, the halting problem is stated for
programs that require one string input. If P is such a program and x is a string input, then we write P(x) for the output of
program P on input x.
Halting Problem: Given a program P that requires exactly one string input and a string x, determine
whether P halts on input x.
We can associate the halting problem with a function f_halt by letting f_halt(P, x) = "halts" if P halts on input x and
f_halt(P, x) = "does not halt" otherwise. This function f_halt can be considered a function on strings if we write each
program out as a sequence of symbols.
The undecidability of the halting problem is the fact that the function f_halt is not computable. The undecidability of the
halting problem is an important fact to keep in mind in designing programming language implementations and
optimizations. It implies that many useful operations on programs cannot be implemented, even in principle.
Proof of the Undecidability of the Halting Problem. Although you will not need to know this proof to understand any other
topic in the book, some of you may be interested in proof that the halting function is not computable. The proof is
surprisingly short, but can be difficult to understand. If you are going to be a serious computer scientist, then you will
want to look at this proof several times, over the course of several days, until you understand the idea behind it.
Step 1: Assume that there is a program Q that solves the halting problem. Specifically, assume that
program Q reads two inputs, both strings, and has the following output:
    Q(P, x) = "halts" if P halts on input x
    Q(P, x) = "never halts" if P does not halt on input x
An important part of this specification for Q is that Q(P, x) always halts for every P and x.
Step 2: Using program Q, we can build a program D that reads one string input and sometimes does
not halt. Specifically, let D be a program that works as follows:
    D(P): if Q(P, P) = "halts" then run forever else halt
Note that D has only one input, which it gives twice to Q. The program D can be written in any
reasonable language, as any reasonable language should have some way of programming
if-then-else and some way of writing a loop or recursive function call that runs forever. If you think
about it a little bit, you can see that D has the following behavior:
    D(P) halts if P(P) runs forever
    D(P) runs forever if P(P) halts
In this description, the word halt means that D(P) comes to a halt, and runs forever means that D(P)
continues to execute steps indefinitely. The program D(P) halts or does not halt, but does not produce
a string output in any case.
Step 3: Derive a contradiction by considering the behavior D(D) of program D on input D. (If you are
starting to get confused about what it means to run a program with the program itself as input,
assume that we have written the program D and stored it in a file. Then we can compile D and run D
with the file containing a copy of D as input.) Without thinking about how D works or what D is
supposed to do, it is clear that either D(D) halts or D(D) does not halt. If D(D) halts, though, then by
the property of D given in step 2, this must be because D(D) runs forever. This does not make any
sense, so it must be that D(D) runs forever. However, by similar reasoning, if D(D) runs forever, then
this must be because D(D) halts. This is also contradictory. Therefore, we have reached a
contradiction.
Step 4: Because the assumption in step 1 that there is a program Q solving the halting problem leads
to a contradiction in step 3, it must be that the assumption is false. Therefore, there is no program that
solves the halting problem.
Applications
Programming language compilers can often detect errors in programs. However, the undecidability of the halting
problem implies that some properties of programs cannot be determined in advance. The simplest example is halting
itself. Suppose someone writes a program like this:
i = 0;
while (i != f(i)) i = g(i);
printf( ... i ...);
It seems very likely that the programmer wants the while loop to halt. Otherwise, why would the programmer have
written a statement to print the value of i after the loop halts? Therefore, it would be helpful for the compiler to print a
warning message if the loop will not halt. However useful this might be, though, it is not possible for a compiler to
determine whether the loop will halt, as this would involve solving the halting problem.