Types and Programming Languages. Benjamin C. Pierce. The MIT Press. Includes bibliographical references and index.



Types and Programming Languages. Benjamin C. Pierce. The study of type systems, and of programming languages from a type-theoretic perspective, has important applications in software engineering, language design, high-performance compilers, and security. This text provides a comprehensive introduction both to type systems in computer science and to the basic theory of programming languages. The approach is pragmatic and operational: each new concept is motivated by programming examples, and the more theoretical sections are driven by the needs of implementations. Each chapter is accompanied by numerous exercises and solutions, as well as a running implementation, available via the web. Dependencies between chapters are explicitly identified, allowing readers to choose a variety of paths through the material. The core topics include the untyped lambda-calculus, simple type systems, type reconstruction, universal and existential polymorphism, subtyping, bounded quantification, recursive types, kinds, and type operators. Extended case studies develop a variety of approaches to modeling the features of object-oriented languages.

Most readers should just skim this chapter and refer back to it as necessary. A set is said to be countable if its elements can be placed in one-to-one correspondence with the natural numbers. The size of a set S is written |S|; the powerset of S is the set of all its subsets. A one-place relation on a set S is called a predicate on S; to emphasize this intuition, we write P(s) when the predicate P holds of an element s of S. A binary relation R on sets S and T is a subset of their product; we often write s R t instead of (s, t) ∈ R. When S and T are the same set U, we say that R is a binary relation on U. Suppose R is a binary relation on a set S and P is a predicate on S.

A binary relation R on a set S is reflexive if R relates every element of S to itself, that is, s R s for every s ∈ S. R is symmetric if s R t implies t R s, and R is transitive if s R t and t R u together imply s R u. The domain of a relation R on sets S and T is the set of elements s ∈ S such that (s, t) ∈ R for some t; the codomain or range of R is the set of t ∈ T such that (s, t) ∈ R for some s. We say that a predicate P is preserved by R if, whenever we have s R s′ and P(s), we also have P(s′). It is important to distinguish failure, which is a legitimate, observable outcome, from other kinds of meaningless behavior; this distinction will matter in later chapters.

The reflexive closure of R is the smallest reflexive relation R′ that contains R. Here is a more constructive definition: define the relation R′ to contain all the pairs in R plus all pairs of the form (s, s). Exercise: Show that this R′ is the reflexive closure of R. The transitive closure of R is defined analogously, and the reflexive and transitive closure of R is the smallest reflexive and transitive relation that contains R.

A reflexive and transitive relation R on a set S is called a preorder on S.
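For finite relations represented as lists of pairs, these closures can be computed directly. The following OCaml sketch is mine, not the book's; the function names are invented, and the naive fixed-point iteration is chosen for clarity rather than efficiency.

```ocaml
(* One round of composition: add (s, u) whenever (s, t) and (t, u) are
   both present and (s, u) is not yet in the accumulator. *)
let compose_step r =
  List.fold_left
    (fun acc (s, t) ->
       List.fold_left
         (fun acc (t', u) ->
            if t = t' && not (List.mem (s, u) acc) then (s, u) :: acc else acc)
         acc r)
    r r

(* Iterate until no new pairs appear: the transitive closure. *)
let rec transitive_closure r =
  let r' = compose_step r in
  if List.length r' = List.length r then r else transitive_closure r'

(* Reflexive closure over a given carrier set. *)
let reflexive_closure carrier r =
  List.fold_left
    (fun acc s -> if List.mem (s, s) acc then acc else (s, s) :: acc)
    r carrier

let reflexive_transitive_closure carrier r =
  reflexive_closure carrier (transitive_closure r)
```

Since the carrier is finite and each iteration adds at least one new pair from a finite universe, the iteration terminates; this is itself a tiny instance of the well-founded-measure arguments used later in the chapter.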

A preorder on a set S that is also antisymmetric is called a partial order on S. Suppose R is a binary relation on a set S; a chain in R is a totally ordered subset of S, and chains can be either finite or infinite. A sequence is written by listing its elements; the sequence of numbers from 1 to n is abbreviated 1..n. We write |a| for the length of the sequence a. One sequence is said to be a permutation of another if it contains exactly the same elements, possibly in a different order.

Many of the proofs in this book are based on one of the following induction principles. Suppose that P is a predicate on the natural numbers: if P(0) holds and, for all i, P(i) implies P(i+1), then P(n) holds for all n. Induction on pairs is also fairly common; here we suppose that P is a predicate on pairs of natural numbers, ordered lexicographically. The mathematical foundations of inductive reasoning will be considered in more detail in a later chapter. A proof is a repeatable experiment in persuasion.
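The two principles alluded to above can be stated in full as follows. This is a reconstruction of the standard statements; the wording and formatting are mine, not the book's.

```latex
\paragraph{Ordinary induction on natural numbers.}
Suppose $P$ is a predicate on the natural numbers. Then:
if $P(0)$ and, for all $i$, $P(i)$ implies $P(i+1)$,
then $P(n)$ holds for all $n$.

\paragraph{Lexicographic induction on pairs.}
Suppose $P$ is a predicate on pairs of natural numbers. Then:
if, for each pair $(m, n)$, the assumption that $P(m', n')$ holds
for all $(m', n')$ lexicographically smaller than $(m, n)$
implies $P(m, n)$, then $P(m, n)$ holds for all $m$ and $n$.
```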

There are many sources for this material. Halmos is a good introduction to basic set theory; the beginning of Davey and Priestley has an excellent review of ordered sets.

This chapter and the next develop the required tools using a small language of numbers and booleans. This language is so trivial as to be almost beneath consideration, but its forms can be summarized compactly by the following grammar, and it makes a convenient vehicle for the basic techniques. Chapters 5 through 7 elaborate the same story for a much more powerful language; looking further ahead, Chapter 9 extends these concepts to the lambda-calculus.

The system studied in this chapter is the untyped calculus of booleans and numbers (Figure 3-2), and the associated OCaml implementation is arith; instructions for downloading and building this checker can be found on the book's web site. Chapter 8 commences the study of type systems proper. The first line of the grammar, t ::= ..., declares a syntactic category of terms; each line that follows gives one alternative syntactic form for terms. The prefix meta- signals that t is not itself a term but a variable standing for terms; a complete summary of metavariable conventions can be found in Appendix B.

The italicized phrases on the right of the grammar are just comments. The symbol t in the right-hand sides of the rules of this grammar is called a metavariable; the same prefix meta- also gives us the term metatheory, the study of properties of the theory itself.

At every point where the symbol t appears in the grammar, we may substitute any term: t is a variable in the sense that it is a place-holder for some particular term. (Starting in Chapter 8 we will need to be more careful about this terminology; for the moment the informal reading suffices.) The final results of evaluation are terms of a particularly simple form; such terms are called values.

A program in the present language is just a term built from the forms given by the grammar above. For brevity in examples, we write numbers in ordinary decimal notation rather than as nested applications of succ to 0; this choice is for consistency with later calculi. Notice that the syntax of terms permits the formation of some dubious-looking terms, like succ true and if 0 then 0 else 0. We shall have more to say about such terms later; indeed, they are a central motivation for type systems. We will return to the discussion of parentheses and abstract syntax in Chapter 5.

The use of parentheses in examples is just a way of clarifying the relation between the linearized form of terms that we write on the page and the real underlying tree form. How, precisely, is the set of terms defined? We have already seen one way: the grammar given earlier. That grammar is actually just a compact notation for the following inductive definition.

The set of terms is the smallest set T such that (1) {true, false, 0} ⊆ T; (2) if t1 ∈ T, then succ t1 ∈ T, pred t1 ∈ T, and iszero t1 ∈ T; and (3) if t1 ∈ T, t2 ∈ T, and t3 ∈ T, then if t1 then t2 else t3 ∈ T. The first clause tells us three simple expressions that are in T; the second and third clauses give us rules by which we can judge that certain compound expressions are in T. Each rule is read: if we have established the statements in the premises, then we may derive the conclusion.

Two points of terminology deserve mention. First, rules with no premises are called axioms; axioms are usually written with no bar. Second, the definition above is our first example of an inductive definition, a form used throughout this book.

For each natural number i, define a set Si as follows: S0 is empty; S1 contains just the constants; S2 contains the constants plus the phrases that can be built with constants and just one succ, pred, iszero, or if; S3 contains these and all phrases that can be built using one more such operator; and so on. Finally, S collects together all the phrases that can be built in this way, i.e., it is the union of all the Si. Exercise: How many elements does S3 have? Exercise: Show that the sets Si are cumulative, that is, that Si ⊆ Si+1 for each i.

To finish off the discussion, recall that T was defined as the smallest set satisfying certain conditions, while S was defined constructively. To see that the two coincide, it suffices to show (a) that S satisfies these conditions and (b) that any set satisfying the conditions contains S. For part (a), we argue directly from the construction of the Si. For part (b), we show by induction on i that each Si is contained in any set satisfying the conditions; by the definition of S as the union of all the Si, S itself is then contained in any such set. The same remark will apply to most inductive definitions in the book.

In essence, the inductive definition of terms justifies both defining functions over terms by recursion and proving properties of terms by induction. We can put this observation to work in two ways, illustrated by two functions: the set of constants appearing in a term t, written Consts(t), and the size of a term t, written size(t), both defined by recursion on the structure of t.

As an example, consider the following property: the number of distinct constants in a term t is no greater than the size of t, i.e., |Consts(t)| ≤ size(t). The property in itself is entirely obvious; what is of interest is the structure of its proof, which goes by induction on the depth of t. Assuming the desired property for all terms smaller than t, there are three cases to consider: t is a constant; t has the form succ t1, pred t1, or iszero t1; or t has the form if t1 then t2 else t3. For good measure, in each case we calculate as follows: unfold the definitions of Consts and size one step, apply the induction hypothesis to the subterms, and recombine.
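These recursive definitions can be transcribed directly into OCaml. The sketch below is mine, not the book's code: it uses a simplified term type without the info annotations described later, and `consts` and `size` are names chosen to follow the text.

```ocaml
type term =
  | TmTrue
  | TmFalse
  | TmZero
  | TmSucc of term
  | TmPred of term
  | TmIsZero of term
  | TmIf of term * term * term

(* The set of constants appearing in t, as a duplicate-free list. *)
let rec consts t =
  let union l1 l2 = l1 @ List.filter (fun x -> not (List.mem x l1)) l2 in
  match t with
  | TmTrue -> [TmTrue]
  | TmFalse -> [TmFalse]
  | TmZero -> [TmZero]
  | TmSucc t1 | TmPred t1 | TmIsZero t1 -> consts t1
  | TmIf (t1, t2, t3) -> union (consts t1) (union (consts t2) (consts t3))

(* The number of nodes in t's abstract syntax tree. *)
let rec size t =
  match t with
  | TmTrue | TmFalse | TmZero -> 1
  | TmSucc t1 | TmPred t1 | TmIsZero t1 -> 1 + size t1
  | TmIf (t1, t2, t3) -> 1 + size t1 + size t2 + size t3
```

On any term t, `List.length (consts t) <= size t`, which is exactly the property proved above.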

Like the different styles of natural-number induction, the induction principles for terms come in several flavors. Suppose P is a predicate on terms. Induction on depth: if, for each term s, P(r) for all terms r of smaller depth implies P(s), then P(s) holds for all s. Induction on size: the same, with size in place of depth. Structural induction: if, for each term s, P(r) for all immediate subterms r of s implies P(s), then P(s) holds for all s. Ordinary structural induction corresponds to the ordinary natural-number induction principle.

Most proofs by induction on terms have a similar structure: we separately consider each of the possible forms that t could have (true, false, 0, and so on), applying the induction hypothesis to subterms in each case, and similarly for the other syntactic forms. As a matter of style, since the only parts of this structure that vary from one inductive proof to another are the details of the arguments for the individual cases, the common scaffolding is often elided in simple proofs. Strictly speaking, in the operational view of semantics, the meaning of a term t can be taken to be the final state that the machine reaches when started with t as its initial state.

There are three basic approaches to formalizing semantics. Operational semantics specifies the behavior of a programming language by defining a simple abstract machine for it; for simple languages, a state of the machine is just a term, and the machine's transitions are defined directly on terms. The beauty of axiomatic methods is that they focus attention on the process of reasoning about programs. In the simplest inductive proofs about such definitions (proofs by induction on t), it is actually easier for the reader simply to regenerate the proof on the fly, by examining the grammar while keeping the induction hypothesis in mind, than to check a written-out argument; in such cases the details are often omitted.

It is this line of thought that has given computer science such powerful ideas as invariants. The search for appropriate semantic domains for modeling various language features has given rise to a rich and elegant research area known as domain theory.

Denotational semantics takes a more abstract view of meaning. Giving a denotational semantics for a language consists of finding a collection of semantic domains and then defining an interpretation function mapping terms into elements of these domains. One major advantage of denotational semantics is that it abstracts from the gritty details of evaluation and highlights the essential concepts of the language. Axiomatic semantics takes a more direct approach to the laws a language satisfies: the meaning of a term is just what can be proved about it.

Proving that the behaviors of these different machines correspond in some suitable sense when executing the same program amounts to proving the correctness of an implementation of the language. Operational semantics has become an energetic research area in its own right and is often the method of choice for defining programming languages and studying their properties; it is the style used exclusively in this book.

The right-hand column of the figure defines an evaluation relation on terms, written t → t′. We now examine its parts in detail.

The left-hand column of the figure is a grammar defining two sets of expressions. The first is just a repetition, for convenience, of the syntax of terms; the second defines a subset of terms, the values.

The metavariable v is used throughout the book to stand for values. Some experts prefer to use the term reduction for the evaluation relation. The first rule, E-IfTrue, says that a conditional whose guard is literally true evaluates in one step to its then branch; E-IfFalse says that a conditional whose guard is literally false evaluates in one step to its else branch. The third evaluation rule, E-If, says that if the guard can itself take a step of evaluation, then the whole conditional takes a step in which only the guard is evaluated. In terms of abstract machines, each evaluation step corresponds to one machine transition. The figure summarizes the definition.

The different character of the rules is sometimes emphasized by referring to E-IfTrue and E-IfFalse as computation rules and to E-If as a congruence rule. What these rules do not say is just as important as what they do say: the constants true and false do not evaluate to anything, and nothing in the rules allows us to evaluate the then or else branch of a conditional before its guard.

In a sense, E-If directs the work of evaluation toward the guard; the E-IfTrue and E-IfFalse rules tell us what to do when we reach the end of this process and find ourselves with a conditional whose guard is already fully evaluated.

The one-step evaluation relation is defined by three inference rules (or, more precisely, rule schemas). This interplay between the rules determines a particular evaluation strategy for conditionals: given a nested conditional, our only choice is to evaluate the outer conditional's guard first. The derivability of a given statement can be justified by exhibiting a derivation tree whose leaves are labeled with instances of E-IfTrue or E-IfFalse and whose internal nodes are labeled with instances of E-If.
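As a sketch of how these three rules behave (mine, not the book's implementation, which comes later in a fuller form), here is the boolean fragment of one-step evaluation in OCaml, returning an option since evaluation is a partial function:

```ocaml
type term =
  | TmTrue
  | TmFalse
  | TmIf of term * term * term

(* One step of evaluation; None when no rule applies (t is a normal form). *)
let rec eval1 t =
  match t with
  | TmIf (TmTrue, t2, _) -> Some t2                 (* E-IfTrue  *)
  | TmIf (TmFalse, _, t3) -> Some t3                (* E-IfFalse *)
  | TmIf (t1, t2, t3) ->                            (* E-If      *)
      (match eval1 t1 with
       | Some t1' -> Some (TmIf (t1', t2, t3))
       | None -> None)
  | _ -> None
```

Note how the order of the match cases mirrors the rules: the congruence case fires only when the guard is not yet a boolean constant, so the guard of a nested conditional is always evaluated first.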

A rule is satisfied by a relation if, for each instance of the rule, either the conclusion is in the relation or one of the premises is not. (The terminology will make more sense when we consider derivations for other inductively defined relations.) When the pair (t, t′) is in the evaluation relation, we can reason by induction on a derivation of t → t′; notice that the induction here is not on the length of an evaluation sequence but on the structure of a single-step derivation. We could just as well say that we are performing induction on the structure of t. The proof of the following theorem, the determinacy of one-step evaluation, illustrates this technique: for example, if the last rule used in one derivation is E-IfTrue, then the guard is the literal true, so the guard cannot take a step, and by the same reasoning as above the last rule in the second derivation can only be E-IfTrue as well.

A term t is in normal form if no evaluation rule applies to it, i.e., if there is no t′ such that t → t′. Every value is in normal form. In the present system the converse also holds: if t is in normal form, then t is a value. This will not be the case in general; in Chapter 12 we will return to this point. To prove the converse, suppose that t is not a value; it is easy to show, by structural induction, that t is not in normal form. Since t is not a value, it must have the form if t1 then t2 else t3. Consider the possible forms of t1: if t1 is true or false, then E-IfTrue or E-IfFalse applies to t; if t1 is neither true nor false, then t1 is not a value, the induction hypothesis then applies, t1 can take a step, and hence rule E-If applies to t. Exercise: Spell out the induction principle used in the preceding proof. We can rephrase this observation in more general terms as a fact about values: as programmers we are just as interested in the final results of evaluation as in the process itself, and the values are exactly those final results. The uniqueness of normal forms is a corollary of the determinacy of single-step evaluation.

We next define a multi-step evaluation relation that relates a term to all of the terms that can be derived from it by zero or more single steps of evaluation. To prove that evaluation terminates, we use a function f, often called a termination measure for the evaluation relation, that strictly decreases on every step. An infinite sequence of evaluation steps beginning from t could then be mapped, via f, onto an infinite decreasing sequence in a well-founded set S; since S is well founded, no such sequence exists. Even in situations where this simple argument does not apply directly, termination can often still be established; in Chapter 12 we will see a termination proof with a somewhat more complex structure.

Most termination proofs in computer science have the same basic form: exhibit a termination measure and show that every step decreases it. Here, just observe that each evaluation step reduces the size of the term, and that size is a termination measure because the usual order on the natural numbers is well founded. Exercise: Rephrase the definition of multi-step evaluation as a set of inference rules. Exercise: Suppose instead that we add a new evaluation rule to the system; do any of the proofs need to change?

The notation in the upper-right corner of the figure reminds us to regard it as an extension of the earlier one; to avoid wasting space on boilerplate, only the new rules are shown. The evaluation rules in the right-hand column of the figure follow the same pattern as before. There are four computation rules (E-PredZero, E-PredSucc, E-IszeroZero, and E-IszeroSucc) plus congruence rules for succ, pred, and iszero. The syntactic category of numeric values nv plays an important role in these rules: in E-PredSucc, for instance, the argument of succ is required to be a numeric value. The intuition is that the final result of evaluating an arithmetic expression can be a number. Exercise: Which of the above theorems remain true for the extended system?

The definition of values is a little more interesting: a value is now either a boolean constant or a numeric value, where the numeric values are 0 and the successors of numeric values. The figure summarizes the new parts of the definition.

Exercise: Show that the determinacy of single-step evaluation remains valid for the extended system. Note that the grammar still permits dubious terms such as the successor of false.

Under the rules in the figure (arithmetic expressions, NB), evaluating such a term would require instantiating the metavariable nv1 with pred 0, which is not a numeric value, so no rule applies. We call such terms stuck: a closed term is stuck if it is in normal form but not a value. A different way of formalizing meaningless states of the abstract machine is to introduce a new term called wrong and augment the operational semantics with rules that explicitly generate wrong in all the situations where the present semantics gets stuck. As is often the case when proving things about programming languages, the details of the formalization matter; the formalization used in this book is called the small-step style.

Two styles of operational semantics are in common use. The one used so far is the small-step style; an alternative style, the big-step style, relates a term directly to its final value in a single judgment. To do this in detail, we need big-step versions of the evaluation rules.

The style of operational semantics that we are using here goes back to a technical report by Plotkin. The big-step style is explored in the following exercises. Exercise: The big-step evaluation rules for our language of boolean and arithmetic expressions relate each term directly to the value it finally produces; write them out. Exercise: Suppose we want to change the evaluation strategy of our language so that the then and else branches of an if expression are evaluated, in that order, before the guard is evaluated; show how the evaluation rules need to change to achieve this effect.
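A big-step evaluator for the full language of booleans and numbers might be sketched as follows. This is my transcription of the standard rules, not the book's code; `evalbig` and the rule names in comments are invented, and stuck terms are reported with None.

```ocaml
type term =
  | TmTrue | TmFalse | TmZero
  | TmSucc of term | TmPred of term | TmIsZero of term
  | TmIf of term * term * term

let rec isnumericval t =
  match t with
  | TmZero -> true
  | TmSucc t1 -> isnumericval t1
  | _ -> false

(* Big-step evaluation: relate a term directly to its final value. *)
let rec evalbig t =
  match t with
  | TmTrue | TmFalse | TmZero -> Some t                        (* B-Value      *)
  | TmIf (t1, t2, t3) ->
      (match evalbig t1 with
       | Some TmTrue -> evalbig t2                             (* B-IfTrue     *)
       | Some TmFalse -> evalbig t3                            (* B-IfFalse    *)
       | _ -> None)
  | TmSucc t1 ->
      (match evalbig t1 with
       | Some nv1 when isnumericval nv1 -> Some (TmSucc nv1)   (* B-Succ       *)
       | _ -> None)
  | TmPred t1 ->
      (match evalbig t1 with
       | Some TmZero -> Some TmZero                            (* B-PredZero   *)
       | Some (TmSucc nv1) when isnumericval nv1 -> Some nv1   (* B-PredSucc   *)
       | _ -> None)
  | TmIsZero t1 ->
      (match evalbig t1 with
       | Some TmZero -> Some TmTrue                            (* B-IszeroZero *)
       | Some (TmSucc nv1) when isnumericval nv1 -> Some TmFalse
       | _ -> None)
```

Notice that each case evaluates subterms all the way to values before combining them: that is the essential difference from the one-step rules.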

See Astesiano and Hennessy for more detailed developments. Structural induction was introduced to computer science by Burstall Why bother doing proofs about programming languages?

They are almost always boring if the definitions are right. The definitions are almost always wrong. We describe here the key components of an implementation of our language of booleans and arithmetic expressions. The code presented here and in the implementation sections throughout the book is written in a popular language from the ML family (Gordon, Milner, and Wadsworth) called Objective Caml, or OCaml for short (Leroy; Cousineau and Mauny). Only a small subset of the full OCaml language is used; it should be easy to translate the examples here into most other languages.

The most important requirements are automatic storage management (garbage collection) and easy facilities for defining recursive functions by pattern matching over structured data types. Languages with neither, such as C (Kernighan and Ritchie), are even less suitable. The code in this chapter can be found in the arith implementation in the web repository. Of course, tastes in languages vary and good programmers can use whatever tools come to hand to get the job done; you are free to use whatever language you prefer.

But be warned: if you do, you will not be able to make direct use of the implementations provided. Our first job is to define a type of OCaml values representing terms. The constructors TmTrue through TmIsZero name the different sorts of nodes in the abstract syntax trees of type term; the type following of in each case specifies the number of subtrees (and annotations) that will be attached to that sort of node. Each abstract syntax tree node is annotated with a value of type info, which describes where (what character position in which source file) the node originated.

This information is created by the parser when it scans the input file, and it is used by printing functions to indicate to the user where an error occurred. For purposes of understanding the basic algorithms of evaluation, typechecking, etc., the info annotations can largely be ignored. The evaluator is a typical example of recursive definition by pattern matching in OCaml: the rec keyword tells the compiler that this is a recursive function definition, i.e., one whose body may involve calls to the function being defined.
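The datatype just described might look like the following. This is only a sketch consistent with the text: in the actual arith implementation, info is provided by a support library, so the concrete definition of info here (and the tmInfo accessor) is illustrative rather than the book's.

```ocaml
(* Source-location information: FI for a known file/line/column,
   UNKNOWN for synthesized terms (the dummyinfo of the text). *)
type info = FI of string * int * int | UNKNOWN

type term =
  | TmTrue of info
  | TmFalse of info
  | TmIf of info * term * term * term
  | TmZero of info
  | TmSucc of info * term
  | TmPred of info * term
  | TmIsZero of info * term

(* Extract the annotation from a node, e.g. for error reporting. *)
let tmInfo t =
  match t with
  | TmTrue fi | TmFalse fi | TmZero fi -> fi
  | TmIf (fi, _, _, _) -> fi
  | TmSucc (fi, _) | TmPred (fi, _) | TmIsZero (fi, _) -> fi
```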

The function that checks whether a term is a value is similar. The implementation of the evaluation relation closely follows the single-step evaluation rules in the figures above. As we have seen, these rules define a partial function that, when applied to a term that is not yet a value, yields the next step of evaluation for that term.

When applied to a value, the single-step evaluation function yields no result. To translate the evaluation rules into OCaml, we need to make a decision about how to handle this case. One straightforward approach is to write the single-step evaluation function eval1 so that it raises an exception when none of the evaluation rules apply to the term that it is given. Another possibility would be to make the single-step evaluator return a term option indicating whether it was successful and, if so, giving the resulting term; this would also work fine, but would require a little more bookkeeping.

We begin by defining the exception to be raised when no evaluation rule applies. Note that there are several places where we are constructing terms from scratch rather than reorganizing existing terms; the constant dummyinfo is used as the info annotation in such terms. Another point to notice in the definition of eval1 is the use of explicit when clauses in patterns, which capture the effect of metavariable names like v and nv in the presentation of the evaluation relation. Finally, the eval function takes a term and finds its normal form by repeatedly calling eval1.

Whenever eval1 returns a new term t′, we make a recursive call to continue evaluating from t′. When eval1 finally reaches a point where no rule applies, it raises the exception NoRuleApplies, causing eval to break out of the loop and return the final term in the sequence.
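Putting the pieces together, the evaluator just described can be sketched as follows. This is a self-contained reconstruction, not the book's exact code: the info annotations are omitted so the fragment stands alone, whereas the real arith implementation carries them through every constructor.

```ocaml
type term =
  | TmTrue | TmFalse | TmZero
  | TmSucc of term | TmPred of term | TmIsZero of term
  | TmIf of term * term * term

exception NoRuleApplies

let rec isnumericval t =
  match t with
  | TmZero -> true
  | TmSucc t1 -> isnumericval t1
  | _ -> false

(* Single-step evaluation; raises NoRuleApplies on normal forms.
   The when clauses play the role of the metavariable nv in the rules. *)
let rec eval1 t =
  match t with
  | TmIf (TmTrue, t2, _) -> t2
  | TmIf (TmFalse, _, t3) -> t3
  | TmIf (t1, t2, t3) -> TmIf (eval1 t1, t2, t3)
  | TmSucc t1 -> TmSucc (eval1 t1)
  | TmPred TmZero -> TmZero
  | TmPred (TmSucc nv1) when isnumericval nv1 -> nv1
  | TmPred t1 -> TmPred (eval1 t1)
  | TmIsZero TmZero -> TmTrue
  | TmIsZero (TmSucc nv1) when isnumericval nv1 -> TmFalse
  | TmIsZero t1 -> TmIsZero (eval1 t1)
  | _ -> raise NoRuleApplies

(* Multi-step evaluation: call eval1 until no rule applies. *)
let rec eval t =
  try eval (eval1 t) with NoRuleApplies -> t
```

A stuck term such as succ false is simply returned unchanged by eval, since no rule applies to it.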

Obviously, this simple evaluator is tuned for easy comparison with the mathematical definition of evaluation, not for finding normal forms as quickly as possible. Exercise: Change the definition of the eval function in the arith implementation to the big-step style introduced in the earlier exercise. Of course, there are many parts to an interpreter or compiler, even a very simple one, besides those we have discussed explicitly here.

In reality, terms to be evaluated start out as sequences of characters in files. They must be read from the file system, processed into streams of tokens by a lexical analyzer, and further processed into abstract syntax trees by a parser, before they can actually be evaluated by the functions that we have seen.

Furthermore, after evaluation, the results need to be printed out. Interested readers are encouraged to have a look at the on-line OCaml code for the whole interpreter. Exercise: We write eval this way for the sake of simplicity, but putting a try handler in a recursive loop is not actually very good style in ML. Why not? What is a better way to write eval? The core language used by Landin was the lambda-calculus, a formal system invented in the 1920s by Alonzo Church, in which all computation is reduced to the basic operations of function definition and application.

Its importance arises from the fact that it can be viewed simultaneously as a simple programming language in which computations can be described and as a mathematical object about which rigorous statements can be proved. The lambda-calculus is just one of a large number of core calculi that have been used for similar purposes. Most of the concepts and techniques that we will develop for the lambda-calculus can be transferred quite directly to these other calculi.

One case study along these lines is developed in a later chapter. The associated OCaml implementation is fulluntyped. The lambda-calculus can be enriched in a variety of ways. First, it is often convenient to add special concrete syntax for features like numbers, tuples, records, etc.

More interestingly, we can add more complex features such as mutable reference cells or nonlocal exception handling, which can be modeled in the core language only by using rather heavy translations. As we shall see in later chapters, extensions to the core language often involve extensions to the type system as well. Procedural or functional abstraction is a key feature of essentially all programming languages.

Instead of writing the same calculation over and over, we write a procedure or function that performs the calculation generically, in terms of one or more named parameters, and then instantiate this function as needed, providing values for the parameters in each case. For each nonnegative number n, instantiating the function factorial with the argument n yields the factorial of n as result.

In the lambda-calculus everything is a function: the arguments accepted by functions are themselves functions, and the result returned by a function is another function. The syntax of the lambda-calculus comprises just three sorts of terms: variables, abstractions, and applications.

These ways of forming terms are summarized in the following grammar. When discussing the syntax of programming languages, it is useful to distinguish two levels of structure. The concrete syntax or surface syntax of the language refers to the strings of characters that programmers directly read and write.

Abstract syntax is a much simpler internal representation of programs as labeled trees, called abstract syntax trees or ASTs. The tree representation renders the structure of terms immediately obvious, making it a natural fit for the complex manipulations involved in both rigorous language definitions (and proofs about them) and the internals of compilers and interpreters. The transformation from concrete to abstract syntax takes place in two stages. First, a lexical analyzer (or lexer) converts the string of characters written by the programmer into a sequence of tokens: identifiers, keywords, constants, punctuation, etc.

The lexer removes comments and deals with issues such as whitespace and capitalization conventions, and formats for numeric and string constants. Next, a parser transforms this sequence of tokens into an abstract syntax tree. During parsing, various conventions such as operator precedence and associativity reduce the need to clutter surface programs with parentheses to explicitly indicate the structure of compound expressions. The phrase lambda-term is used to refer to arbitrary terms in the lambda-calculus.

Definitions of full-blown languages sometimes use even more levels. For example, following Landin, it is often useful to define the behaviors of some language constructs as derived forms, by translating them into combinations of other, more basic, features.

The restricted sublanguage containing just these core features is then called the internal language (IL), while the full language including all derived forms is called the external language (EL).

The transformation from EL to IL is (at least conceptually) performed in a separate pass, following parsing. Derived forms are discussed later in the book. The focus of attention in this book is on abstract, not concrete, syntax. Grammars like the one for lambda-terms above should be understood as describing legal tree structures, not strings of tokens or characters.

Of course, when we write terms in examples, definitions, theorems, and proofs, we will need to express them in a concrete, linear notation, but we always have their underlying abstract syntax trees in mind. To save writing too many parentheses, we adopt two conventions when writing lambda-terms in linear form.

First, application associates to the left; that is, s t u stands for the same tree as (s t) u. Second, the bodies of abstractions are taken to extend as far to the right as possible. Another subtlety in the syntax definition above concerns the use of metavariables. We will continue to use the metavariable t (as well as s and u), with or without subscripts, to stand for an arbitrary term. Note, here, that x is a metavariable ranging over variables! To make matters worse, the set of short names is limited, and we will also want to use x, y, etc. as object-language variables. In such cases, however, the context will always make it clear which is which.
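These parenthesization conventions can be made concrete in a small printer for lambda-terms. The sketch below is my own (the type and function names are invented); it inserts parentheses only where the left-associativity of application and the rightward extent of abstraction bodies require them.

```ocaml
type term =
  | TmVar of string
  | TmAbs of string * term
  | TmApp of term * term

(* Print with minimal parentheses: application associates to the left,
   and abstraction bodies extend as far to the right as possible. *)
let rec show t =
  match t with
  | TmAbs (x, t1) -> "lambda " ^ x ^ ". " ^ show t1
  | t -> show_app t

and show_app t =
  match t with
  | TmApp (t1, t2) -> show_app t1 ^ " " ^ show_atom t2
  | t -> show_atom t

and show_atom t =
  match t with
  | TmVar x -> x
  | t -> "(" ^ show t ^ ")"
```

For example, the tree for (s t) u prints as s t u, while s applied to (t u), or an application whose left side is an abstraction, gets the parentheses back.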

Homeworks. There will be approximately six homework assignments during the course of the semester. I recommend that homeworks be typeset using the LaTeX document preparation system, but this is not required: you may prepare your homework by hand, so long as you make sure that it is clearly legible.

I plan to provide LaTeX templates for you, so this is a good chance to learn one of the more common tools for writing academic computer science papers, though the learning curve may be steep at first.

I'm happy to give guidance on how to work with LaTeX, though I probably don't know all the latest tricks. Assignments must be your individual work. You may discuss the homeworks with others, but you must write up and hand in your own solutions.

In particular, follow the whiteboard policy: at the end of the discussion the "whiteboard" must be erased and you must not transcribe or take with you anything that has been written on the board or elsewhere during your discussion.

You must be able to reproduce the results solely on your own after any such discussion. Do not draw upon solutions to assignments or notes from similar courses, nor use other such materials.

Participation includes asking and answering questions in class and on Piazza, volunteering to solve problems in class, and posting interesting course-related pointers to Piazza.

Spring semester. Where: ECCR. Textbook: Pierce. There will be a midterm and a final exam; the midterm is take-home and due March 4.

Lectures:
- Lecture 2: Inductive sets, recursive functions, proof by rule induction (pdf)
- Lecture 3: Big-step, De Bruijn, locally nameless (pdf)
- Lecture 5: Virtual and abstract machines (pdf)
- Lecture 6: Simple types and type safety (pdf)
- Lecture 7: References, exceptions, normalization (pdf)