Edited by Maribel Fernández, Chris Fox, and Shalom Lappin
King’s College London
Strand, London, U.K.
8th –9th December 2003
Workshop on Lambda Calculus, Type Theory, and Natural Language
King’s College London, Strand, London, U.K., 8th–9th December 2003
Chris Fox
Shalom Lappin
David Clark King’s College London, U.K.
Robin Cooper Göteborg University, Sweden
Maribel Fernández King's College London, U.K.
Chris Fox University of Essex, U.K.
Dov Gabbay King’s College London, U.K.
Jamie Gabbay INRIA Futurs, France
Simon Gay University of Glasgow, U.K.
Jonathan Ginzburg King’s College London, U.K.
Chris Hankin Imperial College, U.K.
Shalom Lappin King’s College London, U.K.
Ian Mackie King’s College London, U.K.
François-Régis Sinot École Polytechnique, France
Ray Turner University of Essex, U.K.
Keynote Speaker:
Jan van Eijck CWI, Netherlands
Department of Computer Science
King's College London
Strand
London WC2R 2LS
United Kingdom
http://www.dcs.kcl.ac.uk

Department of Computer Science
University of Essex
Wivenhoe Park
Colchester CO4 3SQ
United Kingdom
http://cswww.essex.ac.uk

Programme

Monday, 8th December.
9:30 Coffee — Welcome
10:00 Opening
10:15 Chris Hankin. Lambda calculus and static analysis.
11:00 Coffee Break
11:30 Ian Mackie. Reduction in the lambda calculus.
12:15 François-Régis Sinot. N-ary director strings: Efficient representations of variables in terms.
13:00 Lunch
14:00 Jan van Eijck. Relations, Types and Scoping.
15:00 Chris Fox and Shalom Lappin. Underspecified Representations for Natural Language using Curry-typed λ-Calculus.
15:45 Coffee Break
16:15 Robin Cooper. Records and record types in semantic theory.
17:00 Jonathan Ginzburg. Abstraction and Ontology.
19:00 Dinner

Tuesday, 9th December.

9:30 Coffee
10:00 Maribel Fernández. Rewriting Frameworks and Types.
10:45 David Clark. Measuring interference in PCF.
11:30 Coffee Break
12:00 Jamie Gabbay and Dov Gabbay. The restart rule and evaluation.
12:45 Lunch
14:00 Ray Turner. Polymorphism in Specifications.
14:45 Simon Gay. Session Types: Specifying Structured Communication.
The Workshop on Lambda Calculus, Type Theory, and Natural Language (LCTTNL) is designed to bring together researchers interested in functional programming, type theory, and the application of the lambda calculus to the analysis of natural language. The communities that have developed around each of these areas share many formal and computational interests. Unfortunately, they have not had much contact with each other. We hope that LCTTNL will provide a forum in which people working on the lambda calculus from a variety of distinct perspectives will share their research and come to appreciate new domains of application. If this encounter is successful, it can provide the basis for future cooperation and ongoing meetings of this kind.
We would like to thank the Computer Science Departments of King’s College London and the University of Essex for sponsorship and ﬁnancial support. We are grateful to the speakers for their participation, and to our keynote speaker, Jan van Eijck, for agreeing to present his lecture.
Relations, Types and Scoping (abstract)
Jan van Eijck, CWI

Many arguments for flexible type assignment to syntactic categories have to do with the need to account for the various scopings resulting from the interaction of quantified DPs with other quantified DPs, or with intensional or negated verb contexts.
We will deﬁne a type for arbitrary arity relations in polymorphic type theory. In terms of this, we develop the Boolean algebra of relations as far as needed for natural language semantics. The type for relations is ﬂexible: it can do duty for the whole family of types t, e → t, e → e → t, and so on.
If we use this ﬂexible type for the interpretations of verbs, we can perform a ‘ﬂexible lift’ on DP interpretations, so that DPs get interpreted as operations on verb meanings. This leads to an elegant implementation of Keenan’s proposal for polyadic quantiﬁcation in natural language (‘Beyond the Frege Boundary’). It turns out that scope reorderings are particularly easy to implement in this framework.
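As an illustrative sketch (not van Eijck's actual construction, and with a hypothetical toy model), the flexible type can be rendered by representing an arity-n relation as a set of n-tuples over the entity domain, with arity 0 playing the role of type t. Lifted DPs become operations that consume an argument slot of a relation, and scope reorderings amount simply to reordering the applications of the lifted DPs:

```python
from itertools import product

# Illustrative sketch only: an arity-n relation is a pair (n, set of
# n-tuples over the entity domain E); arity 0 plays the role of type t.
E = ['a', 'b', 'c']                      # hypothetical entity domain
TRUE, FALSE = (0, {()}), (0, set())

def every(noun):
    """Lift 'every N' to an operation on relations: it consumes the
    first argument slot, lowering the arity by one."""
    def op(r):
        n, tuples = r
        return (n - 1, {t for t in product(E, repeat=n - 1)
                        if all((x,) + t in tuples for x in noun)})
    return op

def some(noun):
    """Lift 'some N' analogously, with existential force."""
    def op(r):
        n, tuples = r
        return (n - 1, {t for t in product(E, repeat=n - 1)
                        if any((x,) + t in tuples for x in noun)})
    return op

def conv(r):
    """Converse of a binary relation (swap the two argument slots)."""
    n, tuples = r
    return (n, {(t[1], t[0]) for t in tuples})

# Toy model: admires(admirer, admiree)
admires = (2, {('a', 'b'), ('c', 'c')})
students, teachers = ['a', 'c'], ['b', 'c']

# "Every student admires some teacher": the two scope readings are just
# two orders of applying the lifted DPs to the verb meaning.
narrow = every(students)(some(teachers)(conv(admires)))   # every > some
wide = some(teachers)(every(students)(admires))           # some > every
print(narrow == TRUE, wide == TRUE)                       # True False
```

In this toy model the two readings come apart: each student admires some teacher or other, but no single teacher is admired by every student.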
Measuring Interference in PCF (abstract)
D. Clark, S. Hunt and P. Malacaria

In recent work we have developed a security analysis for a simple imperative (while) language which measures the leakage of secrets from confidential variables to a low-security attacker via observation of the low-security outputs from the execution of a program [1, 2]. The analysis uses information theory to quantify the leakage and takes an information-theoretic view: the quantity of information supplied to the program is taken to be the upper limit on the amount of information induced by the probability distribution on inputs (or entropy) as given by Shannon [4]. This analysis is based not on the syntax of the program per se but on a use–definition graph (UDG), which is derived from the flow chart of the program and relates each use of a variable in an expression to its previous definition. The analysis uses this graph to trace the flow of information through the program. In the course of the analysis it is necessary to associate random variables with program variables and expressions. The analysis proceeds as a forward analysis, propagating bounds on the entropic measure of confidential information associated with the value, at a program point, of each variable or expression.
We have speculated that this approach could be applied to a functional language not too distant from the typed lambda calculus but containing delta rules for arithmetic, such as an enriched PCF. This would rely on using a version of the control flow analysis (CFA) based on game semantics for PCF as developed by Malacaria and Hankin [3]. To date we have yet to define a suitable notion of random variable for this CFA. This difficulty has caused us to examine anew the relationship between program variables, program points, and random variables, out of which has come the basis for this talk.
In this talk we examine another possible approach to quantifying interference between variables in a PCF program. The approach is quite general and could in principle be applied to any program whose semantics can be given as a function. We establish a relationship between program variables, random variables and equivalence relations, and show how this relationship allows us to set an upper bound on the interference between variables at the beginning and the end of the program. We illustrate the power of the approach by means of examples. A feature of this approach is the separation between the qualitative and quantitative aspects of the analysis. The key notion is that constraints on the observation of outputs induce an equivalence relation on inputs (which may be interpreted as a random variable). This equivalence relation is the “coarsest” which is consistent with the constraints, given the semantics of the program. An upper bound on the entropy of the random variable may be found by discovering any “finer” equivalence relation on inputs. A way of doing this consistent with the program semantics is via an
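The key notion above can be sketched concretely (a hypothetical example, not the analysis of the papers cited below): partitioning the inputs by observable output gives the coarsest equivalence relation consistent with the observations, and the Shannon entropy of the induced random variable, under an assumed uniform input distribution, measures the leakage; any finer partition yields an upper bound.

```python
import math
from collections import defaultdict

def induced_partition(program, inputs):
    """Group inputs by observed output: the coarsest equivalence
    relation on inputs consistent with observing the result."""
    blocks = defaultdict(set)
    for x in inputs:
        blocks[program(x)].add(x)
    return list(blocks.values())

def entropy_bits(partition):
    """Shannon entropy of the induced random variable, assuming a
    uniform distribution over the inputs."""
    total = sum(len(b) for b in partition)
    return -sum(len(b) / total * math.log2(len(b) / total)
                for b in partition)

# Hypothetical program: the attacker observes only the parity of a
# 3-bit secret h, so exactly 1 of its 3 bits leaks.
leak = lambda h: h % 2
part = induced_partition(leak, range(8))
print(entropy_bits(part))                                     # 1.0

# The identity (finest) partition gives the trivial upper bound:
print(entropy_bits(induced_partition(lambda h: h, range(8))))  # 3.0
```

Here the qualitative step (computing the partition) is cleanly separated from the quantitative step (computing its entropy), mirroring the separation described in the abstract.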
References

[1] D. Clark, S. Hunt, and P. Malacaria. Quantified interference for a while language. Submitted to ESOP'04.
[2] D. Clark, S. Hunt, and P. Malacaria. Quantified interference: information theory and information flow security. Submitted to QAPL'04.
[3] P. Malacaria and C. Hankin. A new approach to control flow analysis. In Proceedings of CC'98, 1998.
[4] Claude Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27:379–423 and 623–656, July and October 1948. Available on-line at http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html.
Rewriting Frameworks and Types (abstract)
Maribel Fernández
King's College London

Rewriting systems are usually defined by specifying a set of terms, and a set of rewrite rules that are used to “reduce” terms. This simple idea is very powerful, and has had a deep influence on the development of computational models, programming and specification languages, theorem provers and proof assistants. Term rewriting systems can be used to define:
• first-order equational theories: terms are built out of first-order functions and variables, and rewrite rules define an equality relation. We obtain in this case the standard (first-order) term rewriting systems, sometimes called algebraic systems.
• a theory of functions: Church's λ-calculus was designed as a notation for functions, using λ-abstraction, application and variables. The main rewrite rule is β-reduction:

(λx.M) N → M[x := N]

The right-hand side of this rule includes a substitution, which is not part of the syntax of λ-terms: it is a meta-operation defined modulo α-conversion, since λ is a binder. This is an example of a “higher-order” rule.
• higher-order equational theories, using first- and higher-order functions and variables: we obtain in this case higher-order algebraic systems.
• a generalization of the previous cases, where terms are defined using functions, variables and binders. These are usually called higher-order rewriting systems, and can be divided into two classes:
Algebraic λ-calculi: the syntax of the λ-calculus is enriched with algebraic functions, and the β-reduction rule is combined with algebraic rewrite rules. This kind of rewrite system is used as a computational model for functional programming languages and proof assistants.
First-order rewriting extended with a general notion of binder: Examples of systems in this class are Klop’s Combinatory Reduction Systems (CRS), Khasidashvili’s Expression Reduction Systems, Nipkow’s Higher-Order Rewrite Systems, and the ρ-calculus of Cirstea and Kirchner.
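The higher-order flavour of the β-rule, and the subtlety of substitution as a meta-operation, can be made concrete in a small sketch (an illustrative encoding of λ-terms as tuples, not tied to any of the formalisms above): substitution must rename bound variables on the fly to avoid capturing free variables of the substituted term.

```python
# λ-terms as tuples: ('var', x) | ('lam', x, body) | ('app', f, a)

def free_vars(t):
    tag = t[0]
    if tag == 'var': return {t[1]}
    if tag == 'lam': return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def fresh(avoid):
    i = 0
    while f'v{i}' in avoid: i += 1
    return f'v{i}'

def subst(t, x, n):
    """M[x := N], a meta-operation defined modulo alpha-conversion."""
    tag = t[0]
    if tag == 'var':
        return n if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, n), subst(t[2], x, n))
    y, body = t[1], t[2]
    if y == x:
        return t                      # x is shadowed by the binder
    if y in free_vars(n):             # rename to avoid capture
        z = fresh(free_vars(body) | free_vars(n))
        body, y = subst(body, y, ('var', z)), z
    return ('lam', y, subst(body, x, n))

def beta(t):
    """One leftmost-outermost beta-step, or None if t is in normal form."""
    if t[0] == 'app' and t[1][0] == 'lam':
        return subst(t[1][2], t[1][1], t[2])
    if t[0] == 'app':
        r = beta(t[1])
        if r is not None: return ('app', r, t[2])
        r = beta(t[2])
        return None if r is None else ('app', t[1], r)
    if t[0] == 'lam':
        r = beta(t[2])
        return None if r is None else ('lam', t[1], r)
    return None

# (λx.λy.x) y : the bound y must be renamed, or the free y is captured
t = ('app', ('lam', 'x', ('lam', 'y', ('var', 'x'))), ('var', 'y'))
print(beta(t))                        # ('lam', 'v0', ('var', 'y'))
```

Without the renaming step, the result would be λy.y, wrongly identifying the free argument with the bound variable.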
From the computational point of view all the systems mentioned above are equally powerful (they are all Turing complete), but from the point of view of their expressive power there are clear diﬀerences. The systems in the last class are the most expressive. For instance, it is easy to specify the λ-calculus, object-calculi (e.g. Abadi and Cardelli’s ς-calculus), and process calculi (e.g. the π-calculus), using Combinatory Reduction Systems (see ).
Although rewriting can be deﬁned on untyped terms, for most of the formalisms mentioned above some notion of typing has been deﬁned. Types can be used to deﬁne semantic interpretations for terms and rewrite rules, as speciﬁcations of programs deﬁned by rewrite rules, as a link between computation and logic (as in the typed versions of the λ-calculus), etc. When combining λ-calculus with algebraic rewriting, type disciplines provide an environment in which rewrite rules and β-reduction can interact without loss of their useful properties [4, 6, 3].
Type disciplines come in two main flavours: explicitly typed systems (à la Church), and type inference systems (à la Curry), which do not require type annotations. An important feature of type systems is a notion of principal type, that is, a type from which all the other types of a term can be derived, or, more generally, a notion of principal typing. The latter means that any typing judgement for a term can be obtained from the principal one: not only the type but also the basis (containing the type assumptions for the free variables) is obtained. Unlike principal types, principal typings provide support for separate compilation, incremental type inference, and accurate type error messages.
In the context of the λ-calculus, type inference disciplines have been extensively studied, and type inference is now standard in functional languages (e.g. ML and Haskell). ML's type system is polymorphic and has principal types, but does not have principal typings. System F provides a much more general notion of polymorphism, but lacks principal typings, and type inference is undecidable in general (although it is decidable for some subsystems, in particular if we consider types of rank 2). The intersection type discipline is an extension of Curry's system that allows more than one type for variables and terms, considering the type constructor ∩ in addition to the arrow. Intersection type systems have principal typings (but are also undecidable above rank 2), and can be used to characterize normalization of λ-terms. Adding intersection types to System F, Margaria and Zacchi obtained a very powerful type system for the λ-calculus, which has principal typings.
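A minimal sketch of a Curry-style discipline with principal types, for the simply typed λ-calculus only (far short of ML's let-polymorphism or intersection types; all names here are illustrative): inference generates type equations, first-order unification solves them, and the resolved type is principal.

```python
import itertools

# Types: ('tv', n) for type variables, ('->', a, b) for arrows.
fresh_ids = itertools.count()
def fresh(): return ('tv', next(fresh_ids))

def walk(t, s):
    """Chase a type variable through the substitution s."""
    while t[0] == 'tv' and t in s: t = s[t]
    return t

def unify(a, b, s):
    """First-order unification (occurs check omitted for brevity)."""
    a, b = walk(a, s), walk(b, s)
    if a == b: return s
    if a[0] == 'tv': return {**s, a: b}
    if b[0] == 'tv': return {**s, b: a}
    if a[0] == '->' and b[0] == '->':
        s = unify(a[1], b[1], s)
        return unify(a[2], b[2], s)
    raise TypeError('cannot unify')

def resolve(t, s):
    t = walk(t, s)
    if t[0] == '->':
        return ('->', resolve(t[1], s), resolve(t[2], s))
    return t

def infer(term, env, s):
    """Generate and solve constraints; returns (type, substitution)."""
    tag = term[0]
    if tag == 'var':
        return env[term[1]], s
    if tag == 'lam':
        a = fresh()
        b, s = infer(term[2], {**env, term[1]: a}, s)
        return ('->', a, b), s
    f, s = infer(term[1], env, s)          # application
    x, s = infer(term[2], env, s)
    r = fresh()
    return r, unify(f, ('->', x, r), s)

def show(t):
    if t[0] == 'tv': return chr(97 + t[1])
    return f'({show(t[1])} -> {show(t[2])})'

# Principal type of λf.λx. f (f x)
twice = ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'),
         ('app', ('var', 'f'), ('var', 'x')))))
ty, s = infer(twice, {}, {})
print(show(resolve(ty, s)))            # ((d -> d) -> (d -> d))
```

Every other simple type for this term is a substitution instance of the printed one, which is what principality means here; extending the sketch to principal *typings* would also return the inferred basis for free variables.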
Type inference for algebraic term rewriting systems, or more general higher-order rewriting systems, has attracted less attention. In some cases, it is possible to translate a term rewriting system into the λ-calculus and use the type systems mentioned above. However, this imposes strong restrictions on the rewrite rules. For example, adding the rewrite rules defining surjective pairing to the λ-calculus gives a system in which the Church-Rosser property no longer holds, which implies that surjective pairing is not definable in the λ-calculus.