
NEPLS

These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily snapshots in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please don't treat these pages as a definitive source.


Reductions, Intersection Types, and Explicit Substitutions

Dan Dougherty (Wesleyan University) and Pierre Lescanne (ENS-Lyon)

This paper is part of a general programme of treating calculi of
explicit substitutions as the primary lambda-calculi, from the point
of view of foundations as well as applications.

We work in a composition-free calculus of explicit substitutions and
an augmented calculus obtained by adding explicit garbage-collection,
and explore the relationship between intersection-types and reduction.

We show that the terms which normalize by leftmost reduction and the
terms which normalize by head reduction can each be characterized as
the terms typable in a certain system.  The relationship between
typability and strong normalization is subtly different from the
classical case. Our notions of leftmost- and head-reduction are
non-deterministic, and our normalization theorems apply to any
computations obeying these strategies.  In this way we refine and
strengthen the classical normalization theorems.

The proofs require some new techniques in the presence of reductions
involving explicit substitutions.  Indeed, in our proofs we do not
rely on results from classical lambda-calculus, which in our view is
subordinate to the calculus of explicit substitution.  The proof
technique we employ readily yields the result that a term is strongly
normalizing if and only if it is strongly normalizing in the calculus
extended by garbage-collection.
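
As an illustration (not taken from the talk), the OCaml sketch below gives one plausible shape for such a calculus: beta-reduction creates an explicit substitution instead of performing it at once, the substitution is then propagated step by step, and a garbage-collection rule discards a substitution whose variable no longer occurs free. The constructor names and rules here are assumptions of this sketch, not the authors' exact system.

    (* Hypothetical composition-free calculus with explicit substitutions.
       Sub (t, x, u) represents t[x := u], kept as an explicit construct. *)
    type term =
      | Var of string
      | Lam of string * term
      | App of term * term
      | Sub of term * string * term

    (* does x occur free in the given term? *)
    let rec free x = function
      | Var y -> x = y
      | Lam (y, t) -> x <> y && free x t
      | App (t, u) -> free x t || free x u
      | Sub (t, y, u) -> (x <> y && free x t) || free x u

    (* one reduction step at the top of the term; Beta only creates the
       explicit substitution, later steps push it inward or drop it *)
    let step = function
      | App (Lam (x, t), u) -> Some (Sub (t, x, u))                  (* Beta *)
      | Sub (Var x, y, u) when x = y -> Some u                       (* variable hit *)
      | Sub (Var x, _, _) -> Some (Var x)                            (* variable miss *)
      | Sub (App (t1, t2), x, u) ->
          Some (App (Sub (t1, x, u), Sub (t2, x, u)))                (* push into application *)
      | Sub (Lam (y, t), x, u) when y <> x && not (free y u) ->
          Some (Lam (y, Sub (t, x, u)))                              (* push under binder; alpha-renaming omitted *)
      | Sub (t, x, _) when not (free x t) -> Some t                  (* garbage collection *)
      | _ -> None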


Flow-Directed Lightweight CPS Conversion

Jeffrey M. Siskind (NEC Research Institute)

Programming languages such as Scheme and SML/NJ support first-class
continuations through a CALL/CC primitive.  Without continuations,
programs obey a last-in-first-out procedure-call discipline that
allows stack-based allocation of activation records.  In the presence
of continuations, programs no longer obey a last-in-first-out
procedure-call discipline and thus require heap-based allocation of
activation records.  In the past, two general strategies have been
used to support CALL/CC.  One involves converting the entire program
to continuation-passing style (CPS).  This process is called CPS
conversion. In essence, this heap-allocates all activation records.
The other involves leaving the program in direct style by implementing
mechanisms to copy portions of the stack to the heap when creating
continuations and copying saved continuations from the heap back to
the stack when calling continuations.  The former has the advantage
that creating and calling continuations is cheap but the disadvantage
that ordinary procedure call and return is expensive. The latter has
the reverse set of tradeoffs.  Lightweight CPS conversion is a novel
scheme that gives the advantages of both approaches, without the
disadvantages, in most situations.
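
The tradeoff is easiest to see on a small example. The OCaml sketch below (an illustration of plain CPS in general, not of the lightweight conversion described in the talk) contrasts a direct-style function, which relies on the call stack, with its continuation-passing counterpart, where the continuation is an explicit argument that the implementation represents as a heap-allocated closure.

    (* direct style: ordinary stack discipline, cheap call and return *)
    let rec sum_to n = if n = 0 then 0 else n + sum_to (n - 1)

    (* continuation-passing style: "returning" means calling k, so the
       current continuation is always at hand (making call/cc cheap),
       but every ordinary call now allocates a closure for k *)
    let rec sum_to_cps n k =
      if n = 0 then k 0 else sum_to_cps (n - 1) (fun r -> k (n + r))

    let () =
      assert (sum_to 5 = 15);
      sum_to_cps 5 (fun r -> assert (r = 15))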


First-Class Macros and Polymorphic Types: Progress Report

Alan Bawden (Boston University)

In my paper First-class Macros Have Types (POPL '00) I showed how to
extend Scheme so that macros are first-class values -- although macros
are still expanded at compile-time.  The key idea is the
addition of a simple type system to keep track of the expanders. The
realization of this type system in a dynamically typed language such
as Scheme or Dylan is straightforward, but its realization in a modern
statically typed language (i.e., a language with type inference and
polymorphism) presents some interesting challenges.  In this talk, I
will explain what the problems are, and what progress I have made
towards solving them.


Making the Trains Run On Time

Alexander Garthwaite (Sun Microsystems)

Automatic memory management, or garbage collection, is an
important service for runtime systems supporting applications written
in higher-level languages like ML or Java. The challenge in
implementing garbage collectors is to make them as non-intrusive and
scalable as possible. In particular, collectors should not require the
application to be suspended for undue periods of time.

Generational collectors improve collection pause times by segregating
objects by age and by concentrating collection activity on recently
allocated objects. This segregation of younger objects in a young
generation works because, in practice, most objects die soon after
allocation. However, the older generations must also be collected
periodically--the challenge is to collect these regions without
incurring long periods during which the application is suspended.

One approach to limiting pause times in the collection of older
generations is to use an incremental collection technique where, at
each collection pause, a portion of the older generation is
collected. One example of this kind of technique is the train
algorithm. Unfortunately, train-based collectors often fail to deliver
on their promise of short, predictable pause times. In this talk, I
will outline the reasons for these failures and present techniques
that allow us to overcome these deficiencies through the use of better
representations for remembered-sets and through better placement of
objects. I will also offer experimental results supporting the idea
that it is possible to constrain pause times, achieve good throughput,
and still support incremental, copying garbage collection.
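
As background for the remembered-set discussion, the OCaml sketch below is a deliberately naive model: a remembered set keyed by region (for instance, a train car), populated by a write barrier, and consulted when that region is collected. The talk is precisely about better representations than this hashtable-of-sets, so the sketch only fixes the vocabulary; the names are mine.

    module IntSet = Set.Make (Int)

    type region = int   (* e.g., a train car or other old-generation segment *)
    type slot = int     (* a heap location that may hold a pointer *)

    (* for each region, the slots that may contain pointers into it *)
    let remembered : (region, IntSet.t) Hashtbl.t = Hashtbl.create 64

    (* write barrier: called when [slot] is updated to refer into [target],
       so a later incremental collection of [target] can find incoming
       pointers without scanning the whole heap *)
    let record_write ~slot ~target =
      let old =
        match Hashtbl.find_opt remembered target with
        | Some s -> s
        | None -> IntSet.empty
      in
      Hashtbl.replace remembered target (IntSet.add slot old)

    (* extra roots contributed by the rest of the heap when region [r] is collected *)
    let incoming_slots r =
      match Hashtbl.find_opt remembered r with
      | Some s -> s
      | None -> IntSet.empty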


Equational Reasoning for Linking with First-Class Primitive Modules

Joe Wells (Heriot-Watt University)

Modules and linking are usually formalized by encodings
which use the lambda-calculus, records (possibly dependent), and
possibly some construct for recursion. In contrast, we introduce the
m-calculus, a calculus where the primitive constructs are modules,
linking, and the selection and hiding of module components. The
m-calculus supports smooth encodings of software structuring tools
such as functions (lambda-calculus), records, objects
(varsigma-calculus), and mutually recursive definitions. The
m-calculus can also express widely varying kinds of module systems as
used in languages like C, Haskell, and ML. The m-calculus has been
proven confluent, thereby showing that equational reasoning via the
m-calculus is sensible and well behaved.
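
As a rough illustration of what taking modules, linking, selection, and hiding as primitives might look like, here is a speculative OCaml datatype for such terms; the constructor names are mine, and the real m-calculus is certainly more refined.

    (* hypothetical abstract syntax: modules, linking, selection, hiding *)
    type name = string

    type term =
      | Var of name
      | Module of (name * term) list   (* a module: named components *)
      | Link of term * term            (* combine two modules, resolving references *)
      | Select of term * name          (* project a named component out of a module *)
      | Hide of term * name            (* make a component invisible from outside *)

    (* link a module providing "x" with one whose "main" refers to it,
       then select "main" from the result *)
    let example =
      Select (Link (Module [ ("x", Module []) ], Module [ ("main", Var "x") ]), "main")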


Program Analysis Techniques for Pointers and Accessed Memory Regions

Radu Rugina (University of California at Santa Barbara)

Extracting information about which pieces of memory the statements and
procedures in a program access is the basis for automatically
detecting interactions between various code fragments in the
program. This talk presents two analysis techniques that can deliver
this kind of information for general programs, including programs
that use recursion and multithreading and that manipulate pointers.

A first step towards memory disambiguation is pointer analysis. This
analysis computes, for each pointer and each program point, a
conservative approximation of the memory locations to which that
pointer may point. In this talk I will present a novel
interprocedural, flow-sensitive, and context-sensitive pointer
analysis algorithm for multithreaded programs that may concurrently
update shared pointers.
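
For readers new to the area, the OCaml toy below shows what points-to information means in the simplest possible setting: a flow-insensitive analysis over two statement forms, iterated to a fixed point. It only fixes the vocabulary and bears no resemblance to the interprocedural, flow-sensitive, context-sensitive, thread-aware algorithm of the talk.

    module StrSet = Set.Make (String)

    (* the only statements handled: p = &x and p = q *)
    type stmt =
      | AddrOf of string * string   (* p = &x *)
      | Copy of string * string     (* p = q  *)

    (* accumulate points-to sets until nothing changes *)
    let points_to (prog : stmt list) : (string, StrSet.t) Hashtbl.t =
      let pts = Hashtbl.create 16 in
      let get p = try Hashtbl.find pts p with Not_found -> StrSet.empty in
      let changed = ref true in
      while !changed do
        changed := false;
        List.iter
          (fun s ->
            let p, add =
              match s with
              | AddrOf (p, x) -> (p, StrSet.singleton x)
              | Copy (p, q) -> (p, get q)
            in
            let old = get p in
            let updated = StrSet.union old add in
            if not (StrSet.equal updated old) then begin
              Hashtbl.replace pts p updated;
              changed := true
            end)
          prog
      done;
      pts

    (* after p = &x; q = &y; p = q, the analysis reports p -> {x, y} *)
    let _ = points_to [ AddrOf ("p", "x"); AddrOf ("q", "y"); Copy ("p", "q") ]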

Since pointer analysis treats arrays as monolithic objects, more
sophisticated analyses are needed to extract access regions within
arrays. This talk presents a novel framework for the symbolic bounds
analysis of accessed memory regions. Our framework formulates each
analysis problem as a system of inequality constraints between
symbolic bound polynomials. It then reduces the constraint system to a
linear program. The solution to the linear program provides symbolic
lower and upper bounds for the regions of memory that each statement
and procedure accesses.
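
One common way to make such symbolic comparisons tractable, sketched below in OCaml under the simplifying assumption that bounds are linear expressions over nonnegative program variables, is to compare coefficients pointwise; when the coefficients are left as unknowns, the same pointwise comparisons are exactly the kind of linear constraints a linear-programming solver accepts. This is a generic illustration, not necessarily the paper's reduction.

    module StrMap = Map.Make (String)

    (* a symbolic bound c0 + c1*v1 + ... : a constant plus a coefficient per variable *)
    type bound = { const : float; coeffs : float StrMap.t }

    let coeff v b = try StrMap.find v b.coeffs with Not_found -> 0.

    (* sufficient (not necessary) condition for lhs <= rhs over all
       nonnegative values of the variables: compare coefficient by coefficient *)
    let dominates lhs rhs =
      let vars b = List.map fst (StrMap.bindings b.coeffs) in
      lhs.const <= rhs.const
      && List.for_all (fun v -> coeff v lhs <= coeff v rhs) (vars lhs @ vars rhs)

    (* e.g., "for (i = 0; i < n; i++) a[i] = 0" writes indices up to n - 1,
       which stays within the last valid index n - 1 of an array of length n *)
    let () =
      let written = { const = -1.; coeffs = StrMap.singleton "n" 1. } in
      let last_valid = { const = -1.; coeffs = StrMap.singleton "n" 1. } in
      assert (dominates written last_valid)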

Experimental results from our implemented compiler show that the
combination of pointer analysis and symbolic bounds analysis can solve
several important problems, including static race detection, automatic
parallelization, static detection of array bounds violations,
elimination of array bounds checks, and reduction of the number of
bits used to store computed values.


Last modified Thursday, February 15th, 2001, 11:09:32am. Powered by PLT.