NEPLS

These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily a snapshot in time. The work presented here may be in a state of flux; in all cases, please consult the authors' Web pages for up-to-date information rather than treating these pages as a definitive source.


Programming with Rectangles, Triangles, and Circles

Erik Meijer (Microsoft WebData)

We will argue that by properly generalizing the type system and
expression syntax, it is possible for any modern object-oriented
language to provide first-class support for manipulating both relational
and hierarchical data in a sound and statically typed manner. The type
system extensions, however, are not based on XML Schemas. We show that
XSDs and the XML data model do not fit well with the class-based nominal
type system and object graph representation of our target languages.
Instead, we propose to extend object-oriented type systems with new
structural types that model XSD sequences, choices, and all-groups. We
demonstrate our language and type system by translating a selection of
the XQuery use cases.
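
As a rough illustration of the idea, the three XSD grouping constructs can be read as familiar structural types: a sequence as an ordered product, a choice as a sum, and an all-group as an order-insensitive record of required fields. The Haskell sketch below makes that reading concrete; the types and field names are hypothetical examples, not the object-oriented extension proposed in the talk.

    -- A hedged sketch: XSD <sequence> read as an ordered product,
    -- <choice> as a sum, and <all> as an order-insensitive record of
    -- required fields.  All names here are illustrative only.

    -- <sequence>: a title followed by zero or more authors.
    data Book = Book { title :: String, authors :: [String] }

    -- <choice>: either an ISBN or an internal catalogue id.
    data BookId = Isbn String | CatalogueId Int

    -- <all>: both fields required, order irrelevant.
    data Publisher = Publisher { name :: String, city :: String }

    sample :: (Book, BookId, Publisher)
    sample =
      ( Book "The Art of Computer Programming" ["Knuth"]
      , Isbn "0-201-89683-4"
      , Publisher "Addison-Wesley" "Boston"
      )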


Controlling Gemini capsules: classical logic, intersection types and strong normalization

Dan Dougherty (WPI)

Curien and Herbelin have defined a typed term calculus supporting
a Curry-Howard interpretation for sequent proofs in classical
logic.  The untyped calculus underlying that of Curien and
Herbelin can naturally be considered as the foundation of a
functional programming language with explicit representation of
control.

We investigate some fundamental properties of the reduction
relation for this calculus (which we call Gemini).  In particular
we characterize the strongly normalizing computations in Gemini
by defining a system of intersection types, itself presented in
sequent-style.  As a corollary we deduce strong normalization for
the Curien-Herbelin calculus.  The proof that all strongly
normalizing terms are typable relies on a definition we give of a
perpetual strategy for reduction.  The proof that all typable
terms are strongly normalizing uses a variation on the
traditional reducibility method which seems particularly suited
to terms associated with sequent calculi, and which may be a
useful tool in related settings.
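
For readers who have not seen the Curien-Herbelin calculus, the Haskell datatypes below give a rough sketch of its three syntactic categories: terms (producers), coterms (consumers, i.e. evaluation contexts), and commands (a term cut against a coterm). The constructor names are mine and only approximate the official notation.

    -- A rough, untyped sketch of the syntax underlying the
    -- Curien-Herbelin calculus; names are approximate.
    type Var   = String
    type CoVar = String

    data Term
      = V Var               -- term variable
      | Lam Var Term        -- ordinary lambda abstraction
      | Mu CoVar Command    -- mu-abstraction: captures its evaluation context
      deriving Show

    data CoTerm
      = CoV CoVar           -- context (continuation) variable
      | Stack Term CoTerm   -- an argument pushed onto a context
      | MuTilde Var Command -- mu~-abstraction: binds the term being consumed
      deriving Show

    data Command = Cut Term CoTerm   -- a term cut against a coterm
      deriving Show

    -- Example: the identity function applied to y, observed by context alpha.
    example :: Command
    example = Cut (Lam "x" (V "x")) (Stack (V "y") (CoV "alpha"))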

Joint work with:
  Silvia Ghilezan, University of Novi Sad
  Pierre Lescanne, ENS Lyon


A Typed Approach to Multiple Inheritance

Chiyan Chen (Boston University)

Multiple Inheritance (MI) in Object-Oriented Programming (OOP) can be
of great use in software practice.  However, current models for
statically typed OOP often either completely avoid addressing MI or
address it through the use of some greatly involved mechanisms that
seem neither natural nor amenable to further extensions.  In this
talk, we present a typed approach to modeling OOP that takes a view of
``objects as functions'', in contrast with the popular view of
``objects as records''.  We show that this approach can not only
address various common OOP features but also provide a simple and
general account for MI.  The object encoding in our OOP model is
primarily rooted in Applied Type System (ATS), a recently proposed
framework for facilitating the design and formalization of advanced
type systems.
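
As a very loose illustration of the ``objects as functions'' view (a toy sketch of my own, not the ATS-based encoding presented in the talk), an object can be modeled as a function from messages to answers, and inheritance as composing message handlers, with a child handler falling back to its parent for messages it does not understand. Layering several such handlers over one root then plays the role of multiple inheritance.

    -- Toy "objects as functions" sketch; all names are hypothetical.
    data Msg    = GetName | GetArea
    data Answer = S String | D Double
      deriving Show

    type Object = Msg -> Answer

    -- A "class" maps a parent object to a refined object.
    named :: String -> Object -> Object
    named n _      GetName = S n
    named _ parent msg     = parent msg

    circle :: Double -> Object -> Object
    circle r _      GetArea = D (pi * r * r)
    circle _ parent msg     = parent msg

    -- A root object with no useful behaviour.
    root :: Object
    root _ = S "<no answer>"

    -- "Multiple inheritance", very loosely: layer several handlers
    -- over the same root.
    namedCircle :: Object
    namedCircle = named "unit circle" (circle 1.0 root)

    main :: IO ()
    main = do
      print (namedCircle GetName)   -- S "unit circle"
      print (namedCircle GetArea)   -- D 3.141592653589793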

Joint work with Hongwei Xi.


Engineering Calling Conventions in the Quick C-- Compiler Infrastructure

Norman Ramsey (Harvard)

Many compilers have been written that translate high-level languages
to C.  This technique eases portability, but C makes it difficult to
provide efficient implementations of some features of modern
languages, e.g., proper tail calls, accurate garbage collection, and
efficient exception dispatch.  C-- is designed as a better
alternative, one that should be as easy to use as C while providing
performance approaching that of custom code generators.  C-- requires
a balancing act: To be easy to use, it must abstract away from machine
details, but to make high performance possible, it must expose such
details.  Nowhere is this balancing act more delicate than in the
treatment of procedure calls.

This talk provides a brief overview of the C-- design, emphasizing the
mechanisms C-- uses to support procedure calls.  It then explains how
procedure calls are implemented in the Quick C-- compiler and shows
how this implementation can be used to achieve two goals: Tuning the
performance of calls and interoperating with calls in other
languages.

Joint work with Simon Peyton Jones, Christian Lindig, Joao Dias, and
Reuben Olinsky.


Fault Localization

Manos Renieris (Brown)

Once a program run exposes a bug, the programmers' first task is fault
localization: identifying the portions of code they need to change to
correct the bug.

In this talk, I will present a pattern common among state-of-the-art
fault-localization tools. Tools based on this pattern represent
program runs as sets of events, and contrast the representations of
successful and failing program runs to provide a hint as to the
location of the bug.  Isolated events rarely suffice, though, because
their impact on the outcome of a run depends heavily on the context in
which they execute.  Existing research addresses this context
problem by employing program-specific knowledge.

I will present a novel enhancement of the pattern that addresses the
context problem in a program-independent way. The technique employs a
distance metric between program runs and, given a failing run, selects
its most similar successful run to contrast with it.

To quantify the success of the technique, I developed a metric for the
quality of fault-localization tools.  To my knowledge, it is the first
metric of its kind, and it allows different techniques to be compared
directly.  I
will use this metric to evaluate two implementations of my technique.
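
A bare-bones sketch of the nearest-neighbor idea follows; it is my own simplification, with an arbitrary choice of events (executed statements) and a naive set-difference distance. Each run is abstracted as a set of events, the passing run closest to the failing run is selected, and the events the failing run executed but its neighbor did not serve as the localization hint.

    import qualified Data.Set as Set
    import           Data.Set (Set)
    import           Data.List (minimumBy)
    import           Data.Ord  (comparing)

    -- A run is abstracted as the set of events observed during it;
    -- event and distance choices here are illustrative placeholders.
    type Event = String
    type Run   = Set Event

    distance :: Run -> Run -> Int
    distance a b =
      Set.size (Set.difference a b) + Set.size (Set.difference b a)

    -- Contrast a failing run with its most similar successful run;
    -- the leftover events hint at where to look for the fault.
    localize :: Run -> [Run] -> Set Event
    localize failing passes =
      let nearest = minimumBy (comparing (distance failing)) passes
      in  Set.difference failing nearest

    main :: IO ()
    main = do
      let failing = Set.fromList ["f:1", "f:2", "g:7", "g:9"]
          passes  = [ Set.fromList ["f:1", "f:2", "g:7"]
                    , Set.fromList ["f:1", "h:3"] ]
      print (localize failing passes)   -- fromList ["g:9"]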


Mark-Copy: Fast copying GC with less space overhead

Narendran Sachindran (UMass Amherst)

Copying garbage collectors have a number of advantages over
non-copying collectors, including cheap allocation and avoiding
fragmentation. However, in order to provide completeness (the
guarantee to reclaim each garbage object eventually), standard copying
collectors require space equal to twice the size of the maximum live
data for a program.  We present a mark-copy collection algorithm (MC)
that extends generational copying collection and significantly reduces
the heap space required to run a program. MC reduces space overhead by
75-85% compared with standard copying garbage collectors, increasing
the range of applications that can use copying garbage collection. We
show that when MC is given the same amount of space as a generational
copying collector, it improves total execution time of Java benchmarks
significantly in tight heaps, and by 5-10% in moderate size heaps.  We
also compare the performance of MC with a (non-generational)
mark-sweep collector and a hybrid copying/mark-sweep generational
collector.  We find that MC can run in heaps comparable in size to the
minimum heap space required by mark-sweep. We also find that for most
benchmarks MC is significantly faster than mark-sweep in small and
moderate size heaps.  When compared with the hybrid collector, MC
improves total execution time by about 5% for some benchmarks, partly
by increasing the speed of execution of the application code.
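
To make the space argument concrete, the toy sketch below models the standard semispace baseline the abstract compares against, not the mark-copy algorithm itself: a collection copies every object reachable from the roots into an empty to-space, so the to-space must be large enough to hold all live data while the from-space still holds the old copies, which is where the factor of two comes from. The heap representation and names are hypothetical.

    import qualified Data.Map as Map
    import           Data.Map (Map)

    -- Toy semispace copying: the heap is a graph of objects, and a
    -- collection copies everything reachable from the roots into a
    -- fresh to-space, leaving garbage behind in the from-space.
    type ObjId = Int
    type Heap  = Map ObjId [ObjId]   -- each object lists its children

    copyCollect :: [ObjId] -> Heap -> Heap
    copyCollect roots from = go roots Map.empty
      where
        go []       to = to
        go (o : os) to
          | o `Map.member` to = go os to                 -- already copied
          | otherwise =
              let kids = Map.findWithDefault [] o from
              in  go (kids ++ os) (Map.insert o kids to) -- copy, then scan kids

    main :: IO ()
    main = do
      let heap = Map.fromList [(1,[2]), (2,[]), (3,[4]), (4,[3])]
      print (copyCollect [1] heap)   -- objects 3 and 4 are not copied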

Joint work with J. Eliot B. Moss.


Object Equality Profiling

Darko Marinov (MIT LCS)

This talk presents Object Equality Profiling (OEP), a new technique
for helping programmers discover optimization opportunities in
programs.  OEP discovers opportunities for replacing a set of
equivalent object instances with a single representative object.  Such
a set represents an opportunity for automatically or manually applying
optimizations such as hash consing, heap compression, lazy allocation,
object caching, invariant hoisting, and more.  To evaluate OEP, we
implemented a tool to help programmers reduce the memory usage of Java
programs.  Our tool performs a dynamic analysis that records all the
objects created during a particular program run.  The tool partitions
the objects into equivalence classes, and uses collected timing
information to determine when elements of an equivalence class could
have been safely collapsed into a single representative object without
affecting the behavior of that program run.  We report the results of
applying this tool to benchmarks, including two widely used Web
application servers.  Many benchmarks exhibit significant amounts of
object equivalence, and in most benchmarks our profiler identifies
optimization opportunities clustered around a small number of
allocation sites.  We present a case study of using our profiler to
find simple manual optimizations that reduce the average space used by
live objects in two SpecJVM benchmarks by 47% and 38% respectively.
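
The partitioning step can be pictured with the following simplified Haskell sketch (my own, with placeholder object identities and values; the real tool works on recorded Java objects and also consults timing information, which is omitted here): group objects by an equality key and report every class with more than one member as a candidate for sharing a single representative.

    import qualified Data.Map as Map
    import           Data.Map (Map)

    -- Group recorded objects into equivalence classes by value; any
    -- class with more than one member is a merging candidate.  The
    -- safety check based on timing data is omitted from this sketch.
    type ObjId = Int

    partitionByValue :: Ord v => [(ObjId, v)] -> Map v [ObjId]
    partitionByValue objs =
      Map.fromListWith (++) [ (v, [oid]) | (oid, v) <- objs ]

    mergeCandidates :: Ord v => [(ObjId, v)] -> [[ObjId]]
    mergeCandidates = filter ((> 1) . length) . Map.elems . partitionByValue

    main :: IO ()
    main = do
      let objects = [(1, "red"), (2, "blue"), (3, "red"), (4, "red")]
      print (mergeCandidates objects)   -- [[4,3,1]]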

This is joint work with Robert O'Callahan from the IBM T. J. Watson
Research Center.


The HOW of Programming Language Research -- Starting Point for a Discussion

John Ridgway (UMass Amherst)

At many past NEPLS meetings we have heard and talked about programming
language research RESULTS.  I maintain that an equally valid subject
for these meetings is HOW we do our research -- the tools and
techniques that we use to get our work done.  I propose to initiate a
discussion of this topic by describing some tools that I have
developed, and am developing, for our work on language
inter-operation.  I hope that this will stimulate other people to talk
about tools that they use, as such a discussion should be mutually
beneficial.  At the very least, if there are wonderful tools out there
that I have missed I would like to know about them.

My presentation will describe two sets of tools that I am developing.
One set takes an S-expr-based representation of a set of syntax, type,
and semantic rules as input, and produces LaTeX representations of the
rules.  It also produces a corresponding set of Prolog inference rules
for type-checking and another set of Prolog inference rules for
execution of a program written in the language.  The other set of
tools helps to verify proofs about the system of rules by taking a
similar representation of the proof, verifying (most of) the steps of
the proof, and generating a LaTeX version of it.
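
To give a flavor of what such a tool chain might look like (this is a guess at the shape, with invented names, not the actual implementation), the sketch below stores a typing rule as structured data and renders it both as a LaTeX inference rule and as a Prolog clause for a type checker.

    import Data.List (intercalate)

    -- Hypothetical sketch: one rule, two renderings.
    data Judgment = Types { env :: String, expr :: String, ty :: String }

    data Rule = Rule
      { ruleName   :: String
      , premises   :: [Judgment]
      , conclusion :: Judgment
      }

    latexJ :: Judgment -> String
    latexJ (Types g e t) = g ++ " \\vdash " ++ e ++ " : " ++ t

    prologJ :: Judgment -> String
    prologJ (Types g e t) = "types(" ++ intercalate ", " [g, e, t] ++ ")"

    toLatex :: Rule -> String
    toLatex (Rule n ps c) =
      "\\infer[\\textsc{" ++ n ++ "}]{" ++ latexJ c ++ "}{"
        ++ intercalate " \\quad " (map latexJ ps) ++ "}"

    toProlog :: Rule -> String
    toProlog (Rule _ ps c) =
      prologJ c ++ " :- " ++ intercalate ", " (map prologJ ps) ++ "."

    appRule :: Rule
    appRule = Rule "App"
      [ Types "G" "e1" "arrow(T1, T2)", Types "G" "e2" "T1" ]
      (Types "G" "app(e1, e2)" "T2")

    main :: IO ()
    main = do
      putStrLn (toLatex appRule)
      putStrLn (toProlog appRule)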

