
NEPLS

These are the abstracts for the talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily snapshots in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please do not treat these pages as a definitive source.


Dissecting Transactional Implementations for Fun and Profit

Suresh Jagannathan (Purdue University)

There has been much recent interest in exploring the role of software
transactions as an alternative concurrency control mechanism to mutual
exclusion locks.  While transactions can provide numerous benefits
with respect to programmability, safety, and performance, their
implementations are necessarily complex.  This complexity arises
because of the need to dynamically monitor execution within a
transaction for consistency violations, and to remedy these violations
once they are detected.

However, by carefully selecting different features of these
implementations, we can provide efficient solutions to a number of
more constrained, but equally challenging, problems.  In this talk, we
present three such examples drawn from three very different domains:
(1) safe futures for Java; (2) preemptible atomic regions to deal with
priority inversion in realtime systems; and (3) a lightweight
checkpointing facility for concurrent (mostly) functional programs.
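The atomic-region style these mechanisms build on can be illustrated with GHC's STM library (a real Haskell API, though not any of the implementations discussed in the talk): the runtime logs a transaction's reads and writes and re-executes it if a concurrent commit invalidates them, which is the dynamic consistency monitoring described above. A minimal sketch:

```haskell
import Control.Concurrent.STM

-- Two accounts; 'transfer' runs as a single atomic transaction.  The STM
-- runtime logs the reads and writes and re-runs the transaction if a
-- concurrent commit invalidates them.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  a <- readTVar from
  b <- readTVar to
  writeTVar from (a - n)
  writeTVar to   (b + n)

-- Single-threaded demonstration of the atomic update.
demo :: IO (Int, Int)
demo = do
  from <- newTVarIO 100
  to   <- newTVarIO 0
  atomically (transfer from to 30)
  (,) <$> readTVarIO from <*> readTVarIO to
```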


jpaul - Java Program Analysis Utilities Library

Alex Salcianu (MIT)

jpaul is a Java implementation of several algorithms that are widely
used in program analysis.  Compiler writers often reimplement a
(relatively small) set of algorithms: graph algorithms, constraint
solvers, dataflow engines etc.  The goal of the jpaul project is to
have these algorithms implemented independently of any compiler.
jpaul emphasizes (in order): correctness, flexibility, ease-of-use,
and asymptotic complexity (we are not interested in cutting-edge
low-level optimizations).  The intended audience consists of program
analysis researchers in need of a correct, elegant, and fast way of
prototyping their analysis ideas.
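jpaul itself is written in Java; as a language-neutral illustration of the kind of engine it packages, here is a minimal worklist fixpoint solver in Haskell (all names here are ours, not jpaul's API): facts attached to graph nodes are propagated along edges and joined until nothing changes.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)
import qualified Data.Set as Set
import Data.Set (Set)

-- Generic worklist solver: repeatedly pull a node off the worklist, push
-- its transferred fact to each successor, and re-enqueue successors whose
-- facts grew.  Terminates when the facts reach a fixpoint.
solve :: (Ord n, Eq f)
      => Map n [n]            -- successor edges
      -> (f -> f -> f)        -- join (least upper bound)
      -> (n -> f -> f)        -- per-node transfer function
      -> Map n f              -- initial facts
      -> Map n f
solve succs join transfer facts0 = go (Map.keys facts0) facts0
  where
    go [] facts = facts
    go (n:ws) facts =
      let out = transfer n (facts Map.! n)
          step (fs, wl) m =
            let old = fs Map.! m
                new = join old out
            in if new == old then (fs, wl) else (Map.insert m new fs, m : wl)
          (facts', ws') = foldl step (facts, ws) (Map.findWithDefault [] n succs)
      in go ws' facts'

-- Example instantiation: with set union as join and the identity transfer,
-- each node's fact converges to the set of nodes that can reach it.
ancestors :: Map String [String] -> Map String (Set String)
ancestors succs =
  solve succs Set.union (\_ f -> f)
        (Map.fromList [ (n, Set.singleton n) | n <- Map.keys succs ])
```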

jpaul is implemented in Generic Java (JDK 1.5) and is released under
the terms of the (Modified) BSD License.

For downloads and more information, please visit the jpaul website on Sourceforge: http://jpaul.sourceforge.net/.


Taking Derivatives of Functional Programs: AD in a Functional Framework

Jeffrey Mark Siskind (Purdue University)

Automatic Differentiation (AD) is an established enterprise that seeks
to take the derivatives of functions specified as programs through
symbolic manipulation rather than finite differencing.  AD has
traditionally been applied to imperative programs, typically using
preprocessors that apply a source-to-source transformation to derive a
program for computing the derivative of a function from a program that
computes the original function.  We present a novel framework for
performing AD within modern functional-programming languages, treating
AD operators as first-class higher-order functions that map
first-class function objects to first-class function objects.  Our
approach is more modular, allowing a library of functions based on
derivatives to be built compositionally.  It allows the callee to
specify the necessary AD, rather than insisting that the caller
provide appropriately transformed functions.  Our approach is
foundational: we develop a method for taking the derivative of any
lambda-calculus expression over a basis of standard numeric
primitives.  Since the lambda calculus is both the source and target
of our transformations, higher-order derivatives can be constructed
naturally, without special mechanisms.  In this talk, I will give a
tutorial overview of AD, demonstrate the power and convenience of the
functional formulation of AD by way of a series of small examples, and
sketch the framework underlying our implementation.

Joint work with Barak A. Pearlmutter.
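The textbook core of forward-mode AD can be sketched in a few lines of Haskell (this is the standard dual-number construction, not the authors' implementation, and this monomorphic `Dual` type does not nest, so unlike the talk's framework it cannot take higher-order derivatives):

```haskell
-- A dual number carries a value together with its derivative; overloading
-- Num threads the chain rule through ordinary arithmetic.
data Dual = Dual Double Double

instance Num Dual where
  Dual x x' + Dual y y' = Dual (x + y) (x' + y')
  Dual x x' - Dual y y' = Dual (x - y) (x' - y')
  Dual x x' * Dual y y' = Dual (x * y) (x' * y + x * y')  -- product rule
  fromInteger n         = Dual (fromInteger n) 0
  abs (Dual x x')       = Dual (abs x) (x' * signum x)
  signum (Dual x _)     = Dual (signum x) 0

-- The AD operator is a first-class higher-order function: it maps a
-- function on duals to the numeric derivative, by seeding the tangent to 1.
deriv :: (Dual -> Dual) -> Double -> Double
deriv f x = let Dual _ d = f (Dual x 1) in d
```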


Formal Semantics of Weak References

J.J. Hallett (Boston University)

Weak references are references that do not prevent the object they
point to from being garbage collected. Many realistic languages,
including Java, SML/NJ, and Haskell, support weak
references. However, there is no generally accepted formal semantics
for weak references. Without such a formal semantics it becomes
impossible to formally prove properties of such a language and the
programs written in it.

We give a formal semantics for a calculus called lambda-weak that
includes weak references and is derived from Morrisett, Felleisen, and
Harper's lambda-gc. The semantics is used to examine several issues
involving weak references. We use the framework to formalize the
semantics for the key/value weak references found in
Haskell. Furthermore, we consider a type system for the language and
show how to extend the earlier result that type inference can be used
to collect reachable garbage. In addition we show how to allow
collection of weakly referenced garbage without incurring the
computational overhead often associated with collecting a weak
reference that may later be used.  Lastly, we address the
non-determinism of the semantics by providing both an effectively
decidable syntactic restriction and a more general semantic criterion,
which guarantee a unique result of evaluation.

(joint work with Kevin Donnelly and Assaf Kfoury)
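The central idea can be modeled concretely (this toy heap is our own sketch in the spirit of lambda-weak, not the calculus itself; all names are ours): reachability follows only strong edges, so a collection step keeps exactly the strongly reachable cells and collects those held only by weak references.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)
import qualified Data.Set as Set
import Data.Set (Set)

-- Each heap cell records its strong and weak out-edges (cell ids).
data Cell = Cell { strongRefs :: [Int], weakRefs :: [Int] }

-- Reachability traverses strong edges only; weak edges never keep a cell
-- alive.
reachable :: Map Int Cell -> [Int] -> Set Int
reachable heap = go Set.empty
  where
    go seen [] = seen
    go seen (r:rs)
      | r `Set.member` seen = go seen rs
      | otherwise = case Map.lookup r heap of
          Nothing -> go (Set.insert r seen) rs
          Just c  -> go (Set.insert r seen) (strongRefs c ++ rs)

-- One GC step: retain exactly the strongly reachable cells.
gcStep :: Map Int Cell -> [Int] -> Map Int Cell
gcStep heap roots = Map.filterWithKey (\k _ -> k `Set.member` live) heap
  where live = reachable heap roots
```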


DieHard: Statistical Memory Safety for Unsafe Languages

Emery Berger (University of Massachusetts, Amherst)

The vast majority of existing applications are written in unsafe
languages like C and C++. These applications are vulnerable to memory
errors such as buffer overflows, dangling pointers, and reads to
uninitialized data. Such errors can lead to program crashes, security
vulnerabilities, or unpredictable behavior. We present DieHard, a
system that combats these errors by providing *statistical memory
safety* for unsafe languages. Through randomization and replication,
DieHard achieves memory safety with high probability. It thus allows
programs to continue to execute correctly in the face of memory
errors, *masking* errors like buffer overflows and dangling pointers
while ensuring safety by detecting uninitialized reads. We show
analytically that DieHard increases memory safety exponentially with
the number of replicas. Because it is embarrassingly parallel by
design, DieHard can leverage multiple CPUs or cores to improve program
reliability.

(joint work with Ben Zorn of Microsoft)
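The exponential claim has a simple shape (a simplified sketch of the analysis, with `p` left as an abstract parameter rather than the quantity DieHard derives from its heap randomization): if one randomized replica masks a given error with probability p, and replicas randomize independently, then the chance that every one of k replicas is corrupted is (1 - p)^k.

```haskell
-- Probability that all k independent replicas fail to mask an error that a
-- single randomized replica masks with probability p.
allCorrupted :: Double -> Int -> Double
allCorrupted p k = (1 - p) ^ k
```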


The Architecture of a World-Wide Distributed Repository to Support Fine-Grained Sharing of Source Code

Jeffrey Mark Siskind (Purdue University)

October is a new infrastructure designed to support world-wide
fine-grained source-code sharing at the level of individual
functions.  It is organized like the Web.  Just as the Web is a
world-wide distributed hyperlinked indexed repository of text
documents, October is a world-wide distributed hyperlinked indexed
repository of function definitions.  Instead of Web pages there are
top-level definitions.  Top-level definitions are stored in a
distributed repository implemented by October servers that are
analogous to Web servers.  They are viewed and manipulated by October
browsers that are analogous to Web browsers.  October servers and
browsers communicate via the October protocol which plays the role of
HTTP and allows different server and browser implementations to
interoperate.  The October protocol is designed from the start with
support for searching, caching, mirroring, and load balancing.  These
play the role of search engines, proxy servers, and Web caches.
Top-level definitions are stored as abstract syntax trees (ASTs) which
play the role of HTML.  Instead of URLs there are version identifiers.
Top-level definitions are hyperlinked via embedded version
identifiers.  When building an executable, the browser crawls the
repository to fetch all top-level definitions referenced directly or
indirectly by the top-level definition being built.  Just as the Web
supports different document styles via DTDs, October supports
different programming languages and programmer preferences via
programming language descriptions (PLDs).  PLDs are currently written
to support Scheme, C, C++, and Java.  In this talk, I will describe
the design goals of October, discuss how these goals motivate and
influence the creation of a new infrastructure, present some of the
technical problems and design tradeoffs addressed so far while
creating this infrastructure, and give a live demo of the prototype
implementation of October.
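The build-time crawl can be modeled in a few lines (a toy Haskell model of the idea, not the October protocol; all names are ours): the repository maps version identifiers to definitions, each of which embeds the version identifiers it references, and building fetches the transitive closure.

```haskell
import qualified Data.Map as Map
import Data.Map (Map)
import qualified Data.Set as Set
import Data.Set (Set)

type VersionId = String

-- A top-level definition: its source (stored as an AST in October; a plain
-- string here) plus the version identifiers it links to.
data Def = Def { source :: String, refs :: [VersionId] }

-- Crawl the repository from the definition being built, collecting every
-- version identifier it references directly or indirectly.
crawl :: Map VersionId Def -> VersionId -> Set VersionId
crawl repo root = go Set.empty [root]
  where
    go seen [] = seen
    go seen (v:vs)
      | v `Set.member` seen = go seen vs
      | otherwise = case Map.lookup v repo of
          Nothing -> go (Set.insert v seen) vs   -- unresolved reference
          Just d  -> go (Set.insert v seen) (refs d ++ vs)
```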


MetaOCaml Server Pages: Web Publishing as Staged Computation

Christopher League (Long Island University)

Modern dynamic web services are really computer programs.  Some parts
of these programs run off-line, others run server-side on each
request, and still others run within the browser.  In other words, web
publishing is staged computation, either for better performance, or
because certain resources are available in one stage but not another.
Unfortunately, the various web programming languages make it difficult
to spread computation over more than one stage.  This is a tremendous
opportunity for multi-stage languages in general, and for MetaOCaml in
particular.

We present the design of MetaOCaml Server Pages.  Unlike other
languages in its genre, the embedded MetaOCaml code blocks may be
annotated with staging information, so that the programmer may safely
and precisely control which computation occurs in which stage.  A
prototype web server, written in OCaml, supports web sites with both
static and dynamic content.  We provide several sample programs and
demonstrate the performance gains won using multi-stage programming.
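The staging idea can be sketched via partial application (a Haskell illustration of the principle, not MetaOCaml's bracket/escape notation; the `{name}` placeholder syntax is our own invention): the template is parsed once, at an earlier stage, and each request only runs the residual function over its own bindings.

```haskell
-- A compiled template alternates literal text with named holes.
data Piece = Lit String | Hole String

-- Parse "Hello, {name}!" into pieces.  This is the stage-one work.
split :: String -> [Piece]
split s = case break (== '{') s of
  (lit, "")     -> [Lit lit]
  (lit, _:rest) -> case break (== '}') rest of
    (key, _:rest') -> Lit lit : Hole key : split rest'
    (key, "")      -> [Lit lit, Hole key]

-- 'compile' does the parsing once and returns a residual renderer; the
-- per-request stage only walks the precomputed pieces.
compile :: String -> ([(String, String)] -> String)
compile template =
  let pieces = split template          -- computed once, "offline"
  in \env -> concatMap (render env) pieces
  where
    render env (Lit s)  = s
    render env (Hole k) = maybe "" id (lookup k env)
```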


HasSound: Generating Musical Instrument Sounds in Haskell

Paul Hudak (Yale University)

To complement Haskore, a Haskell library for composing music at the
"note" level, we have designed a Haskell library called "HasSound" for
designing instrument sounds at the level of oscillators, filters, etc.
It is based on csound, a popular music software system written in C.
Although csound has a rather baroque syntax, it is "mostly"
functional, which of course is reflected well in Haskell.  But it also
has some rather ugly imperative edges that provide important
functionality that can't be ignored.  We discuss how we handle both
the functional and imperative sides to this design, including a monad
for channels, functional delay lines, and an explicit way to express
recursive signals.  The work is a useful example of embedded DSL
design.

Joint work with Matt Zamec (Yale) and Sarah Eisenstat (Brown).
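The functional side of such a design can be sketched by modeling signals as infinite sample streams (our own illustrative sketch; HasSound's actual combinators generate csound code rather than samples): oscillators are stream producers, a delay line prepends silence, and a recursive signal such as a feedback echo ties the knot with explicit recursion on the output stream.

```haskell
-- A signal is an infinite stream of samples.
type Signal = [Double]

-- Sine oscillator at a given sample rate and frequency.
sineOsc :: Double -> Double -> Signal
sineOsc sr hz = [ sin (2 * pi * hz * fromIntegral n / sr) | n <- [0 :: Int ..] ]

-- Functional delay line: delay by n samples by prepending silence.
delayLine :: Int -> Signal -> Signal
delayLine n s = replicate n 0 ++ s

-- A recursive signal: feedback echo with delay d and gain g, defined by
-- explicit recursion on its own output stream (laziness ties the knot).
echo :: Int -> Double -> Signal -> Signal
echo d g dry = out
  where out = zipWith (+) dry (map (* g) (delayLine d out))
```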


Last modified Wednesday, October 12th, 2005, 10:38:14pm
Powered by PLT