
NEPLS

These abstracts are for talks at this event.

NEPLS is a venue for ongoing research, so the abstract and supplemental material associated with each talk are necessarily a snapshot in time. The work presented here may be in a state of flux. In all cases, please consult the authors' Web pages for up-to-date information, and please do not treat these pages as a definitive source.


The Architecture of a World-Wide Distributed Repository to Support Fine-Grained Sharing of Source Code

Jeffrey M. Siskind, Purdue University

There has been an explosion in free software over the past few years.
Vehicles like FSF, GNU, CVS, RPM, Linux, and freshmeat.net allow programmers
to share hundreds of millions of lines of source code.  These vehicles,
however, support only coarse-grained source-code sharing.  The unit of sharing
is a complete package.  And packages are monolithic.  About the only thing one
can easily do with packages obtained from such vehicles is install them.
While a package might contain a collection of procedures and type declarations
that implement some functionality that a programmer might wish to reuse in a
different system, it is difficult to find which package contains that
functionality, extract that functionality from its original package, and
import it into a new system.  In this talk, I present a new vehicle, called
October, that is designed to support fine-grained source-code sharing: sharing
at the level of individual procedure and type declarations.  Unlike CVS, which
allows many people to share in the development of the same system, October
allows many people to share in the development of different systems.

October is organized like the Web.  Instead of Web pages there are top-level
definitions.  Top-level definitions are stored in a distributed repository
implemented by October servers that are analogous to Web servers.  They are
viewed and manipulated by October browsers that are analogous to Web browsers.
October servers and browsers communicate via the October protocol which plays
the role of HTTP and allows different server and browser implementations to
interoperate.  The October protocol is designed from the start with support
for searching, caching, mirroring, and load balancing.  These play the role of
search engines, proxy servers, and Web caches.  Search is currently based on
string matching, though future plans call for type-based search supported by
local and global type inference built into the October protocol and
implemented by the October servers.  Top-level definitions are stored as
abstract syntax trees (ASTs) which play the role of HTML.  Instead of URLs
there are version locators.  Top-level definitions are hyperlinked via
embedded version locators.  When building an executable, the browser crawls
the repository to fetch all top-level definitions referenced directly or
indirectly by the top-level definition being built.
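
To make the Web analogy concrete, the following is a minimal Java sketch
of how a browser-side crawl over version locators might look.  All names
(VersionLocator, OctoberServer, and so on) are assumptions made for this
illustration, not October's actual API.

    // A hypothetical sketch of a browser-side crawl: starting from one
    // version locator, fetch every top-level definition reachable through
    // embedded locators, much as a Web crawler follows hyperlinks.
    import java.util.*;

    record VersionLocator(String server, String definitionId, int version) {}

    record TopLevelDefinition(
            VersionLocator locator,
            String ast,                          // serialized AST; plays the role of HTML
            List<VersionLocator> references) {}  // embedded locators, i.e., hyperlinks

    interface OctoberServer {                    // plays the role of a Web server
        TopLevelDefinition fetch(VersionLocator locator);
    }

    final class Browser {
        // Collect the transitive closure of definitions needed to build `root`.
        static Map<VersionLocator, TopLevelDefinition> crawl(OctoberServer server,
                                                             VersionLocator root) {
            Map<VersionLocator, TopLevelDefinition> fetched = new LinkedHashMap<>();
            Deque<VersionLocator> work = new ArrayDeque<>(List.of(root));
            while (!work.isEmpty()) {
                VersionLocator next = work.pop();
                if (fetched.containsKey(next)) continue;   // already fetched (or cached)
                TopLevelDefinition def = server.fetch(next);
                fetched.put(next, def);
                work.addAll(def.references());             // follow embedded locators
            }
            return fetched;
        }
    }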

October is designed to be programming-language and programmer-preference
neutral.  It is neither a new programming language nor a new compiler.
Rather, it supports existing programming languages and compilers.  Just as the
Web supports different document styles via DTDs, October supports different
programming languages and programmer preferences via programming language
definitions (PLDs).  A novel aspect of PLDs is that they separate the
definition of the abstract syntax for a given language from the mapping
between abstract and concrete syntax.  This allows users to configure their
browsers to dynamically render the code they view in a different concrete
syntax according to their personal preference.  PLDs have so far been
written for Scheme, C, and Java.
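
As a rough illustration of the separation between abstract and concrete
syntax, the sketch below renders one small AST in two concrete syntaxes.
The types and method names are invented for this example and are not
taken from October's PLD format.

    // A hypothetical illustration of the abstract/concrete split: the
    // repository stores ASTs, and the concrete syntax shown to the user is
    // a per-browser rendering choice.  None of these names come from October.
    import java.util.List;
    import java.util.StringJoiner;

    interface Ast {}
    record Var(String name) implements Ast {}
    record Call(String function, List<Ast> args) implements Ast {}

    interface ConcreteSyntax {          // roughly, the rendering half of a PLD
        String render(Ast node);
    }

    final class SchemeSyntax implements ConcreteSyntax {
        public String render(Ast node) {
            if (node instanceof Var v) return v.name();
            Call c = (Call) node;
            StringBuilder sb = new StringBuilder("(").append(c.function());
            for (Ast a : c.args()) sb.append(" ").append(render(a));
            return sb.append(")").toString();
        }
    }

    final class CSyntax implements ConcreteSyntax {
        public String render(Ast node) {
            if (node instanceof Var v) return v.name();
            Call c = (Call) node;
            StringJoiner args = new StringJoiner(", ");
            for (Ast a : c.args()) args.add(render(a));
            return c.function() + "(" + args + ")";
        }
    }
    // The same AST, e.g. new Call("f", List.of(new Var("x"), new Var("y"))),
    // renders as "(f x y)" under SchemeSyntax and as "f(x, y)" under CSyntax.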

The overall goal of October is to boost world-wide programmer productivity by
encouraging an unprecedented degree of source-code sharing, shifting the
prevalent mode of programming from implementation to augmentation.  In this
talk, I will describe how October is designed to support this goal, discuss
how this goal motivates and influences the creation of a new infrastructure,
present some of the technical problems and design tradeoffs addressed so far
while creating this infrastructure, and give a live demo of the prototype
implementation of October.


Solving Regular Path Queries

Y. Annie Liu, SUNY Stony Brook

Regular path queries are a way of declaratively specifying program
analyses as regular expressions that are matched against paths in
graph representations of programs.  These and similar queries
are useful for other path analysis problems, such as model checking,
as well.  This talk describes the precise specification, derivation,
and analysis of an efficient algorithm and implementation for solving
regular path queries.  We start with a specification that corresponds
rather directly to the English description of the problem, apply rules
that capture some common knowledge at a high level, and then apply
program transformations that are centered around Paige's finite
differencing (i.e., incremental computation) techniques.  This formal
derivation allows us to analyze the time and space complexity of the
implementation precisely in terms of size parameters of the graph and
the deterministic finite automaton that corresponds to the regular
expression.  In particular, the time and space complexity is linear in
the size of the graph.  We also prove that the problem is
PSPACE-complete in terms of the size of the regular expression.  In
applications such as program analysis, the size of the graph may be
very large, but the size of the regular expression is small and can be
considered a constant.
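
For readers unfamiliar with the problem, the sketch below shows the
textbook product-of-graph-and-automaton search that the derived
algorithm refines; the representation is illustrative, and the sketch
omits the finite-differencing optimizations that the derivation itself
is about.

    // A sketch of the standard product-of-graph-and-DFA search: pair each
    // graph node with an automaton state and follow only edges whose labels
    // the DFA accepts.  For a fixed automaton the work is linear in the size
    // of the graph.  Representation and names are illustrative only.
    import java.util.*;

    final class RegularPathQuery {
        // graph: node -> (edge label -> successor nodes)
        // dfa:   state -> (edge label -> next state), plus a set of accepting states
        static Set<Integer> solve(Map<Integer, Map<String, List<Integer>>> graph,
                                  Map<Integer, Map<String, Integer>> dfa,
                                  Set<Integer> accepting,
                                  int startNode, int startState) {
            record Config(int node, int state) {}          // a node paired with a DFA state
            Set<Config> seen = new HashSet<>();
            Deque<Config> work = new ArrayDeque<>(List.of(new Config(startNode, startState)));
            Set<Integer> answers = new HashSet<>();
            while (!work.isEmpty()) {
                Config c = work.pop();
                if (!seen.add(c)) continue;
                if (accepting.contains(c.state())) answers.add(c.node());  // path matched
                for (var edge : graph.getOrDefault(c.node(), Map.of()).entrySet()) {
                    Integer next = dfa.getOrDefault(c.state(), Map.of()).get(edge.getKey());
                    if (next == null) continue;            // the DFA rejects this label
                    for (int succ : edge.getValue()) work.push(new Config(succ, next));
                }
            }
            return answers;   // nodes reachable along some path matching the expression
        }
    }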

This is joint work with Fuxiang Yu.

Speaker's Supplement

A paper describing part of this work appeared in MPC'02 (Mathematics of Program Construction, Dagstuhl, Germany, July 2002). It can be found at ftp://ftp.cs.sunysb.edu/pub/liu/RegPathQ-MPC02.ps.gz


Better Abstraction via Race Freedom

Stephen N. Freund, Williams College

The design of multithreaded programs is particularly prone
to errors due to subtle interactions between threads.
Currently, specification methodologies are unable to express
complex synchronization mechanisms, and verification tools
are unable to scale to large programs.  We present a
specification and verification methodology with a supporting
checker to overcome these limitations.

First, we develop a general mechanism for describing
mutually exclusive access to shared variables.  Each shared
variable in a program is annotated with a predicate stating
the condition under which a thread may assume exclusive
access to that variable.  We define a program to be
race-free when every thread respects these assumptions.
Further, to support verification of large programs, our
methodology checks each procedure called by a thread
modularly, using only the specifications of other
procedures.  A procedure specification is an abstract
program that captures all possible behaviors of a thread
executing that procedure in a race-free environment.
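
The following is a small, hypothetical Java example in the style of
annotation described above; the access predicate is written as a
comment, and Calvin's concrete specification syntax may differ.

    // A hypothetical annotated class in the spirit of the methodology above.
    final class Account {
        private final Object lock = new Object();

        /* exclusive access to `balance` is assumed only when `lock` is held */
        private int balance;

        void deposit(int amount) {
            synchronized (lock) {    // establishes the stated predicate before the access
                balance += amount;
            }
        }

        int read() {
            synchronized (lock) {
                return balance;
            }
        }
    }
    // A modular checker flags any access to `balance` made where the stated
    // predicate may not hold, i.e., a potential race.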

We present a method to modularly check that every procedure
is race-free and satisfies its specification.  We have
implemented our method in Calvin, a static checker for
multithreaded Java programs.  To validate our methodology,
we have used Calvin to check a number of important
properties for a file system.  Our experience shows that
requirements for complex multithreaded systems can be
intuitively stated and verified in our framework.

This is joint work with Shaz Qadeer (Systems Research
Center, HP Labs).


Statically Type-Safe Virtual Types in Object-Oriented Languages

Kim Bruce, Williams College

The virtual class construct was introduced in the language Beta to 
provide for a mechanism similar to that of parametric polymorphism. 
Unfortunately, the virtual class construct in Beta is not statically 
type-safe (though it is dynamically safe because of run-time checks). 
In this paper we show how a generalization of the semantics of 
object-oriented languages with a MyType construct led to the discovery 
of a variant of virtual classes that needs no run-time checks.  This 
results in an object-oriented language in which both parametric and 
virtual types are well-integrated, and which is statically type-safe.
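
As a point of comparison, the sketch below shows how a mutually
recursive family of types, the kind of pattern virtual types are meant
to support, can be approximated in ordinary Java with F-bounded
generics.  It illustrates the problem space rather than the construct
proposed in the paper.

    // A mutually recursive Node/Edge family approximated with Java F-bounded
    // generics; virtual types aim to express this kind of joint refinement
    // more directly and without run-time checks.  Names are illustrative.
    import java.util.ArrayList;
    import java.util.List;

    interface Node<N extends Node<N, E>, E extends Edge<N, E>> {
        void attach(E edge);                 // only edges of the matching refinement
    }

    interface Edge<N extends Node<N, E>, E extends Edge<N, E>> {
        N source();
        N target();
    }

    // One refinement: weighted edges between plain nodes, all statically typed.
    final class WNode implements Node<WNode, WEdge> {
        private final List<WEdge> edges = new ArrayList<>();
        public void attach(WEdge edge) { edges.add(edge); }
    }

    final class WEdge implements Edge<WNode, WEdge> {
        private final WNode src, dst;
        final int weight;
        WEdge(WNode src, WNode dst, int weight) {
            this.src = src; this.dst = dst; this.weight = weight;
        }
        public WNode source() { return src; }
        public WNode target() { return dst; }
    }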

This is joint work with Joe Vandervaart at Carnegie Mellon University.


Static Use-Based Object Confinement

Christian Skalka, University of Vermont

The confinement of object references within protection domains is a
significant security concern in languages such as Java.  Aliasing and
other features of OO languages can make this a difficult task; recent
work has focused on the development of type systems for enforcing
various confinement mechanisms in the presence of these features.  We
describe a new language and type system for the implementation of
object confinement mechanisms that is more general than previous
systems, and which is based on a distinct ``use-based'' notion of
security enforcement.
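
As a simple illustration of why confinement matters, the hypothetical
Java fragment below leaks a security-relevant reference through an
alias; all names are invented for this example.

    // A deliberately insecure fragment: a reference intended to stay inside
    // a protection domain escapes through an alias.
    import java.util.ArrayList;
    import java.util.List;

    final class SignerList {
        private final List<String> signers = new ArrayList<>();   // meant to be confined

        void add(String signer) { signers.add(signer); }

        // Returning the internal list hands callers an alias through which
        // they can mutate security-relevant state.
        List<String> getSigners() { return signers; }
    }

    class Attacker {
        static void escalate(SignerList trusted) {
            trusted.getSigners().add("attacker");   // mutation via the leaked alias
        }
    }
    // A use-based confinement discipline constrains how such references may
    // be used once they cross a domain boundary, and the proposed type system
    // enforces that discipline statically rather than with run-time checks.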

The language model has sufficient generality to be applicable to a
variety of systems, allowing our system to serve as a foundation for
studying object confinement in OO languages.  The type system further
enhances security, by providing a declarative statement of security
policies, and by improving run-time efficiency through static, rather
than dynamic, enforcement of security.  To develop these types, we use
a transformational method that allows re-use of pre-existing type
systems and implementations, in particular row types and conditional
constraints.  This saves significant effort in the proof of type
safety for our system, and allows us to easily define and implement an
accurate, flexible type system that is founded on, and benefits from,
extensive previous work.

This is joint work with Scott Smith, Johns Hopkins University.

Speaker's Supplement

If you would like to see the paper itself, it is available at: http://www.cs.jhu.edu/~ces/papers/skalka-smith-fcs02.ps


StreamIt: A Language for Streaming Applications

William Thies, MIT

High-performance streaming applications represent a new and distinct
domain of programs that is becoming increasingly important.  Within this
domain, the StreamIt language is designed to satisfy two criteria:
first, it ought to provide high-level abstractions that improve
programmer productivity, and second, it should be amenable to
straightforward compiler analyses for achieving high performance.  While
these goals often conflict in the design of programming languages, the
salient features of StreamIt are beneficial to both ends.
These features include the notion of an autonomous "filter" as the basic
stream component, a structured model of filter composition, a messaging
system for control, and a hierarchical re-initialization mechanism.

In order to express a streaming computation, a set of filters must be
assembled into a stream graph.  Rather than allowing the user to
construct an arbitrary network of filters, StreamIt provides three
pre-defined stream structures that can be used to construct hierarchical
stream graphs.  The relationship of StreamIt's structures to arbitrary
stream graphs is much like that of structured control flow to GOTO
statements.  Though sometimes the structure
restricts the expressiveness of the programmer, the gains in robustness,
readability, and compiler analysis are immense.
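
To give a flavor of structured composition, here is a small sketch in
plain Java (not StreamIt syntax) in which a pipeline of filters is
itself a filter, so stream graphs nest hierarchically.  The interfaces
are invented for this illustration; of StreamIt's three structures (the
pipeline, the split-join, and the feedback loop), only a pipeline is
shown.

    // A plain-Java approximation of structured composition: a Filter maps an
    // input stream to an output stream, and a Pipeline of two filters is
    // itself a Filter, so graphs nest.  These interfaces are invented here.
    import java.util.stream.Stream;

    interface Filter<I, O> {
        Stream<O> work(Stream<I> input);
    }

    final class Pipeline<I, M, O> implements Filter<I, O> {
        private final Filter<I, M> first;
        private final Filter<M, O> second;
        Pipeline(Filter<I, M> first, Filter<M, O> second) {
            this.first = first;
            this.second = second;
        }
        public Stream<O> work(Stream<I> input) {
            return second.work(first.work(input));   // composition, not an arbitrary graph
        }
    }

    class Demo {
        public static void main(String[] args) {
            Filter<Integer, Integer> scale = in -> in.map(x -> x * 2);
            Filter<Integer, Integer> clamp = in -> in.map(x -> Math.min(x, 100));
            Filter<Integer, Integer> stage = new Pipeline<>(scale, clamp);
            System.out.println(stage.work(Stream.of(10, 60, 90)).toList());  // [20, 100, 100]
        }
    }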

StreamIt also provides a dynamic messaging system for passing irregular,
low-volume control information between non-neighboring filters.  The
most apparent difference between a message and a traditional function
call is that a message is asynchronous--there is no return value
associated with a message, and the callee can continue to execute while
the message is en route.  However, StreamIt also provides a
sophisticated message timing system so that an asynchronous message can
be delivered precisely in terms of the flow of data through the stream
graph.  This messaging system can also be combined with the hierarchical
structure of StreamIt to give a unified framework for morphing
sub-sections of the stream depending on runtime demands.
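
One way to picture data-relative message timing is the hypothetical
sketch below, in which a message carries the index of the data item at
which it should take effect.  This is an illustration only; StreamIt's
actual timing semantics are defined over the stream graph.

    // A hypothetical filter whose behavior can be changed by an asynchronous
    // message; delivery time is expressed in data items, not wall-clock time.
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.IntUnaryOperator;

    final class MessagedFilter {
        private record Message(long deliverAtItem, IntUnaryOperator newBehavior) {}

        private final Deque<Message> pending = new ArrayDeque<>();
        private IntUnaryOperator behavior = x -> x;   // current steady-state work function
        private long itemsProcessed = 0;

        // The sender continues immediately; there is no return value.
        void send(long deliverAtItem, IntUnaryOperator newBehavior) {
            pending.add(new Message(deliverAtItem, newBehavior));
        }

        int work(int input) {
            // Deliver any message whose time, counted in data items, has arrived.
            while (!pending.isEmpty() && itemsProcessed >= pending.peek().deliverAtItem()) {
                behavior = pending.remove().newBehavior();
            }
            itemsProcessed++;
            return behavior.applyAsInt(input);
        }
    }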

This is joint work with Michael Gordon, Michal Karczmarek, 
David Maze, and Saman Amarasinghe at MIT.


Supporting Binary Compatibility with Static Compilation

Dachuan Yu, Yale University

There is an ongoing debate in the Java community on whether statically
compiled implementations can meet the Java specification on dynamic
features such as binary compatibility. Static compilation is sometimes
desirable because it provides better code optimization, smaller memory
footprint, more robustness, and better intellectual property
protection. Unfortunately, none of the existing static Java compilers
support binary compatibility, because supporting it incurs unacceptable
performance overhead. In this paper, we propose a simple yet effective
solution which handles all of the binary-compatibility cases specified
by the Java Language Specification. Our experimental results using an
implementation in the GNU Java compiler show that the performance
penalty is on average less than 2%. Besides solving the problem for
static compilers, it is also possible to use this technique in JIT
compilers to achieve an optimal balance point between static and dynamic
compilation.
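
For concreteness, the hypothetical example below shows one of the
binary-compatible changes permitted by the Java Language Specification:
a library class gains a field and a method, and a client compiled
against the old version must keep linking and running without
recompilation. A static compiler that baked field offsets or method
table indices into the client's code would break here, which is the
tension the proposed technique addresses. The code is illustrative and
not taken from the paper.

    // Library, version 1, which the client below was compiled against:
    //     public class Library { public int value = 1; public int get() { return value; } }

    // Library, version 2, deployed later without recompiling the client:
    public class Library {
        public int extra = 99;                    // new field added before the old one
        public int value = 1;
        public int describe() { return extra; }   // new method added
        public int get() { return value; }
    }

    // Client code, compiled once against version 1:
    class Client {
        public static void main(String[] args) {
            Library lib = new Library();
            System.out.println(lib.get());        // must still print 1 against version 2
        }
    }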

This is joint work with Zhong Shao and Valery Trifonov.

Speaker's Supplement

More information about this paper can be found at http://flint.cs.yale.edu/flint/publications/bincomp.html


Last modified Monday, July 29th, 2002 12:33:13am. Powered by PLT.