Designing Elk not as another Scheme implementation but as an extension language kit opened up a design space different from that traditionally available to Lisp implementations. The necessary deviations from the well-trodden paths of UNIX programming uncovered portability limitations, aggravated by badly tested corners of standard UNIX facilities. This section discusses the more interesting examples of such issues.
Finding a way to implement Scheme's continuations efficiently called for considerable effort during the design phase of Elk. Continuations are a powerful language feature; they support the definition of arbitrary control structures such as non-local loop and procedure exits, break and return as in C, exception handling facilities, explicit backtracking, co-routines, and multitasking based on engines [Dybvig 1987].
The primitive procedure

(call-with-current-continuation receiver)

packages up the current execution state of the program into an object (the continuation or escape procedure) and passes this object as an argument to receiver (which is a procedure of one argument). Continuations are first-class objects in Scheme; they are represented as procedures of one argument (not to be confused with the receiver procedure). Each time a continuation procedure is called with a value, it causes this value to be returned as the result of the call-with-current-continuation expression which created this continuation. If the procedure receiver terminates normally (i.e. does not invoke the continuation given to it), the value returned by call-with-current-continuation is the return value of receiver.
As long as the use of a continuation is confined to the runtime
of the receiver procedure, call-with-current-continuation
is similar in its functionality to catch/throw in most
Lisp dialects or setjmp/longjmp in C.
However, continuations, like all procedures in Scheme, have indefinite
extent (unlimited lifetime); they can be stored in variables and
called an arbitrary number of times, even after the receiver and
the enclosing call-with-current-continuation have already returned.
Listing 2 shows a program fragment where continuations
are used to get back an arbitrary number of times into the middle
of an expression whose computation has already been completed.
While not particularly useful, this example demonstrates that
continuations can be used to build control structures that
cannot be implemented by means of less general language features like
catch/throw or setjmp/longjmp.
(define my-function
  (lambda (n m)
    (+ n (mark m))))                   ; return n+m

(define get-back "uninitialized")

(define mark                           ; identity function, but also
  (lambda (value)                      ; assign current continuation
    (call-with-current-continuation    ; to a global variable
      (lambda (continuation)
        (set! get-back continuation)   ; (assign it)
        value))))

(my-function 10 20)                    ; invoke my-function     prints 30
(get-back 5)                           ; resume with new value  prints 15
(get-back 0)                           ; ...once more           prints 10
The different approaches applicable to implementing continuations are intimately tied to the strategies used for interpreting the language itself. Scheme interpreters generally employ a lexical analyzer and parser -- the reader -- to read and parse the Scheme source code and produce an intermediate representation of the program. During this phase, symbols are collected in a global hash table (in Lisp jargon, the symbols are interned), and a tree structure representing the program's S-expressions is built up on the heap of the interpreter. The majority of interpreters compile this intermediate representation into an abstract machine language (such as byte code). The evaluator is then implemented as an abstract machine which interprets the low-level language; this machine -- usually a simple stack machine -- may even be implemented in hardware.
In an abstract machine implementation, the straightforward approach to implementing call-with-current-continuation is to package up the contents of the abstract machine's registers (program counter, stack pointer, etc.) and its runtime stack. Since continuations have indefinite extent, it would not suffice to capture just the machine's registers (as the C library function setjmp does for the real machine). To be able to continue the evaluation of procedures that have already returned and whose frames are therefore no longer on the stack, a continuation must also embody the contents of the abstract machine's stack at the time it is created. When a continuation is applied, the machine resumes the ``frozen'' computation by restoring the saved registers and stack contents of the abstract machine.
Just saving the abstract machine's state would not work in Elk, because at the time a continuation is created, arbitrary library functions may be active in addition to Scheme primitives. For instance, consider the Elk interface to the ``Xt'' toolkit intrinsics of the X window system. Here, a typical scenario is that some Scheme procedure invokes the primitive that enters the toolkit's event dispatching main loop (XtAppMainLoop()). When an event arrives (for example, a mouse button press event), the toolkit's main loop invokes a callback function, which in turn calls a user-supplied Scheme procedure to be executed when a mouse button is pressed. This Scheme procedure might in turn invoke yet another function from the ``Xt'' library, and so on. A similar example would be a qsort or ftw extension to Elk, where the user-supplied function called by the qsort() or ftw() C library function would invoke a procedure written in Scheme.
The interpreter's thread of execution at any time obviously involves both Scheme primitives and library functions (such as XtAppMainLoop() and qsort() in the examples above) in an arbitrary combination. Therefore, a continuation must embody not only the execution state of the active Scheme procedures, but also that of the currently active library functions (such as local variables used by the library functions). In the approach used by Elk, a continuation is created by capturing the machine's registers -- like setjmp in C does -- and the C runtime stack. When a continuation is applied later, the registers and the saved stack contents are copied back. Actually, we did not follow the usual ``abstract machine'' technique in Elk at all; instead, the Scheme evaluator directly interprets the intermediate representation produced by the reader. In a sense, it is the ``real'' machine (the hardware on which Elk is executed) that plays the role the abstract machine plays in implementations with byte-code compilation.
Although the abstract machine technique usually yields faster execution of Scheme code, the performance of Elk resembles that of existing interpreters employing this technique, and the implementation of Elk is simpler than that of comparable interpreters using byte-code compilation. While the technique used to implement continuations in Elk is not strictly portable -- it is based on certain assumptions about the machine's stack layout and the C compiler and runtime environment -- it works on most major machine architectures (with two exceptions, which are supported using asm statements).
Continuations provide a natural basis for implementing the execution-state preserving semantics of the dump primitive. When called, dump invokes call-with-current-continuation. The real work is done in the receiver procedure; it stores the newly created continuation into a global variable, sets a global was-dumped flag to indicate that a dump has taken place, creates an executable file from the image of the running process, and finally returns ``false''. The return value of the dump primitive is the return value of this call to call-with-current-continuation, i.e. ``false'' if a dump has just been performed.
When the interpreter -- either the original program or a dumped executable -- is started, it examines the was-dumped flag as its very first action. If the flag is set, the running interpreter was started from a dumped executable. In this case the interpreter immediately invokes, with an argument of ``true'', the continuation that was saved away by a call to dump; this causes that call to dump to finish and return ``true'' to its caller. If, on the other hand, the was-dumped flag is not set (i.e. the running process was not started from a dumped image), the interpreter initializes and starts up as usual.
Before writing an image of the running process to disk, dump has to close all open Scheme file ports, as open file descriptors would not survive a dump -- they would no longer be valid in the dumped executable. Generally, this is true for all objects pointing to information maintained by the UNIX kernel, such as the current directory, the current signal dispositions, resource limits, or interval timers. Users and implementors of Elk extensions must be aware of this particular restriction. For instance, users of the X11 extensions have to make sure that, if dump is to be used, connections to X-displays are only established in the dumped invocation.
To be able to create an executable from the running process, dump has to open and read the a.out file from which the running process was started (actually, if the system linker has been called to dynamically load object files, the output of the most recent invocation of the linker is used instead of the original a.out). The symbol table of the new executable is copied from the a.out file of the running program; in addition, the a.out header has to be read to obtain the length of the text segment and the start of the data segment of the running process. To do so, dump has to determine the filename of the a.out file from which the process was started based on the information in argv and in the PATH environment variable. This approach is obviously based on several prerequisites: dump must be able to access its a.out file (argv must carry meaningful information; the file must be readable) and the running program's a.out file must not have been stripped. It would have been advantageous for the implementation of dump if the entire a.out file were automatically mapped into memory on startup, as is done, for instance, in NeXT-OS/Mach.
dump combines the data segment and the ``bss'' segment of the running process into the data segment of the new executable. If Elk had a separate heap for storing constant objects (future versions may have one), dump could place this read-only part of the memory into the new executable's text segment to make it sharable. When the interpreter's heap is written to disk, dump seeks over the unused portions of the heap, so that these parts of the file become ``holes'' that occupy no disk blocks. This saves a considerable amount of disk space in the final executable, as at least half of the interpreter's heap is unused at any time due to Elk's garbage collection algorithm.
Since the a.out formats used in the numerous versions of UNIX differ vastly, Elk has to include separate implementations of dump for the currently supported a.out formats. Version 2.2 of Elk handles the BSD-style a.out format used in BSD and ``derived'' UNIX versions (such as SunOS 4.1), the COFF a.out format (used in older releases of UNIX System V and in A/UX), Convex SOFF, Extended COFF of MIPS-based computers (DEC, SGI), and the ELF a.out format of System V Release 4 and related UNIX versions (Solaris 2.x, OSF/1).
When loading an object file during runtime, addresses within this object file must be relocated to their new location in the program's address space. To allow extensions to directly reference objects of the interpreter kernel, such as the heap and the built-in primitives, unresolved references into the base program must be resolved during dynamic loading. Finally, the object file needs to be able to export its entry points (such as Elk's extension initialization functions) to the base program.
More than one object file may have to be loaded into one invocation of Elk. To manage non-trivial, hierarchically structured sets of extensions, where a number of high-level extensions require one or more lower-level extensions to be loaded, it is essential that object files loaded later can make use of the symbols defined by previously loaded object files. As this style of dynamic loading allows building complex systems from small components incrementally, we will use the term incremental loading.
With the advent of 4.0BSD in 1980 [Joy 1980], support for incremental loading was added to the system linker and has since been supported by most major UNIX variants: when the -A option and the name of the base executable are supplied to the linker, linking is performed in such a way that the object file produced by the linker can be read into the already running executable. The symbol table of the resulting object file is a combination of the symbols defined by the base program and the newly defined symbols added by the linking process, from the object file or from libraries used in linking. Only the newly linked code and data are entered into the resulting object file. The incremental style of dynamic loading is achieved by saving the resulting output file each time the linker is invoked and using this file as the base program for the next incremental loading step, such that both old and new symbols can be referenced.
Incremental loading is generally supported by the linkers of UNIX versions that use the BSD-style a.out format and by those of several UNIX systems based on more modern a.out formats (e.g. Ultrix). It is not supported by any existing release of UNIX System V. Some newer UNIX versions that have shared libraries and dynamic linking (such as System V Release 4 or SunOS) offer a library interface to the dynamic linker. In some systems this kind of interface is intended to replace the incremental loading functionality of the system linker. These dynamic linker interfaces usually come in the form of a library that exports functions such as dlopen() to map a shared object module or shared library into the address space of the caller (the base program) and dlsym() to obtain the address of a function or data item in the newly attached object module.
In some implementations, object files attached through dlopen() may directly reference symbols in the base program; in other implementations they may not. In any case, object files cannot directly reference symbols defined by objects that have been placed into the program by previous calls to dlopen() (only, if at all, indirectly by calling dlsym()). Thus, these dynamic linker interfaces are clearly inferior to incremental loading, as they lack the important capability to load a set of object files incrementally. Many vendors who have replaced ``/bin/ld -A'' by a dlopen-style library in their UNIX systems, or who intend to do so, do not seem to be aware of the fact that this change will break applications that rely on incremental loading.
For Elk, the consequence of being restricted to dynamic linker interfaces of that kind is that, except for the simplest applications, one must pre-link all possible combinations of extensions that are not completely independent of each other. In general, given a set of n extensions each of which can be based on one out of m other extensions, this means having to prepare and keep around n×m pre-linked object files; not to mention the contortions one has to go through when the hierarchy of extensions has a depth greater than two (not an unlikely scenario in practice). If the number of extensions and relations between them is larger than trivial, or if the extensions are large or require large libraries, keeping around all pre-linked combinations of object modules will cause a maintenance problem and may waste a considerable amount of disk space.
Another, although minor, problem with these dynamic linker interfaces is that they usually offer only a simple-minded function (such as dlsym()) to look up the address of a specific symbol of a newly accessed object module (typically some kind of module initialization function); but they do not provide a way to scan all newly defined symbols. This functionality is insufficient to implement extension initialization in Elk, where a dynamically loadable extension often is composed from a number of small modules, each defining its own initialization function. Requiring a single, common initialization function name for the entire object file implies that (often configuration-dependent) ``glue code'' must be added to call all the individual initialization functions, including the C++ static constructors.
Version 2.2 of Elk supports dynamic loading in environments with ``ld-A'' (such as BSD, SunOS 4, Ultrix, and certain versions of SGI Irix and HP-UX), in HP-UX 9 (based on shl_load), and in MACH/NeXT-OS (rld_load). By generating shared objects on the fly, System V Release 4 and SunOS 5 (Solaris 2) are also supported, although in a limited and not yet satisfactory way.
As the current version of the Scheme standard (deliberately) does not specify several important language issues, such as error handling or syntactic extensions, we have added a number of non-standard language features to the Scheme interpreter of Elk to fill some of the holes.
A proposal for a macro extension has only recently been added as an addendum to the Revised^4 Report on the Algorithmic Language Scheme [Clinger et al. 1991] and is still the subject of controversial discussion within the Scheme community. To avoid having to wait for a final version of a macro system to evolve and be included in the Scheme standard, we implemented a simple-minded macro mechanism in Elk that resembles the macro facilities offered by various existing Scheme and Lisp systems.
One area where the Scheme standard does not specify any language features yet is error and exception handling; the standard merely states which error situations a conforming implementation is required to detect and report. Since it is essential for a non-trivial application to be able to gracefully handle error situations (such as failures in interactions with the operating system) and other exceptional conditions, we have added a simple error and exception handling facility to Elk.
When an error is detected by the interpreter, a user-supplied
error handling procedure is invoked with arguments identifying the
type and source of the error.
The standard interactive environment (top-level) of Elk provides a
default error handler that prints an error message and then resumes the
main read-eval-print loop by means of a reset primitive.
Most primitives of Elk and the extensions use this error handling
facility to signal an error, as opposed to indicating failure by
a distinctive return value (which would be prone to being ignored).
To bypass the standard error handler and ``catch'' the failure of a
particular primitive, programs may enclose the call to the primitive in
call-with-current-continuation and dynamically bind the error
handler to the continuation (as shown in listing 3).
(define (open-input-file-or-not name)
  (call-with-current-continuation
    (lambda (return)                 ; return becomes escape procedure
      (fluid-let ((error-handler     ; rebind error-handler
                    (lambda args (return #f))))
        (open-input-file name)))))
Elk provides a similar facility to handle an interrupt exception: a user-supplied interrupt handler is invoked when a SIGINT signal is sent to the interpreter (usually by typing the interrupt character on the keyboard). Support for other exceptions, such as timer interrupts, may be provided in future versions.
Another non-standard primitive that facilitates handling of errors is dynamic-wind, a generalization of the unwind-protect form offered by many Lisp dialects. dynamic-wind is used to implement the fluid-let special form (to create fluid or dynamic variable bindings). Both dynamic-wind and fluid-let are also provided by several other Scheme dialects [MIT 1984, Dybvig 1987].
The current version of the Scheme standard does not provide any language features that would make it possible to implement a useful Scheme debugger (apart from a debugger based on source code instrumentation). To compensate for this shortcoming, we have added a few primitives that aid the implementation of a simple interactive debugger, among them an eval primitive (although, in theory, eval could be implemented by writing an expression into a temporary file and then loading this file). In addition, Elk, like a few other Scheme dialects, provides lexical environments as first class (but immutable) objects. Other non-standard primitives that aid writing debuggers are procedure-lambda to obtain the lambda expression that evaluated to a given procedure, and a primitive that returns the list of currently active procedures together with their actual arguments and the lexical environments in which the procedure calls took place (a backtrace).
The garbage collector of Elk is based on the stop-and-copy algorithm (see e.g. [Abelson et al. 1985]). The heap area is divided into two semispaces, only one of which is active during normal operation. In a garbage collection, all objects that are still reachable are moved into the unused semispace; the previously used semispace then remains unused until the next garbage collection. An incremental, generational garbage collector for Elk, inspired by Yip's garbage collector [Yip 1991], has recently been implemented as an alternative to the stop-and-copy garbage collector [note 2].
Extensions to Elk can register before-GC and after-GC functions with the interpreter; these functions are invoked by the garbage collector immediately before and after each garbage collection run. Within after-GC functions, extensions can determine whether a particular Scheme object has become garbage, i.e. whether no references to the object exist any longer. In this case an extension may perform some kind of clean-up action; for example, if the now-unreferenced object contains a handle to an open file, it may close this file.
The Elk distribution contains a library based on this mechanism that enables extensions to register a termination function for objects of a particular type. The termination function associated with an object is then invoked by the garbage collector automatically when this object has been detected to be unused. The Xlib extension of Elk uses this library to perform suitable finalization operations on objects created by the extensions, for example, close windows, unload fonts, and free colormap objects that have become unreferenced. This mechanism is slightly complicated by the fact that objects may have to be terminated in a predefined order; for instance, when an X11 display becomes garbage, all objects associated with this display must be terminated before the display itself is finally closed.
The problems we encountered when designing and implementing Elk's interfaces to the C libraries of X11 are likely to apply to a wide range of similar APIs. The X11 libraries, especially Xlib, are quite complex; the core Xlib alone exports more than 600 functions and macros, with numerous different mechanisms for passing arguments and for manipulating objects, some of which could be considered rather verbose and error-prone. This complexity is, at least partly, caused by the semantic restrictiveness of the C programming language. Thus, when designing the Scheme language interface, we had the opportunity to eliminate some of the ``warts.''
If integration of a library with an extension language (or interactive language in general) is not anticipated at the time the programmer's interface of the library is designed, writing a properly functioning extension language interface to this library can become rather challenging or even impossible. This problem is exemplified by the ``Xt'' toolkit intrinsics library of X11, in particular by earlier versions of this library. The following example illustrates a typical difficulty caused by the ``static'' nature of the programmer's interface to ``Xt'':
Each class of graphical objects (widgets in ``Xt'' terminology) exports a list of attributes (resources) that are associated with objects of this class. A function is provided by ``Xt'' to obtain the list of resources of a widget class together with the name and C type (integer, string, pixmap, color, etc.) of each resource. On this basis, operations like setting the value of a widget's resource from within Scheme can be implemented in a straightforward way. The ``Xt'' extension just has to check if the user-supplied Scheme value can be converted into a C object of the resource's type, perform this conversion, and call the Xt-function to set the resource, or complain to the user if the value is not suitable for this resource. However, in early versions of Xt, some classes of widgets had a subset of resources (the constraint resources) whose names and types could not be obtained by an ``Xt'' application. While this omission was usually not perceived as a problem for C programmers (who would know each widget's resources a priori from reading the documentation), it had a dramatic effect on Elk's ``Xt'' extension, as now the knowledge about these resources had to be hard-wired into the extension. As a result, the extension's source code had to be modified for each new widget set to be made usable from within Scheme code.
This particular problem has been remedied in recent releases of X11, though several similar problems remain, even in the UNIX C library. While design flaws of library interfaces often go unnoticed or are considered minor when writing C or C++ programs (e.g. the fact that typical implementations of the qsort() function are non-reentrant), they become crucial when these libraries are made accessible to an extension language. As the importance of extension languages is growing, it is essential that future library interfaces be designed with the particular requirements of extension languages in mind.