Fexpr the Ultimate Lambda

This article is dedicated to the memory of John McCarthy (1927–2011)

We are constantly on a quest for the elegant combination of simplicity and expressiveness in computer languages—what Alan Kay calls the “Maxwell’s Equations of Software”. An important early milestone was John McCarthy’s LISP [1] (the evolution of these ideas, and the thinking behind them, is fascinating to study [2]). Later, in an attempt to understand Carl Hewitt’s Actor Model [3], Gerald Sussman and Guy Steele developed Scheme, featuring proper lexical scoping [4]. Scheme has become the basis for a rich tradition of exploration into the semantics of computation, supported by the regularity of its syntax and its mechanisms for extensibility [5]. The theme of extensibility combined with a simple core is further developed by Ian Piumarta with Maru [6]. John Shutt has revisited the very foundations of LISP, observing that Fexpr is more primitive than Lambda [7], resulting in the simple and extensible Kernel language. Standing on the shoulders of these giants, we hope to glimpse a new horizon.

xkcd.com/224

Fexpr

Application of a Lambda causes evaluation of its operands, producing arguments which are combined as specified by a body expression. In contrast, a Fexpr, as Shutt describes, is a combining form which acts on its operands rather than the results of their evaluation. Macros and so-called “special forms” (including conditionals and Lambda itself) are, in essence, Fexprs. Regular combiners, such as those created with Lambda, can be viewed as a generic argument evaluator wrapped around a Fexpr.

This relationship became evident when creating actor-based evaluators for functional expressions. The evaluation of operands can be completely decoupled from the combination of the resulting arguments. Exposing this powerful mechanism within the language itself is the basis for a cleanly extensible core. Extensions are smoothly integrated with language primitives. Developers have access to all the capabilities required to reproduce built-in functionality, or add their own to explore new semantics.

Vau-calculus

An important issue arising in the evaluation process is determining the proper environment in which evaluation occurs. When a Lambda expression is evaluated, it captures the environment in a closure. When a closure is applied to a list of operands, there are two environments available. The static (or lexical) environment is the one captured in the closure. The dynamic environment is the environment in which the application occurs (the call-site). The operands are evaluated in the dynamic environment, since they are part of the call. If we use the dynamic environment to evaluate the body expression of the closure, we get dynamic scoping, which is how the original LISP worked. If we use the static environment to evaluate the body expression of the closure, we get static scoping, which is how Scheme works. Although static scoping is usually what we want, there are times when dynamic scope is useful. Shutt’s Vau abstraction provides controlled access to both environments.

Lambda creates an applicative combiner: one that evaluates its operands before evaluating the combination. Vau creates an operative combiner: one that receives its operands unevaluated, and provides access to both the static and dynamic environments. This improved Fexpr allows the programmer to explicitly choose to use, or prevent use of, the dynamic environment. The development of Vau shifts our perspective—considering operand evaluation as something we explicitly request, instead of something we need special machinery (various kinds of quoting) to avoid.

Operations

One way to look at the difference between functional and object-oriented algorithms is to consider the relationship between types and operations. Let’s say I have a small set of types { A, B, C } and operations { f, g, h, + }.

Table 1 – Operations with Functions

      f             g             h                  +
  A   f(x:A) → A    g(x:A) → B    h(x:A, y:C) → B    x:A + y:A → A
  B   f(x:B) → B    g(x:B) → C    h(x:B, y:C) → A    x:B + y:B → B
  C   f(x:C) → C    g(x:C) → A    n/a                x:C + y:C → C

Table 1 shows how these operations might be modeled with functions. The same function name refers to different operations based on the argument type(s). Operations are grouped by function name, as shown by the columns of the table.

      f             g             h                  +
  A   A.f() → A     A.g() → B     A.h(y:C) → B       A.+(y:A) → A
  B   B.f() → B     B.g() → C     B.h(y:C) → A       B.+(y:B) → B
  C   C.f() → C     C.g() → A     n/a                C.+(y:C) → C

Table 2 – Operations with Objects

Table 2 shows how these operations might be modeled with objects. The same method name refers to different operations based on the class of the object on which the method is called. In the case of the infix + operator, the infix notation is syntactic sugar for a method call on the first operand. Operations are grouped by class (type), as shown by the rows of the table.

In a language with support for type/class-based dispatch, this overloading/polymorphism is resolved as part of the language machinery, often during separate compilation. The LISP/Scheme family takes a functional approach, but there is no overloading. A name, in a given context, represents a single operation. Operand types must be distinguished explicitly at run-time, with the help of type predicates. This is evident in the way traditional meta-circular evaluators are expressed.

The Actor Model, on the other hand, takes an object-oriented approach. Each actor interprets messages (like method calls) based on its behavior (like a class, but more flexible). The resulting structure is similar to a term-rewriting system. We will explore an evaluator based on Vau-calculus, but implemented with actors.

Core Evaluator

We will represent each fundamental type as an actor behavior. Evaluation proceeds by sending an #eval message to an actor representing an expression. A Symbol is evaluated by looking up an associated value in the evaluation environment. A Pair represents a combination, which is evaluated by evaluating the left component, then passing the right component to be combined with the result. All other objects evaluate to themselves.

LET Symbol(name) = \(cust, req).[
	CASE req OF
	(#eval, env) : [ SEND (cust, #lookup, SELF) TO env ]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]
LET Pair(left, right) = \(cust, req).[
	CASE req OF
	(#eval, env) : [
		SEND (k_comb, #eval, env) TO left
		CREATE k_comb WITH \comb.[
			SEND (cust, #comb, right, env) TO comb
		]
	]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]
CREATE Nil WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

The Nil actor represents an empty list. Since Nil is neither a Symbol nor a Pair, it evaluates to itself. With just these three types, we have enough machinery to represent our syntactic structure. Since the language we’re evaluating is homoiconic, this is also how we represent our semantic structure.

We will assume the usual relationship between this internal structure and its external representation. A Symbol is represented externally as a sequence of alphanumeric characters and punctuation that is not otherwise meaningful. A Pair is represented externally as (a . d), where a and d are the external representations of any object. Nil is a unique immutable object with the external representation (). Lists can be represented by a compact external representation based on Pairs such that (1 2 3) is an abbreviation for (1 . (2 . (3 . ()))).
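
To make the list abbreviation concrete, here is a minimal sketch (in the implementation language, Humus) of building the list (a b c) from Pair and Nil actors. The Symbol names a, b and c are chosen purely for illustration.

CREATE Symbol_a WITH Symbol(#a)
CREATE Symbol_b WITH Symbol(#b)
CREATE Symbol_c WITH Symbol(#c)
# (a b c) abbreviates (a . (b . (c . ())))
CREATE list_abc WITH Pair(Symbol_a, NEW Pair(Symbol_b, NEW Pair(Symbol_c, Nil)))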

Environment

An #eval request includes an env parameter which specifies the evaluation environment. We will represent environments as first-class (though opaque) objects and provide appropriate primitives to manipulate and access them. We begin with the almost-trivial empty environment.

CREATE Env_empty WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#lookup, key) : [ THROW (#Undefined, key) ]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

The empty environment is a first-class object, so it evaluates to itself. It also understands #lookup requests. No symbols are defined in the empty environment. Looking up an undefined symbol signals an error.

Environments are organized into nested scopes. A scope limits the effect of mutations to an environment. New bindings and changes to existing bindings are limited to the current scope. Enclosing (parent) scopes are protected from mutation.

LET Env(parent) = \(cust, req).[
	BECOME Env_scope(parent, map_empty)
	SEND (cust, req) TO SELF
]

In order to encapsulate the mechanism used to implement bindings, we use lazy initialization to create a scope. We use a mapping function to implement bindings, so a new scope is initialized with an empty mapping.

LET map_empty = \_.?
LET map_bind(map, key, value) = \lookup.(
	CASE lookup OF
	$key : value
	_ : map(lookup)
	END
)

New bindings are established by constructing a new mapping function that extends a previous map. The mapping function returns the bound value given the key. If the key doesn’t match, the previous map function is called recursively, until the empty map is reached. The empty map returns ?, the “undefined” value.
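
As a small sketch of how these mapping functions compose (the keys #x and #y here are plain Humus symbols, used only for illustration):

LET m1 = map_bind(map_empty, #x, 1)
LET m2 = map_bind(m1, #y, 2)
LET m3 = map_bind(m2, #x, 3)  # shadows the earlier binding of #x
# m3(#x) returns 3, m3(#y) returns 2, and m3(#z) returns ? (undefined)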

LET Env_scope(parent, map) = \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#lookup, key) : [
		CASE map(key) OF
		? : [ SEND (cust, req) TO parent ]
		value : [ SEND value TO cust ]
		END
	]
	(#bind, key, value) : [
		BECOME Env_scope(parent, map_bind(map, key, value))
		SEND Inert TO cust  # new binding
	]
	_ : [ SEND (cust, req) TO parent ]
	END
]

An environment is a first-class object, so it evaluates to itself. When an environment receives a #lookup message, it consults its local map for a value associated with the given key. If there is no value bound locally, the request is forwarded to the parent environment. Otherwise, the value is returned to the customer cust. When an environment receives a #bind request, the local map is extended with the new binding. The result of establishing a new binding is the unique Inert object, indicating that there is no meaningful value to return.

CREATE Inert WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

The Inert actor is the result of evaluation when there is no meaningful value to return (like void or unit in some languages). Since Inert is neither a Symbol nor a Pair, it evaluates to itself.
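
Putting these pieces together, here is a hedged sketch of binding a Symbol in a fresh scope and then looking it up. We assume println is a console-output actor (as used in the example at the end of this article), and for brevity we bind the raw Humus value 42 rather than a constant actor.

CREATE scope WITH Env(Env_empty)
CREATE Symbol_n WITH Symbol(#n)
SEND (k_bound, #bind, Symbol_n, 42) TO scope
CREATE k_bound WITH \$Inert.[
	SEND (println, #lookup, Symbol_n) TO scope  # println receives 42
]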

Conditional

We have said that the left component of a pair (which is the first component of a list) is evaluated to produce a combiner. Then the combiner receives a #comb request containing the right component of the pair (the operand list) and the dynamic environment. It is up to the combiner to decide if and when it evaluates its operands. A primitive combiner is the Oper_&if conditional operative, which is usually bound to Symbol(#&if) in the default (ground) environment (Note: we use “&” where Kernel uses “$” because “$” is not a valid character in a Humus symbol).

The Oper_&if operative has the external representation (&if test consequence alternative). The test expression is evaluated and is expected to return a Boolean result. If the result is True, then the consequence is evaluated. If the result is False, then the alternative is evaluated. Otherwise an error is signaled.

CREATE Oper_&if WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#comb, opnds, env) : [
		SEND (k_args, #as_tuple) TO opnds
		CREATE k_args WITH \(test, cnsq, altn, NIL).[
			SEND (k_bool, #eval, env) TO test
			CREATE k_bool WITH \bool.[
				SEND (cust, #if, cnsq, altn, env) TO bool
			]
		]
	]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

An operative is a first-class object, so it evaluates to itself. It is also a combiner, so it understands #comb requests. When it receives a #comb request, it expects the operands opnds to be a proper list of Pairs terminated by Nil. Since we represent Pairs and Nil with actors, we must ask the actors to produce a tuple containing the elements of the operand list. The resulting tuple is de-structured to access the test expression, consequence cnsq and alternative altn. The test is evaluated, producing the bool result. An #if request is sent to bool to selectively evaluate either cnsq or altn.

LET Pair(left, right) = \(cust, req).[
	CASE req OF
	...
	#as_tuple : [
		SEND (k_tuple, #as_tuple) TO right
		CREATE k_tuple WITH \tuple.[
			SEND (left, tuple) TO cust
		]
	]
	...
	END
]
CREATE Nil WITH \(cust, req).[
	CASE req OF
	...
	#as_tuple : [ SEND NIL TO cust ]
	...
	END
]

The Pair and Nil behaviors must be extended to understand the #as_tuple request. For a Pair, this means sending #as_tuple to its right component, then forming a tuple from its left component and the result. For Nil, this means simply returning the empty tuple value NIL. Note that the tuple value produced by #as_tuple is not first-class. It is a value in the implementation language Humus.
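
A brief sketch of the #as_tuple request in action, using only objects defined so far (println is again an assumed console-output actor):

CREATE opnds WITH Pair(Inert, NEW Pair(Inert, Nil))  # the two-element list (#inert #inert)
SEND (println, #as_tuple) TO opnds
# println eventually receives the Humus tuple (Inert, Inert, NIL)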

CREATE True WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#if, cnsq, _, env) : [ SEND (cust, #eval, env) TO cnsq ]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]
CREATE False WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#if, _, altn, env) : [ SEND (cust, #eval, env) TO altn ]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

The True and False actors represent the two unique Boolean object instances. Since they are neither Symbols nor Pairs, they evaluate to themselves. They also understand #if requests. When True receives an #if request, it evaluates the consequence cnsq. When False receives an #if request, it evaluates the alternative altn.
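
With the Boolean objects in place, we can exercise the conditional directly. In this hedged sketch (println assumed as before), every operand evaluates to itself, so no environment bindings are needed:

CREATE opnds WITH Pair(True, NEW Pair(Inert, NEW Pair(Nil, Nil)))  # operands: True, Inert, Nil
SEND (println, #comb, opnds, Env_empty) TO Oper_&if
# the test (True) evaluates to True, so the consequence (Inert) is evaluated;
# println receives Inert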

Mutability

The language we are evaluating is not purely functional. It is mostly functional, but it provides mutable environments [8]. The Oper_&define! operative performs environment mutation.

The Oper_&define! operative has the external representation (&define! ptree expression). The expression is evaluated and then matched to the components of the formal parameter tree ptree, potentially binding Symbols in the local environment.

CREATE Oper_&define! WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#comb, opnds, env) : [
		SEND (k_args, #as_tuple) TO opnds
		CREATE k_args WITH \(ptree, expr, NIL).[
			SEND (k_value, #eval, env) TO expr
			CREATE k_value WITH \value.[
				SEND (cust, #match, value, env) TO ptree
			]
		]
	]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

Oper_&define! is a first-class object, so it evaluates to itself. When it receives a #comb request, it extracts a parameter tree ptree and expression expr from the operands. An #eval request is sent to expr. The resulting value is sent in a #match request to ptree in order to bind variables in the environment env.

A Symbol is the most common case for a parameter tree. For example, in the expression (&define! answer 42), the Symbol(#answer) will be bound to the value 42 (numeric constants, of course, evaluate to themselves).

LET Symbol(name) = \(cust, req).[
	CASE req OF
	...
	(#match, value, env) : [
		SEND (cust, #bind, SELF, value) TO env
	]
	...
	END
]

When a Symbol receives a #match request, it sends a #bind request to the Environment env. An Environment returns Inert on successful binding.

Sometimes we would like to ignore a parameter. The Ignore actor represents the unique object used to allow successful matching without creating a binding.

CREATE Ignore WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#match, _) : [ SEND Inert TO cust ]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

The Ignore object evaluates to itself. When it receives a #match request, it indicates success by sending Inert to the customer.

The parameter tree may also be a list, consisting of Pairs and Nil. A Pair matches if the value is also a pair, its left matches the left of the value, and its right matches the right of the value. Nil matches only if the value is also Nil.

LET Pair(left, right) = \(cust, req).[
	CASE req OF
	...
	(#match, value, env) : [
		CREATE fork WITH fork_beh(k_pair, value, value)
		SEND (
			(#match_left, left, env), 
			(#match_right, right, env)
		) TO fork
		CREATE k_pair WITH \($Inert, $Inert).[
			SEND Inert TO cust
		]
	]
	(#match_left, ptree, env) : [
		SEND (cust, #match, left, env) TO ptree
	]
	(#match_right, ptree, env) : [
		SEND (cust, #match, right, env) TO ptree
	]
	...
	END
]
CREATE Nil WITH \(cust, req).[
	CASE req OF
	...
	(#match, $Nil, env) : [ SEND Inert TO cust ]
	...
	END
]

The state of an actor is always private to the actor, so matching Pairs involves asking the parameter-tree Pair and the value Pair each to do part of the work. We send two concurrent requests to the value, #match_left and #match_right. Each attempts to match a component of the value Pair to the corresponding component of the parameter-tree Pair. If both return Inert, then the match was successful. Otherwise an error is signaled.

Sending the concurrent requests, and collecting an ordered-pair of results, is coordinated by a fork actor. The concurrency provided by fork_beh is a generalization of techniques described previously. An actor with fork_beh receives a pair of requests, sends them to a pair of actors, then becomes join_beh to await and combine the results.

LET tag_beh(cust) = \msg.[ SEND (SELF, msg) TO cust ]
LET fork_beh(cust, head, tail) = \(h_req, t_req).[
	CREATE k_head WITH tag_beh(SELF)
	CREATE k_tail WITH tag_beh(SELF)
	SEND (k_head, h_req) TO head
	SEND (k_tail, t_req) TO tail
	BECOME join_beh(cust, k_head, k_tail)
]
LET join_beh(cust, k_first, k_rest) = \msg.[
	CASE msg OF
	($k_first, first) : [
		BECOME \($k_rest, rest).[ SEND (first, rest) TO cust ]
	]
	($k_rest, rest) : [
		BECOME \($k_first, first).[ SEND (first, rest) TO cust ]
	]
	END
]

The tag_beh is used by fork_beh to create customers for each request. These customers tag a result with their own identity. This identity tag is used by join_beh to distinguish the results. When both results have arrived, an ordered-pair of the results is sent to the original customer cust.
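
A minimal sketch of the pattern in isolation: evaluate two self-evaluating objects concurrently and collect an ordered pair of results (println assumed as before):

CREATE fork WITH fork_beh(k_pair, True, False)
SEND ((#eval, Env_empty), (#eval, Env_empty)) TO fork
CREATE k_pair WITH \(head, tail).[
	SEND (head, tail) TO println  # println receives (True, False)
]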

Applicative

The vast majority of combiners are applicatives (including everything created with &lambda). An applicative evaluates its list of operands, then passes the resulting arguments to the combiner it wraps. If the operands are not a proper (Nil-terminated) list, an error is signaled.

LET Appl(comb) = \(cust, req).[  # applicative combiner
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#comb, opnds, env) : [
		SEND (k_args, #map, #eval, env) TO opnds
		CREATE k_args WITH \args.[
			CREATE expr WITH Pair(comb, args)
			SEND (cust, #eval, env) TO expr
		]
	]
	#unwrap : [ SEND comb TO cust ]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

Appl is the applicative constructor. When a new Appl is created, it is given a combiner comb to wrap. An applicative is a first-class object, so it evaluates to itself. When it receives a #comb request, it sends a #map request to the opnds list. This causes an #eval request to be sent to each element of opnds. The wrapped combiner comb is Paired with the list of evaluated arguments args, and the Pair is evaluated (calling comb with args). When Appl receives an #unwrap request, it returns the wrapped combiner comb, which is also a first-class object.

In order to evaluate the operand list, we introduce a #map request on Pairs and Nil. This is a kind of “higher-order” request. The #map request carries a nested request. The intent of #map is to send the nested request to each element of a List, specifically the left component of each Pair and the final Nil. The results of each nested request are gathered into a new List of Pairs.

LET Pair(left, right) = \(cust, req).[
	CASE req OF
	...
	(#map, req') : [
		CREATE fork WITH fork_beh(k_pair, left, right)
		SEND (req', req) TO fork
		CREATE k_pair WITH \(head, tail).[
			SEND NEW Pair(head, tail) TO cust
		]
	]
	...
	END
]
CREATE Nil WITH \(cust, req).[
	CASE req OF
	...
	(#map, req') : [ SEND (cust, req') TO SELF ]
	...
	END
]

When a Pair receives a #map request, it creates a fork to concurrently send req' (the nested request) to the left component, and req (the original #map request) to the right component. When both results are returned, they are used to create and return a new Pair. When the nested request is #eval, this results in parallel evaluation of operands.

When Nil receives a #map request, it simply returns the result of sending req' (the nested request) to itself. When the nested request is #eval, the result is Nil because Nil evaluates to itself.
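
A hedged sketch of #map driving concurrent evaluation over a short list (each element evaluates to itself, so the result is a fresh list with the same elements):

CREATE bools WITH Pair(True, NEW Pair(False, Nil))  # the two-element list of True and False
SEND (println, #map, #eval, Env_empty) TO bools
# println receives a new Pair list whose elements are True and False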

Operative

So far we’ve only shown primitive operatives. Extensibility requires that we be able to construct our own operatives, indistinguishable from those provided. The central abstraction of the Vau-calculus is a constructor for user-defined operatives.

The Oper_&vau operative has the external representation (&vau vars evar body). When Oper_&vau is called, it creates a new operative combiner where vars is the formal parameter tree and body is the combiner expression. The environment variable evar is either a Symbol that will be bound to the dynamic environment, or Ignore if (as is usually the case) the dynamic environment is not needed.

CREATE Oper_&vau WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#comb, opnds, static) : [  # static env
		SEND (k_args, #as_tuple) TO opnds
		CREATE k_args WITH \(vars, evar, body, NIL).[
			CREATE ptree WITH Pair(vars, evar)
			CREATE comb WITH Vau(static, ptree, body)
			SEND comb TO cust
		]
	]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

When Oper_&vau receives a #comb request, it uses Vau to construct a new compound operative and returns the resulting combiner to the customer cust. The operative is compound because it carries the syntactic structure of the body, along with the static environment and a parameter tree ptree, so that it can later evaluate the body in a properly-extended environment.

LET Vau(static, ptree, body) = \(cust, req).[  # compound operative
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#comb, opnds, dynamic) : [  # dynamic env
		CREATE local WITH Env(static)
		CREATE value WITH Pair(opnds, dynamic)
		SEND (k_eval, #match, value, local) TO ptree
		CREATE k_eval WITH \$Inert.[
			SEND (cust, #eval, local) TO body
		]
	]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

Vau is the compound operative constructor. An operative is a first-class object, so it evaluates to itself. When it receives a #comb request, it constructs a new (empty) local environment with the static environment as a parent. Since the formal parameter tree ptree was constructed as a Pair of formal parameters and an environment parameter, we construct a value Pair of the operands opnds and the dynamic environment. The Pair value is then matched to ptree in order to bind variables in the local environment. Finally, the body expression is evaluated in the local environment.
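
As a small demonstration, here is a hedged sketch of a compound operative built directly with Vau. It binds its entire (unevaluated) operand list to a single Symbol, ignores the dynamic environment, and evaluates that Symbol as its body, roughly the object-language form (&vau opnds #ignore opnds). The println actor is assumed as before.

CREATE Symbol_opnds WITH Symbol(#opnds)
CREATE quote_oper WITH Vau(Env_empty, NEW Pair(Symbol_opnds, Ignore), Symbol_opnds)
CREATE opnd_list WITH Pair(Inert, Nil)  # the operand list (#inert)
SEND (println, #comb, opnd_list, Env_empty) TO quote_oper
# println receives the operand list unevaluated, i.e. the Pair representing (#inert)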

Lambda

We could hardly claim that the Fexpr Vau is the ultimate Lambda without showing how to implement Lambda. Now that we’ve made evaluation explicit, Lambda can be described as an applicative wrapper (evaluating operands to arguments) around an operative (to combine arguments) created by Vau.

The Oper_&lambda operative has the external representation (&lambda ptree body). This is nearly the same as &vau, but without access to the dynamic environment. Also, &lambda constructs an applicative rather than an operative. &lambda is just an applicative wrapper around &vau where the dynamic environment is ignored. In other words, (&lambda ptree body) is equivalent to (wrap (&vau ptree #ignore body)), where wrap is the applicative constructor Appl and #ignore designates the Ignore object.

CREATE Oper_&lambda WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#comb, opnds, static) : [  # static env
		SEND (k_args, #as_tuple) TO opnds
		CREATE k_args WITH \(vars, body, NIL).[
			CREATE ptree WITH Pair(vars, Ignore)
			CREATE oper WITH Vau(static, ptree, body)
			CREATE appl WITH Appl(oper)
			SEND appl TO cust
		]
	]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

Oper_&lambda is a first-class object, so it evaluates to itself. When it receives a #comb request, it extracts a parameter tree vars and expression body from the operands. A new parameter tree ptree is created by pairing vars with Ignore, indicating that the dynamic environment should be ignored. A new operative is created with Vau, capturing the current environment as static. The operative oper is wrapped by Appl, creating a new applicative appl that is returned to the customer cust.

Note that the dynamic environment is always ignored. As an optimization, we could skip the step of passing Ignore to Vau, and the extra effort of matching Ignore to the dynamic environment. This requires that we inline the implementation of Vau, then refactor to remove the redundant handling of the dynamic environment.

CREATE Oper_&lambda WITH \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#comb, opnds, static) : [  # static env
		SEND (k_args, #as_tuple) TO opnds
		CREATE k_args WITH \(vars, body, NIL).[
			CREATE oper WITH \(cust, req).[
				CASE req OF
				(#eval, _) : [ SEND SELF TO cust ]
				(#comb, opnds, _) : [  # dynamic env
					CREATE local WITH Env(static)
					SEND (k_eval, #match, opnds, local) TO vars
					CREATE k_eval WITH \$Inert.[
						SEND (cust, #eval, local) TO body
					]
				]
				_ : [ THROW (#Not-Understood, SELF, req) ]
				END
			]
			CREATE appl WITH Appl(oper)
			SEND appl TO cust
		]
	]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]

Identity

Our conditional operative &if is not of much use without some predicate to occupy the test position. The primitive combiner Appl_eq? is an applicative predicate which compares the identity of two objects. It has the external representation (eq? x y), where x and y are expressions. Since eq? is an applicative, both x and y are evaluated before their values are compared. If they evaluate to the same object, eq? evaluates to True, otherwise False.

CREATE Appl_eq? WITH Appl(NEW \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	(#comb, opnds, env) : [
		SEND (k_args, #as_tuple) TO opnds
		CREATE k_args WITH \(x, y, NIL).[
			IF $x = $y [
				SEND True TO cust
			] ELSE [
				SEND False TO cust
			]
		]
	]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
])

Appl_eq? is created using the Appl constructor, wrapping a nested (anonymous) combiner. Since the combiner is a first-class object, it evaluates to itself. When it receives a #comb request, it extracts the evaluated arguments x and y, and compares their values. If they are identical, True is returned to the customer cust, otherwise False is returned.

Example Evaluation

In order to evaluate an expression, we must construct the “ground” environment, in which there are standard mappings from Symbols to our primitives. For simplicity (and efficiency) we use a single combined mapping function, rather than build up the bindings one-at-a-time.

CREATE Symbol_&if WITH Symbol(#&if)
CREATE Symbol_&define! WITH Symbol(#&define!)
CREATE Symbol_&vau WITH Symbol(#&vau)
CREATE Symbol_&lambda WITH Symbol(#&lambda)
CREATE Symbol_eq? WITH Symbol(#eq?)
CREATE Env_ground WITH Env_scope(Env_empty, \lookup.(
	CASE lookup OF
	$Symbol_&if : Oper_&if
	$Symbol_&define! : Oper_&define!
	$Symbol_&vau : Oper_&vau
	$Symbol_&lambda : Oper_&lambda
	$Symbol_eq? : Appl_eq?
	_ : ?
	END
))

We will use a rather contrived example to exercise most of the capabilities we have defined so far. Our example expression has this external representation:

((&lambda (x)
   (&if (eq? x #inert) answer x))
 (&define! answer 42))

The anonymous applicative (&lambda (x) (&if (eq? x #inert) answer x)) is called with the operand (&define! answer 42). The result of evaluating &define! is #inert, which is bound to x in the &lambda. The body of the &lambda is (&if (eq? x #inert) answer x), a conditional expression that compares the value of x with #inert (designating the Inert object). Since x is bound to #inert, the test is True and the value of answer is returned; otherwise the value of x would be returned instead. The variable answer is bound (by &define!) to the constant 42 in the local environment where this expression is evaluated, thus 42 should be the final result.

LET pr(x, y) = (NEW Pair(x, y))
LET Constant(value) = \(cust, req).[
	CASE req OF
	(#eval, _) : [ SEND SELF TO cust ]
	_ : [ THROW (#Not-Understood, SELF, req) ]
	END
]
CREATE Const_42 WITH Constant(42)
CREATE Symbol_answer WITH Symbol(#answer)
CREATE Symbol_x WITH Symbol(#x)
CREATE Env_standard WITH Env(Env_ground)

We define a couple of helper functions to create Pairs and Constants, then define the Symbols and Constants needed by our example. The Env_standard environment represents the “top-level” mutable environment, which protects the Env_ground environment from modification.

# ((&lambda (x)
#    (&if (eq? x #inert) answer x))
#  (&define! answer 42))
LET lambda_x = 
$pr(
	Symbol_&lambda, 
	pr(
		pr(Symbol_x, Nil), 
		pr(
			pr(
				Symbol_&if, 
				pr(
					pr(
						Symbol_eq?, 
						pr(
							Symbol_x, 
							pr(Inert, Nil)
						)
					), 
					pr(
						Symbol_answer, 
						pr(Symbol_x, Nil)
					)
				)
			), 
			Nil
		)
	)
) 
SEND (println, #eval, Env_standard) TO
pr(
	lambda_x, 
	pr(
		pr(
			Symbol_&define!, 
			pr(Symbol_answer, pr(Const_42, Nil))
		), 
		Nil
	)
)

As mentioned above, evaluating this example should produce Const_42 as the final result (sent to the console via println).

Summary

John Shutt’s Kernel language [9], and its underlying Vau-calculus, is a simplified reformulation of the foundations of the LISP/Scheme family of languages. It is based on the notion that evaluation should be explicit, patterned after Fexprs, rather than implicit, using Lambda. The result is a powerful, well-behaved platform for building extensible languages: extensible not in syntax, but in semantics. We have implemented the key mechanisms of Vau-calculus using actors. The actor-based evaluation strategy introduces inherent concurrency pervasively throughout the evaluation process.

References

[1] J. McCarthy, et al. LISP 1.5 Programmer’s Manual. MIT Press, Cambridge, Mass., 1962.
[2] H. Stoyan. The Influence of the Designer on the Design – J. McCarthy and LISP. In Artificial Intelligence and Mathematical Theory of Computation, pp. 409–426, Academic Press, 1991.
[3] C. Hewitt, P. Bishop, R. Steiger. A Universal Modular ACTOR Formalism for Artificial Intelligence. IJCAI, 1973.
[4] G. Sussman and G. Steele Jr. Scheme: an Interpreter for Extended Lambda Calculus. Memo 349, MIT Artificial Intelligence Laboratory, December 1975.
[5] R. Kelsey, W. Clinger, and J. Rees, editors. Revised5 Report on the Algorithmic Language Scheme. 20 February 1998.
[6] I. Piumarta. Open, extensible composition models. Free Composition (FREECO-11), Lancaster, UK, July 26, 2011.
[7] J. Shutt. Fexprs as the basis of Lisp function application; or, $vau: the ultimate abstraction. Ph.D. Dissertation, WPI CS Department, 2010.
[8] G. Steele and G. Sussman. The Art of the Interpreter or, the Modularity Complex (Parts Zero, One, and Two). AIM-453, MIT, 1978.
[9] J. Shutt. Revised-1 Report on the Kernel Programming Language. Technical report WPI-CS-TR-05-07, Worcester Polytechnic Institute, Worcester, MA, March 2005, amended 29 October 2009.



7 Responses to “Fexpr the Ultimate Lambda”

  1. Alan Kay

    Yep, this was the picture of “what Lisp was really about”, and the approach I took to do Smalltalk-72 (as the result of a hallway bet at Xerox PARC).

    This is described in “The Early History of Smalltalk” ACM SIGPLAN and HOPL II ca 1993 and 1996.

    Cheers,

    Alan

  2. Onne

    “Although static scoping is usually what we want, there are times when dynamic scope is useful”

    That is a rather bold statement.

    I’d say that any dynamic scoping must be done very carefully and very controlled, otherwise it is never what you want, building any larger abstraction on something dynamically scoped will fail.

    For example injecting a “this” would be fine, maybe even implementing a “return” or “break” by inserting them in the scope might be a reasonable thing to do. But injecting any name which does not behave like a keyword will be awkward and bug prone. And failing to inject such dynamically scoped things where they were expected is also bug prone.

    Honestly I was a bit disappointed that the kernel language uses a different syntax for Vau terms.

  3. Dale Schumacher

    One of the things I really appreciated about Kernel and Vau was the way they give us explicit control, while adopting several conventions to help avoid mistakes. Sure, dynamic scope must be used carefully. Shutt’s guideline G3 states: “Dangerous computation behaviors (e.g., hygiene violations), while permitted on general principle, should be difficult to program by accident.” I’m willing to accept some additional complexity (for example, disjoint encapsulated types for Boolean, Inert, Ignore, Environment, etc.) in support of that higher goal.

  4. Semantic Extensibility with Vau

    […] In the previous article we constructed an actor-based evaluator for a Kernel-inspired language core. […]

  5. DAY

    I am not quite sure that without binding the runtime environment with the operand (i.e., forming a thunk) in a combination and passing the thunk (instead of the expression alone) around for later evaluation (i.e., as an argument to `eval’), the hygiene condition could be preserved. For example, in the following definition:

    ($define! $f
    ($vau (x) env
    ($g x) ) )

    for a call like `($f e)’, the passed-in expression `e’ (bound to `x’) will escape from the environment it should be evaluated in, if it ever needs to be, namely `env’. Indeed, `e’ may be evaluated in the body of the operative of `$f’ (in an explicit evaluation context) which will violate hygiene. Generally, I am not convinced that this could be classified as an accidental case since in the body of an operative, you can do whatever you want with the passed-in expression, in particular, passing it to another operative.

  6. John Shutt

    DAY,

    If I understand correctly, you’re looking at

    ($define! $f ($vau (x) env ($g x)))

    and suggesting there’s a hygiene violation going on with the call to $g. A basic difficulty here is that what is or isn’t a hygiene violation depends on what $g is meant to do. It’s a question about the design of $g, not the design of $f.

    But in this example, it seems unlikely that $g has been well-designed in the first place; it’s possible (since we have no idea what $g is actually meant to do), but unlikely. Why? Because you seem to be asking a question about how $g will evaluate the operand that was passed to $f, but —judging by the name “$g”— you have in mind that $g is an operative. Since it’s an operative, you aren’t passing the parameter of $f to $g, you are passing the symbol “x” to $g. We can suppose that $g will capture its dynamic environment (which is the local environment of $f), and that $g will then evaluate its operand (the symbol “x”) in this dynamic environment. The result of this evaluation will then be the operand that was passed to $f. Perhaps there’s some reason for doing it this way instead of defining an applicative g that takes its argument already evaluated; but we don’t know what that would be.

    Furthermore, and somewhat independent of whether $g should really have been an applicative that didn’t capture the local environment of $f, since $g evidently means to evaluate the operand of $f, if for some reason it isn’t intuitively obvious from the purpose of $g that it should do this further evaluation in the lexical scope of the call to $g, then $g should be taking an explicit argument saying what environment to use for the further evaluation. This is why, for example, standard applicative “eval” requires an explicit environment argument; the environment argument to “eval” is *not* optional, it *must* be specified. If $g were taking an explicit environment argument, the programmer who writes $f would probably remember to correctly pass the environment locally called “env”, since that name is *right there* in the definition of $f that the programmer is writing.

  7. DAY

    Hi, John,

    Thanks for the answer. Indeed I realized earlier today that I made a mistake in my comment: that what passed to $g is actually the symbol `x’, rather than the operand to $f, as you pointed out. So really what I wanted to write is the following:

    ($define! $f ($vau (x) e (g x)))

    Since `g’ is an applicative, `x’ will first be evaluated to the operand to $f, which is passed into the underlying operative of `g’, say $g. Then in the body of $g, hygiene violation would only happen if its parameter bound to the passed-in operand to $f is used in a context that induces evaluating the parameter twice. Then I see your point that the definition of $g should take explicitly an environment argument.

    Thanks a lot for the explanation. From your explanation, I also see that we should define an operative when it is really necessary and in doing so we should be extremely careful.
