# Message Passing, part 1 – Synchronous Rendezvous

What do we mean when we say “message-passing”? For object-oriented developers in the Smalltalk tradition, message-passing involves a dynamic method lookup, invocation of that method with the target object as an implicit parameter, and return of a result object. By contrast, message-passing in synchronous communication models (such as π-calculus) involves “rendezvous” between sender and receiver, and transfer of data from sender to receiver, after which both sender and receiver continue independently. No explicit data is passed from receiver back to the sender, but the rendezvous itself provides synchronization information to both the sender and receiver. Finally, asynchronous message-passing is the most primitive, involving non-deterministic reception of each message ordered only by causality. The message must be sent before it can (after an arbitrary delay) be received. The sender has no direct way of knowing when the message is received. As you can see, the phrase “message-passing” has a wide range of accepted meanings. Let’s see how they all map to patterns of asynchronous messages.

Asynchronous message passing is the most primitive mechanism in terms of information flow. Information flows from sender to receiver, but no information, not even synchronization information, flows in the reverse direction. The synchronous rendezvous model can be implemented by a pattern of asynchronous messages. The more complex object-oriented “message” model (for clarity herein referred to as “object-oriented method invocation”) can also be implemented with a pattern of asynchronous messages. Part one of this article explores how synchronous rendezvous can be expressed with actors. Part two will explore object-oriented method invocation.

## Implementing Synchronous Rendezvous

Synchronous rendezvous involves one-way transfer of data from sender to receiver. The sender and receiver must both be “ready” for the transfer to occur, and both proceed independently afterwards. Although no data is explicitly passed from receiver back to the sender, the rendezvous itself provides synchronization information to both the sender and receiver.
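The one-way data flow plus two-way synchronization described above can be sketched in Python with two threads. This is my own illustration, not code from the article: `slot` carries the data one way, while a pair of events provides the synchronization signal in each direction.

```python
import threading

# Rendezvous between one sender and one receiver: data moves one way,
# but both sides learn that the transfer happened.
slot = {}
data_ready = threading.Event()   # sender -> receiver: data is available
data_taken = threading.Event()   # receiver -> sender: data was received

trace = []

def sender():
    slot["msg"] = 42
    data_ready.set()
    data_taken.wait()            # blocked until the receiver takes the data
    trace.append("sender resumed")

def receiver():
    data_ready.wait()            # blocked until the sender provides data
    trace.append(("received", slot["msg"]))
    data_taken.set()

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(trace)   # [('received', 42), 'sender resumed']
```

Note that the receiver is guaranteed to record the data before the sender resumes, which is exactly the synchronization information the rendezvous provides.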

Often synchronous communication is described in terms of “channels” connecting senders and receivers. Sometimes these channels have a buffering capacity which allows some number of messages to be stored for later delivery. If the buffer is full, senders will be blocked waiting for capacity. If the buffer is empty, receivers will be blocked waiting for messages. A buffer size of zero (no buffering) will block a sender/receiver until a corresponding receiver/sender is available. Let’s begin exploring the actor behaviors used to implement an unbuffered synchronous communication channel.
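The buffered-channel semantics just described can be observed with Python’s standard `queue.Queue` (a sketch of the buffered case only; `queue.Queue` has no zero-capacity mode, so the unbuffered rendezvous is precisely what the actor behaviors that follow implement). The variable names are mine.

```python
import queue

# A channel with buffering capacity 2: writers block once the buffer is
# full, readers block once it is empty.  put_nowait/get_nowait fail fast
# instead of blocking, which lets us observe both conditions directly.
chan = queue.Queue(maxsize=2)

chan.put_nowait("a")
chan.put_nowait("b")

try:
    chan.put_nowait("c")       # buffer full: a blocking put() would wait here
    writer_blocked = False
except queue.Full:
    writer_blocked = True

first = chan.get_nowait()      # "a" (FIFO order)
second = chan.get_nowait()     # "b"

try:
    chan.get_nowait()          # buffer empty: a blocking get() would wait here
    reader_blocked = False
except queue.Empty:
    reader_blocked = True

print(writer_blocked, first, second, reader_blocked)  # True a b True
```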

    LET empty_channel_beh = \(cust, req).[
        CASE req OF
        (#read) : [
            BECOME read_waiting_beh(cust, NIL)
        ]
        (#write, msg) : [
            BECOME write_waiting_beh((cust, msg), NIL)
        ]
        END
    ]

When an unbuffered channel is empty, there are no readers or writers waiting to rendezvous. If a “read” request arrives, the customer of the read is used to initialize a list of waiting readers, and the channel moves to “read waiting” state. If a “write” request arrives, the customer and message are used to initialize a list of waiting writers, and the channel moves to “write waiting” state. A waiting read/write customer is effectively “blocked” until it receives a message in reply to its request.

    LET read_waiting_beh(readers) = \(cust, req).[
        CASE req OF
        (#read) : [
            BECOME read_waiting_beh(cust, readers)
        ]
        (#write, msg) : [
            LET (first, rest) = $readers
            SEND msg TO first
            SEND msg TO cust
            IF $rest = NIL [
                BECOME empty_channel_beh
            ] ELSE [
                BECOME read_waiting_beh(rest)
            ]
        ]
        END
    ]

When there are readers waiting for a writer, the readers are “blocked”. If another “read” request arrives, the new customer is added to the list of blocked readers. If a “write” request arrives, we deliver the message to the customer of one of the waiting “read” requests. We also deliver the same message to the customer of the “write” request, signaling the writer that the message has been received and allowing it to continue processing. If there are still other readers waiting, we remain in “read waiting” state. Otherwise we return to “empty” state.

    LET write_waiting_beh(writers) = \(cust, req).[
        CASE req OF
        (#write, msg) : [
            BECOME write_waiting_beh((cust, msg), writers)
        ]
        (#read) : [
            LET ((first, msg), rest) = $writers
            SEND msg TO cust
            SEND msg TO first
            IF $rest = NIL [
                BECOME empty_channel_beh
            ] ELSE [
                BECOME write_waiting_beh(rest)
            ]
        ]
        END
    ]

When there are writers waiting for a reader, the writers are “blocked”. If another “write” request arrives, the new customer and message are added to the list of blocked writers. If a “read” request arrives, we deliver a message from one of the waiting writers to the customer of the “read” request. We also deliver the same message to the customer of that “write” request. This represents the synchronization signal which tells the writer that the message has been received, and allows the writer to continue processing. If there are still other writers waiting, we remain in “write waiting” state. Otherwise we return to “empty” state.
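As a cross-check on the protocol, here is a minimal Python transliteration of the three channel behaviors. This is my own sketch, not the article’s code: an actor is modeled as a mailbox plus a replaceable behavior function, assignment to `self.beh` plays the role of `BECOME`, and a naive dispatcher delivers one message at a time. All names (`Actor`, `send`, `run`, and so on) are assumptions of this sketch.

```python
from collections import deque

class Actor:
    """Toy actor: a mailbox plus a replaceable behavior function."""
    def __init__(self, beh):
        self.beh = beh
        self.mailbox = deque()

def send(actor, msg):
    actor.mailbox.append(msg)

def run(actors):
    """Drain all mailboxes, delivering one message at a time."""
    progress = True
    while progress:
        progress = False
        for a in actors:
            if a.mailbox:
                a.beh(a, a.mailbox.popleft())
                progress = True

# --- the three channel behaviors, mirroring the Humus code ---

def empty_channel_beh(self, msg):
    cust, req = msg[0], msg[1:]
    if req[0] == "read":
        self.beh = make_read_waiting_beh([cust])
    elif req[0] == "write":
        self.beh = make_write_waiting_beh([(cust, req[1])])

def make_read_waiting_beh(readers):
    def beh(self, msg):
        cust, req = msg[0], msg[1:]
        if req[0] == "read":
            self.beh = make_read_waiting_beh(readers + [cust])
        elif req[0] == "write":
            first, rest = readers[0], readers[1:]
            send(first, req[1])   # deliver the data to a waiting reader
            send(cust, req[1])    # unblock the writer (sync signal)
            self.beh = empty_channel_beh if not rest else make_read_waiting_beh(rest)
    return beh

def make_write_waiting_beh(writers):
    def beh(self, msg):
        cust, req = msg[0], msg[1:]
        if req[0] == "write":
            self.beh = make_write_waiting_beh(writers + [(cust, req[1])])
        elif req[0] == "read":
            (first, data), rest = writers[0], writers[1:]
            send(cust, data)      # deliver the data to the reader
            send(first, data)     # unblock the writer (sync signal)
            self.beh = empty_channel_beh if not rest else make_write_waiting_beh(rest)
    return beh

# --- demo: two writers rendezvous with two readers ---
log = []
def logger_beh(tag):
    return lambda self, msg: log.append((tag, msg))

chan = Actor(empty_channel_beh)
w1, w2 = Actor(logger_beh("w1")), Actor(logger_beh("w2"))
r1, r2 = Actor(logger_beh("r1")), Actor(logger_beh("r2"))

send(chan, (w1, "write", "hello"))
send(chan, (w2, "write", "world"))
send(chan, (r1, "read"))
send(chan, (r2, "read"))
run([chan, w1, w2, r1, r2])
print(sorted(log))  # each reader got data; each writer got its sync signal
```

Using plain Python lists for the waiting sets keeps the sketch short; an O(1) queue (as suggested in the comment below) would be a more efficient substitute.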

Figure 1 illustrates the flow of messages among three writers and three readers on a synchronous channel.

Figure 1 - Synchronous Rendezvous Message Flow.

It may appear that we are in fact buffering an unbounded set of messages in this supposedly “unbuffered” channel. But what we are really buffering are the blocked reader/writer “threads” involved in the communication. Each reader/writer continues processing only when it receives a reply message in response to its request message. Thus withholding the reply effectively “blocks” the reader/writer. We’ve shown these sets of waiting readers/writers explicitly as part of the state of the channel. In many systems implementing synchronous communication, these sets are managed implicitly by the kernel in the form of blocked threads waiting to be awakened by a read/write on a particular channel. By expressing this behavior with actors we can analyze the flow of events involved in the synchronous rendezvous messaging protocol.

## Summary

The phrase “message-passing” has a wide range of accepted meanings. Asynchronous messaging is the most primitive in terms of information content. In part one, we have shown how the basic mechanism of synchronous rendezvous can be implemented on the asynchronous messaging foundation provided by the Actor model of computation. In part two, we will implement object-oriented method invocation. The additional information flow implicit in these models is made explicit through an actor-based implementation. This allows us to better understand (and explore modifications of) each message-passing model.

### One Comment

1. Dale Schumacher
Posted 2011-11-07 at 8:21 am

See http://www.dalnefre.com/wp/2011/11/high-availability-for-mutable-shared-state/#queue for an O(1) Banker’s Queue implementation that could be used to enhance “fairness” for the waiting readers/writers.