A significant challenge in developing concurrent systems is the problem of composability. We create a solution that works properly in isolation, but composing it with other solutions leads to interference. The actor model ensures that individual actors may be composed without changing their behavior. Interference is prevented by definition, keeping the system consistent. In order to create larger actor-based systems, we must be able to treat groups of collaborating actors as if they were a single actor. We need the same guarantee of consistency that exists for individual actors. This consistency is provided by serializers.
We use the term “service” to describe an actor with request-response behavior. Services can be implemented by a group of collaborating actors. A single actor will always process messages one-at-a-time. For a group of actors to behave like a single actor, requests (from outside the group) to the group entry-point must be handled one-at-a-time. The following behaviors implement a general-purpose serializer for actor-based services.
An actor implementing a service expects a message consisting of a customer paired with a request. Eventually, a response is sent to the customer. A serializer provides a wrapper for a service that ensures the service will receive requests one-at-a-time. If the service is busy handling a request, the serializer buffers new requests until the service responds to the current request (see Figure 1). Customers of the service do not issue requests directly to the service. Instead they issue requests to the serializer acting as gate-keeper for the service. From the perspective of a single request-response cycle, there is no difference between the serializer and the actual service.
LET serializer_beh(svc) = \(cust, req).[
    CREATE ret WITH return_beh(SELF)
    BECOME serializer_busy_beh(svc, cust, ret, NIL)
    SEND (ret, req) TO svc
]
An “idle” serializer receives a message consisting of a customer cust and a request req. It creates a return-proxy ret, transitions to “busy” state, and sends the request on to the actual service svc. The customer for the request to the actual service is the return-proxy ret rather than the original customer.
LET return_beh(caller) = \res.[
    SEND (SELF, res) TO caller
]
The return proxy eventually receives a response res from the actual service. When it does, it tags the response with its own identity, and sends the pair back to caller (the serializer, in “busy” state). The identity of the return proxy actor is used as an authentication token. Only the serializer that created the return proxy (and the proxy itself) knows the identity of the proxy. Therefore, no other actor could construct a message containing the identity of the proxy and send it to the serializer. This allows the serializer to distinguish between a return response and a new request.
LET serializer_busy_beh(svc, cust, ret, pending) = \msg.[
    CASE msg OF
    ($ret, res) : [
        SEND res TO cust
        IF $pending = ((cust', req'), rest) [
            CREATE ret' WITH return_beh(SELF)
            BECOME serializer_busy_beh(svc, cust', ret', rest)
            SEND (ret', req') TO svc
        ] ELSE [
            BECOME serializer_beh(svc)
        ]
    ]
    (cust', req') : [
        LET pending' = $((cust', req'), pending)
        BECOME serializer_busy_beh(svc, cust, ret, pending')
    ]
    END
]
A “busy” serializer retains much more state than an “idle” serializer. In addition to the actual service svc, it captures the original customer cust for the current request, the identity of the return proxy ret, and a (possibly empty) list of pending requests that arrive during processing of the current request. A “busy” serializer receives two kinds of messages, return responses and new requests.
A return response is a pair consisting of the identity of the return proxy ret, and the response value res. This is our indication that the service has completed processing the current request and produced a response. We immediately forward the response to the original customer cust. If the pending list is empty we simply transition back to “idle” state. Otherwise, we extract a new customer cust' and request req' from the pending list. A new return proxy ret' is created, and we remain in “busy” state with the rest of the buffered messages. Finally, the new request is sent on to the actual service svc. This is the same as the behavior of the serializer in “idle” state, except that we have to keep track of any additional buffered messages.
Any message that is not a return response must be a new concurrent outside request. Since the service is still busy processing the current request, this new request must be buffered for future delivery. We accomplish this buffering by adding the entire message (customer and request) to the pending list.
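The whole idle/busy protocol can be simulated outside of Humus. Below is a minimal Python sketch (all names are ours, and the deque-based run loop is only a stand-in for a real actor runtime): behaviors are closures, BECOME is an assignment to the actor's beh field, and the return-proxy's identity serves as the authentication tag.

```python
from collections import deque

class Actor:
    """Minimal actor: a behavior function plus a shared dispatch queue."""
    def __init__(self, system, beh):
        self.system = system
        self.beh = beh
    def send(self, msg):
        self.system.append((self, msg))  # asynchronous delivery via the run loop

def run(system):
    while system:
        actor, msg = system.popleft()
        actor.beh(actor, msg)

def serializer_beh(svc):          # "idle" state
    def beh(self, msg):
        cust, req = msg
        ret = Actor(self.system, return_beh(self))          # fresh return-proxy
        self.beh = serializer_busy_beh(svc, cust, ret, [])  # BECOME "busy"
        svc.send((ret, req))
    return beh

def return_beh(caller):
    def beh(self, res):
        caller.send((self, res))  # tag the response with the proxy's own identity
    return beh

def serializer_busy_beh(svc, cust, ret, pending):           # "busy" state
    def beh(self, msg):
        tag, payload = msg
        if tag is ret:                       # authenticated return response
            cust.send(payload)
            if pending:
                (cust2, req2), rest = pending[0], pending[1:]
                ret2 = Actor(self.system, return_beh(self))
                self.beh = serializer_busy_beh(svc, cust2, ret2, rest)
                svc.send((ret2, req2))
            else:
                self.beh = serializer_beh(svc)
        else:                                # a new outside request: buffer it
            # prepend, mirroring the Humus code's cons onto pending
            self.beh = serializer_busy_beh(svc, cust, ret, [msg] + pending)
    return beh

# Example: a doubling service and two concurrent customers.
system = deque()
log = []
svc = Actor(system, lambda self, m: m[0].send(m[1] * 2))  # (cust, req) -> reply req*2
ser = Actor(system, serializer_beh(svc))
c1 = Actor(system, lambda self, m: log.append(("c1", m)))
c2 = Actor(system, lambda self, m: log.append(("c2", m)))
ser.send((c1, 3))
ser.send((c2, 4))
run(system)   # log ends up [("c1", 6), ("c2", 8)]
```

Note that c2's request arrives while the service is busy with c1's; it is buffered by the serializer and only reaches the service after the first response has been forwarded.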
This queue implementation uses a serializer to guard internal state transitions among a group of collaborating actors. Customers of the queue issue requests to the serializer as if they were sending to the queue directly. Responses are forwarded to the original customer through a return-proxy created by the serializer.
We start with a behavior that could be considered a “constructor”. It contains all the information needed to initialize a new configuration of actors. It does not process the first message directly. Instead, it installs its real initial behavior and then re-sends the message to itself. This is a form of lazy initialization. It occurs only once, if and when the first message is delivered to this actor.
LET serialized_queue_beh = \msg.[
    BECOME serializer_beh(NEW empty_queue_beh)
    SEND msg TO SELF
]
In this case, we initialize the actor as an empty queue wrapped in a serializer. The creator of an actor with this behavior need not be aware that a serializer is involved. In fact, we could choose to define empty_queue_beh within the local scope of this behavior, thus preventing direct use of the implementation. The (initially empty) queue service is created anonymously, so only the serializer has a reference to the service, thus only the serializer can send messages to the service.
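The lazy-initialization trick is not specific to actors. A hypothetical Python analogue (synchronous, with names of our choosing) makes the shape explicit: the first delivery swaps in the real handler and re-dispatches the message, so construction cost is paid exactly once.

```python
class LazyService:
    """Builds its real handler on first use, then delegates to it."""
    def __init__(self, make_handler):
        self._make = make_handler
        self._handle = self._first     # initial behavior: initialize, then delegate

    def _first(self, msg):
        self._handle = self._make()    # "BECOME" the initialized behavior...
        return self._handle(msg)       # ...and re-deliver the first message

    def send(self, msg):
        return self._handle(msg)

inits = []
def make_handler():
    inits.append("init")               # record that initialization ran
    return lambda req: req * 2         # the "real" service

svc = LazyService(make_handler)
```

After construction, inits is still empty; make_handler runs only when the first message is sent, and never again.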
The protocol of a queue includes two requests:
- #take removes an item from the head of the queue and sends it to the customer.
- #put appends an item to the tail of the queue and sends the item to the customer as a synchronization signal.

Figure 2 illustrates the interaction between the lazy-initialized serializer and the underlying queue service implementation in handling #take requests from concurrent customers.
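Before looking at the actor implementation, the externally observable contract can be modeled in a few lines of Python (a hypothetical sketch of the protocol only, with None standing in for NIL):

```python
class QueueModel:
    """Models only the external request/response contract of the queue."""
    def __init__(self):
        self.items = []

    def take(self):
        # #take: remove from the head; NIL (None) signals an empty queue
        return self.items.pop(0) if self.items else None

    def put(self, item):
        # #put: append at the tail; echo the item as a synchronization signal
        self.items.append(item)
        return item

q = QueueModel()
empty_take = q.take()          # None: taking from an empty queue
ack = q.put("a")               # "a": the echoed synchronization signal
q.put("b")
first, second = q.take(), q.take()   # "a", "b": FIFO order
```

The actor implementation that follows realizes this same contract with a configuration of collaborating actors rather than a mutable list.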
LET empty_queue_beh = \(cust, req).[
    CASE req OF
    #take : [ SEND NIL TO cust ]
    (#put, item) : [
        CREATE e WITH queue_entry_beh(item, NIL)
        BECOME queue_beh(e, e)
        SEND item TO cust
    ]
    END
]
When the queue state is “empty”, it responds to #take requests by returning NIL. If an “empty” queue receives a #put request, it creates a new queue entry e to hold the item, transitions to “non-empty” state, and sends the item back to the customer cust.
A “non-empty” queue retains references to the first and last entries in the queue. Of course, initially these are the same entry. But as the queue grows, this allows constant-time access to both the head and tail of the queue.
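The same representation choice can be sketched outside the actor model: a singly linked list that keeps both head and tail references gives constant-time access at both ends. The following is a hypothetical Python sketch of that data structure, not a translation of the actor version.

```python
class Entry:
    """One queue entry: an item plus a link to the next entry (or None)."""
    def __init__(self, item):
        self.item = item
        self.next = None

class LinkedQueue:
    """Keeps first and last pointers, like queue_beh(first, last)."""
    def __init__(self):
        self.first = None
        self.last = None

    def put(self, item):                 # O(1): link a new entry after last
        e = Entry(item)
        if self.last is None:
            self.first = self.last = e   # "empty" -> "non-empty"
        else:
            self.last.next = e
            self.last = e
        return item

    def take(self):                      # O(1): unlink the first entry
        if self.first is None:
            return None                  # NIL on an empty queue
        e = self.first
        self.first = e.next
        if self.first is None:
            self.last = None             # back to "empty"
        return e.item

q = LinkedQueue()
for n in (1, 2, 3):
    q.put(n)
```

In the actor version below, first and last are not pointers but actor references, and each link traversal becomes a message exchange.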
LET queue_beh(first, last) = \(cust, req).[
    CASE req OF
    #take : [
        BECOME queue_first_beh(cust, first, last)
        SEND (SELF, #remove) TO first
    ]
    (#put, item) : [
        BECOME queue_last_beh(cust, first, last)
        SEND (SELF, #append, item) TO last
    ]
    END
]
Removing or appending an item on a “non-empty” queue involves coordination between the queue and the queue entries. To handle a #take request, the queue transitions to “waiting-for-first” state, and sends a #remove request to the first queue entry. To handle a #put request, the queue transitions to “waiting-for-last” state, and sends an #append request to the last queue entry. While in these intermediate states we are assured that no outside requests will arrive because we have not sent a response to the serializer (our customer).
Each queue entry stores an item, and a reference to the next entry (or NIL if this entry is the last).
LET queue_entry_beh(item, next) = \(cust, req).[
    CASE req OF
    #remove : [ SEND (SELF, item, next) TO cust ]
    (#append, item') : [
        CREATE next' WITH queue_entry_beh(item', next)
        BECOME queue_entry_beh(item, next')
        SEND (SELF, item', next') TO cust
    ]
    END
]
When a queue entry receives a #remove request, it sends its own identity, the item it holds, and its next link to the customer cust. The customer is always the queue itself, since no other actor sends messages to a queue entry. When an #append request is received, a new queue entry next' is created to hold the new item'. The next links are adjusted to attach the new entry after the current one. Finally, this entry's identity, the new item', and the new entry next' are sent to the customer (the queue itself).
While a #take request is in progress, the queue is waiting for the first queue entry to respond to a #remove request. In this state the queue retains the original customer cust, and references to the current first and last queue entries.
LET queue_first_beh(cust, first, last) = \msg.[
    IF $msg = ($first, item, next) [
        IF $next = NIL [
            BECOME empty_queue_beh
        ] ELSE [
            BECOME queue_beh(next, last)
        ]
        SEND item TO cust
    ]
]
When the first queue entry completes processing the #remove request, the queue receives a message that matches the first identity, provides the item removed, and the next entry to become the new first. If there is no next entry (it is NIL), then the queue transitions to “empty” state. Otherwise the queue updates first with the value of next, returning to “non-empty” state. In either case, the item is sent back to the customer cust. Since this customer is the serializer (or more precisely, the return-proxy), this indicates completion of the serialized transaction and allows delivery of new requests from actors outside this collaborative group.
While a #put request is in progress, the queue is waiting for the last queue entry to respond to an #append request. In this state the queue retains the original customer cust, and references to the current first and last queue entries.
LET queue_last_beh(cust, first, last) = \msg.[
    IF $msg = ($last, item, next) [
        BECOME queue_beh(first, next)
        SEND item TO cust
    ]
]
When the last queue entry completes processing the #append request, the queue receives a message that matches the last identity, provides the item appended, and the next entry to become the new last. The queue updates last with the value of next, returning to “non-empty” state, and sends the item back to the customer cust. Since this customer is the serializer (or more precisely, the return-proxy), this indicates completion of the serialized transaction and allows delivery of new requests from actors outside this collaborative group.
Figure 3 summarizes the state transitions within the queue service. Requests from outside customers (via the serializer) can arrive only when the queue is in “empty” or “non-empty” states. The “waiting-for-first” and “waiting-for-last” states are internal states used to coordinate the actor representing the queue with the actors representing the queue entries.
Groups of collaborating actors can be treated as a single actor with the help of a serializer. We describe a generic transparent serializer for request-response services. A queue implementation is used to illustrate the usage of this serializer. Correct concurrent behavior of the queue requires the coordination of updates to multiple collaborating actors. The serializer provides protection from overlapping requests, allowing the queue to operate safely.
- C. Hewitt and R. Atkinson. Specification and Proof Techniques for Serializers. IEEE Transactions on Software Engineering, Vol. SE-5, No. 1, January 1979.