
Gluing reactivity

Let's take a look at reactive programming concepts from a different point of view: how do we glue reactive bits together? This shines a light on the subject from a different angle, giving you more to think about before starting your next project.

Simple callbacks

The simplest way to glue together or bind asynchronous bits of a program is using callbacks, something like this:

def notify(orderId: String, notification: String) = {
  db.findOrder(orderId) { order =>
    db.findAccount(order.accountId) { account =>
      email.send(account.email, notification)
    }
  }
}

Or, a little terser, in JavaScript:

function notify(orderId, notification) {
  db.findOrder(orderId, function(order) {
    db.findAccount(order.accountId, function(account) {
      email.send(account.email, notification);
    });
  });
}

Let's take an x-ray of what actually happens here. The 3 steps of this flow are encapsulated in the 3 callbacks. Each callback is a closure, carrying with it the state of the flow: values like notification and objects like order and account.

Each step is an asynchronous call to the db, with a callback invoked when the db returns a result - this happens asynchronously, on the same or another thread, depending on the implementation of the db. The point, however, is that these steps are executed in sequence.
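To make the callback version concrete, here is a self-contained sketch with stubbed-out db and email services - the Order/Account shapes and the service objects are assumptions for illustration only:

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical shapes for the data used in the flow
case class Order(accountId: String)
case class Account(email: String)

// Stub services: each call returns immediately and invokes the
// callback later, on a thread-pool thread
object db {
  def findOrder(id: String)(cb: Order => Unit): Unit = {
    Future { cb(Order(accountId = "acc-1")) }; ()
  }
  def findAccount(id: String)(cb: Account => Unit): Unit = {
    Future { cb(Account(email = "who@example.com")) }; ()
  }
}

object email {
  def send(to: String, notification: String): Unit =
    println(s"sending '$notification' to $to")
}

def notify(orderId: String, notification: String): Unit =
  db.findOrder(orderId) { order =>
    db.findAccount(order.accountId) { account =>
      email.send(account.email, notification)
    }
  }
```

Note that nothing here blocks: `notify` returns right away, and the three steps chain through the pool threads.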


Streams

A more recent evolution of producer-consumer protocols, streams, allows a different way to glue these steps:

notificationStream.flatMap { case (orderId, notification) =>
  db.findOrder(orderId).map(order => (order, notification))
}.flatMap { case (order, notification) =>
  db.findAccount(order.accountId).map(account => (account, notification))
}.map { case (account, notification) =>
  email.send(account.email, notification)
}

The stream-processing middleware will take care of executing these steps and can sometimes optimize processing with advanced features like backpressure and operator fusion. However, the added advantages of middleware processing do not make these steps any more decoupled.

Note the complication arising from having to carry the notification around. There are probably better options, perhaps splitting the first stream in two, doing the transformations on the first and zipping them back together at the end... but that would introduce too many stream-processing artifacts here.

Coupling and decoupling

While callbacks are a simple mechanism for orchestrating asynchronous behaviour, they introduce coupling: the caller needs to stay up. The state of the continuation lives in the memory of a single node only. It cannot generally be backed up or load-balanced across a network mid-flow: if this node goes down, the entire flow is gone and we rely on the initial caller to deal with it.

Some languages, like Scala, allow serializing continuations, and this was used in projects like Swarm to move the computation closer to the data - but in practice, we'd want an explicit decoupling mechanism here.

So... how do we glue these 3 steps in a way that allows more distribution and even persistence?

Actors and messages

While the usual decoupling mechanism at the system level is messaging middleware and processes, a very effective construct for asynchronous decoupling within a program is the actor: an actor is an object with state and a mailbox full of messages, which are processed one at a time.

While those of you experienced in concurrent programming may think of it as nothing but a more complicated monitor, it has the added advantage of decoupling.

Here's one way to implement the sequence with actors, using akka:

class MyActor(orderId: String, notification: String) extends Actor {
  def receive = {
    case MsgNotify => db ! MsgFindOrder(orderId)
    case MsgOrderFound(order) => db ! MsgFindAccount(order.accountId)
    case MsgAccountFound(account) => email ! MsgSend(account.email, notification)
  }
}

def notify(orderId: String, notification: String) = {
  system.actorOf(Props(new MyActor(orderId, notification))) ! MsgNotify
}

A little more verbose, but notice that in this implementation, the actor maintains the state of this computation (to send a new notification, we would create and start a new actor).

We had to break up the flow into explicit messages and assume that the db actor will respond with Found messages. The state of the entire flow is now encapsulated in the actor object and in the messages flying around the system.
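Filling in what the actor example assumes - the message protocol and stub db/email actors - a fuller sketch might look like this (all names beyond those in the example above are made up for illustration):

```scala
import akka.actor.Actor

// Hypothetical shapes for the data in the flow
case class Order(accountId: String)
case class Account(email: String)

// The explicit message protocol the flow was broken up into
case object MsgNotify
case class MsgFindOrder(orderId: String)
case class MsgOrderFound(order: Order)
case class MsgFindAccount(accountId: String)
case class MsgAccountFound(account: Account)
case class MsgSend(to: String, notification: String)

// Stub db actor: replies to whoever sent the request
class DbActor extends Actor {
  def receive = {
    case MsgFindOrder(_)   => sender() ! MsgOrderFound(Order("acc-1"))
    case MsgFindAccount(_) => sender() ! MsgAccountFound(Account("who@example.com"))
  }
}

// Stub email actor
class EmailActor extends Actor {
  def receive = {
    case MsgSend(to, n) => println(s"sending '$n' to $to")
  }
}
```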

However, there are some benefits over the simple callback model:

  • if the node crashes, the state of the actor can be backed up and then recovered on restart
  • also, if the node crashes while processing a message, the system will retry the message on restart (assuming we persisted them)
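As a hedged sketch of the persistence idea, using the classic Akka Persistence API and the message types from the actor example (the event shape and persistenceId scheme are assumptions): the actor journals an event before acting on each step, and replays the journal on restart to rebuild its state.

```scala
import akka.actor.{ActorRef, Props}
import akka.persistence.PersistentActor

// Event recorded in the journal for each completed step
case class StepDone(step: String)

class PersistentNotifier(orderId: String, notification: String)
    extends PersistentActor {

  override def persistenceId = s"notify-$orderId"

  // stand-in for the real db actor ref
  val db: ActorRef = context.actorOf(Props.empty)

  var lastStep = "start"

  def receiveCommand = {
    case MsgNotify =>
      // journal the event first, then act on it
      persist(StepDone("findOrder")) { ev =>
        lastStep = ev.step
        db ! MsgFindOrder(orderId)
      }
    // ... MsgOrderFound and MsgAccountFound handled the same way
  }

  def receiveRecover = {
    case StepDone(step) => lastStep = step // rebuild state on restart
  }
}
```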

Also, the db actor can now be distributed: the two actors are decoupled. Akka can easily be configured to run the db actor on a separate node and it will automatically route the messages to and from there.

It can also be fairly easy to configure more processing power for db as well as replication or load balancing.
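For example (the system and host names here are made up), deploying the db actor to another node can be driven purely by Akka's deployment configuration, with no code changes:

```
akka.actor.deployment {
  /db {
    remote = "akka.tcp://NotifySystem@db-host:2552"
  }
}
```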

Now we're cooking!

Of course, there is some extra overhead, but the glue usually has an insignificant cost compared to the actual steps we're trying to glue together.


Workflow engines

Workflow engines are typically used to glue asynchronous activities together at the macro level, coordinating and orchestrating services.

Their benefits are quite similar to those of actors, plus:

  • graphical process designer
  • embedded error handling
  • etc

By: Razie | 2017-04-14 | Tags: post , reactive , akka , streams |    

See more in: Cool Scala

This content is Copyrighted, all rights reserved.
