Buffered Communication Between Real-Time Software Processes

Suppose you have two processes: a server and a client. The server process reads some I/O from a hardware interface and passes the data on to a client process. These processes may or may not be running on separate processors. In particular, they do not have a common shared memory area.

[Figure: a server process and a client process communicating over a pipe]

In this situation the server and client have to communicate over some explicit pipe between them. This communication mechanism may be implemented in different ways depending on the system. The server part of this system can be run with code similar to the following pseudo-code:

while (1) {
    get_data_from_pins();
    send_data_to_client();
}

and the client part can run with the following code pattern:

while (1) {
    wait_for_then_get_data_from_server();
    process_data();
}

In this case, the server initiates the communication and the client waits for and responds to it. So the server is the master and the client is the slave. This is perhaps to be expected, since the whole process is driven by the arrival of data on the hardware interface. So far, this is simple. However, things get a bit more complicated once timing requirements are taken into account: depending on the needs of the system, the protocol between the two processes may have to be more elaborate.

Blocking behavior and buffering

Assuming there is only limited buffering within the communication pipe itself, the call to send_data_to_client() in the server process will be blocking – it will wait until the client is ready. This is fine if the client is ready in time and the data can be communicated before the next item of data needs to be read from the hardware.

However, if this isn’t the case and the client is too slow, then the next item of data from the hardware will be missed.

There are different methods to get around the blocking problem. If the hardware has some kind of flow control, it may be possible to hold off the interface and push the delay upstream in the data flow. However, sometimes this is not possible.

The rest of this article looks at situations when there is no flow control, where the pull of the client has variable timing and the push of the hardware has fixed timing. In this case, the common solution is to use a buffer for the data.

With a buffer, the server process reads data from the hardware interface and places the data in a FIFO. The client process asks the server to provide data from the other end of the FIFO. The buffer needs to be big enough to cope with the amount of data that can arrive during the longest processing time of the client.

[Figure: the server buffers data from the hardware interface in a FIFO before sending it to the client]

The question is how to design a communication protocol between the two processes such that the server can read from the hardware whenever it needs to and the client can get data whenever it is ready.

Polling

One solution to the problem is for the server to repeatedly poll the client for readiness between its interactions with the hardware. The code would look something like this:

while (1) {
    get_data_from_pins();
    add_data_to_fifo();
    while (!time_to_get_data()) {
       client_ready = poll_client();
       if (!fifo_empty() && client_ready) {
          get_data_from_fifo();
          send_data_to_client();
       }
    }
}

And the corresponding client code would be:

while (1) {
    signal_ready_to_server();
    get_data_from_server();
    process_data();
}

Depending on timing, the server may send several items of data to the client between hardware interactions, or none at all, in which case the buffer starts to fill up and is emptied later.

Event-based programming: Selects

Writing the communication as a loop that repeatedly polls between hardware reads is a slightly clumsy way to structure this kind of code. A better method is to use a programming style in which processes react directly to events that occur in the system.

The key to coding in this style is the use of selects: wait for one event out of a specified set to happen, then react when it occurs. The select construct in the XC programming language does this, as do similar constructs elsewhere (e.g. the select system call in Unix or the wait call in SystemC).

An XC style select statement has a similar form to a switch statement in C:

select {
    case event1:
       ...
       break;
    case event2:
       ...
       break;
    ....
}

The statement waits for one of event1, event2 etc. to occur and then executes the code in the relevant case body. Given this construct the server code can be rewritten in a non-polling style:

while (1) {
    select {
       case pins_ready():
          get_data_from_pins();
          add_data_to_fifo();
          break;
       case !fifo_empty() && client_ready():
          get_data_from_fifo();
          send_data_to_client();
          break;
    }
}

Making the client a slave again

The act of adding the buffer causes the client process to be the master of the communication. It signals the start of the transaction of data between the server and client. This is a problem if the client wants to react to other events in a select statement as well as the incoming data.

You can make the client a slave again by introducing an intermediate process that pulls from the server and pushes to the client:

[Figure: an intermediate process pulls data from the server and pushes it to the client]

In this case the pseudo-code for the intermediate process is:

while (1) {
    signal_ready_to_server();
    get_data_from_server();
    send_data_to_client();
}

Now the client process can react to the event of the intermediate process pushing data and the server process can react to the event of the intermediate process pulling.

Making it more efficient: Exploiting communication buffers

The introduction of the intermediate process is inefficient. There is a whole process just concerned with shoveling data to change the master/slave relationship of the other processes. This is not a problem if processes are cheap, but in many cases they are expensive or limited. Luckily, you can do without the intermediate process if there is a small amount of buffering in the pipe.

If there is enough buffering to store a byte in the pipe itself, the server can send a notification byte and then carry on processing. When the server first puts data in its buffer it sends a notification:

[Figure: the server sends a notification byte into the pipe]

The client can then react to the event of this notification arriving and know there is now data available. It can then signal to the server that it is ready for data:

[Figure: the client signals to the server that it is ready for data]

The server can respond to this (providing that it is not busy dealing with hardware) and send the data to the client:

[Figure: the server sends the buffered data to the client]

After this transaction is completed, the pipe is clear of the notification byte. So the server can send a new one if there is more data in its buffer.

[Figure: with the pipe clear again, the server can send a new notification]

If the buffer is empty, the pipe remains clear until the server receives more data. At any time there is only ever one notification byte in the pipe, so the pipe buffer will never overflow and block the server process.

In this case the code for the server looks like this:

int notified = 0; // this variable tracks whether a notification is
                  // sitting in the pipe
while (1) {
    select {
       case pins_ready():
          get_data_from_pins();
          add_data_to_fifo();
          if (!notified) {
             send_notification_to_client();
             notified = 1;
          }
          break;
       case !fifo_empty() && client_ready():
          get_data_from_fifo();
          send_data_to_client();
          if (fifo_empty()) {
             notified = 0;
          }
          else {
             send_notification_to_client();
             notified = 1;
          }
          break;
    }
}

An important thing to note about this code is that the send_notification_to_client call will not block, so the server always carries on. The call to send_data_to_client, on the other hand, will block until the client is ready. However, in this case the client will be ready, since it has just signalled its readiness to the server.

The client code for this case would be:

while (1) {
    select {
       case get_notification_from_server():
          signal_ready_to_server();
          get_data_from_server();
          process_data();
          break;

       ...
    }
}

This version of the communication protocol allows the client to be a slave again and keeps the buffer in the server, where it is needed, without an intermediate process.

Doing it in XC

On XMOS platforms using XC, the server and client processes will be XC threads and the communication mechanism will be XC channels.

The notification from the server needs to use the asynchronous outct primitive to send a control token without the normal XC synchronous handshaking on communication.

This control token should be an XS1_CT_END token to make sure that any inter-core switches between the threads are free after the token is delivered into the destination channel end buffer:

void send_notification_to_client(chanend c) {
    outct(c, XS1_CT_END);
}

The client can select on this notification in code similar to this:

select {
    ...
    case inct_byref(c, tmp): // receive notification
       c <: 0; // send ready signal
       c :> len; // receive data length
       for (int i = 0; i < len; i++) // receive data
          c :> data[i];
       break;
}

You can find open source XC examples of this type of communication in XMOS's public repositories.

Published: July 13, 2019 · Category: Reference Designs