The Synchronous Blog

A blog about reactive programming languages.


Interrupt Service Routines in Céu


An Interrupt Service Routine (ISR) executes in reaction to an asynchronous hardware request, interrupting the ongoing computation in the CPU.
As an example, in an Arduino, whenever the USART subsystem receives a byte from the serial line, the CPU execution is redirected to the “USART_RX interrupt vector”, which is a predefined memory address containing the ISR to handle the byte received.
Only after the ISR returns does the interrupted computation resume.

ISRs are often associated with high-priority functionality that cannot wait long.
Continuing the USART example, if the execution of the ISR is delayed for too long, some received bytes can be lost.

Likewise, the execution of an ISR should never take long, because other interrupts will not trigger in the meantime (although it is possible to nest ISRs).
For this reason, a typical USART ISR simply stores received bytes in a buffer so that the program can handle them afterwards.

ISRs in Céu:

Céu has primitive support for ISRs, which are declared similarly to functions.
However, instead of a name identifier, an ISR declaration requires a number that refers to the index in the interrupt vector for the specific platform.

When an interrupt occurs, not only does the ISR execute, but Céu also enqueues the predefined event OS_INTERRUPT, passing the ISR index.
This mechanism allows the time-critical operation associated with the interrupt to be handled in the ISR, while encouraging non-critical operations to be postponed and respect the event queue, which might already hold events that occurred before the interrupt.

The code snippets that follow are part of a USART driver for the Arduino.
The driver emits a READ output event to signal a received byte to other applications (i.e., those awaiting READ).
The ISR just holds incoming bytes in a queue, while the main body is responsible for signaling each byte to all applications (at a lower priority).

/* variables to manage the buffer */

var byte[SZ] rxs;                   // buffer to hold received bytes
var u8 rx_get;                      // position to get the oldest byte
var u8 rx_put;                      // position to put the newest byte
atomic do
   rx_get = 0;                      // initialize get/put
   rx_put = 0;                      // the `atomic´ block disables interrupts
end

/* ISR for receiving byte (index "20" in the manual) */

function isr[20] do
   var u8 put = (rx_put + 1) % SZ;  // next position
   var byte c = _UDR0;              // receive the byte
   if put != rx_get then            // check for buffer space
      rxs[rx_put] = c;              // save the received byte
      rx_put = put;                 // update to the next position
   end                              // (otherwise the byte is dropped)
end

/* DRIVER body: receive bytes in a loop */

output byte READ;                    // the driver outputs received bytes to applications

loop do
   var int idx = await OS_INTERRUPT
                 until idx==20;      // USART0_RX_vect

   var byte c;                       // hold the received byte
   atomic do                         // protect the buffer from new interrupts
      c = rxs[rx_get];               // get the next byte
      rx_get = (rx_get + 1) % SZ;    // update to the next position
   end
   emit READ => c;                   // signal other applications
end


Note how the real-time/high-priority code to store received bytes in the buffer runs in the ISR, while the code that processes the buffer and signals other applications runs in the body of the driver after every occurrence of OS_INTERRUPT.

Given that ISRs share data with and abruptly interrupt the normal execution body, some sort of synchronization between them is necessary.
As a matter of fact, Céu tracks all variables that ISRs access and enforces all other accesses (outside them, in the normal execution body) to be protected with atomic blocks.


Céu provides primitive support for handling interrupt requests:

  • An ISR is declared similarly to a function, but specifies the interrupt vector index associated with it.
  • An ISR should only execute hard real-time operations, leaving lower priority operations to be handled in reaction to the associated OS_INTERRUPT event.
  • The static analysis enforces the use of atomic blocks for memory shared between ISRs and the normal execution body.


Written by francisco

April 13, 2014 at 11:42 am

“Céu: Embedded, Safe, and Reactive Programming”


We have published a technical report entitled “Céu: Embedded, Safe, and Reactive Programming”.

Enjoy the read!


Céu is a programming language that unifies the features found in dataflow and imperative synchronous reactive languages, offering a high-level and safe alternative to event-driven and multithreaded systems for embedded systems.

Céu supports concurrent lines of execution that run in time steps and are allowed to share variables. However, the synchronous and static nature of Céu enables a compile time analysis that can enforce deterministic and memory-safe programs.

Céu also introduces first-class support for “wall-clock” time (i.e. time from the real world), and offers seamless integration with C and simulation of programs in the language itself.

The Céu compiler generates single-threaded code comparable to handcrafted C programs in terms of size and portability.

Table of Contents:

  1. Introduction
  2. The Language Céu
    1. Parallel compositions
    2. Internal events & Dataflow support
    3. Wall-clock time
    4. Integration with C
    5. Bounded execution
    6. Determinism
    7. Asynchronous execution
    8. Simulation in Céu
    9. GALS execution
  3. Demo applications
    1. WSN ring
    2. Arduino ship game
    3. SDL game simulation
  4. Implementation of Céu
    1. Temporal analysis
    2. Memory layout
    3. Gate allocation
    4. Code generation
    5. Reactive execution
    6. Evaluation
  5. Related work
    1. Synchronous model
    2. Asynchronous model
  6. Conclusion

The case for synchronous concurrency


The point of this post is to show that it is wrong to rely on timing in preemptive multithreading for activities that require synchronized behavior. I compare the same program specification in three different implementations.

The example program is supposed to blink two leds with different frequencies (400ms and 1000ms) on an Arduino board. They must blink together every 2 seconds.

The first two implementations use different RTOSes with preemptive multithreading. The third implementation uses the synchronous language Céu.

(UPDATE) The complete source files can be found here.


The first implementation uses the ChibiOS RTOS. I omitted all code related to creating and starting the threads (which run with the same priority).

Here are the code and video for the ChibiOS implementation:

static msg_t Thread1(void *arg) {
    while (TRUE) {
        digitalWrite(11, HIGH);
        chThdSleepMilliseconds(400);
        digitalWrite(11, LOW);
        chThdSleepMilliseconds(400);
    }
}
static msg_t Thread2(void *arg) {
    while (TRUE) {
        digitalWrite(12, HIGH);
        chThdSleepMilliseconds(1000);
        digitalWrite(12, LOW);
        chThdSleepMilliseconds(1000);
    }
}

You can see that around 1:00 the leds lose synchronism with each other, and also with the metronome.


The second implementation uses the DuinOS RTOS. I also omitted all code related to creating and starting the threads (which run with the same priority).

In this example the leds were well synchronized, so I included another task that uses the serial port with a different frequency.

Here are the code and video for the DuinOS implementation:

taskLoop (T1) {
    digitalWrite(11, HIGH);
    delay(400);
    digitalWrite(11, LOW);
    delay(400);
}

taskLoop (T2) {
    digitalWrite(12, HIGH);
    delay(1000);
    digitalWrite(12, LOW);
    delay(1000);
}

int c = 0;
taskLoop (T3) {
    Serial.println(c);
    c++;
    delay(77);
}

From the beginning you can see that the leds lose synchronism with each other, and also with the metronome.


The third implementation uses the synchronous language Céu.

In this example the leds were well synchronized, even with the third activity that uses the serial port.

Here are the code and video for the Céu implementation:

par do
    loop do
        _digitalWrite(11, _HIGH);
        await 400ms;
        _digitalWrite(11, _LOW);
        await 400ms;
    end
with
    loop do
        _digitalWrite(12, _HIGH);
        await 1s;
        _digitalWrite(12, _LOW);
        await 1s;
    end
with
    var int c = 0;
    loop do
        await 77ms;
        c = c + 1;      // (plus serial output, omitted)
    end
end



The execution model of preemptive multithreading does not ensure implicit synchronization among threads.

There’s nothing wrong with the RTOSes and the example implementations: the behavior shown in the videos is perfectly valid.

The problem is that the very first examples for these systems usually use blinking leds (supposedly synchronized) to show how easy it is to write multithreaded code. This is not true!

Preemptive multithreading should not be used as is to write this kind of highly synchronized application. Adding semaphores or other synchronization primitives to this code won't help by itself: these programs require a single thread that owns the timers and dispatches the others.

I used timers in the example, but any kind of high frequency input would also behave nondeterministically in multithreaded systems.

In synchronous languages like Céu, the execution model enforces that all activities are synchronized all the time.

Written by francisco

March 23, 2012 at 10:20 pm

Concurrent Systems


Concurrency is one of those terms that everyone has an intuition about until they need to write about it, only to realize that the concept is too broad to just say “concurrency”.

Here is the first sentence of Wikipedia’s entry for “Concurrency”:

In computer science, concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other.

By using the words simultaneously and interacting, this definition captures (for me) the essence of concurrency.

One of the fundamental properties of concurrent systems is their execution model, that is, when the concurrent computations (I’ll call them concurrent entities) in the system should run, and what rules an entity should obey while running.

  • Asynchronous Concurrency

In asynchronous execution, entities are in charge of their own control flow and execute independently of each other. Hence, each entity has its own notion of time, not shared globally. The decision to synchronize with other parts of the system is also internal to each entity, and not enforced by the surrounding environment. Depending on the concurrency model in use, these entities are known as threads, actors, processes, tasks, etc.

  • Synchronous Concurrency

In synchronous execution, the system flow is controlled by the environment, and internal entities must execute at its pace, in permanent synchrony. Time is now shared between entities and is represented as time steps or as a series of events, both triggered by the surrounding environment.

In my personal experience, when saying “concurrency” people assume asynchronous concurrency, excluding all synchronous reactive languages/systems.

For example, if I state that event-driven programming is a concurrency model, I’ll probably be inquired about this position.

However, if you agree with Wikipedia’s definition and think about an event-driven game with hundreds of interacting entities, how can it not be considered “concurrent”?

In a paper by Gérard Berry [1], this “prejudice” is also commented on:

Being somewhat unclassical compared to prevalent CSP or CCS based models, it took more time for the synchronous model to be accepted in the mainstream Computer Science community.

Execution model is just one property of concurrent systems. I did not discuss here communication, synchronization, parallelism, determinism…

Maybe it is time to build something like a “Taxonomy for Concurrency”, enumerating all recurrent properties found in concurrency models and languages. Does anyone know about an existing work in this direction?

[1] Gérard Berry. “The Foundations of Esterel”. In: Proof, Language, and Interaction: Essays in Honour of Robin Milner. MIT Press, Cambridge, MA, 2000.

Written by francisco

November 19, 2009 at 5:56 pm