The Synchronous Blog

A blog about reactive programming languages.

Posts Tagged ‘concurrency’

Interrupt Service Routines in Céu


An Interrupt Service Routine (ISR) executes in reaction to an asynchronous hardware request, interrupting the ongoing computation in the CPU.
As an example, in an Arduino, whenever the USART subsystem receives a byte from the serial line, the CPU execution is redirected to the “USART_RX interrupt vector”, which is a predefined memory address containing the ISR to handle the byte received.
Only after the ISR returns does the interrupted computation resume.

ISRs are often associated with a high-priority functionality that cannot wait long.
Complementing the USART example, if the execution of the ISR is delayed for too long, some received bytes can be lost.

At the same time, the execution of an ISR should never take long, because other interrupts will not trigger in the meantime (although it is possible to nest ISRs).
For this reason, a typical USART ISR simply stores received bytes in a buffer so that the program can handle them afterwards.

ISRs in Céu:

Céu has primitive support for ISRs, which are declared similarly to functions.
However, instead of a name identifier, an ISR declaration requires a number that refers to the index in the interrupt vector for the specific platform.

When an interrupt occurs, not only does the ISR execute, but Céu also enqueues the predefined event OS_INTERRUPT, passing the ISR index.
This mechanism allows the time-critical operation associated with the interrupt to be handled in the ISR, but encourages non-critical operations to be postponed so that they respect the event queue, which might already be holding events that occurred before the interrupt.

The code snippets that follow are part of a USART driver for the Arduino.
The driver emits a READ output event to signal a received byte to other applications (i.e., they are awaiting READ).
The ISR just holds incoming bytes in a queue, while the main body is responsible for signaling each byte to all applications (at a lower priority).

/* variables to manage the buffer */

var byte[SZ] rxs;                   // buffer to hold received bytes
var u8 rx_get;                      // position to get the oldest byte
var u8 rx_put;                      // position to put the newest byte
atomic do
   rx_get = 0;                      // initialize get/put
   rx_put = 0;                      // the `atomic´ block disables interrupts
end

/* ISR for receiving byte (index "20" in the manual) */

function isr[20] do
   var u8 put = (rx_put + 1) % SZ;  // next position
   var byte c = _UDR0;              // receive the byte
   if put != rx_get then            // check buffer space
      rxs[rx_put] = c;              // save the received byte
      rx_put = put;                 // update to the next position
   end
end

/* DRIVER body: receive bytes in a loop */

output byte READ;                    // the driver outputs received bytes to applications

loop do
   var int idx = await OS_INTERRUPT
                 until idx==20;      // USART0_RX_vect

   var byte c;                       // hold the received byte
   ...
      atomic do                      // protect the buffer manipulation from new interrupts
         c = rxs[rx_get];            // get the next byte
         rx_get = (rx_get + 1) % SZ; // update to the next position
      end
      emit READ => c;                // signal other applications
   ...
end



Note how the real-time/high-priority code that stores received bytes in the buffer runs in the ISR, while the code that processes the buffer and signals other applications runs in the body of the driver, after every occurrence of OS_INTERRUPT.

Given that ISRs share data with and abruptly interrupt the normal execution body, some sort of synchronization between them is necessary.
In fact, Céu tracks all variables that ISRs access and requires all other accesses to them (in the normal execution body) to be protected with atomic blocks.
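As a minimal sketch of what this enforcement looks like in practice (the variable follows the driver above; the commented-out line only illustrates the kind of access the compiler rejects):

var u8 rx_put;       // also written inside the ISR above

// rx_put = 0;       // rejected: unprotected access in the normal execution body

atomic do
   rx_put = 0;       // accepted: the access is protected by an `atomic´ block
end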

Conclusion:

Céu provides primitive support for handling interrupt requests:

  • An ISR is declared similarly to a function, but specifies the interrupt vector index associated with it.
  • An ISR should only execute hard real-time operations, leaving lower priority operations to be handled in reaction to the associated OS_INTERRUPT event.
  • The static analysis enforces the use of atomic blocks for memory shared between ISRs and the normal execution body.


Written by francisco

April 13, 2014 at 11:42 am

A Synchronous Micro Kernel for Embedded Systems


I’m working on a new operating system that follows the synchronous execution model of Céu.
In reality, the term micro kernel would be more accurate: I’m only concerned with scheduling and inter-process communication mechanisms.

Why a new micro kernel?

Many reasons. :)

General frustrations

(Accumulated over the years of using UNIX-based systems.)

“Programmability”:

We are still using select or threads to communicate with multiple applications at the same time:

  • select: multiplex all communication in a single point and deal with state explicitly (e.g., switch (state) {…}).
  • threads: decentralize the communication in multiple threads and deal with synchronization explicitly (e.g., locks, semaphores).

Both approaches are complex and error prone (threads, select/events).

Debugging/Safety:

It is difficult, if not impossible, to reproduce the behavior of a set of interacting applications (even of a single application):

  • Preemptive schedulers are highly sensitive to small timing variations.
  • Most programming languages offer only non-deterministic concurrency models (e.g., threads, actors).

OS/Language integration:

I wonder whether the “everything is a file” philosophy of UNIX-based systems is too generalist and language-agnostic.
Maybe operating systems and programming languages should be designed in conjunction, with the latter offering first-class OS-level IPC mechanisms.

Particular demands

I also have particular demands related to ongoing projects in our research group.

Micro platforms:

We work with highly-constrained embedded platforms (e.g., 16MHz CPU, 64KB of flash, 4KB of RAM).

With so few resources, we can take advantage of the synchronous concurrency model to be more economical:

  • Processes share the same event queue.
  • Processes share the same stack.
  • Cooperative scheduling instead of preemptive scheduling (together with language guarantees that processes do cooperate; see the sketch below).
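As a rough illustration of that cooperation (a hypothetical Céu process; `_sample_sensor´ stands for any C function):

loop do
   await 10ms;          // every `await´ yields control back to the scheduler,
   _sample_sensor();    // so other processes get to run between reactions
end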

Distributed WSNs:

Wireless Sensor Networks demand more functionality from operating systems:

  • Processes that communicate might not be on the same network node.
  • Applications need to be replaced remotely.

Demo

The video that follows shows a working demo of the OS:

(To read the text in the video, please select a resolution of at least 480p via the “settings” icon.)

Together with the kernel, the microcontroller is preloaded with three processes:

  • A USART driver.
  • A shell that allows us to start/stop applications.
  • A USART<=>shell bridge that stores bytes received from the serial line until a command is complete.

These processes have no special rights; they are only preloaded so that we can command the microcontroller remotely.

In the first demo (0s-35s), after booting the microcontroller, we issue a sequence of commands through the terminal:

  • Upload a blinking application to address 0xF000.
  • Load address 0xF000 as process 0.
  • Start process 0.
  • Stop process 0 after some time.

Then, with the OS still up and running, we upload a similar application (it only blinks a different LED) to the same address and repeat the sequence of commands. The visual result is that the blinking LED changes from red to yellow.

The second demo (35s-1min14s) issues another sequence of commands to upload and start three other applications:

  • A GPIO driver.
  • Two blinking applications [code1,code2].

Now, while the first application executes and blinks the yellow LED, we upload another application to blink the red LED.

Both applications rely on the GPIO driver and avoid duplicating the GPIO functions (as happens in the first demo).

The link command connects the applications with the GPIO driver. For example, to link the output call PIN_MODE of application 1 with the input call PIN_MODE of the driver (application 0), we write <link 1 1 0 243> (i.e., link app 1 / output 1 => app 0 / input 243).

(Sorry, event identifiers must be “hardcoded” as numbers instead of names.)

Check also the code for the USART driver, shell, USART<=>shell bridge, and GPIO driver.

Details in another post to come.

Written by francisco

March 11, 2014 at 2:50 pm

Dynamic Applications in Céu (1/2)


The basic prerequisite for building dynamic applications is language support for abstractions and code reuse. Programming languages provide a multitude of abstraction mechanisms, from simple abstract data types to OO classes. An effective abstraction mechanism should provide means to deal with at least the following points:

  • Hide its internal implementation details.
  • Expose a uniform programming interface to manipulate it.
  • Control its life cycle.

As an example, to build an ADT in C, one can define a struct, hide it with a typedef, expose functions to manipulate it, and control instances with local variables or malloc/free. Classes extend ADTs with richer mechanisms such as inheritance and polymorphism. Furthermore, the life cycle of an object is typically controlled automatically through a garbage collector.

Céu organisms

Abstractions in Céu are created through organisms, which basically reconcile threads and objects into a single concept:

  • An organism has intrinsic execution, being able to react to the environment on its own.
  • An organism exposes properties and actions in order to interact with other organisms during its life cycle.

Like an object, an organism exposes properties and methods (events in Céu) that can be accessed and invoked (emitted in Céu) by other instances. Like a thread, an organism has its own line(s) of execution, with persistent local variables and execution state.
In contrast, an object method call typically shares the execution context of its caller, while a thread does not expose fields or methods.

An example

The program below defines the class HelloWorld and executes two instances of it:

  class HelloWorld with
     var int id;   // organism interface
  do               // organism body
     every 1s do
         _printf("[%d] Hello world!\n",
                  this.id);
     end
  end

  var HelloWorld hello1, hello2;
  hello1.id = 1;
  hello2.id = 2;
  await FOREVER;

The behavior can be visualized in the video on the right. The top-level code creates two instances of the class HelloWorld, initializes the exposed id fields, and then awaits forever. As organisms have “life”, the two instances react to the environment autonomously, printing the “Hello world!” message every second.

Note in the example that organisms are simply declared as normal variables, which are automatically spawned by the language runtime to execute in parallel with their enclosing block.

In the following variation, we add the event stop in the class interface and include another line of execution in the organism body:

  class HelloWorld with
     var   int  id;
     event void stop;
  do
     par/or do
         every 1s do
             _printf("[%d] Hello world!\n",
                      this.id);
         end
     with
         await this.stop;
     end
  end

  var HelloWorld hello1, hello2;
  hello1.id = 1;
  hello2.id = 2;

  await 3s500ms;
  emit hello1.stop;
  hello2.id = 5;
  await 2s;
  emit hello2.stop;

  await FOREVER;

Now, besides printing the message every second, each organism also waits for the event stop in parallel. The par/or construct splits the running line of execution in two, rejoining when either of them terminates. (Céu also provides the par/and construct.)

After the top-level code instantiates the two organisms, it waits 3s500ms before taking the next actions in sequence. At this point, the program has 5 active lines of execution: 1 in the top-level and 2 in each of the instances. Each organism prints its message 3 times before the top-level awakes from the 3s500ms await.

Then, the top-level emits the stop event to the first organism, which awakes and terminates. It also changes the id of the second organism and waits 2s more. During this period the second organism prints its message two more times (now with id 5).

Note that although the first organism has terminated its body, its reference hello1 is still visible. Thus, the organism is still alive and its fields can be accessed normally (though it now resembles a “dead” C struct).
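For instance, continuing the example above, the top-level code could keep using the exposed field after the emit (a hypothetical continuation, not part of the original program):

  emit hello1.stop;              // the body of hello1 terminates here...
  hello1.id = 10;                // ...but its fields remain accessible,
  _printf("%d\n", hello1.id);    // much like the fields of a plain C struct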

Execution model

Lines of execution in Céu are known as trails and differ from threads in the very fundamental characteristic of how they are scheduled.

Céu is a synchronous language based on Esterel, in which lines of execution advance together with a unique global notion of time.
In practical terms, this means that Céu can provide seamless lock-free shared-memory concurrency. It also means that programs are deterministic and have reproducible execution. As a tradeoff, concurrency in Céu is not suitable for algorithmic-intensive activities as there is no automatic preemption among trails.
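As a small sketch of this lock-free sharing (a hypothetical program, not from this post): two trails assign to the same shared variable in reaction to the same event and, because each reaction runs to completion in lexical order, the final result never varies:

var int x = 0;
par/and do
   await 1s;
   x = x + 1;       // both trails awake on the same 1s event,
with
   await 1s;
   x = x + 2;       // but each runs to completion before the other,
end                 // so x is always 3 here, with no locks involved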

In contrast, asynchronous models have time independence among lines of execution, but either require synchronization primitives to acquire shared resources (e.g., locks and semaphores in pthreads), or completely forbid shared access in favor of message passing (e.g., processes and channels in actor-based languages). In both cases, ensuring deterministic execution requires considerable programming effort.

The post entitled “The case for synchronous concurrency” illustrates these differences in practical terms with an example.

The synchronous model of Céu is presented in more depth in these videos.
The videos also show organisms in action together with the SDL graphical library.

Conclusion

Céu organisms reconcile objects and threads in a single abstraction mechanism.

Classes specify the behavior of organisms, hiding implementation details and exposing an interface through which they can be manipulated by other organisms.

In the next post, I’ll show how Céu can control the life cycle of organisms with lexical scope in three different ways: local variables, named allocation, and anonymous allocation.

http://www.ceu-lang.org/

Written by francisco

May 22, 2013 at 6:59 pm

“Céu: Embedded, Safe, and Reactive Programming”


We have published a technical report entitled “Céu: Embedded, Safe, and Reactive Programming”.

Enjoy the reading!

Abstract:

Céu is a programming language that unifies the features found in dataflow and imperative synchronous reactive languages, offering a high-level and safe alternative to event-driven and multithreaded systems for embedded systems.

Céu supports concurrent lines of execution that run in time steps and are allowed to share variables. However, the synchronous and static nature of Céu enables a compile time analysis that can enforce deterministic and memory-safe programs.

Céu also introduces first-class support for “wall-clock” time (i.e. time from the real world), and offers seamless integration with C and simulation of programs in the language itself.

The Céu compiler generates single-threaded code comparable to handcrafted C programs in terms of size and portability.
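As a tiny illustration of the wall-clock time and C integration mentioned above, in the style of the other examples on this blog (the `_printf´ call and the chosen delay are assumptions, not taken from the report):

loop do
   await 1s500ms;        // "wall-clock" time is awaited directly
   _printf("tick\n");    // C symbols are accessed with a `_´ prefix
end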

Table of Contents:

  1. Introduction
  2. The Language Céu
    1. Parallel compositions
    2. Internal events & Dataflow support
    3. Wall-clock time
    4. Integration with C
    5. Bounded execution
    6. Determinism
    7. Asynchronous execution
    8. Simulation in Céu
    9. GALS execution
  3. Demo applications
    1. WSN ring
    2. Arduino ship game
    3. SDL game simulation
  4. Implementation of Céu
    1. Temporal analysis
    2. Memory layout
    3. Gate allocation
    4. Code generation
    5. Reactive execution
    6. Evaluation
  5. Related work
    1. Synchronous model
    2. Asynchronous model
  6. Conclusion

The case for synchronous concurrency


The point of this post is to show that it is wrong to rely on the timing of preemptive multithreading for activities that require synchronized behavior. I compare the same program specification in three different implementations.

The example program is supposed to blink two leds with different frequencies (400ms and 1000ms) on an Arduino board. They must blink together every 2 seconds.

The first two implementations use different RTOSes with preemptive multithreading. The third implementation uses the synchronous language Céu.

(UPDATE) The complete source files can be found here.


The first implementation uses the ChibiOS RTOS. I omitted all code related to creating and starting the threads (which run with the same priority).

Here are the code and video for the ChibiOS implementation:

static msg_t Thread1(void *arg) {
    while (TRUE) {
        digitalWrite(11, HIGH);
        chThdSleepMilliseconds(400);
        digitalWrite(11, LOW);
        chThdSleepMilliseconds(400);
    }
}
static msg_t Thread2(void *arg) {
    while (TRUE) {
        digitalWrite(12, HIGH);
        chThdSleepMilliseconds(1000);
        digitalWrite(12, LOW);
        chThdSleepMilliseconds(1000);
    }
}

You can see that around 1:00 the leds lose synchronism with each other, and also with the metronome.


The second implementation uses the DuinOS RTOS. I also omitted all code related to creating and starting the threads (which run with the same priority).

In this example the leds were well synchronized, so I included another task that uses the serial port with a different frequency.

Here are the code and video for the DuinOS implementation:

taskLoop (T1) {
    digitalWrite(11, HIGH);
    delay(400);
    digitalWrite(11, LOW);
    delay(400);
}

taskLoop (T2) {
    digitalWrite(12, HIGH);
    delay(1000);
    digitalWrite(12, LOW);
    delay(1000);
}

int c = 0;
taskLoop (T3) {
    delay(77);
    Serial.print(c++);
}

From the beginning you can see that the leds lose synchronism with each other, and also with the metronome.


The third implementation uses the synchronous language Céu.

In this example the leds were well synchronized, even with the third activity that uses the serial port.

Here are the code and video for the Céu implementation:

par do
    loop do
        _digitalWrite(11, _HIGH);
        await 400ms;
        _digitalWrite(11, _LOW);
        await 400ms;
    end
with
    loop do
        _digitalWrite(12, _HIGH);
        await 1s;
        _digitalWrite(12, _LOW);
        await 1s;
    end
with
    int c = 0;
    loop do
        await 77ms;
        c = c + 1;
        _Serial.print(c);
    end
end


Conclusion:

The execution model of preemptive multithreading does not ensure implicit synchronization among threads.

There’s nothing wrong with the RTOSes and the example implementations: the behavior shown in the videos is perfectly valid.

The problem is that the very first examples for these systems usually use blinking leds (supposedly synchronized) to show how easy it is to write multithreaded code. This is not true!

Preemptive multithreading should not be used as-is to write this kind of highly synchronized application. Adding semaphores or other synchronization primitives to this code won’t help by itself; what is needed is a single thread that handles the timers and is responsible for dispatching the others.

I used timers in the example, but any kind of high-frequency input would also behave nondeterministically in multithreaded systems.

In synchronous languages like Céu, the execution model enforces that all activities are synchronized all the time.

Written by francisco

March 23, 2012 at 10:20 pm

A “down to earth” reactive language: Céu


It has been more than one year since my last blog post. The reason is the direction I took two years ago, at the beginning of my PhD, switching from LuaGravity to something more grounded.

LuaGravity was a lot of fun to work with: it showed how expressive reactive languages are, allowing complex dependency patterns to be written with simple expressions. It also showed how easily Lua can be hacked at runtime to provide completely different semantics.

However, LuaGravity is overly powerful as a research artifact. In this context, what really matters is to understand the motivations, the goals, and what is and is not needed in a reactive language. The border between Lua and LuaGravity was unclear, and Lua is too dynamic, which complicates the enforcement of deterministic execution we wanted to provide.

The development of a new language, Céu, is the process of posing and answering research questions related to reactive languages.

Céu can be defined in keywords as a reactive, imperative, concurrent, synchronous, and deterministic language. The syntax is very compact (resembling CSP or the Pi-calculus), which is great for writing papers and discussing programs, but not necessarily for developing applications.

Currently, Céu is targeted at Wireless Sensor Networks, but any constrained embedded platform is of interest to us. Here is a “Hello World!” program in Céu that blinks three leds, each with a different frequency, forever:

(
    ( ~250ms  ; ~>Leds_led0Toggle)*
||
    ( ~500ms  ; ~>Leds_led1Toggle)*
||
    ( ~1000ms ; ~>Leds_led2Toggle)*
)


I presented Céu in the Doctoral Colloquium [1] at SenSys’11 last week. The 3-page summary submitted to the conference can be reached here.

[1] http://www.cse.ust.hk/~lingu/SenSys11DC/

Written by francisco

November 15, 2011 at 11:24 pm

Concurrent Systems


Concurrency is one of those terms that everyone has an intuition about until they need to write about it, and then they realize that the concept is too open-ended for the bare word “concurrency” to suffice.

Here is the first sentence of Wikipedia’s entry for “Concurrency”:

In computer science, concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other.

By using the words simultaneously and interacting, this definition captures (for me) the essence of concurrency.

One of the fundamental properties of concurrent systems is their execution model, that is, when the concurrent computations (I’ll call them concurrent entities) in the system should run, and what rules an entity should obey while running.

  • Asynchronous Concurrency

In asynchronous execution, entities are in charge of their own control flow and execute independently of each other. Hence, each entity has its own notion of time, not shared globally. The decision to synchronize with other parts of the system is also internal to each entity, and not enforced by the surrounding environment. Depending on the concurrency model in use, these entities are known as threads, actors, processes, tasks, etc.

  • Synchronous Concurrency

In synchronous execution, the system flow is controlled by the environment, and internal entities must execute at its pace, in permanent synchrony. Time is now shared between entities and is represented as time steps or as a series of events, both triggered by the surrounding environment.

In my personal experience, when people say “concurrency” they assume asynchronous concurrency, excluding all synchronous reactive languages/systems.

For example, if I state that event-driven programming is a concurrency model, I will probably be questioned about this position.

However, if you agree with Wikipedia’s definition and think about an event-driven game with hundreds of interacting entities, how can it not be considered “concurrent”?

In a paper by Gérard Berry [1], this “prejudice” is also noted:

Being somewhat unclassical compared to prevalent CSP or CCS based models, it took more time for the synchronous model to be accepted in the mainstream Computer Science community.

The execution model is just one property of concurrent systems. I did not discuss communication, synchronization, parallelism, determinism…

Maybe it is time to build something like a “Taxonomy for Concurrency”, enumerating all the recurrent properties found in concurrency models and languages. Does anyone know of existing work in this direction?

[1] Gérard Berry. “The Foundations of Esterel”. In Proof, Language, and Interaction: Essays in Honour of Robin Milner. MIT Press, Cambridge, MA, 2000.

Written by francisco

November 19, 2009 at 5:56 pm