The Synchronous Blog

A blog about reactive programming languages.


Interrupt Service Routines in Céu


An Interrupt Service Routine (ISR) executes in reaction to an asynchronous hardware request, interrupting the ongoing computation in the CPU.
As an example, in an Arduino, whenever the USART subsystem receives a byte from the serial line, the CPU execution is redirected to the “USART_RX interrupt vector”, which is a predefined memory address containing the ISR to handle the byte received.
Only after the ISR returns does the interrupted computation resume.

ISRs are often associated with a high-priority functionality that cannot wait long.
Continuing the USART example, if the execution of the ISR is delayed for too long, some received bytes may be lost.

Likewise, the execution of an ISR should never take long, because other interrupts will not trigger in the meantime (although it is possible to nest ISRs).
For this reason, a typical USART ISR simply stores received bytes in a buffer so that the program can handle them afterwards.
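
For reference, a plain C version of that usual pattern on an AVR-based Arduino might look like the sketch below. This is my own illustrative sketch, not code from this post: the buffer size, the variable names, and the ring-buffer details are assumptions.

#include <avr/io.h>
#include <avr/interrupt.h>

#define SZ 64
static volatile uint8_t rxs[SZ];        /* ring buffer for received bytes      */
static volatile uint8_t rx_get, rx_put; /* oldest/newest positions             */

ISR(USART_RX_vect) {                    /* runs for every byte received        */
    uint8_t put = (rx_put + 1) % SZ;    /* next write position                 */
    uint8_t c = UDR0;                   /* reading UDR0 acknowledges the IRQ   */
    if (put != rx_get) {                /* drop the byte if the buffer is full */
        rxs[rx_put] = c;
        rx_put = put;
    }
}

The main program then consumes rxs with interrupts briefly disabled (e.g., with cli()/sei()), which is the role the atomic blocks play in the Céu driver presented next.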

ISRs in Céu:

Céu has primitive support for ISRs, which are declared similarly to functions.
However, instead of a name identifier, an ISR declaration requires a number that refers to the index in the interrupt vector for the specific platform.

When an interrupt occurs, not only does the ISR execute, but Céu also enqueues the predefined event OS_INTERRUPT, passing the ISR index.
This mechanism allows the time-critical work associated with the interrupt to be handled in the ISR, while encouraging non-critical operations to be postponed and to respect the event queue, which might already hold events that occurred before the interrupt.

The code snippets that follow are part of a USART driver for the Arduino.
The driver emits a READ output event to signal a received byte to other applications (i.e., applications awaiting READ).
The ISR just stores incoming bytes in a queue, while the main body is responsible for signaling each byte to all applications (at a lower priority).

/* variables to manage the buffer */

var byte[SZ] rxs;                   // buffer to hold received bytes
var u8 rx_get;                      // position to get the oldest byte
var u8 rx_put;                      // position to put the newest byte
atomic do
   rx_get = 0;                      // initialize get/put
   rx_put = 0;                      // the `atomic´ block disables interrupts
end

/* ISR for receiving byte (index "20" in the manual) */

function isr[20] do
   var u8 put = (rx_put + 1) % SZ;  // next position
   var byte c = _UDR0;              // receive the byte
   if put != rx_get then            // check buffer space
      rxs[rx_put] = c;              // save the received byte
      rx_put = put;                 // update to the next position
   end
end

/* DRIVER body: receive bytes in a loop */

output byte READ;                    // the driver outputs received bytes to applications

loop do
   var int idx = await OS_INTERRUPT
                 until idx==20;      // USART0_RX_vect

   var byte c;                       // hold the received byte
   ...
   atomic do                      // protect the buffer manipulation from new interrupts
         c = rxs[rx_get];            // get the next byte
         rx_get = (rx_get + 1) % SZ; // update to the next position
      end
      emit READ => c;                // signal other applications
   ...
end


 

Note how the real-time/high-priority code to store received bytes in the buffer runs in the ISR, while the code that processes the buffer and signals other applications runs in the body of the driver after every occurrence of OS_INTERRUPT.

Given that ISRs share data with, and abruptly interrupt, the normal execution body, some sort of synchronization between them is necessary.
As a matter of fact, Céu tracks all variables that ISRs access and requires all other accesses (outside them, in the normal execution body) to be protected with atomic blocks.

Conclusion:

Céu provides primitive support for handling interrupt requests:

  • An ISR is declared similarly to a function, but specifies the interrupt vector index associated with it.
  • An ISR should only execute hard real-time operations, leaving lower priority operations to be handled in reaction to the associated OS_INTERRUPT event.
  • The static analysis enforces the use of atomic blocks for memory shared between ISRs and the normal execution body.

 

Written by francisco

April 13, 2014 at 11:42 am

The case for synchronous concurrency


The point of this post is to show that it is wrong to rely on the timing behavior of preemptive multithreading for activities that require synchronized behavior. I compare the same program specification across three different implementations.

The example program is supposed to blink two LEDs at different rates (every 400 ms and every 1000 ms) on an Arduino board. They must blink together every 2 seconds.

The first two implementations use different RTOSes with preemptive multithreading. The third implementation uses the synchronous language Céu.

(UPDATE) The complete source files can be found here.

 

The first implementation uses the ChibiOS RTOS. I omitted all code related to creating and starting the threads (which run with the same priority).

Here are the code and video for the ChibiOS implementation:

static msg_t Thread1(void *arg) {       // toggles the LED on pin 11 every 400 ms
    while (TRUE) {
        digitalWrite(11, HIGH);
        chThdSleepMilliseconds(400);
        digitalWrite(11, LOW);
        chThdSleepMilliseconds(400);
    }
}
static msg_t Thread2(void *arg) {       // toggles the LED on pin 12 every 1000 ms
    while (TRUE) {
        digitalWrite(12, HIGH);
        chThdSleepMilliseconds(1000);
        digitalWrite(12, LOW);
        chThdSleepMilliseconds(1000);
    }
}

You can see that around 1:00 the LEDs lose synchronism with each other, and also with the metronome.

 

The second implementation uses the DuinOS RTOS. I also omitted all code related to creating and starting the threads (which run with the same priority).

In this example the LEDs were well synchronized, so I included another task that uses the serial port at a different frequency.

Here are the code and video for the DuinOS implementation:

taskLoop (T1) {                  // toggles the LED on pin 11 every 400 ms
    digitalWrite(11, HIGH);
    delay(400);
    digitalWrite(11, LOW);
    delay(400);
}

taskLoop (T2) {                  // toggles the LED on pin 12 every 1000 ms
    digitalWrite(12, HIGH);
    delay(1000);
    digitalWrite(12, LOW);
    delay(1000);
}

int c = 0;
taskLoop (T3) {                  // prints a counter on the serial port every 77 ms
    delay(77);
    Serial.print(c++);
}

From the beginning you can see that the LEDs lose synchronism with each other, and also with the metronome.

 

The third implementation uses the synchronous language Céu.

In this example the LEDs were well synchronized, even with the third activity that uses the serial port.

Here are the code and video for the Céu implementation:

par do
    loop do                      // toggles the LED on pin 11 every 400 ms
        _digitalWrite(11, _HIGH);
        await 400ms;
        _digitalWrite(11, _LOW);
        await 400ms;
    end
with
    loop do                      // toggles the LED on pin 12 every 1 s
        _digitalWrite(12, _HIGH);
        await 1s;
        _digitalWrite(12, _LOW);
        await 1s;
    end
with
    int c = 0;
    loop do                      // prints a counter on the serial port every 77 ms
        await 77ms;
        c = c + 1;
        _Serial.print(c);
    end
end

 

Conclusion:

The execution model of preemptive multithreading does not ensure implicit synchronization among threads.

There’s nothing wrong with the RTOSes and the example implementations: the behavior shown in the videos is perfectly valid.

The problem is that usually the very first examples for these systems use blinking LEDs (supposedly synchronized) to show how easy it is to write multithreaded code. This is not true!

Preemptive multithreading should not be used as-is to write this kind of highly synchronized application. Adding semaphores or other synchronization primitives to this code won't help by itself; such programs require a single thread to handle the timers and dispatch the others.
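
As a rough illustration of that last point, the sketch below (my own Arduino-style C, not from the original implementations; the pin numbers and the 100 ms tick are assumptions) keeps a single loop that owns the time base and drives both LEDs:

void setup() {
    pinMode(11, OUTPUT);
    pinMode(12, OUTPUT);
}

void loop() {
    static unsigned long ms = 0;                        // single logical time base
    if (ms % 400 == 0) {
        digitalWrite(11, (ms / 400) % 2 ? LOW : HIGH);  // toggles pin 11 every 400 ms
    }
    if (ms % 1000 == 0) {
        digitalWrite(12, (ms / 1000) % 2 ? LOW : HIGH); // toggles pin 12 every 1000 ms
    }
    delay(100);                                         // 100 ms tick divides both periods
    ms += 100;
}

Even though delay() lets the absolute timing drift, both pins are derived from the same ms counter, so the two LEDs always switch at the same instants relative to each other, which is precisely what the multithreaded versions fail to guarantee.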

I used timers in the example, but any kind of high-frequency input would also behave nondeterministically in multithreaded systems.

In synchronous languages like Céu, the execution model enforces that all activities are synchronized all the time.

Written by francisco

March 23, 2012 at 10:20 pm

Paper accepted.


Good news received last week:

Dear Mr. Francisco Sant’Anna,

I am pleased to confirm that your paper “LuaGravity, a Reactive Language
Based on Implicit Invocation” has been accepted for presentation and
publication at SBLP 2009.

All papers went through a rigorous reviewing process by the program
committee. Out of 30 research papers and 3 tutorials submitted, 12
papers and 1 tutorial were accepted.

Please make sure that in the preparation of the final paper you
carefully address the reviewers’ comments. Additionally, at least one
author is required to register in the conference for your paper to
appear in the proceedings.

Congratulations again on having your paper accepted. We look forward to
seeing you in Gramado!

Reviewers’ comments already addressed and final version submitted! One reviewer in particular made several constructive observations, which we took very seriously in the final version.

Here is the abstract of the paper:

The reactive programming paradigm covers a wide range of applications, such as games and multimedia systems. Mainstream languages do not offer proper support for reactive programming, lacking language-level primitives that focus on synchronism and interactions within application parts. We propose an imperative reactive language, called LuaGravity, based on unconventional implicit invocation mechanisms. LuaGravity allows dataflow programming, sequential imperative execution, and deterministic use of shared memory. With this work, we intend to unite the essential features of reactive languages while keeping a convenient imperative style of programming.

SBLP [1] is the main Brazilian symposium on programming languages. This year it will be held in Gramado on August 18-21.

[1] http://sblp2009.ucpel.tche.br/

“A Synchronous Reactive Language based on Implicit Invocation”


In accordance with the provisions of art. 6 of DELIBERATION 001/76, the MASTER’S DISSERTATION entitled “A Synchronous Reactive Language based on Implicit Invocation”, by the student Francisco Figueiredo Goytacaz Sant’Anna, candidate for the title of Master in Informatics, will be defended on 03/16/2009 at 10:00 am, in room RDC511.

Abstract:


The reactive programming paradigm covers a wide range of applications, such as games and multimedia systems.
Mainstream languages neglect reactive programming, lacking language-level primitives that focus on synchronism and interactions within application parts.


We propose a new reactive synchronous language, with an imperative style, whose primitives are based on unconventional implicit invocation mechanisms.
With this work, we intend to unite the essential features of reactive languages while keeping a convenient imperative style of programming.
A reactive scheduler is responsible for executing reactors, our processing units, based on dependency relations between them built dynamically.
Our language provides dataflow programming, sequential imperative execution, and deterministic use of shared-memory.

Written by francisco

March 9, 2009 at 10:13 pm

The Dining Philosophers Problem


Ok, here’s the classical synchronization problem.
I’ll skip the description of the problem [1], as you probably remember it from your operating systems class.
A common solution involves mutual exclusion to lock access to shared forks.
Besides deadlock, the solution should avoid starvation and livelock as well.

The result can be seen in the video below:

The solution in LuaGravity uses its AWAIT primitive.

Here’s a possible solution for the problem:

Phil = org.class('phil', function ()
    function init (self)
        self.state  = ''
        self.total  = 0
        self.text   = self.state .. ' (' .. self.total .. ')'
        local p = POS[self._i]
        screen:add(
            Text {
                text   = self.text,
                face   = 'vera.ttf',
                _color = {r=0,g=0,b=0},
                x = p.x,
                y = p.y, dy=13,
        })
    end
    function run (self)
        local left, right = self._left, self._right
        while true do
            -- think
            self.state = 'thinking'
            AWAIT(math.random(5))

            -- wait
            self.state = 'hungry!'
            while not (left:_isFree() and right:_isFree()) do
                AWAIT(left.put, right.put)
            end

            -- eat
            left:_get(self)
            right:_get(self)
            self.state = 'eating'
            self.total = self.total + 1
            AWAIT(math.random(5))

            -- back to think
            left:put(self)
            right:put(self)
        end
    end
end )

The reactor run is an infinite loop in which the philosopher thinks, waits for the forks, and eats.
The think and eat steps are just an AWAIT on a random time between 1 and 5 seconds.

After thinking, the philosopher enters the inner while loop, where he checks whether his forks are available for use and, otherwise, waits on them until he succeeds.
After leaving the inner while, the philosopher acquires his forks in order to eat.
Note that, in synchronous languages, there’s no need for mutexes or other means of mutual exclusion.

After acquiring the forks, the philosopher eats for a random time.
Then, he goes back to think, releasing his forks.
The call to the reactor put awakes the philosopher’s neighbors if they are waiting in their inner while loops.

Just changing the variables self.state and self.total is enough to update the screen – it’s all about reactivity.

The full example code can be found here.

[1] http://en.wikipedia.org/wiki/Dining_philosophers_problem

Written by francisco

November 3, 2008 at 5:29 pm

“Cada Macaco no seu Galho”


Or “A place for everything and everything in its place”.

In applications like games and multimedia systems several entities run concurrently, leading to a high degree of internal and external interactivity among them.
Internal entities might be game characters, graphical widgets, AI actions and so on, while external entities might be the mouse, keyboard, network and also the time.

In this context, interaction may be better described as synchronization.
Using a shooter game as an example, a character movement is synchronized with user controls, as a gun sight is with the mouse position. Also, animations must be synchronized, so that all objects move together.

Currently, upon realizing that asynchronous threads are not adequate for these applications, most programmers resort to event-driven or observer-pattern techniques (I’ll use the general term implicit invocation).

Synchronous functionality is present in some conventional languages; unfortunately, these features are barely used even where they are known.
Continuations and coroutines are examples of such features, present in languages like Scheme and Lua.
With coroutines, the programmer can keep state between successive calls to the same callback and avoid problems of implicit invocation such as inversion of control and stack ripping.
It’s not by chance that Lua is so popular in the game industry.

A place is still reserved for asynchronous threads: exactly where you don’t need a high degree of synchronization.
If you want to compute the product of some 1000×1000 matrices, there’s no point in doing it synchronously; just go async.
You can still have some degree of synchronization, though, at specific (and obvious) points such as a memoization table for computed results.

Well, this is only an opinion, still in need of a more solid background.
I would like to see more papers on this topic, although it’s not easy to find bibliography defending asynchronous threads.
Maybe their advocates are just indifferent to this discussion or too busy debugging their applications.
What do you think?

Here is some bibliography:
[1] “Event-driven programming for robust software” (pdos.csail.mit.edu/~rtm/papers/dabek:event.pdf)
[2] “Why Threads Are A Bad Idea (for most purposes)” (http://www.stanford.edu/class/cs240/readings/threads-bad-usenix96.pdf)
[3] “The Problem with Threads” (http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf)
[4] “Why events are a bad idea for high-concurrency servers” (capriccio.cs.berkeley.edu/pubs/threads-hotos-2003.pdf)

Written by francisco

August 21, 2008 at 1:42 pm

Threads Are Evil


I never refuse to express my opinion about threads: they are Evil.

Using threads is like selling your soul to Satan without even noticing it.
They offer you paradise, with concurrency and shared memory for your programs.
After the first week using them, you find yourself working full time to correct the “special cases” you weren’t aware of. After one month you have become a slave to your own program.

The other (not so) common approach to concurrency is message passing.
It seems to be steadily gaining popularity, but its use is still restricted to niches; it was never really adopted by a mainstream language.
Message passing eradicates memory corruption and is seen as a safer model.

What these models have in common is that they are both asynchronous and non-deterministic.
By asynchrony I mean that the concurrent entities run unaware of each other; if they want to synchronize, they must use a language primitive for it (receive, lock, synchronized, etc.).
The non-determinism is also inherent: one can never be sure when the scheduler will preempt a running entity. This is, of course, a source of non-deterministic bugs.

What is the point of using asynchronous languages to write synchronous programs?
We feel (probably we learned) that we have no option: if we want to synchronize code we must do it explicitly.
That is not true at all, as I’ll try to defend here…

Written by francisco

August 16, 2008 at 4:07 pm