An Interrupt Service Routine (ISR) executes in reaction to an asynchronous hardware request, interrupting the ongoing computation in the CPU.
As an example, in an Arduino, whenever the USART subsystem receives a byte from the serial line, the CPU execution is redirected to the “USART_RX interrupt vector”, which is a predefined memory address containing the ISR to handle the byte received.
Only after the ISR returns does the interrupted computation resume.
ISRs are often associated with a high-priority functionality that cannot wait long.
Complementing the USART example, if the execution of the ISR is delayed too long, some received bytes can be lost.
Likewise, the execution of an ISR should never take long, because other interrupts will not trigger in the meantime (although it is possible to nest ISRs).
For this reason, a typical USART ISR simply stores received bytes in a buffer so that the program can handle them afterwards.
ISRs in Céu:
Céu has primitive support for ISRs, which are declared similarly to functions.
However, instead of a name identifier, an ISR declaration requires a number that refers to the index in the interrupt vector for the specific platform.
When an interrupt occurs, not only does the ISR execute, but Céu also enqueues the predefined event OS_INTERRUPT, passing the ISR index.
This mechanism allows the time-critical operation associated with the interrupt to be handled in the ISR, while encouraging non-critical operations to be postponed and respect the event queue, which might already hold events that occurred before the interrupt.
The code snippet that follows is part of a USART driver for the Arduino.
The driver emits a READ output event to signal a received byte to other applications (i.e. they are awaiting READ).
The ISR just holds incoming bytes in a queue, while the main body is responsible for signaling each byte to all applications (at a lower priority).
```
/* variables to manage the buffer */
var byte[SZ] rxs;       // buffer to hold received bytes
var u8 rx_get;          // position to get the oldest byte
var u8 rx_put;          // position to put the newest byte
atomic do
    rx_get = 0;         // initialize get/put
    rx_put = 0;         // the `atomic´ block disables interrupts
end

/* ISR for receiving byte (index "20" in the manual) */
function isr do
    var u8 put = (rx_put + 1) % SZ;     // next position
    var byte c = _UDR0;                 // receive the byte
    if put != rx_get then               // check buffer space
        rxs[rx_put] = c;                // save the received byte
        rx_put = put;                   // update to the next position
    end
end

/* DRIVER body: receive bytes in a loop */
output byte READ;       // the driver outputs received bytes to applications
loop do
    var int idx = await OS_INTERRUPT until idx==20;     // USART0_RX_vect
    var byte c;         // hold the received byte
    ...
    atomic do           // protect the buffer manipulation from new interrupts
        c = rxs[rx_get];                // get the next byte
        rx_get = (rx_get + 1) % SZ;     // update to the next position
    end
    emit READ => c;     // signal other applications
    ...
end
```
Note how the real-time/high-priority code to store received bytes in the buffer runs in the ISR, while the code that processes the buffer and signals other applications runs in the body of the driver after every occurrence of OS_INTERRUPT.
Given that ISRs share data with and abruptly interrupt the normal execution body, some sort of synchronization between them is necessary.
As a matter of fact, Céu tracks all variables that ISRs access and enforces all other accesses (outside them, in the normal execution body) to be protected with `atomic` blocks.
Céu provides primitive support for handling interrupt requests:
- An ISR is declared similarly to a function, but specifies the interrupt vector index associated with it.
- An ISR should only execute hard real-time operations, leaving lower priority operations to be handled in reaction to the associated OS_INTERRUPT event.
- The static analysis enforces the use of `atomic` blocks for memory shared between ISRs and the normal execution body.
Concurrency is one of those terms that everyone has an intuition about until they need to write about it, only to realize that the concept is too broad to simply say “concurrency”.
Here is the first sentence of Wikipedia’s entry for “Concurrency”:
In computer science, concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other.
By using the words simultaneously and interacting, this definition captures (for me) the essence of concurrency.
One of the fundamental properties of concurrent systems is their execution model, that is, when should the concurrent computations (I’ll call them concurrent entities) in the system run, and what are the rules that an entity should obey while running.
- Asynchronous Concurrency
In asynchronous execution, entities are in charge of their own control flow and execute independently of each other. Hence, each entity has its own notion of time, not shared globally. The decision to synchronize with other parts of the system is also internal to each entity, and not enforced by the surrounding environment. Depending on the concurrency model in use, these entities are known as threads, actors, processes, tasks, etc.
- Synchronous Concurrency
In synchronous execution, the system flow is controlled by the environment, and internal entities must execute at its pace, in permanent synchrony. Time is now shared between entities and is represented as time steps or as a series of events, both triggered by the surrounding environment.
In my personal experience, when I say “concurrency”, people assume asynchronous concurrency, excluding all synchronous reactive languages and systems.
For example, if I state that event-driven programming is a concurrency model, I’ll probably be inquired about this position.
However, if you agree with Wikipedia’s definition and think about an event-driven game with hundreds of interacting entities, how can it not be considered “concurrent”?
This “prejudice” is also commented on in a paper by Gérard Berry:
Being somewhat unclassical compared to prevalent CSP or CCS based models, it took more time for the synchronous model to be accepted in the mainstream Computer Science community.
Execution model is just one property of concurrent systems. I did not discuss communication, synchronization, parallelism, determinism…
Maybe it is time to build something like a “Taxonomy for Concurrency”, enumerating all recurrent properties found in concurrency models and languages. Does anyone know about an existing work in this direction?
Gérard Berry. The Foundations of Esterel. In Proof, Language, and Interaction: Essays in Honour of Robin Milner, MIT Press, Cambridge, MA, 2000.