
From Linearizability to CAP Theorem: A Brief Overview of Consistency Guarantees in Concurrent Programs

Concurrent systems, including both multiprocessor and distributed systems, often rely on object replication to improve performance and scalability. Object replication, the idea of keeping multiple local copies of a shared data structure, demands another sub-system that keeps these local copies consistent with each other in real time. Network delays in distributed systems and overlapping operations on shared data structures in multiprocessor systems are some of the major challenges in ensuring consistency. To overcome these issues without sacrificing correctness and performance, the research community has proposed various “relaxations” of the consistency requirement, which I intend to cover in the next few sections.

Following is the table of contents:

  • Motivation
  • Linearizability
  • Sequential Consistency
  • Causal Consistency
  • Eventual Consistency (CAP theorem)

Motivation

Consider the memory hierarchy in multi-core processors and that in distributed, network-partitioned systems, as shown in Figures 1 and 2. Do you notice any similarities? Well, one obvious analogy worth highlighting is the presence of shared memory. Another, subtler observation is that the shared memory is usually located far away from the CPU cores or nodes, so it takes a finite amount of time to access it (roughly 30 CPU cycles for the L2 cache). To improve performance and fault tolerance, concurrent systems maintain a local copy of each shared object in their respective local memories to speed up read operations. However, this optimization requires all the local copies of a shared object to be consistent with each other, so every write operation needs to be replicated, or made visible, across all the local copies “instantaneously”, which is not possible in real-world systems.

And ***ladies and gentlemen***, this gave rise to the notion of weak consistency guarantees.

Fig. 1 – Intel Dual-core, Dual-processor memory hierarchy. Image courtesy: Intel whitepaper (https://software.intel.com/content/www/us/en/develop/articles/software-techniques-for-shared-cache-multi-core-systems.html?wapkw=smart+cache)
Fig. 2 – Shared memory in distributed, network-partitioned systems. Reference: http://courses.cs.vt.edu/~cs5204/fall00/distributedSys/amento/dsm.html

 


A more formal definition of weak consistency, as given on Wikipedia [1], is as follows:

A protocol is said to support weak consistency if:

  1. All accesses to synchronization variables are seen by all processes (or nodes, processors) in the same order (sequentially) – these are synchronization operations. Accesses to critical sections are seen sequentially.
  2. All other accesses may be seen in a different order on different processes (or nodes, processors).
  3. The set of both read and write operations between different synchronization operations is the same in each process.

Note: Linearizability and sequential consistency, though very important, are not weak consistency guarantees, and so they do not follow the above definition.
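
To make the three rules above concrete, here is a minimal Java sketch (my own illustration, not from the Wikipedia article) that treats a volatile field as the synchronization variable. Java’s volatile is actually stronger than weak consistency requires, but it shows the intended pattern: plain writes may be observed out of order until a synchronization operation publishes them.

    // Hypothetical demo: 'sync' plays the role of the synchronization variable.
    class WeakConsistencyDemo {
        static int a, b;               // ordinary accesses: rule 2 allows threads to
                                       // observe these writes in different orders
        static volatile boolean sync;  // synchronization variable: rule 1 gives its
                                       // accesses one order seen by all threads

        public static void main(String[] args) throws InterruptedException {
            Thread writer = new Thread(() -> {
                a = 1;
                b = 2;
                sync = true;           // synchronization point: publishes a and b
            });
            Thread reader = new Thread(() -> {
                while (!sync) { }      // spin until the synchronization point
                // Rule 3: after synchronizing, the reader agrees with the writer
                // on the set of completed writes, so this always prints 3.
                System.out.println(a + b);
            });
            writer.start(); reader.start();
            writer.join();  reader.join();
        }
    }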

Linearizability

Informally, linearizability guarantees that every read of a shared object returns the value set by the “most recent” write to that object. For example, suppose we have the following write operations on an object O1: W1 at time T1, denoted <W1, T1>, and similarly <W2, T2> and <W3, T3>, such that T1 < T2 < T3. Then any read of O1 scheduled after T3 should return the value set by W3. It doesn’t matter whether all three writes are dispatched by the same client (the same core or node) or by different clients.
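
As a minimal sketch (names are mine, not from the post), Java’s AtomicReference provides exactly this behavior for a single register: reads and writes take effect atomically in one total order, so a read always returns the latest completed write.

    import java.util.concurrent.atomic.AtomicReference;

    // A linearizable single-value register: every read returns the value of
    // the most recent write, regardless of which thread performed it.
    class LinearizableRegister<T> {
        private final AtomicReference<T> value = new AtomicReference<>();

        public void write(T v) { value.set(v); }  // atomic, totally ordered
        public T read()        { return value.get(); }
    }

    // Replaying the timeline above on a LinearizableRegister<String> r:
    //   r.write(v1);  // <W1, T1>
    //   r.write(v2);  // <W2, T2>
    //   r.write(v3);  // <W3, T3>
    //   r.read();     // any read starting after T3 must return v3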

Another way to look at the definition of linearizability is that it requires the global order of operations to be preserved. This is the strongest form of consistency and gives correct results as if all the operations occurred atomically on a single copy of the shared data type. Linearizability is also compositional, i.e., any combination of two linearizable objects is itself linearizable, provided the total order of memory events is preserved [2]. Several weak memory models (WMMs) preserve only a partial ordering, and in those cases linearizability is not compositional [3]. Thanks to this compositional property, linearizability is widely used for describing large systems.

Linearization points: The side effects of a function (like a write to shared memory) become visible to other CPUs/nodes at some point between the function’s invocation and its response. This point is called the linearization point. For instance, consider a concurrent queue with the operations enqueue(), dequeue(), and peek(). The linearization point of enqueue() occurs at the CPU instruction that updates the tail of the queue. For dequeue(), the linearization point is at the instruction that updates the head of the queue, and in the case of peek(), it is at the return statement.
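
The sketch below marks these linearization points in a simple lock-based queue (a hypothetical implementation of mine; real lock-free queues place their linearization points at CAS instructions instead).

    import java.util.concurrent.locks.ReentrantLock;

    // A minimal bounded queue with its linearization points (LPs) marked.
    class SimpleQueue<T> {
        private final Object[] items = new Object[16];  // illustrative capacity
        private int head = 0, tail = 0;
        private final ReentrantLock lock = new ReentrantLock();

        public void enqueue(T x) {
            lock.lock();
            try {
                if (tail - head == items.length) throw new IllegalStateException("full");
                items[tail % items.length] = x;
                tail++;                                  // LP: the tail update
            } finally { lock.unlock(); }
        }

        @SuppressWarnings("unchecked")
        public T dequeue() {
            lock.lock();
            try {
                if (tail == head) throw new IllegalStateException("empty");
                T x = (T) items[head % items.length];
                head++;                                  // LP: the head update
                return x;
            } finally { lock.unlock(); }
        }

        @SuppressWarnings("unchecked")
        public T peek() {
            lock.lock();
            try {
                if (tail == head) throw new IllegalStateException("empty");
                return (T) items[head % items.length];   // LP: the read/return
            } finally { lock.unlock(); }
        }
    }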

Now we are ready to raise the following question:

How do we test linearizability? Given the implementation of a concurrent data structure, how do we formally verify that it is linearizable?

Answering this question is important, as linearizability implies observational refinement [4], which is widely used for formal reasoning about concurrent data structures.
According to [5] and [6], linearizability can be verified by finding all possible linearization points (LPs) of all the methods of a concurrent data structure and then searching for a counterexample: an ordering of LPs that violates the sequential specification of the data structure.
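
Here is a brute-force sketch of that idea for a single integer register initialized to 0; the operation encoding, names, and sample histories are my own assumptions, not taken from [5] or [6]. It searches for an ordering of LPs that respects real-time order and the sequential specification; if none exists, the history itself is a counterexample.

    import java.util.ArrayList;
    import java.util.List;

    class LinearizabilityCheck {
        // One completed operation: kind is "W" (write) or "R" (read), value is
        // the value written or returned, inv/resp are invocation/response times.
        record Op(String kind, int value, long inv, long resp) {}

        static boolean linearizable(List<Op> history) {
            return search(new ArrayList<>(history), 0);  // register starts at 0
        }

        static boolean search(List<Op> remaining, int current) {
            if (remaining.isEmpty()) return true;        // all LPs placed legally
            for (Op op : remaining) {
                // Real-time order: op may be linearized next only if no other
                // pending operation responded before op was even invoked.
                boolean minimal = remaining.stream()
                        .noneMatch(o -> o != op && o.resp() < op.inv());
                if (!minimal) continue;
                // Sequential spec: a read must return the latest written value.
                if (op.kind().equals("R") && op.value() != current) continue;
                List<Op> rest = new ArrayList<>(remaining);
                rest.remove(op);
                int next = op.kind().equals("W") ? op.value() : current;
                if (search(rest, next)) return true;
            }
            return false;                                // no legal ordering exists
        }

        public static void main(String[] args) {
            // W(1) responds at t=10, yet a read invoked at t=20 returns 0.
            List<Op> bad = List.of(new Op("W", 1, 0, 10), new Op("R", 0, 20, 30));
            // W(1) overlaps a read returning 1: a valid LP ordering exists.
            List<Op> good = List.of(new Op("W", 1, 0, 10), new Op("R", 1, 5, 15));
            System.out.println(linearizable(bad));   // false
            System.out.println(linearizable(good));  // true
        }
    }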

Sequential Consistency

Unlike linearizability, sequential consistency (SC) preserves only the local ordering of program instructions: the system may freely interleave operations coming from different clients, but it must preserve the order of instructions coming from the same client. Although SC is a weaker consistency guarantee than linearizability, it still requires operations to take effect one at a time, as if they were executed atomically on a single copy of the object.

SC is not compositional, and modern microprocessors do not offer sequential consistency by default. However, programmers can explicitly ask for SC by using compiler directives, which compile down to memory fences that force the CPU to synchronize its local copy of the shared object with the other copies. Moreover, as shown in [4], sequential consistency implies observational refinement provided the threads do not interfere with each other except through the shared concurrent data structure.
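
The classic store-buffering litmus test shows the difference. In the Java sketch below (my own example), marking x and y volatile is the language-level request for SC: the JVM inserts the necessary fences, and the outcome r1 == 0 && r2 == 0, which plain fields permit, becomes impossible.

    class StoreBuffering {
        static volatile int x = 0, y = 0;  // drop 'volatile' and (0, 0) is allowed
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> { x = 1; r1 = y; });
            Thread t2 = new Thread(() -> { y = 1; r2 = x; });
            t1.start(); t2.start();
            t1.join();  t2.join();
            // Under SC, some thread's write precedes the other's read, so at
            // least one of r1, r2 is 1; (0, 0) would require a cyclic order.
            System.out.println("r1=" + r1 + ", r2=" + r2);
        }
    }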

Causal Consistency

Causal consistency is a weaker consistency model that allows asynchronous operation between two or more clients. It allows CPUs/nodes to write to their respective local copies of the shared object without the “immediate” need for those copies to be in sync with each other. Causal consistency is stronger than eventual consistency but weaker than SC and linearizability.

According to [7] and [8], Causal Consistency can be formally defined as:

Causal consistency captures the potential causal relationships between operations and guarantees that all processes observe causally related operations in a common order. In other words, all processes in the system agree on the order of causally related operations; they may disagree on the order of operations that are causally unrelated. Here, a causal relation between two operations A and B means that one depends on the other: for example, A writes to a memory location, and B reads that location before performing its own write, making B’s write causally dependent on A’s.
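
A standard mechanism for tracking this relation (my addition; the definition above does not prescribe one) is the vector clock, sketched in Java below. Operations whose clocks are ordered componentwise are causally related and must be observed in that order; incomparable clocks mark concurrent operations whose order processes may disagree on.

    // Hypothetical vector-clock sketch for a system of n processes.
    class VectorClock {
        final int[] clock;
        final int me;  // index of the owning process

        VectorClock(int numProcesses, int me) {
            this.clock = new int[numProcesses];
            this.me = me;
        }

        void tick() { clock[me]++; }                 // local event (e.g., a write)

        void onReceive(int[] other) {                // merge clocks on message receipt
            for (int i = 0; i < clock.length; i++)
                clock[i] = Math.max(clock[i], other[i]);
            clock[me]++;
        }

        // a happened-before b iff a <= b componentwise and a != b; if neither
        // direction holds, the operations are concurrent (causally unrelated).
        static boolean happenedBefore(int[] a, int[] b) {
            boolean strictlyLess = false;
            for (int i = 0; i < a.length; i++) {
                if (a[i] > b[i]) return false;
                if (a[i] < b[i]) strictlyLess = true;
            }
            return strictlyLess;
        }
    }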

Eventual Consistency – CAP theorem

Under a network partition, there is no way for all the nodes of a distributed system to communicate with each other, so the only way to keep them consistent is to compromise availability, i.e., block all the nodes until the network is restored. Well… there is an alternative: eventual consistency. The dilemma here is between consistency and availability, and it is formalized by the CAP theorem, which states that out of consistency, availability, and partition tolerance, a system can guarantee only two.

Eventual consistency ensures that all the nodes will eventually become consistent with each other, and so it favors availability over consistency.
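
One common way to realize this (my illustration; the post does not prescribe a mechanism) is a last-writer-wins register: each replica accepts writes locally, tags them with a timestamp, and replicas converge by keeping the highest-timestamped value whenever they exchange state.

    // A minimal last-writer-wins (LWW) register sketch. Timestamp ties and
    // clock skew are ignored here for brevity.
    class LwwRegister {
        private long timestamp = 0;
        private String value = null;

        // Local write: stays available even while the network is partitioned.
        synchronized void write(String v, long ts) {
            if (ts > timestamp) { timestamp = ts; value = v; }
        }

        // Anti-entropy merge: max() is commutative, associative, and idempotent,
        // so replicas converge to the same value in any merge order.
        synchronized void merge(long otherTs, String otherValue) {
            write(otherValue, otherTs);
        }

        synchronized String read() { return value; }
    }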

References:

[1] Weak consistency, Wikipedia: https://en.wikipedia.org/wiki/Weak_consistency
[2] Understanding memory ordering: https://www.internalpointers.com/post/understanding-memory-ordering
[3] Causal Linearizability: Compositionality for Partially Ordered Executions: https://arxiv.org/pdf/1802.01866.pdf
[4] Abstraction for Concurrent Objects, by Peter O’Hearn et al.: http://www0.cs.ucl.ac.uk/staff/p.ohearn/papers/concurrent-objects-esop09.pdf
[5] Proving Concurrent Data Structures Linearizable: https://web.njit.edu/~ineamtiu/pubs/issre16singh.pdf
[6] The Art of Multiprocessor Programming, by Maurice Herlihy and Nir Shavit (book)
[7] Causal Consistency: Beyond Memory: https://hal.archives-ouvertes.fr/hal-01286755/document
[8] Causal Consistency, Wikipedia: https://en.wikipedia.org/wiki/Causal_consistency

 
