Common Concurrency Bug Patterns
Humans tend to make the same mistakes again and again, and we software developers are no different. Hello everyone, in this blog post I'm going to talk about common concurrency bug patterns. Bug patterns give good insight into developer psychology, specifically the assumptions we make about programming languages, compilers, third-party code, the OS scheduler, and the hardware architecture. Interactive IDEs and static or dynamic concurrency bug-finding tools can search specifically for these common patterns and prevent such bugs from escaping into production code. After a survey of the existing literature and of concurrency bugs in open-source software systems such as Memcached, Apache, and MySQL, I compiled the following table of common concurrency bug patterns:
Table 1: Concurrency bug patterns
S. No. | Bug Type | Bug Name | Reference |
---|---|---|---|
1 | Atomicity Violation | Non-atomicity of compound operators | |
2 | Atomicity Violation | Interleaving between time-of-check and time-of-access | |
3 | Atomicity Violation | Using wrong lock or no lock | |
4 | Atomicity Violation | Interleavings between multiple atomic instructions or code blocks (Two access bug pattern) | |
5 | Atomicity Violation | Unlock before I/O operation | |
6 | Atomicity Violation | Non-atomic, lazy initialization of shared variables | |
7 | Atomicity Violation | Shared variables passed as parameters to synchronized functions | |
8 | Atomicity Violation | Hardware-dependent atomicity of reads and writes | |
9 | Order Violation | Order violation of shared variable access after thread creation | |
10 | Order Violation | Mutex reinitialized or destroyed while some thread(s) is/are still using it | |
11 | Order Violation | Expected order between wait() and notify() | |
12 | Order Violation | Forced interleavings by explicit delays() and sleeps() | |
13 | Order Violation | Spurious wakeups from CondVar.Wait() | |
14 | Data Races | Concurrent use of generic (non-concurrent) data collections | |
15 | Data Races | Locking on object returned by getClass() method | |
16 | Data Races | Locking on non-static variables for protecting static shared variables | |
17 | Data Races | Incorrect usage of Pthread APIs | |
18 | Deadlock | Releasing and retaking outer-lock | |
19 | Deadlock | Waiting on a dead or finished thread | |
20 | Deadlock | Locking on objects that can be reused | |
21 | Deadlock | Unlock the mutex upon exception from within the critical section | |
Section 1: Atomicity Violation
1.1: Non-atomicity of compound operators
Compound operators like X++ are not atomic; the compiler usually translates them into the following three hardware instructions:
- Retrieve the value of X from memory
- Increment the value of X
- Store the incremented value into the memory again
Now, consider the following example:
What should be the output? 0, 1, or 2?
Well, the output can be either 1 or 2. The value of 'i' will be one when both tasks read the old value of 'i' at the same time and each overwrites the other's increment.
static int i = 0;
var task1 = Task.Run(() => {
i++;
});
var task2 = Task.Run(() => {
i++;
});
Task.WaitAll(task1, task2); // block until both tasks complete
print(i);
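The standard fix is to make the increment itself atomic. Here is a minimal Java sketch of the same scenario (the post's snippets are C#-flavored pseudocode; the class and variable names here are mine), using AtomicInteger so the read-modify-write happens as one indivisible step:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIncrement {
    static final AtomicInteger i = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        // incrementAndGet() is one indivisible read-modify-write,
        // so the two increments can no longer overwrite each other.
        Thread t1 = new Thread(i::incrementAndGet);
        Thread t2 = new Thread(i::incrementAndGet);
        t1.start(); t2.start();
        t1.join(); t2.join(); // wait for both increments to finish
        System.out.println(i.get()); // always 2: no lost update
    }
}
```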
1.2: Interleaving between time-of-check and time-of-access
What should be the output of the following code?
i == 1 or i == 2?
Come on, think harder!
Apparently, 'i' can be equal to either one or two, depending on the interleaving. Consider the following interleaving:
- Task 1: “if (i == 0)”
- Task 2: “if (i == 0)”
- Task 1: “i++”
- Task 2: “i++”
With this interleaving, the final value of 'i' will be 2.
static int i = 0;
var tid1 = Task.Run(() => {
if (i == 0){
i++;
}
});
var tid2 = Task.Run(() => {
if (i == 0){
i++;
}
});
Task.WaitAll(tid1, tid2); print(i);
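A sketch of the fix in Java (names are mine): the check and the increment must sit under one lock, so no other thread can interleave between the time-of-check and the time-of-access:

```java
public class CheckThenAct {
    static int i = 0;
    static final Object lock = new Object();

    static void incrementIfZero() {
        // The check and the update form one atomic step under the lock,
        // so no thread can observe or modify i between them.
        synchronized (lock) {
            if (i == 0) {
                i++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(CheckThenAct::incrementIfZero);
        Thread t2 = new Thread(CheckThenAct::incrementIfZero);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(i); // always 1: the later task sees i == 1 and skips
    }
}
```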
1.3: Using wrong lock or no lock
In this bug pattern, two or more concurrent operations access the same shared variable while taking no lock at all, or while taking two different locks.
Consider the following code:
Again, the value of 'i' can be either 1 or 2. The two tasks take different locks, so they don't provide mutual exclusion for the operation i++. If, instead of taking two different locks, both tasks take the same lock, say on 'a', then the final value of 'i' will always be 2.
static int i = 0;
static object a = new object(), b = new object();
var tid1 = Task.Run(() => {
lock(a){ i++; }
});
var tid2 = Task.Run(() => {
lock(b){ i++; }
});
Task.WaitAll(tid1, tid2); print(i);
1.4: Interleavings between multiple atomic instructions or code blocks (Two access bug pattern)
Atomic functions guarantee that a statement or a group of statements executes atomically, i.e., indivisibly. For instance, in the following example, atomic_load and atomic_add atomically read and increment the value, irrespective of the hardware or the integer size.
So, given the above information, now try to predict the output of the following code:
The output is still the same, i.e., ‘i’ can be either one or two. Are you wondering why? Consider the following interleaving:
- Task 1: atomic_load(&i)
- Task 2: atomic_load(&i)
- Task 1: atomic_add(&i, 1)
- Task 2: atomic_add(&i, 1)
With this interleaving, the value of 'i' is two. So the problem here is that two atomic statements or blocks can interleave: an atomic block executes indivisibly, but there is no ordering guarantee between two separate atomic blocks. The fix is to merge the two atomic blocks into a single one, e.g., atomic_cmp_and_add. After the fix, the value of 'i' is always one.
static int i = 0;
var tid1 = Task.Run(() => {
if (atomic_load(&i) == 0){
atomic_add(&i, 1);
}
});
var tid2 = Task.Run(() => {
if (atomic_load(&i) == 0){
atomic_add(&i, 1);
}
});
Task.WaitAll(tid1, tid2); print(i);
---------- Correct Solution ------------
static int i = 0;
var tid1 = Task.Run(() => {
atomic_cmp_and_add(&i, 0, 1);
});
var tid2 = Task.Run(() => {
atomic_cmp_and_add(&i, 0, 1);
});
Task.WaitAll(tid1, tid2); print(i);
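The pseudo-function atomic_cmp_and_add maps onto real compare-and-swap primitives. In Java, for instance, AtomicInteger.compareAndSet() merges the check and the update into one indivisible operation; a small sketch (class name is mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CompareAndSetDemo {
    static final AtomicInteger i = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        // compareAndSet(0, 1) merges "check i == 0" and "set i = 1" into a
        // single hardware compare-and-swap; at most one thread can succeed.
        Thread t1 = new Thread(() -> i.compareAndSet(0, 1));
        Thread t2 = new Thread(() -> i.compareAndSet(0, 1));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(i.get()); // always 1
    }
}
```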
1.5: Unlock before I/O operation
This bug pattern is quite similar to the previous one. I/O operations are more expensive than memory operations, and lengthy I/O inside a critical section is even more costly, because it makes every thread that wants to enter the critical section wait.
One such bug was found in the Linux kernel [17]: there was an expensive I/O operation inside a critical section, and the developer released the lock before the I/O operation and retook it afterwards. Doing so, however, compromised the atomicity of the critical section. Here's an example:
So, is the following code buggy? Yes. Releasing the lock around the I/O operation breaks the atomicity of the block, and the two locked regions can interleave with other threads in any manner.
global int count = 0; // Shared variable.
global int[] tidArray = default; // Shared Var
function void foo()
{ // This function should be executed atomically.
lock(&count){
count++;
} // Lock released before IO operation
int local_tid = IO.GetProcessTid();
lock(&count){
tidArray[count] = local_tid;
}
}
1.6: Non-atomic, lazy initialization of shared variables
Lazy initialization of shared variables should always be atomic. Double-checked locking, as shown in the code below, is perhaps the most widely used method for "atomic" lazy initialization of shared variables [2].
Take a look at the code. Is the getList() method thread-safe? Well, it's not.
The statement 'SharedVar = new List<int>()' is broken down by the compiler into the following steps:
- temp = Malloc(sizeof(List(1))); // Allocate space on heap
- temp->InitializeHeapMemory(); // Initialize the allocated heap space with zero.
- SharedVar = temp;
Note that steps 2 and 3 are independent of each other, and so the compiler can reorder them. Now take a look at the getList() function again, but this time assume the compiler emits the reordered version shown in the comments inside the lock. Do you notice the concurrency bug now?
Let's consider the following buggy interleaving:
- Task 1: 'SharedVar = temp' (the reference is published before initialization)
- Task 2: 'if (SharedVar == null)': sees a non-null reference and skips the lock
- Task 2: 'return SharedVar'
- Task 2 then accesses the uninitialized SharedVar, resulting in an *error*.
One potential solution to the double-checked locking bug is to mark SharedVar as volatile. Another possibility is to use explicit memory fences to prevent the compiler from reordering the statements.
class FooBar
{
private static List<int> SharedVar;
private static object SyncLock = new object();
// Double-checked locking lazy-initialization
public static List<int> getList() // intended to be thread-safe
{
if (SharedVar == null){
lock (SyncLock){
if (SharedVar == null)
{
SharedVar = new List<int>();
// After compiler re-orderings:
// temp = Malloc(sizeof(List));
// SharedVar = temp;
// temp->InitializeHeapMemory();
}
}
}
return SharedVar;
}
}
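For reference, here is a sketch of a correct double-checked locking implementation in Java (names are mine); the volatile keyword forbids publishing the reference before the object's initialization becomes visible to other threads:

```java
import java.util.ArrayList;
import java.util.List;

public class FooBarSafe {
    // volatile forbids the reorder that publishes the reference
    // before the constructor's writes become visible to other threads.
    private static volatile List<Integer> sharedVar;
    private static final Object syncLock = new Object();

    public static List<Integer> getList() {
        List<Integer> local = sharedVar; // single volatile read
        if (local == null) {
            synchronized (syncLock) {
                local = sharedVar;
                if (local == null) {
                    local = new ArrayList<>();
                    sharedVar = local; // safe publication
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        // Both calls must observe the same, fully-initialized list.
        System.out.println(getList() == getList());
    }
}
```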
1.7: Shared variables passed as parameters to synchronized functions
This bug pattern is due to the difference between the assumptions made by the user and the formal thread-safety guarantees provided by the developers. Before discussing the bug, take a look at C#'s ConcurrentQueue.Enqueue(T) method: it is thread-safe and ensures atomic operations on the shared concurrent queue. Now, let's look at the following code.
Is this code bug-free? Yes, it is. Now uncomment the lock blocks in both tasks. Is the code thread-safe now?
Nope. There is a data race: simultaneous reads (the 'Enqueue(Counter)' calls) and writes ('Counter++') on the shared variable 'Counter'. ConcurrentQueue.Enqueue() gives thread-safety guarantees only for the underlying ConcurrentQueue, not for the parameters passed to it; the parameters are read in a thread-unsafe manner. This type of bug is very common; for instance, take a look at the case studies by Roberson et al. [3]:
“Two atomicity violations are detected in the addAll methods of a data structure. These methods take a Collection as a parameter but they do not synchronize the parameter. Thus, changes to the Collection during execution of the addAll methods violates atomicity and can lead to exceptions. The third atomicity violation is in a Heap data structure. The insert method is synchronized, but it does not synchronize its parameter” (Sor benchmark)
static ConcurrentQueue<int> SharedVar = new ConcurrentQueue<int>();
static int Counter = 0;
static object CounterLock = new object();
var tid1 = Task.Run(() => {
SharedVar.Enqueue(Counter);
// lock (CounterLock) {
//     Counter++;
// }
});
var tid2 = Task.Run(() => {
SharedVar.Enqueue(Counter);
// lock (CounterLock) {
//     Counter++;
// }
});
Task.WaitAll(tid1, tid2);
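A sketch of one possible fix in Java (names are mine, and ConcurrentLinkedQueue stands in for C#'s ConcurrentQueue): the read of the parameter and the counter update are guarded by the same lock, so each task enqueues a distinct value:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class EnqueueCounter {
    static final ConcurrentLinkedQueue<Integer> sharedVar = new ConcurrentLinkedQueue<>();
    static int counter = 0;
    static final Object counterLock = new Object();

    static void enqueueAndBump() {
        // The queue is thread-safe, but the parameter expression is not:
        // guard the read of counter and its increment with one lock.
        synchronized (counterLock) {
            sharedVar.add(counter);
            counter++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(EnqueueCounter::enqueueAndBump);
        Thread t2 = new Thread(EnqueueCounter::enqueueAndBump);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(sharedVar); // always [0, 1]: each task enqueues a distinct value
    }
}
```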
1.8: Hardware-dependent atomicity of reads and writes
Consider the following code. What are the possible value(s) of the variable 'i'? 123? 124?
Yes, the value of ‘i’ can be 123 or 124 depending on the interleavings. However, other values of ‘i’ are also possible. Let’s consider the following scenario:
The variable 'i' is 64 bits wide, and the platform on which this code runs is 32-bit. This means that every read or write of 'i' takes two machine instructions: one to fetch/write the most significant 32 bits (MSBs) and another to fetch/write the least significant 32 bits (LSBs). These machine instructions can also interleave. For instance, consider the following interleaving:
- Task 1: writes the 32 MSBs to the memory location of 'i'
- Task 2: reads the 32 LSBs from the memory location of 'i'
- Task 1: writes the 32 LSBs to the memory location of 'i'
- Task 2: reads the 32 MSBs from the memory location of 'i'
The above interleaving results in the print() statement printing a corrupt value of 'i'. This bug can be resolved by declaring 'i' as volatile (where the language guarantees atomic 64-bit access for volatile variables) or by using atomic operations. This bug pattern is also common; for instance, take a look at the following Memcached fix: "Fix data race on stats due to an access w/o acquiring the lock" (memcached PR #573):
“On 64-bit x86 systems the dirty read there should be fine, but that is wrong for 32bit.”
static long i = 123; // 64-bit shared variable
var task1 = Task.Run(() => {
i++;
});
var task2 = Task.Run(() => {
print(i);
});
Task.WaitAll(task1, task2);
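In Java, the same hazard is spelled out in the language specification: reads and writes of non-volatile long and double fields are allowed to be non-atomic (JLS 17.7). A sketch of the fix (names are mine) using AtomicLong, which guarantees whole-64-bit reads and writes:

```java
import java.util.concurrent.atomic.AtomicLong;

public class LongTearingFix {
    // A plain 'long' field may legally be read/written as two 32-bit halves
    // on 32-bit JVMs (JLS 17.7); AtomicLong (or 'volatile long') guarantees
    // the full 64-bit value is accessed atomically.
    static final AtomicLong i = new AtomicLong(123);

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(i::incrementAndGet);
        Thread reader = new Thread(() -> {
            long v = i.get(); // never a torn, half-updated value: 123 or 124
        });
        writer.start(); reader.start();
        writer.join(); reader.join();
        System.out.println(i.get()); // always 124 once both threads finish
    }
}
```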
Section 2: Order Violation
2.1 Order violation of shared variable access after thread creation
In this bug pattern, the developer implicitly assumes some order between the parent and the child thread right after the child thread is created. For instance, consider the following code:
Do you see the bug? The developer assumes that the statement 'ParentTID = pthread_self()' in main() executes before the printf() in fun(). This assumption doesn't hold in every case: fun() may run first and print an uninitialized parent ID.
Now consider the second, C#-based example. Is this code buggy?
Yes, it is, and the reason is the same: the developer assumes that the write to the variable 'parent' precedes the read of it inside the task.
static pthread_t ParentTID;
void *fun(void *arg);
void main() {
pthread_t tid;
pthread_create(&tid, NULL, fun, NULL);
ParentTID = pthread_self();
}
void *fun(void *arg) {
printf("Hi Parent: %lu\n", (size_t)ParentTID);
return NULL;
}
// ----------- Second Example (C#) ------------
static Task parent = null;
void main() {
parent = Task.Run(() => { print(parent); });
}
2.2 Mutex destroyed or re-initialized while some thread(s) is/are still using it
Mutex objects, or shared objects used as locks (in C#, Java), synchronize access to a shared resource. Destroying the lock or mutex while some threads are still using it can lead to a crash (using a destroyed object is undefined behavior) or to data races.
For instance, consider the following code: start_threads() may return, and thereby destroy the stack-allocated mutex, before the worker threads have finished. The developer assumes some order between the worker threads and the main thread, but nothing enforces that assumption.
// Source: https://wiki.sei.cmu.edu/confluence/display/cplusplus/CON50-CPP.+Do+not+destroy+a+mutex+while+it+is+locked
void do_work(size_t i, std::mutex *pm) {
std::lock_guard<std::mutex> lk(*pm);
// Access data protected by the lock.
}
void start_threads() {
std::mutex m;
for (size_t i = 0; i < 10; ++i) {
std::thread(do_work, i, &m).detach(); // detached: may outlive m
}
}
2.3 Expected order between wait() and notify()
Consider the following code. Is it buggy? Obviously it is! Then what is the bug?
In this case, the developer assumes some order between the pthread_cond_wait() call in main() and the pthread_cond_signal() call in fun(). The main thread waits for the worker thread to finish using the wait/signal synchronization mechanism. However, consider the case where pthread_cond_signal() executes before the main thread reaches pthread_cond_wait(): the signal is lost and the main thread waits forever. This can lead to a deadlock. This bug pattern is very common; for instance, check out the following Memcached bug: "Possible deadlock bug between Slab rebalancer and Dispatcher thread" (memcached issue #738).
static pthread_cond_t WorkDone = PTHREAD_COND_INITIALIZER;
static pthread_mutex_t WorkLock = PTHREAD_MUTEX_INITIALIZER;
void *fun(void *arg);
void main() {
pthread_t tid;
pthread_create(&tid, NULL, fun, NULL);
pthread_mutex_lock(&WorkLock);
pthread_cond_wait(&WorkDone, &WorkLock); // wait for worker thread to finish
pthread_mutex_unlock(&WorkLock);
}
void *fun(void *arg) {
// Do some work
pthread_cond_signal(&WorkDone); // signal main thread
return NULL;
}
2.4 Forced interleavings by explicit delays() and sleeps()
Forcing interleavings between threads with delay() and sleep() statements is always unsafe. For instance, is the following code buggy? The developer forces 'counter++' in the worker thread to execute before the printf() in main() by sleeping. However, the expected order can still be violated. Consider the case where this code runs on a single-core machine and the OS scheduler doesn't schedule the worker thread for, say, two seconds (unlikely, but possible). In that case, the main thread will report the value of the shared variable 'counter' as zero.
static int counter = 0;
void *fun(void *arg);
void main() {
pthread_t tid;
pthread_create(&tid, NULL, fun, NULL);
sleep(2); // "surely" two seconds is enough for the worker to finish?
printf("%d\n", counter);
}
void *fun(void *arg) {
printf("Work done\n");
counter++;
return NULL;
}
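A sketch of the fix in Java (names are mine): replace the sleep with join(), which both enforces the order and establishes a happens-before edge for the write to the counter:

```java
public class JoinNotSleep {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> counter++);
        worker.start();
        // join() blocks until the worker has actually finished. Unlike sleep(),
        // it enforces the order instead of merely hoping for it, and it also
        // establishes a happens-before edge for the write to counter.
        worker.join();
        System.out.println(counter); // always 1
    }
}
```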
2.5 Spurious wakeups from CondVar.Wait()
Functions like pthread_cond_wait() and pthread_cond_timedwait() wait for either timeout or another thread to signal the conditional variable. However, these functions can spuriously wake up without any external trigger. According to a user on StackOverflow:
“I have a production system that exhibits this behavior. A thread waits on a signal that there is a message in the queue. In busy periods, up to 20% of the wakeups are spurious (i.e., when it wakes, there is nothing in the queue).”
20% spurious wakeups, if these numbers are correct, is a huge figure; it means that one in five wakeups is spurious.
Let's consider the following code. The consumer thread waits on 'condition.wait()' for an element to be added to the list. However, 'condition.wait()' can return spuriously, i.e., with the list still empty, which may result in a crash when the element is consumed.
The fix is to wrap the 'condition.wait()' statement in a loop, i.e., simply replace the 'if' check with a 'while' loop.
// Source: https://wiki.sei.cmu.edu/confluence/display/cplusplus/CON54-CPP.+Wrap+functions+that+can+spuriously+wake+up+in+a+loop
struct Node {
void *node;
struct Node *next;
};
static Node list;
static std::mutex m;
static std::condition_variable condition;
void consume_list_element() {
std::unique_lock<std::mutex> lk(m);
if (list.next == nullptr) {
condition.wait(lk);
}
// Proceed when condition holds.
}
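The same predicate-in-a-loop idiom in Java (names are mine); wait() may return spuriously, so the condition is re-checked on every wakeup:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class SpuriousWakeupSafe {
    static final Queue<Integer> queue = new ArrayDeque<>();
    static final Object lock = new Object();

    static int consume() throws InterruptedException {
        synchronized (lock) {
            // 'while', not 'if': re-check the predicate after every wakeup,
            // so a spurious return from wait() simply loops back to waiting.
            while (queue.isEmpty()) {
                lock.wait();
            }
            return queue.poll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            try {
                System.out.println(consume());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        synchronized (lock) {
            queue.add(42);
            lock.notify(); // wake the consumer; it re-checks the predicate
        }
        consumer.join();
    }
}
```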
Section 3: Data Races
3.1 Concurrent use of generic (non-concurrent) data collections
Li et al. [11] showed through open-source and closed-source case studies that it is very common for developers to use generic (non-thread-safe) data collections in multi-threaded programs without any synchronization. Their tool, TSVD, found more than a thousand thread-safety bugs on data collections, many of them in Microsoft's internal codebase. From my inspection of the reported bugs, they mostly sit in mocks prepared by developers for testing, and they were easy to trigger with random delay fuzzing.
Let's look at the following code, which instantiates this bug pattern: the two tasks simultaneously operate on the shared, non-concurrent data structure (List<int>), which can corrupt its internal state or throw an exception.
static List<int> SharedVar = new List<int>();
var task1 = Task.Run(() => {
SharedVar.Add(1);
});
var task2 = Task.Run(() => {
SharedVar.Clear();
});
Task.WaitAll(task1, task2);
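A sketch of the fix in Java (names are mine): swap the plain collection for a concurrent one (or wrap it with Collections.synchronizedList), so each individual operation is atomic:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SafeCollection {
    // CopyOnWriteArrayList (or Collections.synchronizedList) makes each
    // individual add/clear atomic; a plain ArrayList gives no such guarantee
    // and can throw or corrupt itself under concurrent mutation.
    static final List<Integer> sharedVar = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> sharedVar.add(1));
        Thread t2 = new Thread(() -> sharedVar.clear());
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Final contents depend on interleaving ([] or [1]),
        // but the list's internal state is always consistent.
        System.out.println(sharedVar.size() <= 1);
    }
}
```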
3.2 Locking on object returned by getClass() method
Java's getClass() method returns the runtime class object of the current instance. The returned class object is the same one that is locked by static synchronized methods. However, getClass() should not be used for synchronization, especially in a method that can be inherited.
For instance, consider the following code: the addFamilyMember() method uses 'this.getClass()' for synchronization and is inherited by the Child class. Now consider a Parent object and a Child object calling addFamilyMember() simultaneously. This leads to a data race despite the lock statement, because the class object returned by getClass() is different for Parent and for Child. This bug pattern is explained in detail in the CERT Java concurrency guide [6].
class Parent {
static List<Parent> Family = new ArrayList<>(); // Shared variable
void addFamilyMember(Parent member) {
synchronized (this.getClass()) {
Family.add(member);
}
}
}
class Child extends Parent {
void initFamilyMember(Parent member) {
addFamilyMember(member);
// Do some other work
}
}
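A sketch of the fix in Java (names are mine): lock on a single dedicated static object that Parent and every subclass share, instead of on getClass():

```java
import java.util.ArrayList;
import java.util.List;

class Parent {
    static final List<Parent> family = new ArrayList<>();
    // One dedicated lock object shared by Parent and every subclass;
    // unlike getClass(), its identity does not change under inheritance.
    private static final Object familyLock = new Object();

    void addFamilyMember(Parent member) {
        synchronized (familyLock) {
            family.add(member);
        }
    }
}

class Child extends Parent { }

public class GetClassLockFix {
    public static void main(String[] args) throws InterruptedException {
        Parent p = new Parent();
        Child c = new Child();
        Thread t1 = new Thread(() -> { for (int k = 0; k < 1000; k++) p.addFamilyMember(p); });
        Thread t2 = new Thread(() -> { for (int k = 0; k < 1000; k++) c.addFamilyMember(c); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(Parent.family.size()); // always 2000: one lock, no lost updates
    }
}
```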
3.3 Locking on non-static variables for protecting static shared variables
It is very common for developers to use the current object instance ('this') for synchronization. However, doing so to protect static shared variables can lead to a data race.
For instance, take a look at the following code: in the addFamilyMember() method, the current object instance is used to synchronize access to the static variable 'Family'. Now consider two different instances of the class 'Parent' calling addFamilyMember() simultaneously. The 'lock(this)' statement takes locks on two different objects, and thus does not synchronize access to the shared static variable.
class Parent {
static List<int> Family = new List<int>(); // Shared Variable
void addFamilyMember(Parent member) {
lock (this) {
Family.Add(member);
}
}
}
3.4 Incorrect usage of Pthread APIs
Double initialization of mutexes or condition variables, unlocking an already unlocked mutex, and double-locking a non-reentrant mutex are all examples of incorrect usage of the Pthread APIs. These bugs are very common; for instance, take a look at the following Memcached bug: "Invoking pthread_mutex_unlock() on an already unlocked mutex" (memcached issue #741).
Section 4: Deadlock
4.1 Releasing and retaking outer-lock
In the case of nested locks, extreme caution is required when releasing or acquiring locks. Releasing and retaking the outer lock can cause a deadlock.
For instance, consider the following example: in function2(), the developer releases the outer lock to improve performance while the inner lock is still held. Another thread can now acquire the outer lock and block on the inner lock, while the first thread blocks trying to retake the outer lock: a deadlock. The CERT Java concurrency guidelines [6] explain this bug pattern (LCK07-J) in detail.
void function1() {
lock(Outer);
lock(Inner);
function2();
unlock(Inner);
unlock(Outer);
}
void function2() {
// Do some work
unlock(Outer);
// Do some more work
lock(Outer);
}
4.2 Waiting on a dead or finished thread
It is very common for the main thread to spawn multiple worker threads and then wait for those worker threads to finish. However, problems arise when (1) a worker thread was never successfully created, so the thread handle being joined is invalid, or (2) a worker thread exits abnormally due to an exception or a user interrupt. In such cases, the parent thread may block in join() waiting for a worker that will never signal completion, which can cause a deadlock.
For instance, consider the following code example: the start_threads() method spawns ten worker threads and then waits for them to finish. But what if a worker thread was never actually started, or terminated abnormally due to an unhandled exception in do_work()? Joining such a thread can block forever or invoke undefined behavior. This pattern is also common; for example, look at the following Memcached bugs: "Attempt to join assoc_maintainer thread even if it does not exist" (memcached issue #733) and "Memcached attempts to join lru_maintainer thread even if it does not exist" (memcached issue #685).
void do_work(size_t i, void *arg) {
// Do some work
IO.File.Write();
// Do some more work
}
void start_threads() {
std::thread threads[10];
for (size_t i = 0; i < 10; ++i) {
threads[i] = std::thread(do_work, i, NULL);
}
for (size_t i = 0; i < 10; ++i) {
threads[i].join();
}
}
4.3 Locking on objects that can be reused
Programming languages like Java support boxed types and autoboxing, i.e., the automatic conversion of primitive values into boxed ones. Basically, boxing replaces a value type with a reference type. Using boxed types or interned strings as locks can have unexpected consequences: the Java runtime maintains a pool of interned strings and a cache of small boxed values, so two seemingly independent initializations of a boxed type may refer to the same object at runtime.
For example, consider the following code. Will it work correctly? Not necessarily. With autoboxing, 'lock1' and 'lock2' may refer to the same cached Integer object at runtime, so what look like two distinct locks are in fact one lock, and any unrelated code that locks the same cached Integer can contend, or even deadlock, with this code. (Note that 'new Integer(1)' always allocates a fresh object; it is autoboxed initialization like 'Integer lock1 = 1;' that goes through the cache.) You can read more about this bug pattern (LCK01-J) in the CERT Java guidelines [6].
class Parent {
private static Integer lock1 = 1; // autoboxed: may be a shared, cached object
private static Integer lock2 = 1; // may be the very same object as lock1
private static int counter = 0;
public void IncrementCounter(int val) {
lock(lock1) {
lock(lock2) {
counter += val;
}
}
}
}
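The caching behavior itself is easy to demonstrate with a small Java sketch (the class name is mine; the cache is guaranteed by the language for values in at least the range -128..127):

```java
public class BoxedLockDemo {
    public static void main(String[] args) {
        // Autoboxing goes through Integer.valueOf(), which caches values in
        // at least the range -128..127, so both variables end up referring
        // to the SAME object, i.e., the same monitor.
        Integer lock1 = 1;
        Integer lock2 = 1;
        System.out.println(lock1 == lock2);

        // 'new Integer(1)' always allocates, so these are distinct objects.
        Integer distinct1 = new Integer(1);
        Integer distinct2 = new Integer(1);
        System.out.println(distinct1 == distinct2);
    }
}
```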
4.4 Unlock the mutex upon exception from within the critical section
It is essential to handle all possible exceptions from within the critical section: an unhandled exception exits the critical section abnormally without releasing the locks. All I/O operations and heap memory allocations inside a critical section should handle exceptions carefully, or the lock should be managed by a scope-based mechanism (RAII in C++, try/finally in Java and C#). These bugs are common; for instance, take a look at the following Memcached bug: "item_cachedump not unlock when malloc falied" (memcached issue #461).
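A sketch of the exception-safe idiom in Java (names are mine): acquire the lock, do the risky work in a try block, and release in finally, so the lock is freed even on the exceptional path:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ExceptionSafeUnlock {
    static final ReentrantLock lock = new ReentrantLock();

    static void criticalSection() {
        lock.lock();
        try {
            // Anything here may throw (I/O, allocation, ...).
            throw new RuntimeException("failure inside the critical section");
        } finally {
            lock.unlock(); // runs even on the exceptional path
        }
    }

    public static void main(String[] args) {
        try {
            criticalSection();
        } catch (RuntimeException e) {
            // The lock was released anyway, so other threads can proceed.
        }
        System.out.println(lock.isLocked()); // false: no leaked lock
    }
}
```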
References:
[1] Hong, Shin, and Moonzoo Kim. “Effective pattern-driven concurrency bug detection for operating systems.” Journal of Systems and Software 86.2 (2013): 377-388.
[2] Double-Checked Locking is Broken: https://www.cs.cornell.edu/courses/cs6120/2019fa/blog/double-checked-locking/
[3] Roberson, Michael, and Chandrasekhar Boyapati. “A Static Analysis for Automatic Detection of Atomicity Violations in Java Programs.” Dept. Elect. Eng. Comput. Sci., Univ. Michigan, Ann Arbor, MI, USA, Tech. Rep. CSE-TR-569-11 (2011).
[4] Lu, Shan, et al. “Learning from mistakes: a comprehensive study on real world concurrency bug characteristics.” Proceedings of the 13th international conference on Architectural support for programming languages and operating systems. 2008.
[5] Farchi, Eitan, Yarden Nir, and Shmuel Ur. “Concurrent bug patterns and how to test them.” Proceedings international parallel and distributed processing symposium. IEEE, 2003.
[6] CERT Java Concurrency Guidelines: https://resources.sei.cmu.edu/asset_files/TechnicalReport/2010_005_001_15239.pdf
[7] Mutation Operators for Concurrent Java: https://research.cs.queensu.ca/TechReports/Reports/2006-520.pdf
[8] Wu, Zhendong, Kai Lu, and Xiaoping Wang. “Surveying concurrency bug detectors based on types of detected bugs.” Science China Information Sciences 60.3 (2017): 1-27.
[9] Park, Soyeon, Shan Lu, and Yuanyuan Zhou. “CTrigger: exposing atomicity violation bugs from their hiding places.” Proceedings of the 14th international conference on Architectural support for programming languages and operating systems. 2009.
[10] Jalbert, Kevin, and Jeremy S. Bradbury. “Using clone detection to identify bugs in concurrent software.” 2010 IEEE International Conference on Software Maintenance. IEEE, 2010.
[11] Li, Guangpu, et al. “Efficient scalable thread-safety-violation detection: finding thousands of concurrency bugs during testing.” Proceedings of the 27th ACM Symposium on Operating Systems Principles. 2019.
[12] https://sungsoo.github.io/2013/12/26/concurrent-bug-patterns.html
[13] Fiedor, Jan, et al. “A uniform classification of common concurrency errors.” International Conference on Computer Aided Systems Theory. Springer, Berlin, Heidelberg, 2011.
[14] Common Concurrency Problems: https://pages.cs.wisc.edu/~remzi/OSTEP/threads-bugs.pdf
[15] Lourenço, João M., et al. “Discovering concurrency errors.” Lectures on Runtime Verification. Springer, Cham, 2018. 34-60.
[16] Boehm, Hans-Juergen. “How to Miscompile Programs with” Benign” Data Races.” HotPar. 2011.
[17] Concurrency Bug Detection through Improved Pattern Matching Using Semantic Information, MS Thesis. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.716.3864
[18] CERT C coding guidelines https://wiki.sei.cmu.edu/confluence/display/cplusplus/CON51-CPP.+Ensure+actively+held+locks+are+released+on+exceptional+conditions
[19] Hovemeyer, David, and William Pugh. “Finding concurrency bugs in java.” In Proceedings of the PODC Workshop on Concurrency and Synchronization in Java Programs. 2004.