C++1y Concurrent Data Structures


Parallelism and Concurrency in Modern C++

I often come across this situation: I have a program, or multiple programs running concurrently, that need to share some data. The shared data must be protected against read/write violations.

Well, this is known as the readers/writers problem: one of the traditional problems used to teach concurrency concepts in operating-system courses. The well-known solution is the lock-do-unlock idiom. Using a threading API that provides a mutex mechanism (say, POSIX threads), this can easily be implemented with one mutex per object or one mutex per critical section. For the first scenario, I have a better solution for object-oriented programming languages that I'm gonna describe in this post. I'll also provide a possible syntax for C++.


Suppose we have a class whose instances are supposed to be shared between threads. Of course, operations that modify an instance should not run concurrently. Well, this is not true for all possible scenarios: we may want to run two independent operations, say o1 and o2, concurrently if we can be sure it's safe. For example, we can have a data structure containing two independent internal structures and providing two independent operations, o1 and o2, each one modifying one of the internal structures. Anyway, that's not the common case I wanna present.

The well-known solution to the readers/writers problem is to guard the critical section with a mutex. This is a bad solution. Why? First, it's inelegant. Second, it lacks readability: which mutex guards which data? Third, it's error-prone: what happens when you forget an unlock? Obviously, deadlock.
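The "forgotten unlock" failure is easy to demonstrate. Here is a minimal sketch (the function name and the early-return condition are mine, for illustration): an early return skips the unlock, and the mutex stays held forever.

```cpp
#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void buggy(bool early_exit) {
    pthread_mutex_lock(&m);
    if (early_exit)
        return; // forgot to unlock: the next lock attempt blocks forever
    pthread_mutex_unlock(&m);
}
```

After `buggy(true)` runs, any thread that tries to lock `m` again deadlocks.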

Here is a small example from my experience with the AIT library. I was implementing the Asynchronous Backtracking algorithm in a multi-agent fashion. Each agent has two threads: a message reader and an algorithm workhorse. The first reads messages from a TCP socket and stores them in a queue; the second reads a message from that queue and erases the recently read message. Here is the code:

// Algorithm workhorse
pthread_mutex_lock(&this->messageRW);   // lock the door
P_Message x = this->messageQueue.pop(); // do the job
pthread_mutex_unlock(&this->messageRW); // get out of there
// somewhere else, in another thread
while (true) {
    P_Message message;
    // ... fill `message` from the TCP socket ...
    pthread_mutex_lock(&(solver->messageRW));   // lock the door
    solver->messageQueue.push(message);         // do the job
    pthread_mutex_unlock(&(solver->messageRW)); // get out of there
}

That's not the end of the world. Everything works fine. But consider that there are scenarios in which you need to write the first part many, many times! That's the point: you probably don't want to wrap every piece of modifying code in ugly lock/unlock pairs.
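For completeness: C++11's RAII wrappers already remove the explicit unlock, though you still write the guard at every call site. A minimal sketch of the two snippets above, with `P_Message` replaced by `int` to keep it self-contained (the function names are mine):

```cpp
#include <mutex>
#include <queue>

std::mutex messageRW;
std::queue<int> messageQueue; // stand-in for the P_Message queue

void push_message(int m) {
    std::lock_guard<std::mutex> lock(messageRW); // unlocks on scope exit
    messageQueue.push(m);
}

int pop_message() {
    std::lock_guard<std::mutex> lock(messageRW); // even on exceptions
    int m = messageQueue.front();
    messageQueue.pop();
    return m;
}
```

This fixes the forgotten-unlock problem but not the verbosity: every accessor still needs its own guard line, which is exactly what the proposal below tries to eliminate.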

Better Solution

So far, we have seen how bad mutexes are 🙂 Just kidding. They're great. And I'm going to present a mutex-based solution that removes the lock/unlock pairs.

Suppose C++ had a keyword concurrent. The keyword comes before a compound type and the magic begins: every non-const accessor and modifier (in fact, every single method or operator implemented in the class/structure) becomes wrapped in a lock/unlock pair. For example, consider the std::queue class. Suppose it's safe to read the head of the queue and append a new item to its tail at the same time (well, I mean from two threads), but it's obviously not safe to enqueue two items simultaneously. In my solution, all I need to do is put the keyword concurrent before the declaration of the std::queue instance.

concurrent std::queue<protocols::csp::abt::P_Message> messageQueue;

This keyword makes the compiler generate code in which every non-const method of std::queue is wrapped in a lock/unlock pair. Consider the following method, implemented somewhere in the standard library:

std::queue::push_back(const T& item){
    // some of stdlib's secret code
}

This conceptually becomes:

std::queue::push_back(const T& item){
    this->mutex->lock();   // generated by the compiler
    // some of stdlib's secret code
    this->mutex->unlock(); // generated by the compiler
}

This way I can tell the compiler which function to call, or rather, to generate the code that calls the right function. But before that, I need to tell the compiler which classes are meant to be used concurrently. For that I can use the same keyword before the class definition. Adding the keyword causes the compiler to generate binary code for two versions of each non-const operation:

// somewhere in stdlib, accessed via #include <queue>
namespace std {
template <typename T>
concurrent class queue {
    // class definition
};
}
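The effect of the proposed keyword can be approximated in today's C++ without any language change, using a wrapper that serializes every access through one mutex. This is a sketch of the well-known monitor-object technique; the name `concurrent_wrapper` and its `apply` interface are mine, not part of the proposal:

```cpp
#include <mutex>
#include <queue>
#include <utility>

// Sketch: holds an object and runs every operation on it under one mutex.
template <typename T>
class concurrent_wrapper {
    T object;
    std::mutex m;
public:
    // Pass a callable that operates on the wrapped object; the call is
    // serialized against all other apply() calls on this wrapper.
    template <typename F>
    auto apply(F f) -> decltype(f(std::declval<T&>())) {
        std::lock_guard<std::mutex> lock(m);
        return f(object);
    }
};
```

Usage looks like `q.apply([](std::queue<int>& s) { s.push(1); });`. The difference from the proposal is visible at the call site: the wrapper forces you through `apply`, whereas the concurrent keyword would keep the original `q.push(1)` syntax and move the locking into generated code.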

In the worst case, making a class concurrent almost doubles the size of the generated binary. On the other hand, there are design scenarios in which the programmer doesn't want to make every non-const operation thread-safe; one may need a partially concurrency-enabled class. For these two reasons, it's better to provide the ability to declare the concurrency status of each method separately, again using the same magic keyword: concurrent.
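Per-method declaration could look like this. To be clear, this is hypothetical syntax sketched from the idea above; the keyword does not exist in C++, and the member names are illustrative only:

```cpp
template <typename T>
class queue {
public:
    concurrent void push(const T& item); // compiler wraps body in lock/unlock
    void clear_unsynchronized();         // deliberately left non-concurrent
};
```

Only the methods marked concurrent would get the generated locking, so a class can mix cheap unsynchronized operations with thread-safe ones.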

Of course there are ambiguities in this proposal; this post describes the main idea. I'm gonna submit the detailed proposal to the C++ WG committee, SG1, tomorrow.