Relaxed atomics and data races

The following code contains a race condition:

Because memory_order_relaxed is used, the compiler and the CPU are free to reorder the two writes in thread t1; they are also free to reorder the two reads in thread t2. Thread t2 can therefore observe the writes out of order, which breaks the intended synchronization.

The fix is to either use memory_order_seq_cst on the store and load calls, or memory_order_release on the store call and memory_order_acquire on the load call. Release prevents any prior loads and stores from being reordered past it; acquire guarantees that all stores made by the thread that released the atomic variable are visible to the current thread, and it also prevents subsequent reads and writes from being reordered before it. See here.
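A minimal sketch of the release/acquire fix, with two atomics x and y standing in for the variables of the example (the names and values are assumptions, not the original listing):

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> x{0};
std::atomic<int> y{0};

int main()
{
    std::thread t1([] {
        x.store(42, std::memory_order_relaxed);           // this store may stay relaxed
        y.store(1, std::memory_order_release);            // publishes the store to x
    });

    std::thread t2([] {
        while (y.load(std::memory_order_acquire) != 1)    // pairs with the release store
            ;                                             // spin until t1 publishes
        assert(x.load(std::memory_order_relaxed) == 42);  // guaranteed to hold
    });

    t1.join();
    t2.join();
}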

Bit fields and race conditions

The following program, despite looking correct, contains a race condition and has unpredictable behaviour:

The proof is in the output of running this code several times:

sizeof(S) = 2, 511, 127
sizeof(S) = 2, 511, 127
sizeof(S) = 2, 0, 127
sizeof(S) = 2, 511, 0
sizeof(S) = 2, 511, 127
sizeof(S) = 2, 511, 127
sizeof(S) = 2, 511, 127
sizeof(S) = 2, 0, 127
sizeof(S) = 2, 0, 127
sizeof(S) = 2, 511, 127

The root of the race condition is that the C++ standard considers adjacent bit fields to be part of the same memory location; they may be packed to share the same bytes. The CPU cannot operate on individual bits: it must load at least one byte at a time, and because of the overlap, loading the bits of a also loads the bits of b. So the write to bit field a, even though protected with its own mutex, also overwrites the value of b, and vice versa.

The fix is to use one mutex to protect all adjacent bit fields. I say all because there is no guarantee that the CPU can load a single byte at a time; it may be limited to working on 32-bit values at a time, depending on the architecture.

Corrected program:
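Here is a minimal sketch of such a program, assuming bit-field widths that match the output above (9 bits for a, 7 bits for b); the one mutex guards both adjacent bit fields:

#include <cstdint>
#include <iostream>
#include <mutex>
#include <thread>

struct S
{
    std::uint16_t a : 9;
    std::uint16_t b : 7;
};

S s{};
std::mutex m;   // a single mutex protecting both adjacent bit fields

int main()
{
    std::thread t1([] {
        for (int i = 0; i < 1'000'000; ++i)
        {
            std::lock_guard<std::mutex> lock(m);
            s.a = 511;
        }
    });

    std::thread t2([] {
        for (int i = 0; i < 1'000'000; ++i)
        {
            std::lock_guard<std::mutex> lock(m);
            s.b = 127;
        }
    });

    t1.join();
    t2.join();

    std::cout << "sizeof(S) = " << sizeof(S) << ", " << s.a << ", " << s.b << std::endl;
}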

Dekker’s algorithm for N threads

In the previous post I described Dekker’s algorithm. Its limitation is that it only works for 2 threads. I wanted it to work for N threads, but I can’t atomically check N-1 flags belonging to other threads 🙁 now what?

Instead of multiple flags we’ll use the bits of one atomic variable to indicate which threads want to enter the critical section: if the first bit is set, the first thread has declared its intent to enter, and so on for N threads.
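A rough sketch of the idea (the complete listing follows): each thread owns one bit of a single atomic mask; it sets its bit to declare intent, backs off if any other bit is set, and clears its bit on exit. The names here are placeholders, not the listing below.

#include <atomic>
#include <cstdint>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<std::uint32_t> intents{0};

void lock(unsigned id)
{
    const std::uint32_t me = 1u << id;
    for (;;)
    {
        std::uint32_t prev = intents.fetch_or(me);   // declare intent by setting our bit
        if ((prev & ~me) == 0)                       // no other thread wants in
            return;                                  // we hold the lock
        intents.fetch_and(~me);                      // back off and retry
        std::this_thread::yield();
    }
}

void unlock(unsigned id)
{
    intents.fetch_and(~(1u << id));
}

int counter = 0;   // protected by lock()/unlock()

int main()
{
    std::vector<std::thread> threads;
    for (unsigned id = 0; id < 3; ++id)
        threads.emplace_back([id] {
            for (int i = 0; i < 100'000; ++i)
            {
                lock(id);
                ++counter;                           // critical section
                unlock(id);
            }
        });
    for (auto& t : threads) t.join();
    std::cout << counter << std::endl;               // always 300000
}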

Complete listing:

Dekker’s algorithm

Dekker’s algorithm is a way of synchronizing two threads around a critical section. It is a dance where thread 1 sets its flag, declaring its intent to enter the critical section, then checks the flag of thread 2: if it is not set, thread 1 enters the critical section; if it is set, thread 1 resets its own flag and backs off. Thread 2 does the same.
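A minimal sketch of that dance (my full listing follows below); the helper names are placeholders:

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<bool> flag1{false}, flag2{false};

void enter(std::atomic<bool>& mine, std::atomic<bool>& other)
{
    for (;;)
    {
        mine.store(true);            // declare intent to enter
        if (!other.load())           // other thread not interested?
            return;                  // we own the critical section
        mine.store(false);           // back off and let the other thread go
        std::this_thread::yield();
    }
}

void leave(std::atomic<bool>& mine) { mine.store(false); }

int main()
{
    std::thread t1([] {
        for (int i = 0; i < 10; ++i)
        {
            enter(flag1, flag2);
            std::cout << "T1 in critical section" << std::endl;
            leave(flag1);
        }
    });

    std::thread t2([] {
        for (int i = 0; i < 10; ++i)
        {
            enter(flag2, flag1);
            std::cout << "T2 in critical section" << std::endl;
            leave(flag2);
        }
    });

    t1.join();
    t2.join();
}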

Below is my implementation of the algorithm using C++ atomics.

T1 in critical section
T2 in critical section
T1 in critical section
T2 in critical section
T2 in critical section
T1 in critical section
T1 in critical section
T2 in critical section
T1 in critical section
T2 in critical section
T1 in critical section
T2 in critical section
T1 in critical section
T2 in critical section
T1 in critical section
T2 in critical section
T1 in critical section
T2 in critical section
T1 in critical section
T2 in critical section
Program ended with exit code: 1

Program output.

Memory barriers and thread synchronization

We will jump straight to the code. This innocent looking little program has a major issue (when compiled as a release build with optimizations on my Mac using GCC, Apple’s Clang, and LLVM, as well as on Windows using Visual Studio 2017, and run on a multicore machine). Can you spot the problem?

That’s right! It will never terminate! It will hang forever! The while loop in line 18 will never break. But why? Thread t1 sets flag to true after all. Yes, but it does so too late (notice the 100ms sleep). By that point thread t2 has already cached flag (in a register or its core’s L1 cache) and will never see its updated value. If you think that making flag volatile will help, you’re wrong. It may work on your compiler and machine, but it is no guarantee. Now what?

This was one of the hardest lessons in C++ and computer science for me. Before continuing to the fix I highly recommend you read about the following: memory barriers, the C++ memory model (as well as C++ Memory Model at Modernest C++), and memory ordering. I’ll see you in a couple of days 😉

The fix.

The simplest fix is to wrap access to flag in a mutex lock/unlock, or to make flag an atomic<bool> (both of those solutions insert the appropriate memory barriers). But that’s not always an option for other data types…

We need to make sure that t2 can see the writes t1 performs later in time. For this we need to force synchronization of the caches of the different CPU cores. We can do it in three ways:
1) By inserting memory barriers in the right places.
2) By inserting loads and stores of an atomic variable using release/acquire semantics.
3) By inserting loads and stores of a dependent atomic variable using release/consume semantics.

Below is the corrected version of our example; uncomment each #define to engage different fixes:
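As one illustration, here is a minimal sketch of fix #2 (release/acquire on an atomic flag); the data variable and the 100ms delay mirror the example described above, everything else is an assumption rather than the original listing:

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> flag{false};
int data = 0;

int main()
{
    std::thread t1([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        data = 42;                                     // plain write...
        flag.store(true, std::memory_order_release);   // ...published by the release store
    });

    std::thread t2([] {
        while (!flag.load(std::memory_order_acquire))  // pairs with the release store
            ;                                          // spin until t1 publishes
        std::cout << data << std::endl;                // guaranteed to print 42
    });

    t1.join();
    t2.join();
}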

Atomic blocking queue

As I learn about atomics and the memory model, I decided to take a stab at rewriting my blocking queue using atomic operations, eliminating the mutex around the critical section responsible for pushing and popping elements, and effectively creating a fast path through the queue when no blocking takes place.

Let’s jump straight into the code:

Lines 1-4 are basically a template concept which specifies that this method will only be present if the type is no-throw copy constructible.
Line 7 is the same semaphore decrement and possible blocking if the queue is full. The fast path of this semaphore implementation uses only atomic operations, so if it doesn’t block it will not engage a mutex (fast_semaphore code available on GitHub).
Line 9 is where the magic starts. We atomically increment the m_pushIndex while fetching its previous value into a temporary pushIndex. From now on we work with the temporary.
Line 10 is where we insert the element by copy constructing it in the right open slot.
Line 11 is book-keeping needed during the destruction of the queue.
Lines 13-15 are where we take m_pushIndex modulo m_size so it never overflows. In a loop we check whether m_pushIndex has changed since we read it; if it has, the current value is loaded back into expected and we check again, until it hasn’t changed, at which point we atomically swap m_pushIndex with m_pushIndex % m_size (see the sketch after this list).
Line 17 signals to other blocked threads, if there are any, that the queue now has an element available for popping.
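To make lines 13-15 concrete, here is a small self-contained sketch of that index-wrapping CAS loop; the names are placeholders, not the queue’s actual members:

#include <atomic>
#include <cstddef>
#include <iostream>

// Keep 'index' within [0, size) without losing concurrent increments: if
// another thread bumps the index between our load and the exchange, the CAS
// fails, 'expected' is refreshed with the latest value, and we try again.
void wrap_index(std::atomic<std::size_t>& index, std::size_t size)
{
    auto expected = index.load();
    while (!index.compare_exchange_strong(expected, expected % size))
        ;   // 'expected' now holds the current value; retry with it
}

int main()
{
    std::atomic<std::size_t> pushIndex{13};
    wrap_index(pushIndex, 10);
    std::cout << pushIndex.load() << std::endl;   // prints 3
}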

Other methods of the queue work in a very similar way so I will not describe them in detail here. The one caveat of this implementation is that it only works with no-throw copyable and movable types, so declare your constructors and assignment operators with noexcept if you want to use them with this queue 🙂

Complete listing:

Synchronizing with atomics

I’ve been reading and learning about the C++ memory model and relaxed atomics and decided to rewrite my Interview question, part 1 using atomic<bool>’s to synchronize the threads. I took it one step further and created three threads that print A, B, C, A, B, C,…
Each thread waits for its atomic<bool> to be set to true while spinning; prints then sets the next thread’s flag. The wait is a CAS (compare-and-swap) operation.
Next, I will try to figure out how to relax the compare_exchange and the store calls with memory_order_acquire and memory_order_release parameters 🙂
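A rough sketch of the idea (not the complete listing below; the flag layout and iteration count are assumptions): each thread spins on its own flag with a CAS, prints its letter, then sets the next thread’s flag.

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<bool> flags[3] = { {true}, {false}, {false} };

void worker(int id, char letter, int iterations)
{
    for (int i = 0; i < iterations; ++i)
    {
        bool expected = true;
        while (!flags[id].compare_exchange_weak(expected, false))   // wait for our turn
            expected = true;                                        // CAS overwrote expected; restore it
        std::cout << letter << std::endl;
        flags[(id + 1) % 3].store(true);                            // hand off to the next thread
    }
}

int main()
{
    std::thread a(worker, 0, 'A', 3);
    std::thread b(worker, 1, 'B', 3);
    std::thread c(worker, 2, 'C', 3);
    a.join(); b.join(); c.join();
}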

Complete listing:

A
B
C
A
B
C
A
B
C
Program ended with exit code: 1

Program output.

Fun with unordered containers

I’ve been playing around with unordered_map and unordered_set to see how they grow when elements are inserted. Here’s what I learned so far: the default max_load_factor is 1, which means the container will grow to keep at most 1 element per bucket on average. You can change this with a max_load_factor call followed by a rehash call. I also wanted to see how different STL implementations behave, so I tested with GCC, Apple’s Clang, and LLVM on my Mac. The test program creates an unordered_map and an unordered_set, loads each with 10'000'000 entries, and prints the container’s properties: size, bucket count, and load factor. I then clear the container and run the same test, this time reserving space for the entries first. Finally, I change the max_load_factor to 10 (allowing up to 10 entries per bucket) and rehash with at most 1/10th the bucket count.
The GCC implementation is more aggressive in terms of container growth; it creates more buckets than Apple’s Clang and LLVM’s implementations, which produced identical results.

The takeaway is that when using unordered containers it is a good idea to either reserve the appropriate element count up front, or to rehash after inserting the elements, in order to reduce the bucket count and optimize memory use.
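A small sketch of that takeaway (the element count here is arbitrary): reserve for the expected size up front, then relax max_load_factor and rehash to shrink the bucket count.

#include <iostream>
#include <unordered_map>

int main()
{
    std::unordered_map<int, int> m;
    m.reserve(1'000'000);                 // pre-allocate buckets for the expected element count
    for (int i = 0; i < 1'000'000; ++i)
        m.emplace(i, i);
    std::cout << m.bucket_count() << ", " << m.load_factor() << '\n';

    m.max_load_factor(10.0f);             // allow up to 10 elements per bucket on average
    m.rehash(m.size() / 10);              // ask for roughly 1/10th the buckets
    std::cout << m.bucket_count() << ", " << m.load_factor() << '\n';
}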

****************************************
* GCC -Ofast -march=native -lstdc++ *
****************************************
unordered_map
initial
size = 10000000
bucket count = 15345007
load factor = 0.651678

reserved
size = 10000000
bucket count = 10352717
load factor = 0.96593

re-hashed
size = 10000000
bucket count = 1056323
load factor = 9.4668

unordered_set
initial
size = 10000000
bucket count = 15345007
load factor = 0.651678

reserved
size = 10000000
bucket count = 10352717
load factor = 0.96593

re-hashed
size = 10000000
bucket count = 1056323
load factor = 9.4668

**********************************************
* Apple CLANG -Ofast -march=native -lc++ *
**********************************************
unordered_map
initial
size = 10000000
bucket count = 13169977
load factor = 0.759303

reserved
size = 10000000
bucket count = 10000019
load factor = 0.999998

re-hashed
size = 10000000
bucket count = 1000003
load factor = 9.99997

unordered_set
initial
size = 10000000
bucket count = 13169977
load factor = 0.759303

reserved
size = 10000000
bucket count = 10000019
load factor = 0.999998

re-hashed
size = 10000000
bucket count = 1000003
load factor = 9.99997

Program output.

Complete listing:

trace.h:

Short introduction to parameter packs and fold expressions

Docendo discimus.

Seneca the Younger.

I’m new to this topic: I literally just learned about fold expressions today, and about parameter packs only days before that, so this will be a short post where I explain the basics 🙂

Let’s jump straight to the code:

Line 1 tells us there will be zero or more parameters.
Line 2 declares a function with said variable number of parameters.
Line 4 is where the magic happens. The fold expression states that for every parameter in the args parameter pack, combine it with the next one using operator+. So it becomes args0 + args1 + ... + argsN.
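A minimal sketch of such a fold; the function name sum is hypothetical, not the original listing:

#include <iostream>

template<typename... Args>      // zero or more parameters
auto sum(Args... args)
{
    return (args + ...);        // expands to args0 + args1 + ... + argsN
}

int main()
{
    std::cout << sum(1, 2, 3, 4) << '\n';   // prints 10
}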

Another example:

Here in line 4 the fold expression takes the form of “(init op … op pack)” as described by syntax #4 here. It expands to cout << args0 << args1 << ... << argsN.
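A sketch of that form, with print as a hypothetical function name:

#include <iostream>

template<typename... Args>
void print(Args... args)
{
    (std::cout << ... << args) << '\n';   // the (init op ... op pack) form
}

int main()
{
    print(1, 2.5, "three");               // prints 12.5three
}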

Another way of doing the same is:

Here the cout << args << " " is expanded N times, separated by the comma operator, so it’s less efficient but we get a space between each printed argument.
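And a sketch of that comma-operator variant, again with a hypothetical function name:

#include <iostream>

template<typename... Args>
void print_spaced(Args... args)
{
    ((std::cout << args << ' '), ...);    // the expression repeats once per argument
    std::cout << '\n';
}

int main()
{
    print_spaced(1, 2.5, "three");        // prints 1 2.5 three
}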

That’s it for now 🙂 I’ll post more as I learn more!

Complete listing:

Pure virtual destructor

Did you know that a destructor can be pure virtual? It can be declared as such, but it still needs a body. It is a way of making a class abstract (instances of it cannot be created) without having to make any other methods pure virtual. So use it if you have a base class that you don’t want users to instantiate but no other candidates for the pure virtual specifier 🙂
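A minimal sketch of the idea:

// The destructor is declared pure virtual, yet still gets a body,
// and the class becomes abstract.
struct Base
{
    virtual ~Base() = 0;   // pure virtual destructor makes Base abstract
};

Base::~Base() {}           // the body is still required; derived destructors call it

struct Derived : Base {};

int main()
{
    // Base b;             // error: cannot instantiate an abstract class
    Derived d;             // fine
    (void)d;
}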

Additional discussion of this topic at Geeks for Geeks: Pure virtual destructor in C++.

Complete listing:

Read/write mutex

Designed and implemented by Chris M. Thomasson. Complete implementation can be found on GitHub.

mutex.h:

Templates in STL containers

Storing template classes in STL containers is tricky 🙂 If v is a std::vector and TC is a template class, how can we do the following:

The trick is twofold: 1) we need a non-template base class (let’s call it NTB), and 2) we need to store pointers to the non-template base class in the STL container. Like this:
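A minimal sketch of the trick, using the NTB and TC names from above; storing unique_ptr<NTB> in the vector is an assumption of this sketch (raw NTB* pointers would work just as well):

#include <memory>
#include <utility>
#include <vector>

struct NTB
{
    virtual ~NTB() = default;
};

template<typename T>
struct TC : NTB
{
    T value;
    explicit TC(T v) : value(std::move(v)) {}
};

int main()
{
    std::vector<std::unique_ptr<NTB>> v;   // the container holds base-class pointers
    v.push_back(std::make_unique<TC<int>>(42));
    v.push_back(std::make_unique<TC<double>>(3.14));
}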

Complete listing:

Multiple return values

C++17 gives us an elegant way to return multiple values from a function using structured binding.

Here’s how we can declare a function that returns 3 values:

This function takes one argument of type T, and returns a tuple<T, T, T> of 3 values.
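A sketch of what such a function might look like; returning the argument three times is just an illustration, not the original body:

#include <tuple>

template<typename T>
std::tuple<T, T, T> ReturnThree(T t)
{
    return std::make_tuple(t, t, t);
}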

In order to get the return values in C++11 we would have to write the following:

tuple<int, int, int> t = ReturnThree(9);

But with C++17 and structured binding syntax we can write:

auto [t1, t2, t3] = ReturnThree(9);

Much cleaner code this way 🙂

Complete listing:

Simple timer

Jonathan Boccara over at Fluent{C++} made a post a while ago titled A Simple Timer in C++. I felt things could be done… differently 😉 so I decided to write my own version of the timer code.

First, I felt there’s no need to actually instantiate timer objects; a simple function call to set_timeout or set_interval from namespace timer should be sufficient.
Second, I didn’t like the way cancellation was done: a single stop call interrupted all intervals and timeouts. How about a cancellation event per set_timeout or set_interval call?
Finally, I wanted the set_timeout and set_interval functions to accept any callable with any number of arguments.
That’s exactly how I designed my interface.

Usage example:

waiting 5s…
timeout
interval
interval
interval
interval
waiting 5s…
interval
interval
interval
interval
interval
Program ended with exit code: 1

Program output.

timer.h:

Updated event.h:

Interview question, part 6

Given a set of integer ranges defined as [LO, HI) and a value P, find which range P falls into.

My approach to this programming puzzle was to first define a range as a struct that can be sorted (thanks to operator<), then perform a binary search on a vector of sorted ranges. The code is pretty self-explanatory 🙂
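A minimal sketch of that approach (the struct and function names here are placeholders, not the original listing): a sortable range plus a binary search over a sorted vector of ranges.

#include <algorithm>
#include <iostream>
#include <vector>

struct range
{
    int lo, hi;   // [LO, HI)
    bool operator<(const range& other) const { return lo < other.lo; }
};

// Returns an iterator to the range containing p, or ranges.end() if none does.
std::vector<range>::const_iterator find_range(const std::vector<range>& ranges, int p)
{
    auto it = std::upper_bound(ranges.begin(), ranges.end(), range{p, p});
    if (it == ranges.begin())
        return ranges.end();
    --it;   // last range whose LO is <= p
    return (p >= it->lo && p < it->hi) ? it : ranges.end();
}

int main()
{
    std::vector<range> ranges{ {0, 10}, {10, 20}, {20, 30} };
    if (auto it = find_range(ranges, 15); it != ranges.end())
        std::cout << "P = 15 falls in range LO: " << it->lo << ", HI: " << it->hi << '\n';
}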

Example input and output:

LO: 0, HI: 10
LO: 10, HI: 20
LO: 20, HI: 30
LO: 30, HI: 40
LO: 40, HI: 50
LO: 50, HI: 60
LO: 60, HI: 70
LO: 70, HI: 80
LO: 80, HI: 90
LO: 90, HI: 100
P = 15 falls in range LO: 10, HI: 20
P = 16 falls in range LO: 10, HI: 20
P = 4 falls in range LO: 0, HI: 10
P = 73 falls in range LO: 70, HI: 80
P = 25 falls in range LO: 20, HI: 30
P = 28 falls in range LO: 20, HI: 30
P = 19 falls in range LO: 10, HI: 20
P = 60 falls in range LO: 60, HI: 70
P = 83 falls in range LO: 80, HI: 90
P = 76 falls in range LO: 70, HI: 80
Program ended with exit code: 1

The answer:

Fast mutex

Many thanks to Chris M. Thomasson for rewriting the POSIX Threads for Win32 mutex into a standard C++ implementation. It uses the auto_event class from Event objects.

mutex.h:

Simple thread pool

LESSON RECORDING: How to implement a (simple) thread pool.

PART 2: Advanced Thread Pool.

I know the topic of thread pools has been beaten to death on the internet; nevertheless, I wanted to present my implementation, which uses only standard C++ components 🙂

I will be using queue and semaphore classes discussed in my earlier posts.

Below you will find a simple thread pool implementation which can be parametrized by the number of worker threads and the depth of the blocking queue of work items. Each thread waits on blocking_queue::pop() until a work item shows up. The threads pick work items off the queue in no particular order, execute them, then go back to blocking_queue::pop(). Destruction and cleanup of the threads is done with a nullptr sentinel pushed onto the queue: if a thread pops the sentinel, it pushes it back onto the queue and breaks out of its work loop. This way all threads are waited on, and all unprocessed work items are finished, during the destruction of a pool instance.
Moreover, a work item can be any callable entity: a lambda, a functor, or a function pointer, and it can accept any number of parameters thanks to the template parameter pack of pool::enqueue_work().
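Here is a rough, self-contained sketch of that worker loop and the sentinel shutdown. My actual pool uses the blocking_queue class from an earlier post; a plain mutex/condition_variable queue stands in here so the sketch compiles on its own, and every name except enqueue_work is an assumption.

#include <condition_variable>
#include <deque>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

class pool
{
public:
    explicit pool(unsigned thread_count)
    {
        for (unsigned i = 0; i < thread_count; ++i)
            m_workers.emplace_back([this] {
                for (;;)
                {
                    std::function<void()> work = pop();
                    if (!work)          // popped the sentinel:
                    {
                        push(nullptr);  // put it back for the next worker
                        return;         // and break out of the work loop
                    }
                    work();
                }
            });
    }

    template<typename F, typename... Args>
    void enqueue_work(F&& f, Args&&... args)
    {
        push(std::bind(std::forward<F>(f), std::forward<Args>(args)...));
    }

    ~pool()
    {
        push(nullptr);                  // the sentinel goes behind all pending work
        for (auto& t : m_workers) t.join();
    }

private:
    void push(std::function<void()> item)
    {
        { std::lock_guard<std::mutex> lock(m_mutex); m_queue.push_back(std::move(item)); }
        m_cv.notify_one();
    }

    std::function<void()> pop()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_queue.empty(); });
        auto item = std::move(m_queue.front());
        m_queue.pop_front();
        return item;
    }

    std::deque<std::function<void()>> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::vector<std::thread> m_workers;
};

int main()
{
    pool p(4);
    for (int i = 1; i <= 4; ++i)
        p.enqueue_work([](int n) { std::cout << "work item " << n << "\n"; }, i);
}   // ~pool() drains the remaining work items and joins the workers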

UPDATE:

Thank you, reddit user sumo952, for bringing progschj/ThreadPool to my attention. I have updated my implementation to support futures and the ability to retrieve a work item’s result.

Usage example:

work item 1 starting 170521507 iterations…
work item 2 starting 141859716 iterations…
work item 2 finished
work item 3 starting 189442810 iterations…
work item 1 finished
work item 4 starting 125609749 iterations…
work item 4 finished
work item 3 finished
Program ended with exit code: 1

Program output.

pool.h:

Inverting std::map and std::multimap

A junior colleague asked me about the following problem: given a std::map or std::multimap of key to value pairs, sort it by values, or turn it into a value to key mapping.
To solve this problem we need two data structures: the source std::map or std::multimap of key to value pairs, and a second, output std::multimap into which we will invert the pairs (std::multimap because in the source different keys can point at the same value; the value in the source becomes the key in the output, and std::multimap handles the duplicates).

The solution is to walk the source data structure of key to value pairs, and build the output data structure of value to key pairs.
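A minimal, self-contained sketch of that walk, using the sample pairs from the output below: duplicate values in the source simply become duplicate keys in the output multimap.

#include <iostream>
#include <map>

int main()
{
    std::multimap<int, int> source{ {2, 4}, {6, 8}, {8, 4}, {9, 6}, {9, 2} };
    std::multimap<int, int> output;

    for (const auto& [key, value] : source)
        output.emplace(value, key);       // invert: the value becomes the key

    for (const auto& [value, key] : output)
        std::cout << value << " -> " << key << '\n';
}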

Complete listing:

Source (Key -> Value):
2 -> 4
6 -> 8
8 -> 4
9 -> 6
9 -> 2
Output (Value -> Key):
2 -> 9
4 -> 2
4 -> 8
6 -> 9
8 -> 6
Program ended with exit code: 1

Program output.