While learning about atomics and the C++ memory model, I decided to take a stab at rewriting my blocking queue using atomic operations, eliminating the mutex around the critical section of code responsible for pushing and popping elements and effectively creating a fast path through the queue when no blocking takes place.

Let’s jump straight into the code:
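Roughly, the push() method described below might look like the following sketch. The member names (m_openSlots, m_fullSlots, m_pushIndex, m_count, m_data, m_size) and the fast_semaphore interface with wait()/post() are assumptions for illustration; the line numbers in the walkthrough refer to the complete listing at the end of the post, not to this sketch.

```cpp
// Sketch of the push() walked through below; names are illustrative assumptions.
template<typename Q = T>
typename std::enable_if<std::is_nothrow_copy_constructible<Q>::value, void>::type
push(const T& item) noexcept
{
    m_openSlots.wait();                       // block while the queue is full

    // Atomically bump the push index, keeping its previous value in a temporary.
    auto pushIndex = m_pushIndex.fetch_add(1);

    // Copy-construct the element in place in the slot we just claimed.
    new (m_data + (pushIndex % m_size)) T(item);

    // Book-keeping: number of constructed elements, used later by the destructor.
    ++m_count;

    // Wrap m_pushIndex back into [0, m_size) with a compare-and-swap loop
    // so the counter never overflows.
    auto expected = m_pushIndex.load();
    while (!m_pushIndex.compare_exchange_weak(expected, m_pushIndex % m_size))
        ;

    m_fullSlots.post();                       // wake a consumer, if any is waiting
}
```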

Lines 1-4 are basically a template concept which specifies that this method will only be present if the type is no-throw copy constructible.
Line 7 is the same semaphore decrement, with possible blocking if the queue is full. The fast path of this semaphore implementation uses only atomic operations, so if it doesn’t block it never engages a mutex (the fast_semaphore code is available on GitHub); a sketch of one way such a semaphore can be built follows this walkthrough.
Line 9 is where the magic starts. We atomically increment m_pushIndex while fetching its previous value into a temporary, pushIndex. From now on we work with the temporary.
Line 10 is where we insert the element by copy constructing it in the right open slot.
Line 11 is book-keeping for the destruction of the queue: it keeps count of how many constructed elements the queue holds, so the destructor knows how many remaining elements to destroy.
Lines 13-15 are where we reduce m_pushIndex modulo m_size so it never overflows. In a loop we check whether it has changed since we incremented it: if it has, the current value is loaded back into expected and we check again; once it hasn’t changed, we atomically swap m_pushIndex with m_pushIndex % m_size.
Line 17 signals to other blocked threads, if there are any, that the queue now has an element available for popping.
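For context, one common way to build a semaphore with an atomic fast path is a counter whose negative values count waiting threads, with a mutex and condition variable used only on the slow path. This is merely an illustrative sketch of the idea, not the actual fast_semaphore from the GitHub repository:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>

// Illustrative benaphore-style semaphore: the fast path touches only m_count;
// the mutex and condition variable come into play only when a thread must block.
class fast_semaphore
{
public:
    explicit fast_semaphore(int count) noexcept : m_count(count) {}

    void wait()
    {
        // Fast path: if a slot was available, the acquire fetch_sub is all we do.
        if (m_count.fetch_sub(1, std::memory_order_acquire) < 1)
        {
            // Slow path: no slot was available, block until post() hands us a wakeup.
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [this] { return m_wakeups > 0; });
            --m_wakeups;
        }
    }

    void post()
    {
        // Fast path: if nobody was waiting, the release fetch_add is all we do.
        if (m_count.fetch_add(1, std::memory_order_release) < 0)
        {
            // Slow path: at least one thread is blocked, hand it a wakeup.
            std::lock_guard<std::mutex> lock(m_mutex);
            ++m_wakeups;
            m_cv.notify_one();
        }
    }

private:
    std::atomic<int> m_count;
    std::mutex m_mutex;
    std::condition_variable m_cv;
    int m_wakeups = 0;
};
```

The sign of m_count does double duty here: positive values are free resources, negative values count blocked threads, and the uncontended case never takes the lock.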

Other methods of the queue work in a very similar way, so I will not describe them in detail here. The only catch of this implementation is that it works only with no-throw copyable and movable types, so declare your constructors and assignment operators noexcept if you want to use a type with this queue 🙂
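For example, a hypothetical element type whose special members are all declared noexcept satisfies the traits that such constraints check for:

```cpp
#include <type_traits>

// Hypothetical element type; its special members are all declared noexcept,
// so it qualifies for the no-throw-only push and pop methods of the queue.
struct widget
{
    int id = 0;
    double value = 0.0;

    widget() noexcept = default;
    widget(int i, double v) noexcept : id(i), value(v) {}

    widget(const widget&) noexcept = default;
    widget(widget&&) noexcept = default;
    widget& operator=(const widget&) noexcept = default;
    widget& operator=(widget&&) noexcept = default;
};

// The kind of traits the queue's constraints boil down to.
static_assert(std::is_nothrow_copy_constructible<widget>::value, "widget must be no-throw copyable");
static_assert(std::is_nothrow_move_assignable<widget>::value, "widget must be no-throw movable");
```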

Complete listing:
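What follows is only a sketch of how the whole class might be put together, reconstructed from the walkthrough above: the class and member names, the raw-storage constructor, and the destructor logic are assumptions on my part, and it relies on a fast_semaphore like the one sketched earlier. Treat it as an outline rather than the definitive implementation.

```cpp
#include <atomic>
#include <new>
#include <stdexcept>
#include <type_traits>
#include <utility>

// Sketch only: names and details are assumptions based on the walkthrough,
// not a verbatim copy of the original listing.
template<typename T>
class blocking_queue
{
public:
    explicit blocking_queue(unsigned int size)
    : m_size(size), m_pushIndex(0), m_popIndex(0), m_count(0),
      m_data(static_cast<T*>(operator new(size * sizeof(T)))),
      m_openSlots(size), m_fullSlots(0)
    {
        if (size == 0)
            throw std::invalid_argument("queue size must be non-zero");
    }

    blocking_queue(const blocking_queue&) = delete;
    blocking_queue& operator=(const blocking_queue&) = delete;

    ~blocking_queue() noexcept
    {
        // Destroy the elements still sitting in the queue; m_count is the
        // book-keeping mentioned in the walkthrough (line 11).
        unsigned int remaining = m_count;
        unsigned int index = m_popIndex % m_size;
        while (remaining--)
        {
            m_data[index].~T();
            index = (index + 1) % m_size;
        }
        operator delete(m_data);
    }

    // Present only for no-throw copy constructible types.
    template<typename Q = T>
    typename std::enable_if<std::is_nothrow_copy_constructible<Q>::value, void>::type
    push(const T& item) noexcept
    {
        m_openSlots.wait();

        auto pushIndex = m_pushIndex.fetch_add(1);
        new (m_data + (pushIndex % m_size)) T(item);
        ++m_count;

        auto expected = m_pushIndex.load();
        while (!m_pushIndex.compare_exchange_weak(expected, m_pushIndex % m_size))
            ;

        m_fullSlots.post();
    }

    // Mirror image of push(); present only for no-throw move assignable types.
    template<typename Q = T>
    typename std::enable_if<std::is_nothrow_move_assignable<Q>::value, void>::type
    pop(T& item) noexcept
    {
        m_fullSlots.wait();

        auto popIndex = m_popIndex.fetch_add(1);
        item = std::move(m_data[popIndex % m_size]);
        m_data[popIndex % m_size].~T();
        --m_count;

        auto expected = m_popIndex.load();
        while (!m_popIndex.compare_exchange_weak(expected, m_popIndex % m_size))
            ;

        m_openSlots.post();
    }

private:
    const unsigned int m_size;
    std::atomic<unsigned int> m_pushIndex;
    std::atomic<unsigned int> m_popIndex;
    std::atomic<unsigned int> m_count;
    T* const m_data;

    fast_semaphore m_openSlots;  // counts free slots; producers wait on it
    fast_semaphore m_fullSlots;  // counts filled slots; consumers wait on it
};
```

A push(T&&) overload constrained on std::is_nothrow_move_constructible would follow the same pattern as push(const T&) above.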
