Simple timer

Jonathan Boccara over at Fluent{C++} made a post a while ago titled A Simple Timer in C++. I felt things could be done… differently 😉 so I decided to write my own version of the timer code.

First, I felt there’s no need to actually instantiate timer objects; a simple function call to set_timeout or set_interval from namespace timer should be sufficient.
Second, I didn’t like the way cancellation was done: a single stop call interrupted all intervals and timeouts. How about a cancellation event per set_timeout or set_interval call?
Finally, I wanted the set_timeout and set_interval functions to accept any callable with any number of arguments.
That’s exactly how I designed my interface.

Usage example:
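Something along these lines (a sketch matching my sketch of timer.h further down, not necessarily the original listing):

#include <chrono>
#include <iostream>
#include <thread>
#include "timer.h"

int main()
{
    using namespace std::chrono_literals;

    manual_event timeout_cancel, interval_cancel;   // one cancellation event per call

    timer::set_timeout(1s, timeout_cancel, [] { std::cout << "timeout" << std::endl; });
    timer::set_interval(1s, interval_cancel, [] { std::cout << "interval" << std::endl; });

    std::cout << "waiting 5s…" << std::endl;
    std::this_thread::sleep_for(5s);

    std::cout << "waiting 5s…" << std::endl;
    std::this_thread::sleep_for(5s);

    timeout_cancel.signal();
    interval_cancel.signal();
    std::this_thread::sleep_for(100ms);   // let the detached timer threads observe the cancellation
}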

waiting 5s…
timeout
interval
interval
interval
interval
waiting 5s…
interval
interval
interval
interval
interval
Program ended with exit code: 1

Program output.

timer.h:
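Here is a sketch of how such an interface can look; it is not the exact listing, and it folds a minimal manual-reset event with a timed wait into the header to stand in for the updated event.h below:

#pragma once

#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>
#include <utility>

// Minimal manual-reset event with a timed wait; a stand-in for the updated event.h.
class manual_event
{
public:
    void signal()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_signaled = true;
        m_cv.notify_all();
    }

    // Returns true if the event was signaled before the timeout expired.
    template<typename Rep, typename Period>
    bool wait_for(const std::chrono::duration<Rep, Period>& duration)
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        return m_cv.wait_for(lock, duration, [this] { return m_signaled; });
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    bool m_signaled = false;
};

namespace timer
{
    // Call f(args...) once after 'duration', unless 'cancel' is signaled first.
    // The cancellation event must outlive the timer.
    template<typename Duration, typename F, typename... Args>
    void set_timeout(Duration duration, manual_event& cancel, F&& f, Args&&... args)
    {
        auto task = std::bind(std::forward<F>(f), std::forward<Args>(args)...);
        std::thread([duration, &cancel, task]() mutable {
            if (!cancel.wait_for(duration))
                task();
        }).detach();
    }

    // Call f(args...) every 'interval' until 'cancel' is signaled.
    template<typename Duration, typename F, typename... Args>
    void set_interval(Duration interval, manual_event& cancel, F&& f, Args&&... args)
    {
        auto task = std::bind(std::forward<F>(f), std::forward<Args>(args)...);
        std::thread([interval, &cancel, task]() mutable {
            while (!cancel.wait_for(interval))
                task();
        }).detach();
    }
}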

Updated event.h:

Interview question, part 6

Given a set of integer ranges defined as [LO, HI) and a value P, find which range P falls into.

My approach to this programming puzzle was to first define a range as a struct that can be sorted (thanks to operator <), then perform a binary search on a vector of sorted ranges. The code is pretty self-explanatory 🙂

Example input and output:

LO: 0, HI: 10
LO: 10, HI: 20
LO: 20, HI: 30
LO: 30, HI: 40
LO: 40, HI: 50
LO: 50, HI: 60
LO: 60, HI: 70
LO: 70, HI: 80
LO: 80, HI: 90
LO: 90, HI: 100
P = 15 falls in range LO: 10, HI: 20
P = 16 falls in range LO: 10, HI: 20
P = 4 falls in range LO: 0, HI: 10
P = 73 falls in range LO: 70, HI: 80
P = 25 falls in range LO: 20, HI: 30
P = 28 falls in range LO: 20, HI: 30
P = 19 falls in range LO: 10, HI: 20
P = 60 falls in range LO: 60, HI: 70
P = 83 falls in range LO: 80, HI: 90
P = 76 falls in range LO: 70, HI: 80
Program ended with exit code: 1

The answer:
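A sketch of that approach (not the exact listing):

#include <algorithm>
#include <initializer_list>
#include <iostream>
#include <vector>

struct range
{
    int lo, hi;   // [lo, hi)
    bool operator<(const range& other) const { return lo < other.lo; }
};

// Returns an iterator to the range containing p, or end() if none does.
std::vector<range>::const_iterator find_range(const std::vector<range>& ranges, int p)
{
    // first range whose lo is greater than p, then step back one
    auto it = std::upper_bound(ranges.begin(), ranges.end(), range{p, p});
    if (it == ranges.begin()) return ranges.end();
    --it;
    return (p >= it->lo && p < it->hi) ? it : ranges.end();
}

int main()
{
    std::vector<range> ranges;
    for (int lo = 0; lo < 100; lo += 10)
        ranges.push_back({lo, lo + 10});

    std::sort(ranges.begin(), ranges.end());   // operator< makes this possible

    for (const auto& r : ranges)
        std::cout << "LO: " << r.lo << ", HI: " << r.hi << std::endl;

    for (int p : {15, 16, 4, 73, 25, 28, 19, 60, 83, 76})
    {
        auto it = find_range(ranges, p);
        if (it != ranges.end())
            std::cout << "P = " << p << " falls in range LO: " << it->lo
                      << ", HI: " << it->hi << std::endl;
    }
}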

Fast mutex

Many thanks to Chris M. Thomasson for rewriting the POSIX Threads for Win32 mutex into a standard C++ implementation. It uses the auto_event class from the Event objects post.

mutex.h:
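A sketch of the idea as I understand it, using the auto_event from the Event objects post further down (0 = unlocked, 1 = locked, 2 = locked with waiters); this is not necessarily the exact listing:

#pragma once

#include <atomic>
#include "event.h"   // auto_event from the "Event objects" post (assumed)

class fast_mutex
{
public:
    void lock()
    {
        if (m_state.exchange(1, std::memory_order_acquire))         // was it locked already?
            while (m_state.exchange(2, std::memory_order_acquire))  // mark it contended
                m_event.wait();                                     // and wait our turn
    }

    void unlock()
    {
        if (m_state.exchange(0, std::memory_order_release) == 2)    // was it contended?
            m_event.signal();                                       // wake one waiter
    }

private:
    std::atomic<int> m_state{0};
    auto_event m_event;
};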

Simple thread pool

LESSON RECORDING: How to implement a (simple) thread pool.

PART 2: Advanced Thread Pool.

I know the topic of thread pools has been beaten to death on the internet, nevertheless I wanted to present to you my implementation which uses only standard C++ components 🙂

I will be using queue and semaphore classes discussed in my earlier posts.

Below you will find a simple thread pool implementation which can be parametrized by the number of worker threads and the blocking queue depth of work items. Each thread waits on a blocking_queue::pop() until a work item shows up. The threads pick up work items off the queue in no particular order, execute them, then go back to blocking_queue::pop(). Destruction and cleanup of threads is done with a nullptr sentinel pushed onto the queue. If a sentinel is popped off the queue, the thread pushes it back and breaks out of its work loop. This way all threads are waited on and allowed to finish all unprocessed work items during destruction of a pool instance.
Moreover, a work item can be any callable entity: lambda, functor, or a function pointer. Work item can accept any number of parameters thanks to template parameter pack of pool::enqueue_work().

UPDATE:

Thank you, reddit user sumo952, for bringing progschj/ThreadPool to my attention. I have updated my implementation to support futures and the ability to retrieve a work item’s result.

Usage example:
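Something along these lines (illustrative values and names, matching my sketch of pool.h below):

#include <iostream>
#include <random>
#include "pool.h"

int main()
{
    pool p(2, 8);   // 2 worker threads, queue depth of 8

    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> dist(100'000'000, 200'000'000);

    auto work = [](int id, int iterations)
    {
        std::cout << "work item " << id << " starting " << iterations << " iterations…" << std::endl;
        volatile unsigned long long sum = 0;
        for (int i = 0; i < iterations; ++i) sum += i;   // burn some CPU
        std::cout << "work item " << id << " finished" << std::endl;
    };

    for (int id = 1; id <= 4; ++id)
        p.enqueue_work(work, id, dist(gen));

    // With the futures update a result can be retrieved as well:
    auto result = p.enqueue_task([](int x, int y) { return x + y; }, 1, 2);
    result.get();
}   // pool's destructor waits for all queued work to finish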

work item 1 starting 170521507 iterations…
work item 2 starting 141859716 iterations…
work item 2 finished
work item 3 starting 189442810 iterations…
work item 1 finished
work item 4 starting 125609749 iterations…
work item 4 finished
work item 3 finished
Program ended with exit code: 1

Program output.

pool.h:
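A sketch of the design described above; it assumes the blocking_queue from the queue post further down, and the names are illustrative rather than the exact listing:

#pragma once

#include <cstddef>
#include <functional>
#include <future>
#include <memory>
#include <thread>
#include <utility>
#include <vector>

#include "queue.h"   // blocking_queue<T> from the earlier post (assumed)

class pool
{
public:
    pool(std::size_t thread_count, std::size_t queue_depth)
    : m_queue(queue_depth)
    {
        for (std::size_t i = 0; i < thread_count; ++i)
            m_threads.emplace_back([this] {
                while (true)
                {
                    std::function<void()> work;
                    m_queue.pop(work);
                    if (!work)                                   // empty sentinel:
                    {
                        m_queue.push(std::function<void()>());   // put it back for the other workers
                        break;                                   // and exit the work loop
                    }
                    work();
                }
            });
    }

    ~pool()
    {
        m_queue.push(std::function<void()>());   // the sentinel stops the workers...
        for (auto& t : m_threads)
            t.join();                            // ...after all queued work is done
    }

    // Fire-and-forget: any callable with any number of arguments.
    template<typename F, typename... Args>
    void enqueue_work(F&& f, Args&&... args)
    {
        auto task = std::bind(std::forward<F>(f), std::forward<Args>(args)...);
        m_queue.push(std::function<void()>([task]() mutable { task(); }));
    }

    // Same, but returns a std::future for the work item's result.
    template<typename F, typename... Args>
    auto enqueue_task(F&& f, Args&&... args) -> std::future<decltype(f(args...))>
    {
        using result_t = decltype(f(args...));
        auto task = std::make_shared<std::packaged_task<result_t()>>(
            std::bind(std::forward<F>(f), std::forward<Args>(args)...));
        auto result = task->get_future();
        m_queue.push(std::function<void()>([task] { (*task)(); }));
        return result;
    }

private:
    blocking_queue<std::function<void()>> m_queue;
    std::vector<std::thread> m_threads;
};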

Inverting std::map and std::multimap

A junior colleague asked me about the following problem: given a std::map or std::multimap of key to value pairs, sort it by values, or turn it into a value to key mapping.
In order to solve this problem we need two data structures: the source std::map or std::multimap of key to value pairs, and a second, output std::multimap in which we will invert the pairs (std::multimap because in the source different keys can point at the same value; the value in the source will become the key in the output, and std::multimap will handle the duplicates).

The solution is to walk the source data structure of key to value pairs, and build the output data structure of value to key pairs.

Complete listing:
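A sketch of the approach (using int keys and values as in the output below):

#include <iostream>
#include <map>

// Walk the source key -> value pairs and insert them as value -> key pairs.
template<typename Map>
auto invert(const Map& source)
{
    std::multimap<typename Map::mapped_type, typename Map::key_type> output;
    for (const auto& [key, value] : source)
        output.emplace(value, key);
    return output;
}

int main()
{
    std::multimap<int, int> source{{2, 4}, {6, 8}, {8, 4}, {9, 6}, {9, 2}};

    std::cout << "Source (Key -> Value):" << std::endl;
    for (const auto& [k, v] : source)
        std::cout << k << " -> " << v << std::endl;

    auto output = invert(source);

    std::cout << "Output (Value -> Key):" << std::endl;
    for (const auto& [v, k] : output)
        std::cout << v << " -> " << k << std::endl;
}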

Source (Key -> Value):
2 -> 4
6 -> 8
8 -> 4
9 -> 6
9 -> 2
Output (Value -> Key):
2 -> 9
4 -> 2
4 -> 8
6 -> 9
8 -> 6
Program ended with exit code: 1

Program output.

Interview question, part 5

Find the starting position(s) of “runs” (consecutive characters) in the input from left to right.

Example input and output:

Input: 0000000000
Runs: 0

Input: 0101010101
Runs:

Input: 1111111111
Runs: 0

Input: 0110110110
Runs: 1 4 7

Input: 1110000011
Runs: 0 3 8

Input: 0010011010
Runs: 0 3 5

Program ended with exit code: 1

The answer:
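One possible answer, as a sketch (a run here means two or more identical consecutive characters):

#include <iostream>
#include <string>
#include <vector>

// Report the starting index of every run, scanning left to right.
std::vector<std::size_t> find_runs(const std::string& input)
{
    std::vector<std::size_t> runs;
    std::size_t i = 0;
    while (i < input.size())
    {
        std::size_t start = i;
        while (i + 1 < input.size() && input[i + 1] == input[start]) ++i;
        if (i > start) runs.push_back(start);   // run of length >= 2
        ++i;
    }
    return runs;
}

int main()
{
    for (const auto& input : {"0000000000", "0101010101", "1111111111",
                              "0110110110", "1110000011", "0010011010"})
    {
        std::cout << "Input: " << input << std::endl << "Runs:";
        for (auto pos : find_runs(input)) std::cout << ' ' << pos;
        std::cout << std::endl << std::endl;
    }
}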

Interview question, part 4

I dug up another one from years ago 🙂 This one asked to write a program that:

Prints all binary numbers that do not contain consecutive bits set.

My approach was to brute force over all possible numbers and test each one with a sliding 0b11 bit mask for 2 consecutive bits set.

The answer:
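A sketch of that brute-force approach (range limited to 5-bit numbers just to keep the output short):

#include <bitset>
#include <iostream>

// Slide a two-bit mask across n; report true if two adjacent bits are set.
bool has_consecutive_bits(unsigned int n)
{
    for (unsigned int mask = 0b11; mask <= n; mask <<= 1)
        if ((n & mask) == mask) return true;
    return false;
}

int main()
{
    for (unsigned int n = 0; n < 32; ++n)
        if (!has_consecutive_bits(n))
            std::cout << std::bitset<5>(n) << std::endl;
}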

Interview question, part 3

I was given this piece of code to analyze and point out what’s wrong with it:
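The original snippet is not reproduced here, but it was something along these lines (a reconstruction based on the explanation below):

#include <iostream>

class Base
{
public:
    Base() { bar(); }              // the constructor calls bar()...
    virtual ~Base() = default;

    void bar() { foo(); }          // ...which calls the pure virtual foo()
    virtual void foo() = 0;
};

class Derived : public Base
{
public:
    void foo() override { std::cout << "Derived::foo()" << std::endl; }
};

int main()
{
    Derived d;   // Base's constructor ends up calling a pure virtual function
}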

I was also able to compile and execute this program on my Mac; output from Xcode gives away the answer:

libc++abi.dylib: Pure virtual function called!
Program ended with exit code: 9

Program output.

But why? The constructor of Base calls void bar(), which in turn calls pure virtual void foo() = 0. Derived overrides foo(), so what’s the problem here?
Inside the constructor of Base, the type of the this pointer is… Base. So the call to foo() inside bar() resolves to the Base version of foo(), which is a pure virtual function lacking an implementation. Calling it is illegal, so the compiler fills Base’s virtual function table entry for foo() with a pointer to __cxa_pure_virtual. That in turn calls __pthread_kill, terminating the program.

__cxa_pure_virtual disassembly:


UPDATE:

Item #9 from Effective C++: never call virtual functions during construction or destruction.

Related discussion: C++ corner case: You can implement pure virtual functions in the base class.

Interview question, part 2

Another interview question I was asked in the past:

Given only a stack, implement a queue.

I remember thinking at first that it cannot be done! My mind for some reason imagined that I am limited to only one stack. I told the interviewer that I don’t think it can be done with one stack. His response was: I didn’t say anything about having only one stack. That’s when it hit me: if I use two stacks, one for pushing, one for popping, and move the elements between them, it can be done!

When pushing, items go on stack 1. So the bottom of stack 1 is the front of the queue. So how do we get to the bottom element? We pop all the items from stack 1 and push them onto stack 2. This effectively reverses the order of elements and the top of stack 2 becomes the front of the queue. We can then pop off the queue by popping off of stack 2. We do this reversal of elements on every push and pop of the queue. Yeah, it’s inefficient, but it works 🙂 Honestly I don’t know why you would ever want to implement a queue this way, but nevertheless it was a great interview question!

The answer:
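A sketch of that two-stack queue:

#include <iostream>
#include <stack>

template<typename T>
class queue
{
public:
    void push(const T& item)
    {
        transfer(m_popStack, m_pushStack);   // move everything back onto the push stack
        m_pushStack.push(item);
    }

    T pop()
    {
        transfer(m_pushStack, m_popStack);   // reverse the order: bottom becomes top
        T item = m_popStack.top();
        m_popStack.pop();
        return item;
    }

private:
    static void transfer(std::stack<T>& from, std::stack<T>& to)
    {
        while (!from.empty())
        {
            to.push(from.top());
            from.pop();
        }
    }

    std::stack<T> m_pushStack;
    std::stack<T> m_popStack;
};

int main()
{
    queue<int> q;
    q.push(1);
    q.push(2);
    q.push(3);
    std::cout << q.pop() << std::endl;   // 1
    std::cout << q.pop() << std::endl;   // 2
    std::cout << q.pop() << std::endl;   // 3
}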

1
2
3

Program output.

Interview question, part 1

When interviewing for a job in the past, I was asked the following question:

Write a multithreaded program that prints a sequence of characters like this: A, B, A, B, A, B… Synchronize properly so it never prints the same character more than once in a row.

Thinking quickly on my feet 🙂 I realized the trick is to use an event object (I will be using this one). But wait, one event object is not enough. The real trick is to use two event objects: thread 1 waits on event 1 to be signaled, then prints ‘A’, and finally signals event 2, repeating the sequence in a loop. Thread 2 waits on event 2, then prints ‘B’, and finally signals event 1. Ditto loop. That should do it!

The answer:
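A sketch using the auto_event from the Event objects post further down (a constructor taking the initial signaled state is assumed):

#include <iostream>
#include <thread>
#include "event.h"

int main()
{
    auto_event event1(true);    // thread 1 goes first
    auto_event event2(false);

    std::thread t1([&] {
        for (int i = 0; i < 3; ++i)
        {
            event1.wait();                  // wait for my turn
            std::cout << "A" << std::endl;
            event2.signal();                // hand the turn to thread 2
        }
    });

    std::thread t2([&] {
        for (int i = 0; i < 3; ++i)
        {
            event2.wait();
            std::cout << "B" << std::endl;
            event1.signal();
        }
    });

    t1.join();
    t2.join();
}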

A
B
A
B
A
B

Program output.

Template concepts, sort of

Back to the queue class from previous posts (here and here)… I realized things can be done better: I want the queue to work with move-constructible and move-assignable types as well as copyable types, and do so automatically. This way the push and pop methods can use std::move to insert and remove elements from the queue. I also want it to work with no-throw constructible and assignable types; this way the push and pop methods can take a more optimized path that does not deal with exceptions. So I realized that I need four versions of push and pop methods: 1) copy-constructible, 2) no-throw copy-constructible, 3) move-constructible, 4) no-throw move-constructible. All fine and dandy, but how do we select the right version at compile time based on type T that the queue holds?
In C++20 we will get template concepts that will allow us to do just that sort of selection at compile time. In the meantime we have to make do with std::enable_if (see here) and other type traits.
Finally, for completeness of interface, I want to add a pop method that returns an item rather than taking an output reference parameter, and declares proper noexcept value depending on type T.

Below is the no-throw-movable pop method; it’s declared noexcept and lacks exception handling code (possibly making it faster at runtime):
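A sketch of what that looks like (member and class names follow my blocking queue sketches later in this series; the real listing may differ):

// Selected only when T is no-throw move assignable; no try/catch needed.
template<typename Q = T>
typename std::enable_if<std::is_nothrow_move_assignable<Q>::value, void>::type
pop(T& item) noexcept
{
    m_fullSlots.wait();
    {
        std::lock_guard<std::mutex> lock(m_cs);
        item = std::move(m_data[m_popIndex]);
        m_data[m_popIndex].~T();
        m_popIndex = (m_popIndex + 1) % m_size;
        --m_count;
    }
    m_openSlots.post();
}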

And here is the new pop method; notice its noexcept value is dependent on which version of pop it uses, and it is deduced at compile time 🙂
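A simplified sketch of the idea (it assumes a default-constructible T and delegates to whichever pop(T&) overload is enabled for T):

T pop() noexcept(std::is_nothrow_default_constructible<T>::value
                 && noexcept(std::declval<blocking_queue&>().pop(std::declval<T&>())))
{
    T item;
    pop(item);   // the enable_if machinery picks the right overload for T
    return item;
}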

Complete listing:

Event objects

No, not the type where you subscribe to an event using an interface or a callback. The type where you wait on an event to be signaled, just like Windows Event Objects.
Thanks to many suggestions from the kind people on Reddit’s r/cpp I came up with 2 classes: manual_event, which can be waited on, and when signaled stays signaled until it’s reset, unblocking all waiting threads; and auto_event, which unblocks only one waiting thread and automatically resets to the non-signaled state.
The below implementation is far from final and I am open to any suggestions from people experienced in multi-threaded programming.

The usage example:

The event.h:
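A sketch of the two classes using a mutex and a condition variable (not necessarily the final listing):

#pragma once

#include <condition_variable>
#include <mutex>

class manual_event
{
public:
    explicit manual_event(bool signaled = false) : m_signaled(signaled) {}

    void signal()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_signaled = true;
        m_cv.notify_all();              // stays signaled, wake everyone
    }

    void reset()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_signaled = false;
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return m_signaled; });
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    bool m_signaled;
};

class auto_event
{
public:
    explicit auto_event(bool signaled = false) : m_signaled(signaled) {}

    void signal()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_signaled = true;
        m_cv.notify_one();              // unblock a single waiter
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return m_signaled; });
        m_signaled = false;             // auto-reset after releasing one thread
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    bool m_signaled;
};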

Class mechanics

A friend asked me to make a post about the mechanics of virtual functions in C++. I thought about it for a few days and decided to write a broader post about several topics dealing with classes and what happens under the hood when we use member functions, inheritance, and virtual functions.

Member functions. You can think of member functions as no different from static member functions, or functions defined outside the class, except that the compiler adds a hidden first parameter, this. That’s how a member function is able to operate on an instance of a class. So when you write this:
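For example (a hypothetical class, not taken from the original post):

class C
{
public:
    void MemFun(int x) { m_value = x; }
private:
    int m_value = 0;
};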

What you are really getting is this:
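Conceptually the compiler turns it into something like this (illustrative, hypothetical names):

struct C { int m_value; };

// the hidden 'this' becomes an explicit first parameter
void C_MemFun(C* this_, int x) { this_->m_value = x; }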

A compiler-generated C_MemFun with an extra parameter of type C*.

Inheritance. Here I want to briefly mention the order in which constructors and destructors are called: if D derives from B and you create an instance of D, the constructors will be executed in order: B’s first, D’s second. So when control enters D’s constructor, the B part of the object has already been initialized. The opposite is true for destructors. When you delete an instance of D, the destructors will be executed in order: D’s first, B’s second. The code at the end of this post will demonstrate it. BTW I’m skipping over virtual and pure virtual destructors in this post since that’s a topic that could easily become its own blog post 😉

Virtual functions. Several things happen when we declare a virtual function: for each class the compiler creates a hidden table called virtual function table. Pointers to virtual functions are stored in this table. Next each instance of a class gets a hidden member variable called virtual function table pointer that, as the name implies, points at the virtual function table for that particular class. So an instance of B has a hidden member pointing at B’s virtual function table, and an instance of D… ditto. When you create virtual functions the compiler generates the following dispatch code:
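Roughly like this (illustrative pseudocode, using the names from the description below):

// look up virtual function number VirtualFuncNum in the instance's table
// and call it with the instance as the hidden first argument
(*_this_->VirtualTablePtr[VirtualFuncNum])(_this_ /*, extra parameters */);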

For an instance of type T called _this_, it does a lookup in the VirtualTablePtr for virtual function number VirtualFuncNum, and calls it with _this_ as the first argument plus whatever extra parameters it accepts. Depending on the type of _this_, VirtualTablePtr will point at a different virtual function table and that’s more or less how we get runtime polymorphism in C++ 🙂

Complete listing:

===> Base start <===
BaseConstructor(0x1005845d0, Member1 = 29)
Base(Member1 = 29)::BaseVirtualFunction_1(59)
Base(Member1 = 29)::BaseVirtualFunction_2(52)
BaseDestructor(0x1005845d0)
===> Base end <===

===> Derived start <===
BaseConstructor(0x10060b6a0, Member1 = 52)
DerivedConstructor(0x10060b6a0, Member2 = 80)
Derived(Member1 = 52, Member2 = 80)::DerivedVirtualFunction_1(40)
Derived(Member1 = 52, Member2 = 80)::DerivedVirtualFunction_2(79)
DerivedDestructor(0x10060b6a0)
BaseDestructor(0x10060b6a0)
===> Derived end <===

Program output.

Refactoring std::auto_ptr

A colleague at work was recently tasked with refactoring some legacy code and came across the good old std::auto_ptr. It was deprecated in C++11 and later removed outright in C++17 (I can’t even compile std::auto_ptr code in Visual Studio 2017 or Xcode 10.1), so it was his job to rewrite it in the modern C++ way. The code that my colleague came across had another problem. Can you spot it?
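It was something along these lines (a reconstruction; T stands for the actual type used):

std::auto_ptr<T> p(new T[10]);   // an array handed to std::auto_ptr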

std::auto_ptr was never meant to hold pointers to arrays. So the code above, assuming it didn’t crash, was leaking memory (regardless of what it holds, std::auto_ptr always calls delete in its destructor; delete[] was needed in the code above).

So how do we refactor legacy std::auto_ptr code into modern C++? The answer is std::unique_ptr (https://en.cppreference.com/w/cpp/memory/unique_ptr). A std::auto_ptr holding a pointer to a single object of type T, in modern C++, becomes:
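For example:

auto p = std::make_unique<T>();   // std::unique_ptr<T> owning a single T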

std::make_unique can forward constructor parameters to T::T, like this:
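For example (arg1 and arg2 standing for whatever T’s constructor takes):

auto p = std::make_unique<T>(arg1, arg2);   // arg1, arg2 are forwarded to T's constructor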

And an array of T’s of size N becomes:
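For example:

auto a = std::make_unique<T[]>(N);   // std::unique_ptr<T[]>; calls delete[] automatically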

Note that for std::make_unique to be able to create an array of T’s, T must have a T::T() (default constructor; or a constructor with all parameters having default values: T::T(int x = 0)).

Fast semaphore

I received an interesting piece of code during my recent discussion about the blocking queue: a fast semaphore implementation. The fast_semaphore class uses a regular C++ semaphore as fallback for waiting and unblocking the thread(s) while doing its magic using an atomic variable and memory fences of the new C++ memory model (see here and here).

I present to you my shortened version of Fast-Semaphore by Joe Seigh (C++ implementation by Chris Thomasson):
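A sketch of the idea (the slow-path semaphore is the mutex-and-condition-variable class sketched at the end of the Blocking queue post below; naming is illustrative):

#pragma once

#include <atomic>
#include "semaphore.h"   // mutex + condition_variable semaphore (assumed)

class fast_semaphore
{
public:
    explicit fast_semaphore(int count) noexcept
    : m_count(count), m_semaphore(0) {}

    void wait()
    {
        // fast path: just decrement; fall back to the real semaphore under contention
        if (m_count.fetch_sub(1, std::memory_order_acquire) < 1)
            m_semaphore.wait();
    }

    void post()
    {
        // fast path: just increment; wake a waiter through the real semaphore if needed
        if (m_count.fetch_add(1, std::memory_order_release) < 0)
            m_semaphore.post();
    }

private:
    std::atomic<int> m_count;
    semaphore m_semaphore;
};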

Better blocking queue

In the previous post we discussed a multi-threaded blocking queue whose implementation was lacking: it was not exception safe. It left the semaphore counts wrongly modified in the presence of exceptions emitted by T’s copy constructor or assignment operator.

Below is the corrected void push(const T& item) method:
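Something along these lines (member names follow my sketch of the queue in the Blocking queue post below):

void push(const T& item)
{
    m_openSlots.wait();
    {
        std::lock_guard<std::mutex> lock(m_cs);
        try
        {
            new (m_data + m_pushIndex) T(item);   // T's copy constructor may throw
        }
        catch (...)
        {
            m_openSlots.post();   // nothing was pushed: restore the open-slot count
            throw;                // and re-throw to the caller
        }
        m_pushIndex = (m_pushIndex + 1) % m_size;
        ++m_count;
    }
    m_fullSlots.post();
}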

In the corrected version we first catch any exception possibly emitted by T’s copy constructor then adjust back the open-slot semaphore. Since the push operation failed the number of open and full slots has not changed. Finally we re-throw the exception.

Below is the corrected void pop(T& item) method:
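And a matching sketch of the pop side:

void pop(T& item)
{
    m_fullSlots.wait();
    {
        std::lock_guard<std::mutex> lock(m_cs);
        try
        {
            item = m_data[m_popIndex];            // T's assignment operator may throw
        }
        catch (...)
        {
            m_fullSlots.post();   // nothing was popped: restore the full-slot count
            throw;                // and re-throw to the caller
        }
        m_data[m_popIndex].~T();
        m_popIndex = (m_popIndex + 1) % m_size;
        --m_count;
    }
    m_openSlots.post();
}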

In the corrected version we first catch any exception possibly emitted by T’s assignment operator then adjust back the full-slot semaphore. Since the pop operation failed the number of open and full slots has not changed. Finally we re-throw the exception.

We now have a robust queue template class that can handle misbehaving T’s 🙂

Blocking queue

Where the producer-consumer pattern is present it is often the case that one is faster than the other: a parsing producer reads records faster than a processing consumer; a disk-reading producer is faster than a network-sending consumer.
Producer and consumer often communicate by queues: the producer will put items on a queue while the consumer will pop items off a queue. What happens when the queue becomes full, or empty? One approach of the producer is to try to put an item on a queue and if it’s full yield the thread and repeat. Similarly the consumer can try to pop an item off a queue and if it’s empty, ditto. This approach of try-fail-yield can unnecessarily burn CPU cycles in tight loops that constantly try to put or pop items off a queue.
Another approach is to temporarily grow the queue, but that doesn’t scale well. When do we stop growing? And once we stop we have to fall back onto the try-fail-yield method.

What if we could implement a blocking queue: a queue whose put operation blocks when the queue is full, and unblocks only when another thread pops an item off the queue. Similarly, a queue whose pop operation blocks when the queue is empty, and unblocks only when another thread puts an item on the queue. An example of using such a queue would look like this (notice the fast producer and slow consumer in the code below):
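A sketch of such an example (capacity of 5, matching the output below):

#include <chrono>
#include <iostream>
#include <thread>
#include "queue.h"   // the blocking_queue sketched further down

int main()
{
    using namespace std::chrono_literals;
    blocking_queue<int> q(5);

    std::thread producer([&q] {
        for (int v = 1; v <= 10; ++v)
        {
            q.push(v);                           // blocks when the queue is full
            std::cout << "push v = " << v << std::endl;
        }
    });

    std::thread consumer([&q] {
        std::this_thread::sleep_for(1s);         // let the producer fill the queue
        for (int i = 0; i < 10; ++i)
        {
            int v;
            q.pop(v);                            // blocks when the queue is empty
            std::cout << "pop  v = " << v << std::endl;
            std::this_thread::sleep_for(300ms);  // slow consumer
        }
    });

    producer.join();
    consumer.join();
}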

Notice there is no try-fail-yield code in the example above. The put operation of the fast producer simply blocks until the slow consumer makes more room on the queue. The output of the program is as we expect: the fast producer fills up the queue then blocks, a second later the consumer starts to slowly pop items off the queue; they go in tandem for a while until the producer exits and the consumer drains the queue:

push v = 1
push v = 2
push v = 3
push v = 4
push v = 5
pop  v = 1
push v = 6
pop  v = 2
push v = 7
pop  v = 3
push v = 8
pop  v = 4
push v = 9
pop  v = 5
push v = 10
pop  v = 6
pop  v = 7
pop  v = 8
pop  v = 9
pop  v = 10
Program ended with exit code: 1

The trick to implementing such a queue is the ability to count both open and full slots of the queue, and block. A semaphore is a perfect mechanism to do just that 🙂 In fact we need two semaphores: one to count the open slots, and another to count the full slots. The open-slot semaphore starts with a count equal to the size of the queue. The full-slot semaphore starts with a count of zero. A push operation waits on the open-slot semaphore and signals the full-slot semaphore. A pop operation waits on the full-slot semaphore and signals the open-slot semaphore.

The blocking queue implementation below uses Boost semaphores to count and std::mutex to protect the critical section. In the next post I will show how to make it safe in the presence of exceptions; currently it misbehaves if T’s copy constructor or assignment operator throw (it assumes T’s destructor will never throw).

queue.h
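A sketch of that implementation (member names are illustrative; this is the exception-unsafe version that the Better blocking queue post above fixes):

#pragma once

#include <mutex>
#include <boost/interprocess/sync/interprocess_semaphore.hpp>

template<typename T>
class blocking_queue
{
public:
    explicit blocking_queue(unsigned int size)
    : m_size(size), m_pushIndex(0), m_popIndex(0), m_count(0),
      m_data(static_cast<T*>(operator new(size * sizeof(T)))),
      m_openSlots(size), m_fullSlots(0) {}

    blocking_queue(const blocking_queue&) = delete;
    blocking_queue& operator=(const blocking_queue&) = delete;

    void push(const T& item)
    {
        m_openSlots.wait();                       // wait for an open slot
        {
            std::lock_guard<std::mutex> lock(m_cs);
            new (m_data + m_pushIndex) T(item);
            m_pushIndex = (m_pushIndex + 1) % m_size;
            ++m_count;
        }
        m_fullSlots.post();                       // signal a full slot
    }

    void pop(T& item)
    {
        m_fullSlots.wait();                       // wait for a full slot
        {
            std::lock_guard<std::mutex> lock(m_cs);
            item = m_data[m_popIndex];
            m_data[m_popIndex].~T();
            m_popIndex = (m_popIndex + 1) % m_size;
            --m_count;
        }
        m_openSlots.post();                       // signal an open slot
    }

    ~blocking_queue()
    {
        while (m_count)                           // destroy whatever is left
        {
            m_data[m_popIndex].~T();
            m_popIndex = (m_popIndex + 1) % m_size;
            --m_count;
        }
        operator delete(m_data);
    }

private:
    const unsigned int m_size;
    unsigned int m_pushIndex;
    unsigned int m_popIndex;
    unsigned int m_count;
    T* m_data;

    boost::interprocess::interprocess_semaphore m_openSlots;
    boost::interprocess::interprocess_semaphore m_fullSlots;
    std::mutex m_cs;
};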

P.S. It was suggested to me that the use of boost::interprocess::interprocess_semaphore is a heavy-handed approach. I agree. I only used it to keep the example code small and uncluttered with more utility classes. In production you should have a lightweight semaphore class that uses a mutex and a condition variable. Like this 🙂 …
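Something along these lines (a sketch, not the linked class):

#include <condition_variable>
#include <mutex>

class semaphore
{
public:
    explicit semaphore(unsigned int count = 0) : m_count(count) {}

    void post()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        ++m_count;
        m_cv.notify_one();
    }

    void wait()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return m_count > 0; });
        --m_count;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    unsigned int m_count;
};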

RAII

I know the topic of RAII has been blogged about plenty of times before. Still, I want to present to you my take on it 🙂 Recently I created a policy-based generic RAII template for holding various types of resources (pointers, file handles, mutexes, etc). The nice thing about my implementation is that in order to acquire and release a new type of a resource you only have to define simple Acquire and Release policy template classes and plug them as parameters to the RAII template. Here’s how you can use it with std::mutex:
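A sketch of what that can look like (the RAII template itself is included here to keep the snippet self-contained; names follow the description below, not necessarily the exact listing):

#include <mutex>

// Generic RAII holder: applies AcquirePolicy in its constructor
// and ReleasePolicy in its destructor.
template<typename T,
         template<typename> class AcquirePolicy,
         template<typename> class ReleasePolicy>
class RAII
{
public:
    explicit RAII(T v) : m_held(v) { AcquirePolicy<T>::Execute(m_held); }
    ~RAII() { ReleasePolicy<T>::Execute(m_held); }
    RAII(const RAII&) = delete;
    RAII& operator=(const RAII&) = delete;
private:
    T m_held;
};

// Policies for anything with .lock() / .unlock()
template<typename T> struct LockPolicy   { static void Execute(T& t) { t.lock(); } };
template<typename T> struct UnlockPolicy { static void Execute(T& t) { t.unlock(); } };

// Hold T by reference; lock in the constructor, unlock in the destructor.
template<typename T>
using scope_lock = RAII<T&, LockPolicy, UnlockPolicy>;

int main()
{
    std::mutex m;
    scope_lock<std::mutex> lock(m);   // locked here, unlocked at end of scope
}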

In the code above we declare 2 policy classes: LockPolicy which calls .lock(); on type T, and UnlockPolicy which calls .unlock(); on type T. Next we declare scope_lock to be a template which will hold type T by reference and apply LockPolicy::Execute(T t); and UnlockPolicy::Execute(T t); in the constructor and destructor of RAII. This way we can use scope_lock with any object that has .lock(); and .unlock(); methods.

As an exercise in writing policy classes let’s use RAII template to hold and automatically delete or delete[] pointers and pointers to arrays:
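A sketch continuing from the snippet above:

// A policy that does nothing: the pointer is already allocated when we take it.
template<typename T> struct NoOpPolicy { static void Execute(T&) {} };

// Release policies for single objects and for arrays.
template<typename T> struct PointerReleasePolicy { static void Execute(T& p) { delete p; } };
template<typename T> struct ArrayReleasePolicy   { static void Execute(T& p) { delete[] p; } };

// Hold a raw pointer; do nothing on construction, delete / delete[] on destruction.
template<typename T>
using ptr_handle_t = RAII<T*, NoOpPolicy, PointerReleasePolicy>;

template<typename T>
using arr_ptr_handle_t = RAII<T*, NoOpPolicy, ArrayReleasePolicy>;

int main()
{
    ptr_handle_t<int>     p(new int(42));   // delete at end of scope
    arr_ptr_handle_t<int> a(new int[10]);   // delete[] at end of scope
}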

First we need a policy that does nothing; let’s call it NoOpPolicy. That is because nothing needs to happen to a pointer in the constructor of RAII; it’s already allocated. Next we declare two policies: PointerReleasePolicy which calls delete on type T, and ArrayReleasePolicy which calls delete[] on type T. Finally we define ptr_handle_t to be a RAII template which holds a pointer to T, applies NoOpPolicy::Execute(T t); in its constructor and PointerReleasePolicy::Execute(T t); in its destructor. We do the same for arr_ptr_handle_t except using ArrayReleasePolicy.

Complete listing:

L1 cache lines

Herb Sutter gave an interesting talk titled Machine Architecture: Things Your Programming Language Never Told You:

In it he gives a tiny yet very interesting example of code that illustrates hardware destructive interference: how the size of L1 cache line and improper data layout can negatively affect performance of your code.
The example program allocates two int’s on the heap, one right next to the other. It then starts two threads; each thread reads and writes to one of the int locations. Let’s do just that, 100’000’000 times, and see how long it takes:

Duration: 4338.55 ms

Let us now do the same exact thing, except this time we’ll space the int’s apart, by… you guessed it, L1 cache line size (64 bytes on Intel and AMD x64 chips):

Duration: 1219.50 ms

The same code now runs roughly 3.5 times faster. What happens is that the L1 caches of the CPU cores no longer have to be synchronized every time we write to a memory location.

Lesson here is that data layout in memory matters. If you must run multiple threads that perform work and write to memory locations, make sure those memory locations are separated by L1 cache line size. C++17 helps us with that: Hardware Constructive and Destructive Interference Size.

Complete listing:
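A sketch of the experiment described above (not the exact listing):

#include <chrono>
#include <cstddef>
#include <iostream>
#include <thread>

constexpr int iterations = 100'000'000;
constexpr std::size_t cache_line_size = 64;   // or std::hardware_destructive_interference_size in C++17

double run(int* a, int* b)
{
    auto work = [](int* p)
    {
        for (int i = 0; i < iterations; ++i)
            *p += i;                           // read and write the same location
    };

    auto start = std::chrono::high_resolution_clock::now();
    std::thread t1(work, a);
    std::thread t2(work, b);
    t1.join();
    t2.join();
    auto stop = std::chrono::high_resolution_clock::now();

    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main()
{
    // Adjacent ints: both live on the same L1 cache line (false sharing).
    int* adjacent = new int[2]{};
    std::cout << "Duration: " << run(&adjacent[0], &adjacent[1]) << " ms" << std::endl;
    delete[] adjacent;

    // Ints spaced one cache line apart: no more cache-line ping-pong.
    int* spaced = new int[2 * cache_line_size / sizeof(int)]{};
    std::cout << "Duration: " << run(&spaced[0], &spaced[cache_line_size / sizeof(int)]) << " ms" << std::endl;
    delete[] spaced;
}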

Sorting and locality

Let’s look at two standard data structures, std::vector and std::list, and what happens to the performance of traversing them sequentially once they are sorted.

In this example we will generate a std::vector and a std::list of 10,000,000 randomly generated int values. We will then traverse them sequentially using std::accumulate and measure the time it took. Finally we will sort both data structures and repeat the sequential access.

Below are the durations captured on my 2012 MacBook Pro 2.3 GHz Intel Core i7. The code was compiled with maximum optimizations using latest GCC, Apple Clang, and LLVM compilers available for Mac OS:

GCC -Ofast -march=native -lstdc++
List duration 31.795 ms
Sorted list duration 970.679 ms
Vector duration 0.011 ms
Sorted vector duration 0.005 ms

Apple CLANG -Ofast -march=native -lc++
List duration 29.112 ms
Sorted list duration 996.429 ms
Vector duration 0.004 ms
Sorted vector duration 0.004 ms

LLVM CLANG -Ofast -march=native -lc++
List duration 27.358 ms
Sorted list duration 1014.641 ms
Vector duration 0.005 ms
Sorted vector duration 0.005 ms

Let’s talk about the std::vector first. The time to run std::accumulate on the unsorted and sorted vector is virtually the same. Why? Because std::vector is a data structure laid out contiguously in memory. Regardless of what we do to it, the locality of elements is always present. By accessing the Nth element the CPU brings elements N+1, …, N+M into the L1 cache, up to a full cache line (64 bytes on Intel and AMD x64 chips).

What about the std::list? Why such a performance penalty after sorting?
Remember how a doubly-linked list (which is usually how std::list is implemented) is laid out in memory: nodes containing data elements with pointers to the previous and next node. Sorting relinks the nodes, so the traversal order no longer matches their order in memory; effectively the layout on the heap is randomized and each node’s next and previous pointers point all over the place. The process of sorting destroyed whatever locality may have been there when the list was first created.

Lesson here is to carefully choose your containers. If you are going to sort them and later access them sequentially then std::vector is the clear winner.

Complete listing:
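A sketch of the experiment (not the exact listing):

#include <algorithm>
#include <chrono>
#include <iostream>
#include <list>
#include <numeric>
#include <random>
#include <vector>

// Time a sequential std::accumulate over a container, in milliseconds.
template<typename C>
double accumulate_time(const C& c)
{
    auto start = std::chrono::high_resolution_clock::now();
    volatile long long sum = std::accumulate(c.begin(), c.end(), 0LL);
    auto stop = std::chrono::high_resolution_clock::now();
    (void)sum;
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

int main()
{
    constexpr int count = 10'000'000;

    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> dist;

    std::vector<int> v(count);
    std::generate(v.begin(), v.end(), [&] { return dist(gen); });
    std::list<int> l(v.begin(), v.end());

    std::cout << "List duration " << accumulate_time(l) << " ms" << std::endl;
    std::cout << "Vector duration " << accumulate_time(v) << " ms" << std::endl;

    l.sort();
    std::sort(v.begin(), v.end());

    std::cout << "Sorted list duration " << accumulate_time(l) << " ms" << std::endl;
    std::cout << "Sorted vector duration " << accumulate_time(v) << " ms" << std::endl;
}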