Propagate exceptions across threads

What if you need to catch an exception in a worker thread and re-throw it in the main thread that’s waiting for the worker to finish? std::future already works this way: if you spawn a future on a new thread using std::async(std::launch::async, ...) and the worker throws an exception, that exception will be re-thrown when you later call get() on the future.

You do it by wrapping the worker thread’s function in try { /* CODE */ } catch(...) {} and capturing the current exception pointer (std::exception_ptr) using std::current_exception. You can then re-throw the captured exception using the pointer and std::rethrow_exception. Below is an example that illustrates this technique. Just remember: if you have multiple worker threads, make sure to have multiple std::exception_ptr instances, one per worker thread.

exceptions.cpp:
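Here is a minimal sketch of the technique (the thread function, exception message, and printed output are illustrative):

#include <exception>
#include <iostream>
#include <stdexcept>
#include <thread>

int main()
{
    std::exception_ptr eptr; // one per worker thread!

    std::thread worker([&eptr] {
        try
        {
            // worker code that may throw...
            throw std::runtime_error("something went wrong in the worker");
        }
        catch(...)
        {
            // capture the in-flight exception so it can be re-thrown elsewhere
            eptr = std::current_exception();
        }
    });
    worker.join();

    if(eptr)
    {
        try
        {
            // re-throw the captured exception in the main thread
            std::rethrow_exception(eptr);
        }
        catch(const std::exception& e)
        {
            std::cout << "Main thread caught exception: " << e.what() << std::endl;
        }
    }
}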

Thread 0x1048d75c0 caught exception from thread 0x700001cf3000

Program output.

int main()

I have been spanked by a certain commenter (who shall not remain anonymous 😉 ) on here and FB about my style of naming unused main arguments and adding an unnecessary return 1; at the end of every main function.

I have thought about it and I… concede the point of his argument 🙂 From now on the style on this blog shall be as follows (if arguments to main are not needed):
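A minimal illustration of the new style:

int main()
{
    // program code; no unused argc/argv parameters and no explicit return needed
}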

P.S. The C++ standard allows for two valid signatures: int main() and int main(int argc, char** argv), see here.

XML-RPC

XML-RPC is yet another method of implementing remote procedure calls. It uses XML over HTTP to transmit data. In a past life working at TLO I used the XML-RPC-C library to implement communication between cluster nodes and a cluster management system. I thought the library was well designed and easy to use, so I wanted to introduce you to it.

Below is a simple client and server implementation using the XML-RPC-C library. The server implements one RPC that accepts one string parameter and returns one string. The client makes the call to the server saying hello and prints the reply. The code is easy to read and does not need any further explanation 🙂

The client. xmlrpc_c.cpp:
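Here is a sketch of such a client using the xmlrpc-c C++ API (the method name "hello", the URL, and the port are placeholders of my choosing):

#include <iostream>
#include <string>
#include <xmlrpc-c/base.hpp>
#include <xmlrpc-c/client_simple.hpp>

int main()
{
    try
    {
        xmlrpc_c::clientSimple client;
        xmlrpc_c::value result;

        // call the "hello" RPC with one string parameter ("s")
        client.call("http://localhost:8080/RPC2", "hello", "s", &result, "Hello from the client!");

        // the RPC returns a single string; print the reply
        std::string const reply = xmlrpc_c::value_string(result);
        std::cout << reply << std::endl;
    }
    catch(const std::exception& e)
    {
        std::cerr << e.what() << std::endl;
    }
}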

The server. xmlrpc_s.cpp:
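And a sketch of the matching server (same placeholder method name and port as above):

#include <iostream>
#include <string>
#include <xmlrpc-c/base.hpp>
#include <xmlrpc-c/registry.hpp>
#include <xmlrpc-c/server_abyss.hpp>

// the one RPC: takes a string, returns a string
class hello_method : public xmlrpc_c::method
{
public:
    hello_method()
    {
        this->_signature = "s:s";
        this->_help = "Takes a greeting string, returns a reply string";
    }

    void execute(xmlrpc_c::paramList const& params, xmlrpc_c::value* const retval)
    {
        std::string const msg = params.getString(0);
        params.verifyEnd(1);

        std::cout << "Received: " << msg << std::endl;
        *retval = xmlrpc_c::value_string("Hello back from the server!");
    }
};

int main()
{
    xmlrpc_c::registry registry;
    xmlrpc_c::methodPtr const helloMethod(new hello_method);
    registry.addMethod("hello", helloMethod);

    // serve RPC requests over HTTP on port 8080
    xmlrpc_c::serverAbyss server(xmlrpc_c::serverAbyss::constrOpt()
        .registryP(&registry)
        .portNumber(8080));
    server.run();
}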

Base64 encoding

Base64 encoding: turning binary data into ASCII text for the purpose of saving it to text files like XML, transmitting it over protocols like HTTP, embedding it into web pages, and many other purposes. That’s the basic idea behind it. For every 3 bytes of input you get 4 bytes of output, so the size overhead is a modest 33% or so.

I was looking for a library with built-in, easy-to-use, and clean base64 encoding and decoding functions but didn’t really find anything to my liking. So I looked for a reference implementation and found one at wikibooks.org. Their C++ implementation (released to the public domain so I could freely use and modify it) was my starting point. I beautified the code and brought it closer to modern C++ 🙂 So now you have header-only, clean base64 encode and decode functions you can use in your projects: base64.hpp.

I wrote a program that encodes and decodes an input string, checks the original against the decoded one, and also checks the encoded base64 text against a reference base64 string taken from the wiki. The implementation checks out and produces correct encoded strings and decoded data 🙂 Below is the test program.

base64.cpp:
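A sketch of the test program; the encode/decode function names and signatures below are assumed, so adjust them to whatever base64.hpp actually exposes:

#include <iostream>
#include <string>
#include "base64.hpp"

int main()
{
    const std::string input =
        "Man is distinguished, not only by his reason, but by this singular passion from "
        "other animals, which is a lust of the mind, that by a perseverance of delight in "
        "the continued and indefatigable generation of knowledge, exceeds the short "
        "vehemence of any carnal pleasure.";

    const std::string reference =
        "TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ"
        "1dCBieSB0aGlzIHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3"
        "aGljaCBpcyBhIGx1c3Qgb2YgdGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmF"
        "uY2Ugb2YgZGVsaWdodCBpbiB0aGUgY29udGludWVkIGFuZCBpbmRlZmF0aWd"
        "hYmxlIGdlbmVyYXRpb24gb2Yga25vd2xlZGdlLCBleGNlZWRzIHRoZSBzaG9ydC"
        "B2ZWhlbWVuY2Ugb2YgYW55IGNhcm5hbCBwbGVhc3VyZS4=";

    // NOTE: function names and signatures assumed for illustration
    const std::string encoded = base64::encode(input.data(), input.size());
    const std::string decoded = base64::decode(encoded);

    std::cout << "Input: " << input << "\n\n";
    std::cout << "Encoded: " << encoded << "\n\n";
    std::cout << (encoded == reference
        ? "Encoded data matches reference :o)"
        : "Encoded data does NOT match reference") << "\n\n";
    std::cout << "Decoded: " << decoded << "\n\n";
    std::cout << (decoded == input
        ? "Decoded data matches original :o)"
        : "Decoded data does NOT match original") << std::endl;
}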

Input: Man is distinguished, not only by his reason, but by this singular passion from other animals, which is a lust of the mind, that by a perseverance of delight in the continued and indefatigable generation of knowledge, exceeds the short vehemence of any carnal pleasure.

Reference:

TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBieSB0aGlzIHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcyBhIGx1c3Qgb2YgdGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmFuY2Ugb2YgZGVsaWdodCBpbiB0aGUgY29udGludWVkIGFuZCBpbmRlZmF0aWdhYmxlIGdlbmVyYXRpb24gb2Yga25vd2xlZGdlLCBleGNlZWRzIHRoZSBzaG9ydCB2ZWhlbWVuY2Ugb2YgYW55IGNhcm5hbCBwbGVhc3VyZS4=

Encoded:

TWFuIGlzIGRpc3Rpbmd1aXNoZWQsIG5vdCBvbmx5IGJ5IGhpcyByZWFzb24sIGJ1dCBieSB0aGlzIHNpbmd1bGFyIHBhc3Npb24gZnJvbSBvdGhlciBhbmltYWxzLCB3aGljaCBpcyBhIGx1c3Qgb2YgdGhlIG1pbmQsIHRoYXQgYnkgYSBwZXJzZXZlcmFuY2Ugb2YgZGVsaWdodCBpbiB0aGUgY29udGludWVkIGFuZCBpbmRlZmF0aWdhYmxlIGdlbmVyYXRpb24gb2Yga25vd2xlZGdlLCBleGNlZWRzIHRoZSBzaG9ydCB2ZWhlbWVuY2Ugb2YgYW55IGNhcm5hbCBwbGVhc3VyZS4=

Encoded data matches reference :o)

Decoded: Man is distinguished, not only by his reason, but by this singular passion from other animals, which is a lust of the mind, that by a perseverance of delight in the continued and indefatigable generation of knowledge, exceeds the short vehemence of any carnal pleasure.

Decoded data matches original :o)

Program output.

And here is the encoder and decoder function implementation.

base64.hpp:

Extremely Fast Compression Algorithm

LZ4. GitHub repository here. It is open-source, available on pretty much every platform, and widely used in the industry.

It was extremely easy to get started with it. The C API could not possibly be any simpler (I’m looking at you, zlib 😛 ); you pass in 4 parameters to the compression and decompression functions: input buffer, input length, output buffer, and max output length. They return either the number of bytes produced on the output, or an error code. Just be careful when compressing random data (which you should not be doing anyway!): the output will be larger than the input!

Here’s a short example that compresses a vector of a thousand characters:

compression.cpp:
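A sketch of that program (I fill the vector with a single repeated character, which is why it compresses so well):

#include <iostream>
#include <vector>
#include <lz4.h>

int main()
{
    // a thousand identical characters compress extremely well
    std::vector<char> input(1000, 'X');

    // LZ4_compressBound() gives the worst-case compressed size
    std::vector<char> compressed(LZ4_compressBound(static_cast<int>(input.size())));

    const int compressed_size = LZ4_compress_default(
        input.data(), compressed.data(),
        static_cast<int>(input.size()), static_cast<int>(compressed.size()));
    std::cout << "LZ4 compress, bytes in: " << input.size()
              << ", bytes out: " << compressed_size << std::endl;

    std::vector<char> decompressed(input.size());
    const int decompressed_size = LZ4_decompress_safe(
        compressed.data(), decompressed.data(),
        compressed_size, static_cast<int>(decompressed.size()));
    std::cout << "LZ4 decompress, bytes in: " << compressed_size
              << ", bytes out: " << decompressed_size << std::endl;

    if(decompressed_size == static_cast<int>(input.size()) && decompressed == input)
        std::cout << "Decompressed data matches original :o)" << std::endl;
}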

LZ4 compress, bytes in: 1000, bytes out: 14
LZ4 decompress, bytes in: 14, bytes out: 1000
Decompressed data matches original :o)

Program output.

Parsing command line options

In case you haven’t noticed, I love Boost 😛 so I’m going to introduce you to its Program Options library. It is a command line options parser; it can also read options from an INI file, and it works with STL containers. It supports showing a help message, setting default values for missing options, allowing multiple instances of the same option, and more. For the complete set of features I refer you to the documentation. This post will be a short introduction.

Let’s start with some code that defines what options our program will accept:
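A reconstructed sketch matching the help output below; the variable names follow the description further down (v for the int, vi and vs for the lists), while f and s are my own:

#include <iostream>
#include <string>
#include <vector>
#include <boost/program_options.hpp>

namespace po = boost::program_options;

int main(int argc, char** argv)
{
    int v;
    float f;
    std::string s;
    std::vector<int> vi;
    std::vector<std::string> vs;

    po::options_description desc("Allowed options");
    desc.add_options()
        ("help", "produce help message")
        ("int,i", po::value<int>(&v)->default_value(42), "int value")
        ("float,f", po::value<float>(&f)->default_value(3.141f), "float value")
        ("string,s", po::value<std::string>(&s)->default_value("Vorbrodt"), "string value")
        ("int_list,a", po::value<std::vector<int>>(&vi), "list of int values")
        ("string_list,b", po::value<std::vector<std::string>>(&vs), "list of string values");

    po::variables_map vm;
    po::store(po::parse_command_line(argc, argv, desc), vm);
    po::notify(vm);

    if(vm.count("help"))
        std::cout << desc << std::endl;

    // printing of the parsed values omitted here; see the output below
}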

The first option, invoked with --help, will display a friendly description, like this:

Allowed options:
  --help                           produce help message
  -i [ --int ] arg (=42)           int value
  -f [ --float ] arg (=3.14100003) float value
  -s [ --string ] arg (=Vorbrodt)  string value
  -a [ --int_list ] arg            list of int values
  -b [ --string_list ] arg         list of string values

Options help message.

Next is an integer value option; the first string "int,i" means that you can specify it as either --int or -i on the command line. When specified, its value will be pulled into the variable v; if not specified, the default value will be 42. The next two options, for float and string, behave in exactly the same way. The parser will throw an exception if you specify those options more than once.
Next are the list options: they allow you to specify the same option multiple times, and their values are returned through the vi and vs variables, which are std::vectors of int and string.

Here is the program invocation and the output it produces:

./bin/options -i 1 -f 3.141 -s "Martin" -a 10 -a 11 -a 12 -b "Vorbrodt's" -b "Blog"
Int value was set to 1
Float value was set to 3.141
String value was set to "Martin"
List of ints value was set to 10
List of ints value was set to 11
List of ints value was set to 12
List of strings value was set to "Vorbrodt's"
List of strings value was set to "Blog"

Program invocation and output.

Complete source code of the program below.

options.cpp:

ANSI escape codes

ANSI escape codes are a way to do more than just plain text in the terminal (be it Windows cmd.exe or UNIX xterm). A picture is worth a thousand words so here’s what I was able to do with them:

ANSI escape codes in action.

All of the text appearance manipulation and coloring was done using a tiny library I wrote yesterday and a very intuitive C++ syntax. Here’s the code responsible for the screenshot above:

colors.cpp:

Pretty easy to follow I hope 🙂 The library defines a bunch of stream manipulators that inject the appropriate escape sequences. For example, cout << bold << "BOLD"; will print out, you guessed it, bolded text. color_n picks from the 8-bit color table. color_rgb lets you define a 24-bit truecolor. The _bg_ version is for selecting the background color. Here’s the complete source code for the library, hope you will find it useful!

P.S. The color_rgb manipulator does not appear to work in the Mac OS terminal. So far all the codes work correctly only on Linux; I haven’t tested on Windows 🙂

ansi_escape_code.hpp:
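A minimal sketch of how such manipulators can be built (the real header defines many more of them, and the ansi namespace here is my own, not necessarily how the library scopes its names):

#include <ostream>

namespace ansi
{
    // plain manipulators: ESC[0m resets, ESC[1m turns on bold
    inline std::ostream& reset(std::ostream& os) { return os << "\033[0m"; }
    inline std::ostream& bold(std::ostream& os)  { return os << "\033[1m"; }

    // pick foreground color n from the 256-color table: ESC[38;5;<n>m
    struct color_n
    {
        int n;
        explicit color_n(int n) : n(n) {}
        friend std::ostream& operator<<(std::ostream& os, const color_n& c)
        {
            return os << "\033[38;5;" << c.n << "m";
        }
    };

    // 24-bit truecolor foreground: ESC[38;2;<r>;<g>;<b>m
    struct color_rgb
    {
        int r, g, b;
        color_rgb(int r, int g, int b) : r(r), g(g), b(b) {}
        friend std::ostream& operator<<(std::ostream& os, const color_rgb& c)
        {
            return os << "\033[38;2;" << c.r << ";" << c.g << ";" << c.b << "m";
        }
    };
}

// usage: std::cout << ansi::bold << ansi::color_rgb(255, 128, 0) << "Hello" << ansi::reset << "\n";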

Two-factor authentication

If you don’t know what multi-factor authentication is please read this before continuing. I am going to assume you understand the security concepts mentioned in this post…

In plain English: two-factor authentication is something you know (your password) and something you have (your token). I will focus on the token in this post. Apps like Google Authenticator and Authy generate one-time time-based tokens, or passwords. They generate them by hashing a shared secret combined with the current time. By default, the resulting token changes every 30 seconds giving the user a short window to authenticate to a service.
You can set it up with GitHub, for example: in your user security settings you enable two-factor authentication, GitHub then generates the shared secret for you, which you import into the authenticator app. From then on, when you log in you must provide your password plus the generated token. Because both you and GitHub have access to the shared secret, both can generate the same token at the same time. If the user-provided and the GitHub-generated tokens match, the authentication succeeds and you’re logged in.

So what is this post about anyways? What I set out to do today was to generate the one-time tokens programmatically from a C++ program. I wanted to test this by feeding the same shared secret to Authy and seeing that both my program and Authy generate the same tokens. With this working I, or you the reader, could add two-factor authentication to our applications, which is cool 🙂

Initially I started reading about the algorithm used to generate the tokens: the Time-based One-time Password Algorithm. I sure as hell didn’t want to implement all of this from scratch, so I started looking for an OpenSSL implementation. During my search I came across a free (and available on Mac and Linux) framework that does what I wanted: OATH Toolkit. Once I started reading the documentation the rest fell into place very easily. I generated a dummy shared secret, 00112233445566778899, and fed it to Authy as well as my program (Google Authenticator requires it to be base32 encoded).

Below are screenshots of Authy and my program generating the same tokens. And of course the code!


Screenshot.
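And a sketch of the token-generation code using the OATH Toolkit's liboath (the exact header path and the hex-to-binary conversion details may differ depending on how the toolkit is installed; the step size and digit count are the standard TOTP defaults):

#include <iostream>
#include <ctime>
#include <liboath/oath.h> // or just <oath.h>, depending on the install

int main()
{
    const char* hex_secret = "00112233445566778899"; // the dummy shared secret

    oath_init();

    // convert the hex string into the raw binary secret liboath expects
    char secret[64];
    size_t secret_len = sizeof(secret);
    oath_hex2bin(hex_secret, secret, &secret_len);

    // generate a 6-digit token for the current 30-second window
    char token[7] = {};
    oath_totp_generate(secret, secret_len, time(nullptr), 30, 0, 6, token);

    std::cout << "Current token: " << token << std::endl;

    oath_done();
}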

Plugins: loading code at runtime

On Windows we have .dll files, .so files on Linux, and .dylib files on Mac. They all have one thing in common: they can be loaded at runtime and provide entry points to call into. One example is an online chat client that uses plugins to add support for more protocols (Google Chat, ICQ, IRC, etc). There are plenty more examples but you get the idea: drop a binary in the plugins folder, restart the app, and you have just added more functionality without having to recompile and redeploy your application.

So how do we do it? We could go the difficult route and use OS specific calls to find and load the plugin file, then do some more platform specific code to extract the entry point and call it. Or, we could make it easy on ourselves and use Boost DLL library 🙂

I am no expert on this library nor do I want to write a complete tutorial on it; the documentation is great. I have just started using it today in fact and wanted to see how easy it would be to get a basic plugin built and loaded by another program. It took all of 20 minutes to come up with a basic structure so I decided to make a short blog post about my experience so far. And so far it looks promising!

I started by creating a dynamic library which exports two function calls: one to get the name of the plugin and another to get its version. Below is all the code needed to create such a plugin with Boost:
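A minimal sketch of such a plugin (the real post may export the entry points differently, e.g. via BOOST_DLL_ALIAS):

#include <boost/config.hpp>

// exported plugin entry points; extern "C" keeps the symbol names unmangled
extern "C" BOOST_SYMBOL_EXPORT const char* plugin_name()
{
    return "Vorbrodt's 1st Plugin";
}

extern "C" BOOST_SYMBOL_EXPORT const char* plugin_version()
{
    return "1.0";
}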

Next I wrote a simple program which accepts the path to the plugin as a command line argument, loads the library, finally extracts and calls the two entry points. Here’s the code:
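And a sketch of the loader, looking up the two entry points exported above:

#include <iostream>
#include <boost/dll/shared_library.hpp>

int main(int argc, char** argv)
{
    if(argc < 2) return 1;

    // load the plugin binary given on the command line
    boost::dll::shared_library plugin(argv[1]);

    // look up the two exported entry points by name
    auto& name    = plugin.get<const char*()>("plugin_name");
    auto& version = plugin.get<const char*()>("plugin_version");

    std::cout << "Plugin name    : " << name() << std::endl;
    std::cout << "Plugin version : " << version() << std::endl;
}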

Plugin name    : Vorbrodt’s 1st Plugin
Plugin version : 1.0

Program output.

It works! You can take it from here and build a cool plugin engine. Oh I forgot to mention, the same code compiles and behaves the same on Windows, Linux, and Mac 🙂

Micro-benchmarks

I was looking for a way to benchmark a piece of code… I came up with 5 libraries that make it very easy 🙂

I’m not going to write a tutorial on how to use each one because I would basically be rewriting their documentation sections. What I will do is show you how to get started. As an example I will write a simple benchmark that tests the copy constructor of std::string. But first, the libraries:

  1. Google Benchmark
  2. Catch2
  3. Hayai
  4. Celero
  5. Nonius

Catch2 and Nonius are header-only libraries; be aware of long compile times 🙁 Google Benchmark, Catch2, and Nonius automatically pick the number of runs and iterations for you, which is nice: no guessing how many times you need to run a function you want to benchmark to get a reasonable performance reading.


Google Benchmark:
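A sketch of the string copy benchmark (string length and contents are arbitrary):

#include <string>
#include <benchmark/benchmark.h>

static void string_copy(benchmark::State& state)
{
    const std::string source(64, 'x'); // the string we will copy
    for(auto _ : state)
    {
        std::string copy(source);
        benchmark::DoNotOptimize(copy); // keep the compiler from optimizing the copy away
    }
}
BENCHMARK(string_copy);

BENCHMARK_MAIN();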



Catch2:
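The same benchmark sketched with Catch2's benchmarking facility:

#define CATCH_CONFIG_MAIN
#define CATCH_CONFIG_ENABLE_BENCHMARKING
#include <string>
#include <catch2/catch.hpp>

TEST_CASE("std::string copy constructor")
{
    const std::string source(64, 'x');

    BENCHMARK("string copy")
    {
        return std::string(source); // returning the copy prevents it from being optimized away
    };
}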



Hayai:



Celero:



Nonius:



SQL database access

When I first set out to make a post about database access from C++ I was going to write about MySQL Connector/C++. But for some reason that didn’t sit well with me. After sleeping on it I realized it didn’t appeal to me because it was 1) limited to only one database backend and 2) too low-level an API. I wanted to write about a library that supports multiple database backends and abstracts the connection details as much as possible. Ideally I wanted a library that brings you closest to SQL syntax rather than dealing in C++ mechanics. So in my quest for cool and portable C++ libraries I decided to keep looking…

And then I came across SOCI – The C++ Database Access Library 🙂 It has everything I was looking for: multiple backend support (DB2, Firebird, MySQL, ODBC, Oracle, PostgreSQL, and SQLite3), and a very natural way of issuing SQL queries thanks to operator overloading and template sorcery. Even their first example is purposely left without comments because it is that easy to read and understand.

Besides a very natural way of talking to a SQL backend what I like most about it is that it allows you, in a non-intrusive way (no existing code change needed), to store and retrieve your custom data structures to and from database tables.

So I installed MySQL server on my Mac using brew package manager and started coding. Within 30 minutes I had a working example that connected to my database server, created a database and a table, inserted rows into the table from my custom Person data structure, counted the rows, retrieved a table entry with a given ID, and finally cleaned up after itself.

The only code that requires explaining is the type_conversion<Person> specialization. It is SOCI’s mechanism for converting to and from custom data structures and requires two methods: from_base, which converts a set of row values into a structure, and to_base, which goes the other way. The rest is self explanatory! Here’s how you can get started:
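A condensed sketch of that program; the Person fields follow the output below, while the connection string, table schema, and e-mail address are placeholders:

#include <iostream>
#include <string>
#include <soci/soci.h>
#include <soci/mysql/soci-mysql.h>

struct Person
{
    int id;
    std::string first_name;
    std::string last_name;
    int dob;
    std::string email;
};

namespace soci
{
    template<> struct type_conversion<Person>
    {
        typedef values base_type;

        // database row -> Person
        static void from_base(const values& v, indicator /*ind*/, Person& p)
        {
            p.id         = v.get<int>("id");
            p.first_name = v.get<std::string>("first_name");
            p.last_name  = v.get<std::string>("last_name");
            p.dob        = v.get<int>("dob");
            p.email      = v.get<std::string>("email");
        }

        // Person -> database row
        static void to_base(const Person& p, values& v, indicator& ind)
        {
            v.set("id", p.id);
            v.set("first_name", p.first_name);
            v.set("last_name", p.last_name);
            v.set("dob", p.dob);
            v.set("email", p.email);
            ind = i_ok;
        }
    };
}

int main()
{
    soci::session sql(soci::mysql, "db=blog user=martin password=secret");

    sql << "CREATE TABLE IF NOT EXISTS people "
           "(id INT, first_name TEXT, last_name TEXT, dob INT, email TEXT)";

    Person p{1, "Martin", "Vorbrodt", 19800830, "martin@example.com"};
    sql << "INSERT INTO people(id, first_name, last_name, dob, email) "
           "VALUES(:id, :first_name, :last_name, :dob, :email)", soci::use(p);

    int count = 0;
    sql << "SELECT COUNT(*) FROM people", soci::into(count);
    std::cout << "Table 'people' has " << count << " row(s)" << std::endl;

    Person q;
    sql << "SELECT * FROM people WHERE id = 1", soci::into(q);
    std::cout << q.first_name << ", " << q.last_name << ", " << q.dob << ", " << q.email << std::endl;

    sql << "DROP TABLE people";
}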

Table ‘people’ has 2 row(s)
Martin, Vorbrodt, 19800830, [email protected]

Program output.

Measuring CPU time

Measuring how long your program ran for is easy with std::chrono, but what if you need details about user and system space time? Easy! Use the Boost CPU Timer library 🙂 It’s another simple library from Boost with only a few classes: auto_cpu_timer is a RAII object; it starts the clock in its constructor, stops it in its destructor, and prints the elapsed time to standard output. The cpu_timer class is for manually starting and stopping the clock; it also allows you to retrieve the elapsed times (wall, user, and system) in nanoseconds.

In the example below I create three threads: procUser spends 100% of its time in user space calling the std::sqrt function, procSystem spawns a lot of threads causing transitions into the kernel, and procTimer is just an illustration of cpu_timer usage.
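A condensed sketch of that structure (the thread bodies are simplified):

#include <cmath>
#include <iostream>
#include <thread>
#include <boost/timer/timer.hpp>

void procUser()
{
    // burn user-space CPU time
    volatile double x = 0;
    for(int i = 0; i < 10000000; ++i) x = x + std::sqrt(static_cast<double>(i));
}

void procSystem()
{
    // spawn lots of short-lived threads to force transitions into the kernel
    for(int i = 0; i < 1000; ++i) std::thread([]{}).join();
}

void procTimer()
{
    boost::timer::cpu_timer timer; // started on construction
    procUser();
    timer.stop();
    std::cout << "Thread timer:" << timer.format(); // wall, user, and system times
}

int main()
{
    // prints the program's elapsed times when it goes out of scope
    boost::timer::auto_cpu_timer program_timer;

    std::thread t1(procUser), t2(procSystem), t3(procTimer);
    t1.join(); t2.join(); t3.join();
}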

Thread timer: 0.750s wall, 1.790s user + 0.850s system = 2.640s CPU (352.1%)
Program timer: 3.171s wall, 5.080s user + 2.980s system = 8.060s CPU (254.2%)

Program output.

Printing stack traces

This one will be short 🙂

If you want to add more diagnostic information to your programs make sure to check out Boost.Stacktrace library. With it you can capture and print current stack traces. It’s especially useful when combined with exception handling; it allows you to know right away where the exception originated from.
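A sketch matching the first trace below: f1 calls f2, which calls f3, which prints the current stack:

#include <iostream>
#include <boost/stacktrace.hpp>

void f3() { std::cout << boost::stacktrace::stacktrace(); } // capture and print the current stack
void f2() { f3(); }
void f1() { f2(); }

int main()
{
    f1();
}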

0# f3() in /Users/martin/stacktrace
1# f2() in /Users/martin/stacktrace
2# f1() in /Users/martin/stacktrace
3# main in /Users/martin/stacktrace

0# main::$_0::operator()() const in /Users/martin/stacktrace

1# void std::__1::__async_func<main::$_0>::__execute<>(std::__1::__tuple_indices<>) in /Users/martin/stacktrace

2# std::__1::__async_func<main::$_0>::operator()() in /Users/martin/stacktrace

3# std::__1::__async_assoc_state<void, std::__1::__async_func<main::$_0> >::__execute() in /Users/martin/stacktrace

4# void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (std::__1::__async_assoc_state<void, std::__1::__async_func<main::$_0> >::*)(), std::__1::__async_assoc_state<void, std::__1::__async_func<main::$_0> >*> >(void*) in /Users/martin/stacktrace

5# _pthread_body in /usr/lib/system/libsystem_pthread.dylib

6# _pthread_start in /usr/lib/system/libsystem_pthread.dylib

Program output

Generating Unique IDs

I like simple and well written C++ libraries, and I’m especially fond of Boost, so I’m going to blog about it some more 🙂

Today I’ll show you how to generate unique IDs using the Boost::UUID header-only library. There really isn’t much to be said about it other than it’s simple, consists of only a few header files, and allows you to generate, hash, and serialize unique IDs.

Refer to the documentation for a complete overview. Here’s a sample program that generates and prints a unique ID:
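A sketch of such a program:

#include <iostream>
#include <boost/uuid/uuid.hpp>
#include <boost/uuid/uuid_generators.hpp>
#include <boost/uuid/uuid_io.hpp>

int main()
{
    boost::uuids::random_generator generator; // produces random (version 4) UUIDs
    boost::uuids::uuid id = generator();
    std::cout << id << std::endl;
}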

5023e2d9-8008-4142-b1c7-87cda7cb36b3

Program output.

RPC with Protocol Buffers

Just when I thought I covered the topic of RPC in C++ with Thrift, I came across a new and very promising RPC framework: Google’s gRPC. The more I read about it the more I like it: just like Thrift it supports many programming languages and internally uses Protocol Buffers to encode messages. So I decided to give it a try and see how easily I could create simple client and server programs…

Just like with Thrift, the first step is to define your RPC service. Here’s my definition of a simple service with one RPC method which accepts an input parameter and provides a return value:

I saved it as grpc_service.proto and now it was time to generate the client and server stubs. Unlike Thrift, this is a two-step process: 1) you must generate the protocol buffer files, and 2) generate the gRPC client/server stubs. Both actions are done with the protoc compiler and a custom plugin that comes with gRPC:

protoc -I . --cpp_out=. grpc_service.proto

protoc -I . --grpc_out=. --plugin=protoc-gen-grpc=/usr/local/bin/grpc_cpp_plugin grpc_service.proto

gRPC code generation steps.

This will compile our service definition file and produce 4 output files: .h and .cc for protocol buffer messages, and .h and .cc for gRPC client/server stubs.

Alright, that was easy. Let’s see what it takes to implement a gRPC client program… after 15 minutes reading through the documentation I came up with the following:
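A sketch of such a client; the service, method, and message names (HelloService, SayHello, HelloRequest, HelloReply) and the port are placeholders of my choosing:

#include <iostream>
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#include "grpc_service.grpc.pb.h" // generated from grpc_service.proto

int main()
{
    // connect to the server without transport security
    auto channel = grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
    auto stub = HelloService::NewStub(channel);

    HelloRequest request;
    request.set_message("Hello from the client!");

    HelloReply reply;
    grpc::ClientContext context;
    grpc::Status status = stub->SayHello(&context, request, &reply);

    if(status.ok())
        std::cout << reply.message() << std::endl;
    else
        std::cerr << "RPC failed: " << status.error_message() << std::endl;
}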

A little more code is needed to construct and invoke the RPC method compared to Thrift, but not too bad. The extra lines are mostly around creating the protocol buffer message objects and setting their properties.

The corresponding gRPC server code is pretty much just as easy to implement as with Thrift. Here’s the simple server that prints out the message it receives and sends back a reply:
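And a sketch of the matching server, using the same placeholder names:

#include <iostream>
#include <memory>
#include <grpcpp/grpcpp.h>
#include "grpc_service.grpc.pb.h"

// implements the synchronous service interface generated by the gRPC plugin
class HelloServiceImpl final : public HelloService::Service
{
    grpc::Status SayHello(grpc::ServerContext* /*context*/,
                          const HelloRequest* request,
                          HelloReply* reply) override
    {
        std::cout << request->message() << std::endl; // print what the client sent
        reply->set_message("Hello from the server!");
        return grpc::Status::OK;
    }
};

int main()
{
    HelloServiceImpl service;

    grpc::ServerBuilder builder;
    builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
    builder.RegisterService(&service);

    auto server = builder.BuildAndStart();
    server->Wait();
}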

So there you have it folks! It is very easy to get started with this framework.

As far as which one you should prefer (Thrift vs gRPC) I can’t honestly say. I don’t know enough about them to claim one is better than the other. What I can say is that Thrift is a more mature framework that has been around for longer. You decide 🙂

P.S. On my Mac I was able to use Homebrew package manager to install the required headers/libraries/executables for gRPC. My Ubuntu 18.04 Linux has libgrpc available in its online repository. I also verified that Microsoft’s vcpkg has ports available for Windows.

Serialize data to XML

Protocol Buffers are not always an option and sometimes you just have to serialize your data to XML. Then what? Luckily there is a serialization library available from Boost, and it makes that pretty easy for us. You don’t even have to modify your existing data structures to write them out as XML: the library is non-invasive.

Let’s say I want to serialize a list of people to a file, and read it back later. My data structures would be defined like this:
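A sketch of such a structure (the field names follow the program output at the end of this post):

#include <string>

struct person
{
    std::string name;   // e.g. "Martin Vorbrodt"
    int dob;            // e.g. 19800830
    std::string email;
};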

And once a vector of persons is serialized I want it to look something like this:

Easy! First you must define a generic serialize function for your data structure, then you instantiate an XML output archive with an ofstream object and pass it the data. Reading is done by instantiating an XML input archive with an ifstream object and loading the data into a variable. Like this:
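Continuing the sketch, with the person struct repeated so the snippet stands on its own (the e-mail addresses are placeholders):

#include <fstream>
#include <string>
#include <vector>
#include <boost/archive/xml_oarchive.hpp>
#include <boost/archive/xml_iarchive.hpp>
#include <boost/serialization/nvp.hpp>
#include <boost/serialization/string.hpp>
#include <boost/serialization/vector.hpp>

struct person
{
    std::string name;
    int dob;
    std::string email;
};

namespace boost { namespace serialization {

// one generic serialize function works for both saving and loading
template<typename Archive>
void serialize(Archive& ar, person& p, const unsigned int /*version*/)
{
    ar & make_nvp("name", p.name);
    ar & make_nvp("dob", p.dob);
    ar & make_nvp("email", p.email);
}

}} // namespace boost::serialization

int main()
{
    std::vector<person> people{ {"Martin Vorbrodt", 19800830, "martin@example.com"},
                                {"Dorota Vorbrodt", 19810127, "dorota@example.com"} };

    // write the vector out as XML
    {
        std::ofstream ofs("people.xml");
        boost::archive::xml_oarchive oa(ofs);
        oa << boost::serialization::make_nvp("people", people);
    }

    // read it back in
    std::vector<person> loaded;
    {
        std::ifstream ifs("people.xml");
        boost::archive::xml_iarchive ia(ifs);
        ia >> boost::serialization::make_nvp("people", loaded);
    }
}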

Name  : Martin Vorbrodt
DOB   : 19800830
EMail : [email protected]

Name  : Dorota Vorbrodt
DOB   : 19810127
EMail : [email protected]

Program output.

The library has built-in support for STL containers. It can also write the data in many output formats, not just XML. Luckily you only have to define one serialization function per data type and it will work with all input and output archives. Heck, you could even define a serialization function for your protocol buffers data types 😉

Thrift: or how to RPC

Just like my previous Protocol Buffers post, this one is also meant as a brief introduction that will point you in the right direction rather than an exhaustive tutorial. Here we go…

Again we are in search of a portable library, this time not for serialization, but for a portable RPC mechanism. On Windows we have WCF, but what if we want support for many platforms and programming languages? All that is answered by Thrift (also see here). Initially developed at Facebook, it is now a free and open-source project.

Let’s start by creating a simple thrift file that defines a “service”, or an RPC server, with functions and parameters:

Here in a file service.thrift we have defined an RPC server called Service with three functions (one asynchronous) and a string parameter msg. Next we need to compile it. Just like Protocol Buffers, Thrift is a code generator. It will produce everything needed to instantiate both the server and the client:

thrift --gen cpp -out . service.thrift

Thrift basic usage.

The above command will produce several header and source files for us. Now we are ready to implement our C++ client that will connect to the RPC server and issue remote procedure calls. The code is straightforward and easy to read and understand:
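A sketch following the standard Thrift C++ client pattern; the generated names (ServiceClient, Service.h) come from the Service definition above, while the method names ping(), hello(), and async_call(), and the port, are guesses based on the server output below (recent Thrift versions use std::shared_ptr, older ones used boost::shared_ptr):

#include <memory>
#include <string>
#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/transport/TSocket.h>
#include <thrift/transport/TBufferTransports.h>
#include "Service.h" // generated by the thrift compiler

using namespace apache::thrift::protocol;
using namespace apache::thrift::transport;

int main()
{
    auto socket    = std::make_shared<TSocket>("localhost", 9090);
    auto transport = std::make_shared<TBufferedTransport>(socket);
    auto protocol  = std::make_shared<TBinaryProtocol>(transport);

    ServiceClient client(protocol);

    transport->open();
    client.ping();                   // simple call
    client.hello("Martin says hi!"); // call with the string parameter msg (method name guessed)
    client.async_call();             // oneway call
    transport->close();
}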

The server code is slightly more complicated, but not by much 🙂 In this post I’m using the most basic functions of thrift for illustration purposes. But know that it is quite capable of handling huge workloads and many connections. Here’s the corresponding server code:
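And a sketch of the server, with the same guessed method names:

#include <iostream>
#include <memory>
#include <string>
#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/server/TSimpleServer.h>
#include <thrift/transport/TServerSocket.h>
#include <thrift/transport/TBufferTransports.h>
#include "Service.h" // generated by the thrift compiler

using namespace apache::thrift::protocol;
using namespace apache::thrift::transport;
using namespace apache::thrift::server;

// ServiceHandler does the actual RPC work
class ServiceHandler : public ServiceIf
{
public:
    void ping() override { std::cout << "ping()" << std::endl; }
    void hello(const std::string& msg) override { std::cout << msg << std::endl; } // name guessed
    void async_call() override { std::cout << "async_call()" << std::endl; }
};

int main()
{
    auto handler   = std::make_shared<ServiceHandler>();
    auto processor = std::make_shared<ServiceProcessor>(handler);
    auto transport = std::make_shared<TServerSocket>(9090);
    auto tfactory  = std::make_shared<TBufferedTransportFactory>();
    auto pfactory  = std::make_shared<TBinaryProtocolFactory>();

    TSimpleServer server(processor, transport, tfactory, pfactory);

    std::cout << "Starting the server..." << std::endl;
    server.serve();
}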

The extra code here is the ServiceHandler class, which does the actual RPC work. Let’s put it all together now. After I start the server and execute the client program on my machine, I get the following output:

Starting the server…
ping()
Martin says hi!
async_call()

Thrift RPC server output.

It works! I hope you enjoyed this little introduction to thrift. Now go read all about it!

P.S. As always, complete source and build files available at my GitHub.

Protocol Buffers: or how to serialize data

This post is meant as a brief introduction that will point you in the right direction rather than an exhaustive tutorial. Here we go…

Have you ever had to write code that serialized structured data into an efficient binary format, to be later saved to disk or sent over the network? Do you remember how difficult and time-consuming it was? Haven’t you wished there was a standard C++ library to do it instead of reinventing the wheel? Well, today is your lucky day 🙂

Say hello to Google’s Protocol Buffers! It is a highly portable (and free!) library that allows you to define nearly arbitrary (numbers, strings, structures, lists, vectors, maps, you name it) data formats and easily serialize and deserialize them into a platform-portable binary format. Why not use XML, you may ask? How about 3 to 10 times smaller output, and 20 to 100 times faster serialization and deserialization, just to name a few reasons 🙂

Let’s start by defining a protocol buffer file that defines a “Person” data structure:

Save that to a protobuf.proto file and let’s compile it using the protoc code generator. Yes, Protocol Buffers is a code generator; it takes as input a .proto file and spits out C++ classes (it can also produce code in C#, Java, JS, ObjC, PHP, Python, and Ruby).

protoc -I=. --cpp_out=. protobuf.proto

protoc basic usage.

The above command will produce 2 files: protobuf.pb.h and protobuf.pb.cc.
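Here is a sketch of the program that uses the generated data::Person class (the field names name, dob, and email are assumed from the output; the e-mail address is a placeholder):

#include <iostream>
#include <string>
#include "protobuf.pb.h" // generated by protoc

int main()
{
    // create and populate a Person message from the "data" package
    data::Person person;
    person.set_name("Martin Vorbrodt");
    person.set_dob(19800830);
    person.set_email("martin@example.com");

    // serialize to a compact binary string...
    std::string bytes;
    person.SerializeToString(&bytes);

    // ...and deserialize into another instance
    data::Person copy;
    copy.ParseFromString(bytes);

    std::cout << "Name  = " << copy.name() << std::endl;
    std::cout << "DOB   = " << copy.dob() << std::endl;
    std::cout << "EMail = " << copy.email() << std::endl;
}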

This code is pretty self-explanatory. We create, set, and serialize one data structure of type data::Person, then deserialize it into another. The output is what we would expect:

Name  = Martin Vorbrodt
DOB   = 19800830
EMail = [email protected]

Program output.

That’s enough of an introduction. I hope you will read up on Protocol Buffers and realize the enormous potential of this library. Happy coding!

P.S. As always, complete source and build files available at my GitHub.

Well, that was no fun :(

No, not blogging, that’s still fun 🙂 moving my website to Bluehost over the last 24 hours! But it’s finally done and Vorbrodt’s C++ Blog is smooth sailing once again… on a brand spanking new domain! But have no fear, the old one will kindly redirect.

So what have I learned through this exercise? For starters, website migrations, no matter how trivial (it’s only a basic blog with 2 pages and 46 posts after all), just don’t go smoothly. 95% of it will work, but then the annoying 5% that didn’t migrate will eat up 23 1/2 hours of your life trying to chase it down and patch it up! I can’t imagine moving a huge corporate or banking website… but I guess that’s what staging environments are for.

So my blog uses a custom theme, has a few WordPress and Jetpack widgets, and a few custom modifications to the PHP files by yours truly. None of that migrated! See, I did an export followed by an import operation, and that only moved the data (posts, pages, comments). So I had to hunt down and reinstall the theme and plugins on the new site; then I re-implemented the few PHP changes to bring the theme back to my liking.

But the most irritating part of the export / import process was the fact that the post excerpts vanished into thin air! So when I fired up the site with the new theme all I could see were post titles… no excerpts on the front page! Luckily the post content was preserved perfectly. I then looked for and tried four or five different excerpt plugins to no avail, until I found the one I needed: Excerpt Editor. Nothing about it was automatic, but at least it let me rebuild my post excerpts one by one. Ufff. That took a while.

Once I got the page up and running the way it was before, I immediately purchased a Jetpack Personal plan for $39/year, which offers automatic site backups and restorations. It backs up the database (pages, posts, comments) as well as themes, plugins, uploads, etc. Hopefully I’ll never have to use it, but you know what they say… it’s better to have it and not need it, than need it and not have it 🙂
Oh, and the site hosting is $10/month, plus another $30/year for the domain name and privacy protection on it. Pretty low price to pay for total peace of mind!

Finally, a word about my website hosting setup up to yesterday: yes, it was done from home and was a constant source of headaches! For starters I’m on a residential 1Gbit line with 40Mbit upload. Great for streaming 4K TV but not so hot for uploading website content (plus the upload latency was bad). Then there’s the whole dynamic IP address thing… so I had to create a subdomain with my Google domain, enable DDNS service on it through my OpenWrt router, and pray it doesn’t change too often. Of course I couldn’t run the web server directly on the router, so I pushed it behind the firewall onto my Qnap NAS. Getting WordPress to run smoothly on this thing was an issue and required plenty of workarounds (like a custom cron job to pull the wp-cron.php file from the web server every minute, or else the internal WordPress tasks would get delayed and bad things would happen). Just a mess overall. Oh, and don’t get me started on using LetsEncrypt certificates for https access. I love that they provide them for free, but for 90 days at a time! Really?! And then there was the weekend I spent figuring out how to serve the intermediate certificate along with my domain certificate from Qnap’s custom-built Apache server… so I could get an A rating from SSL Labs 🙂

Anyways, too much venting and not enough C++ in this post so I’ll stop now!

P.S. If you got this far, please re-subscribe to email notifications if you’ve done so in the past. Those sadly didn’t survive the export / import process 🙁

Parallel STL

The C++17 standard introduced execution policies for the standard algorithms; those allow for parallel and SIMD optimizations. I wanted to see how much faster the Parallel STL can be on my quad-core system, but none of my compilers currently support it. Luckily Intel has implemented it and made it available to the world 🙂

On a side note, in this post’s example I will be using several frameworks: TBB, needed to compile the Parallel STL, and Catch2, to create the test benchmark. All are freely available on GitHub. BTW, thanks to Benjamin from Thoughts on Coding for pointing me toward the Catch2 library. It’s great for creating unit tests and benchmarks.

Let’s benchmark the following operations using the STL and PSTL: generating random numbers, sorting the generated random numbers, and finally verifying that they’re sorted. The performance increase on my quad-core 2012 MacBook Pro with a 2.3GHz i7 is about 5x! Nice!
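Here is a sketch of the benchmark using the standard <execution> policies; the original measurements were taken with Intel's Parallel STL headers, but the structure is the same (random generation stays serial in both cases because the RNG isn't thread-safe):

#define CATCH_CONFIG_MAIN
#define CATCH_CONFIG_ENABLE_BENCHMARKING
#include <algorithm>
#include <execution>
#include <random>
#include <vector>
#include <catch2/catch.hpp>

TEST_CASE("STL vs Parallel STL")
{
    std::vector<double> data(10'000'000);
    std::mt19937_64 rng(std::random_device{}());
    std::uniform_real_distribution<double> dist(0.0, 1.0);

    BENCHMARK("STL")
    {
        std::generate(data.begin(), data.end(), [&] { return dist(rng); });
        std::sort(data.begin(), data.end());
        return std::is_sorted(data.begin(), data.end());
    };

    BENCHMARK("PSTL")
    {
        std::generate(data.begin(), data.end(), [&] { return dist(rng); });
        std::sort(std::execution::par, data.begin(), data.end());
        return std::is_sorted(std::execution::par, data.begin(), data.end());
    };
}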

benchmark name              iters   elapsed ns      average 
———————————————————–
STL                             1  10623612832    10.6236 s 
PSTL                            1   1967239761    1.96724 s 

Program output.