San Diego C++ Meetup – #61 Apr 16 2024

Hello all,

Yet another summary of our latest session of San Diego C++ Meetup. America’s finest city endorsing America’s favorite language 😉

The meetup page can be found here.

San Diego C++ Meetup #61 recording.

Agenda summary

  1. C++ quiz – Arrays and copies during range-for-loop.
  2. The right auto/auto* – When using auto and the deduced type is a pointer, should we write auto or auto*? Spoiler – use auto*. But why? This item takes us through the reasoning behind it. Hint – it helps readability and correctness, especially when const is involved.
  3. Does a default virtual destructor prevent compiler-generated move operations? – A fascinating journey investigating the implications of a user-declared destructor on move operations, walking through Howard Hinnant’s famous table. Then – what does it mean for base classes when =default is used for the virtual destructor. And finally, what is the best way to check whether a specific class type is move constructible. Hint – it’s not that easy and straightforward, but we introduced a neat trick.
  4. And finally, In-Place Construction for std::any, std::variant and std::optional – this item was inspired by Bartłomiej Filipek’s C++ Stories blog post. A small sketch follows below.
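Here is a tiny, self-contained sketch of what in-place construction looks like (my own illustration, not the exact examples from the session):

#include <any>
#include <optional>
#include <string>
#include <utility>
#include <variant>
#include <vector>

int main() {
    // std::optional: build the contained std::string directly from (count, char),
    // avoiding a temporary string that would otherwise be moved/copied in.
    std::optional<std::string> os(std::in_place, 5, 'x');   // "xxxxx"

    // std::variant: pick the alternative by type and construct it in place.
    std::variant<int, std::vector<int>> v(std::in_place_type<std::vector<int>>, {1, 2, 3});

    // std::any: same idea, no temporary of the stored type is created first.
    std::any a(std::in_place_type<std::string>, "hello");

    // emplace() gives the same in-place construction after the fact.
    os.emplace(3, 'y');                                      // "yyy"
}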

That’s it for this session. Thank you for reading!

Kobi

San Diego C++ Meetup #60 – March 12 2024

Hello everyone,

San Diego C++ Meetup – 5 Years! 60 meetings! took place Tuesday March 12th 2024.

Here is the link to the meetup.com page.

Recently, I’ve been moving away from Windows+Teams.

Last session, we used Zoom utilizing Andreas Fertig’s account. Andreas was our guest speaker in Feb 2024.

For March, I tried using Google Meet, but I missed the fact that recording requires a premium “GoogleOne” account and it was too late to add it, so the recording did not work. I have it now for the next sessions.

No worries, after the meeting I spent ~20 minutes recording myself going over the agenda so we have it on record. Here is the recap:

San Diego C++ meetup session #60 – Recap post session recording

Our Agenda:

1. C++ Quiz
2. What is IILE/IIFE? A deep dive into some immediate-invocation tricks.
3. Strongly typed syntax – How can we achieve better, less error-prone code? NamedType to the rescue!

More details:

C++ Quiz – Question 126 was about lookup rules. Question 29 was about invoking virtual functions in constructors and destructors (don’t!), and finally Question 312 was about class/struct inheritance and access specifiers – “would this compile?”

The second part was about Immediately Invoked Lambda/Function Expressions (IILE/IIFE): a few tricks, why they are useful, whether we should use std::invoke, and benchmarks based on Bartek’s cppstories blog post.
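To give a flavor of the technique, here is a tiny sketch of an IILE (my own illustration, not Bartek’s benchmark code):

#include <functional>
#include <string>
#include <vector>

int main() {
    // Immediately Invoked Lambda Expression: initialize a const variable with
    // non-trivial logic while keeping the helper code out of the enclosing scope.
    const std::vector<int> squares = [] {
        std::vector<int> v;
        for (int i = 1; i <= 5; ++i) v.push_back(i * i);
        return v;
    }();  // the trailing () invokes the lambda on the spot

    // std::invoke is an equivalent, more explicit spelling of the same idea.
    const std::string greeting = std::invoke([] { return std::string{"hello"}; });
}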

And finally, we discussed C++ and strong types, user-defined string literals, and finished with a NamedType library example.
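As a taste of the strong-typing idea, here is a minimal hand-rolled sketch in the spirit of the NamedType library (the names below are mine, not the library’s API):

#include <iostream>
#include <utility>

template <typename T, typename Tag>
class Strong {
public:
    explicit Strong(T value) : value_{std::move(value)} {}
    const T& get() const { return value_; }
private:
    T value_;
};

using Width  = Strong<double, struct WidthTag>;
using Height = Strong<double, struct HeightTag>;

// The compiler now stops us from passing the arguments in the wrong order.
double area(Width w, Height h) { return w.get() * h.get(); }

int main() {
    std::cout << area(Width{3.0}, Height{4.0}) << '\n';
    // area(Height{4.0}, Width{3.0}); // does not compile: the types are distinct
}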

That’s it for the 60th session update, thank you for reading!

Kobi

Exploring Polymorphism in C++ – San Diego C++ Meetup Feb 2024

Hello everyone,

We had a special guest speaker on Tuesday, Feb 20th 2024. Andreas Fertig!

This is not Andreas’ first appearance in our Meetup and I was super happy to host him again!

This session went over interesting and extremely useful material – runtime vs. compile-time polymorphism.

The event page can be found here.

Recording, as usual can be found in the San Diego C++ Meetup Youtube channel.

Exploring Polymorphism in C++: Run-time vs. Compile-time by Andreas Fertig – San Diego C++ meetup

Presentation material can be found in our dropbox location (join the meetup to gain access).

Summary of the material discussed:

  1. Cost of runtime polymorphism.
  2. CRTP as a lower-cost, compile-time alternative (see the sketch after this list).
  3. Policy based design.
  4. Example of policy – std::unique_ptr and the deleter policy.
  5. std::sort and the sorting policy.
  6. Another example of the policy-based idiom: array bounds checking with multiple ways of handling errors – all with policy design.
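Here is a minimal CRTP sketch of the kind of compile-time polymorphism discussed (an illustration of the idea, not Andreas’ slide code):

#include <iostream>

// The base calls into the derived class through the template parameter,
// so there is no vtable and the call can be resolved (and inlined) at compile time.
template <typename Derived>
struct Shape {
    double area() const { return static_cast<const Derived&>(*this).area_impl(); }
};

struct Square : Shape<Square> {
    double side = 2.0;
    double area_impl() const { return side * side; }
};

struct Circle : Shape<Circle> {
    double radius = 1.0;
    double area_impl() const { return 3.14159265 * radius * radius; }
};

template <typename T>
void print_area(const Shape<T>& s) { std::cout << s.area() << '\n'; }

int main() {
    Square sq;
    Circle c;
    print_area(sq);
    print_area(c);
}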

That’s it for this month!

Next time will be our #60 session. Yes! 5 years of San Diego C++ Meetup. We have over 1650 members. We started off in March 2019, with around 70 members!

Thanks for reading,

Kobi

San Diego C++ Meetup #58 – Modern C++ Design, chapter 2

Hello all,

A quick summary of a great Tuesday night on Jan 16, where we went over chapter 2 of Andrei Alexandrescu’s book, Modern C++ Design.

meetup-event link

Recording:

San Diego C++ Meetup sdcppmu Youtube recording

Here is the overall summary of what we went over in this session:

  1. Compile time assertions
  2. Partial template specialization.
  3. Local classes.
  4. Mapping integral constants to types: compile-time dispatch based on numeric values and boolean conditions (a small sketch follows after this list).
  5. Type-to-type mapping, for overloading and simulating partial specialization of function templates.
  6. Type selection, based on compile time boolean conditions.
  7. Detecting convertibility and inheritance at compile time.
  8. TypeInfo, a wrapper over type_info with value semantics and ordering comparisons.
  9. NullType and EmptyType placeholder classes.
  10. The last part – “TypeTraits template to offer multiple general purpose traits to help us tailor the code to specific categories of types” – as I expected, we did not have time for it since we were already over time (around 1hr and 10 mins), but feel free to go over the last 4-5 slides covering type_info and the overall type traits class.
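For item 4, here is a minimal sketch of the Int2Type idiom from the book (an illustration, not the slide code):

#include <iostream>

// Int2Type maps an integral constant to a distinct type, so overload resolution
// can dispatch at compile time on a numeric or boolean value.
template <int V>
struct Int2Type {
    enum { value = V };
};

template <typename T, bool IsPolymorphic>
struct Container {
    void clone(const T& obj) { clone_impl(obj, Int2Type<IsPolymorphic>{}); }

private:
    // chosen when IsPolymorphic == false
    void clone_impl(const T&, Int2Type<false>) { std::cout << "plain copy\n"; }
    // chosen when IsPolymorphic == true (the book's version would call Clone() on stored pointers)
    void clone_impl(const T&, Int2Type<true>)  { std::cout << "polymorphic clone\n"; }
};

int main() {
    Container<int, false> c1;
    Container<int, true>  c2;
    c1.clone(42);
    c2.clone(7);
}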

Overall, in the past 2 sessions (Dec 2023, Jan 2024) we showed the power of templates with the great help of Andrei’s book. As I mentioned, the book is unique and, even after 20+ years, still ranks at the top IMO. The first 2 chapters are the basic building blocks, the pillars for a better understanding of generic programming. It is worth investing the time to read it.

Thank you!

Kobi

Introduction to Package Management with Conan 2.0 – by Chris McArthur – JFrog

Hello,

We had another great night in San Diego C++ Meetup, the 54th meeting (Sep 12 2023). This time hosting JFrog and specifically having Chris McArthur presenting Conan 2.0.

The session was super informative and easy enough that even if you’re hearing about Conan for the first time, you’d be able to pick it up quickly and start using it in your projects.

The meetup page for this event can be found here.

San Diego C++ meetup #54 – Introduction to Package Management with Conan 2.0 – by Chris McArthur – JFrog

So what did we learn?

  1. Describing what Conan is and how it fills the package-management gap in C++.
  2. Demo – building a small application pulling in dependencies using Conan.
    • spdlog was used in the first demo
  3. What is conanfile.txt, how to bring in dependencies, installing and integrating into CMake files.
  4. The dependency graph and its transitive nature.
  5. Using VSCode as the IDE. CLion also has a newer version of its Conan plugin that is worth looking at.
  6. More demos, with more packages, demonstrating different versioning, local caching of the packages. All working flawlessly.
    • glad
    • glfw
    • tinycthread
    • linmath
  7. Using presets
  8. How to use test_requires packages – e.g. bringing gtest package for build and testing – but not for production distribution.
  9. What is Conan lock-file and how to utilize it. CI, Reproducible builds.
  10. Picking up packages from conan-center/conan.io
  11. Writing a simple conanfile.py to distribute an app as a Conan package. (Conan Recipe).
  12. Introducing Conan extension.
  13. Developing Packages Locally.
  14. Resources on the web – ACCU talks, Conan blog, and future talks in Cppcon2023.

Thanks again to Qualcomm for paying the meetup fees, and to Charles Bergan for supporting this group.

Thank you for reading!

Kobi

Token Bucket: or how to throttle

For a while now I’ve been wondering how one could throttle network traffic, disk reads/writes, etc. A Google search quickly brought me to the Token Bucket Algorithm as well as a few C++ implementations here and here. I was a bit puzzled at first by how the algorithm works when looking at the code; the two implementations I found operate on atomic integers but the algorithm operates on, well, time. After some head scratching and time at a whiteboard it made sense. Here’s how I understand it:

Imagine you’re walking in a straight line at a constant speed and you are dragging a piece of rope behind you. Every time you want to do something, you first pull the rope toward you a little so that the length you’re dragging behind becomes shorter. You repeat this every time you want to do something (that something is what you’re trying to throttle, btw). At the same time, if you choose to do nothing, you release a little bit of rope so that what you’re dragging gets longer, up to the maximum length of the rope. Another way to think of it is that the rope, if not yanked on, unwinds at a constant rate up to its maximum length. If, however, you pull on the rope too much, you will eventually bring it all in and will then have to wait for it to unwind a little before you can pull on it again.
Now imagine that instead of walking down a straight path you’re actually moving through time and it should all make sense now: pulling on the rope a little is like consuming a token; the length of the rope is the token bucket capacity, and the rate at which the rope unwinds up to its maximum length is the rate at which the token bucket refills if no tokens are consumed. You can also pull all of the rope in at once, and that’s the sudden burst the algorithm allows for after which the rate is limited to how fast it unwinds back behind you aka how quickly the bucket refills with tokens. I really hope that explanation makes sense to you!

Some comments about the implementations I found: both use 3 std::atomic variables where only one is actually needed (unless you want the ability to reliably change the bucket capacity and token rate after constructing an instance in a multi-threaded environment, which my implementation supports); the code I linked to above only needs to keep the time variable atomic. Both also operate on integers, and I felt the code could be abstracted better using std::chrono. Finally, there’s no need for any atomics if only one thread is consuming tokens, so I decided to create a separate class for that case (not shown below).
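Before the complete source linked below, here is a minimal single-threaded sketch of the idea using std::chrono (no atomics, so not the multi-threaded version in token_bucket.hpp):

#include <algorithm>
#include <chrono>

class TokenBucket {
    using clock = std::chrono::steady_clock;
public:
    // rate: tokens added per second; capacity: maximum number of stored tokens
    TokenBucket(double rate, double capacity)
        : rate_{rate}, capacity_{capacity}, tokens_{capacity}, last_{clock::now()} {}

    // Try to consume `count` tokens; returns false if the bucket is too empty.
    bool try_consume(double count = 1.0) {
        const auto now = clock::now();
        const std::chrono::duration<double> elapsed = now - last_;
        last_ = now;
        // Refill at a constant rate, never beyond capacity ("the rope unwinds").
        tokens_ = std::min(capacity_, tokens_ + elapsed.count() * rate_);
        if (tokens_ < count)
            return false;    // all the rope is pulled in; wait for it to unwind
        tokens_ -= count;    // "pull the rope in a little"
        return true;
    }

private:
    double rate_;
    double capacity_;
    double tokens_;
    clock::time_point last_;
};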

Complete source code:
token_bucket.hpp | throttle.cpp



San Diego C++ Meetup Jan 17 2023 – Coroutines

Hello everyone!

On Tuesday, Jan 17 2023 we hosted the 46th session of San Diego C++ Meetup.

This time, I gave a talk on C++20 Coroutines which I named “YACRT – Yet Another Coroutine Talk”. There are many good talks out there and I decided to have another one for the San Diego C++ group.

Here is the recording:

During the talk, I described both Generator and non-Generator techniques, focusing on thread context, the C++20 coroutine API, and the various customization points. I had a few CLion gdb/debugger screenshots to demonstrate the various parts of the runtime call flow.

Generators – I showed a simple Generator with co_yield.
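For reference, here is a minimal generator sketch of the kind shown (my own stripped-down version, not the exact code from the slides):

#include <coroutine>
#include <exception>
#include <iostream>

template <typename T>
struct Generator {
    struct promise_type {
        T value;
        Generator get_return_object() {
            return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(T v) noexcept { value = v; return {}; }
        void return_void() noexcept {}
        void unhandled_exception() { std::terminate(); }
    };

    explicit Generator(std::coroutine_handle<promise_type> h) : handle{h} {}
    Generator(Generator&& other) noexcept : handle{other.handle} { other.handle = {}; }
    Generator(const Generator&) = delete;
    ~Generator() { if (handle) handle.destroy(); }

    // Resume the coroutine; returns false once it has finished.
    bool next() { handle.resume(); return !handle.done(); }
    T value() const { return handle.promise().value; }

    std::coroutine_handle<promise_type> handle;
};

Generator<int> counter(int from, int to) {
    for (int i = from; i <= to; ++i)
        co_yield i;                        // suspend and hand `i` to the caller
}

int main() {
    auto gen = counter(1, 5);
    while (gen.next())
        std::cout << gen.value() << ' ';   // prints: 1 2 3 4 5
}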

For non-generators, I was inspired by Pablo A.’s io_uring and coroutine blog post, which has 3 parts. It talks about the combination of coroutines and the Linux 5.x kernel’s io_uring feature, which is by itself a really cool feature to be aware of. Here is a simple diagram. You basically submit work (read/write from/to an FD) and the kernel does it on your behalf! You just query for completion and carry on.

io_uring

And what did I implement for this session? A coroutine function that reads bytes from a UDP socket on a dedicated thread once we suspend on an Awaiter; when done, it resumes the coroutine and then submits a request to the kernel for a file write. That’s the second co_await/Awaiter. The resumption of this co_await happens in the main function, where we wait on the kernel operation’s completion using a blocking API call. Here is a simple diagram of the main parts:

coroutine function and the pieces

Obviously, you’d need to watch the recording in order to get the full sense of what’s happening.

It was the first time for me presenting on C++20 coroutines and it’s not easy. Lots of moving parts and details to be aware of. It took me 2 hours to go over 60 slides!

Thanks for reading!

Kobi

San Diego C++ Meetup – October 17 2022

Hello everyone,

Yet another great evening meeting others and discussing C++.

The agenda can be found in the event link here: sdcppmu-event

The recording is on the sdcppmu YouTube channel:

Great discussion points on const vs non-const member functions – how to avoid duplicating code.

How “deducing this” makes the syntax shorter.
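As an aside, here is a small sketch of how C++23 “deducing this” collapses the const/non-const duplication (requires a C++23 compiler; illustrative names, not code from the session):

#include <cstddef>
#include <utility>
#include <vector>

class Buffer {
    std::vector<int> data_;
public:
    explicit Buffer(std::size_t n) : data_(n) {}

    // One template replaces the usual const and non-const overload pair;
    // the const-ness of the result follows the const-ness of the object.
    template <typename Self>
    auto&& at(this Self&& self, std::size_t i) {
        return std::forward<Self>(self).data_[i];
    }
};

int main() {
    Buffer b{4};
    b.at(0) = 42;              // non-const access through the same function
    const Buffer& cb = b;
    int x = cb.at(0);          // const access, no duplicated body
    (void)x;
}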

Martin shared the following: volatile: The Multithreaded Programmer’s Best Friend

We also discussed the blog post update: use-case-of-utilizing-stdset-instead-of-stdmap and heterogeneous lookup. See more here:

abseil-tips-144

and cppstories-heterogeneous-lookup-cpp14, https://www.cppstories.com/2021/heterogeneous-access-cpp20/

Thank you!

Kobi

San Diego C++ Meetup #42 – September 22 2022

Yet another fun night in San Diego C++ Meetup (sdcppmu).

Recording can be found in our sdcppmu Youtube channel.

sdcppmu #42

Here is the agenda link:

https://www.meetup.com/san-diego-cpp/events/288596469/

We had a quiz and a C++ book: https://www.amazon.com/Template-Metaprogramming-everything-templates-metaprogramming/dp/1803243457

We also discussed lifetime extension in the context of the C++11 range-based for loop.
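As a refresher on the pitfall, here is a small illustration (my own example, not the exact code from the meeting):

#include <string>
#include <vector>

struct Wrapper {
    std::vector<int> values{1, 2, 3};
    const std::vector<int>& items() const { return values; }
};

std::vector<std::string> make_names() { return {"Ada", "Bjarne"}; }
Wrapper make_wrapper() { return {}; }

int main() {
    // OK: the temporary vector is bound directly by the range expression,
    // so its lifetime is extended for the whole loop.
    for (const auto& name : make_names()) { (void)name; }

    // Dangling (before C++23): items() returns a reference into the Wrapper
    // temporary, lifetime extension does not reach through the function call,
    // and the temporary is gone before the loop body runs. C++23 (P2718) fixes this.
    // for (int v : make_wrapper().items()) { (void)v; }
}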

One thing that we discovered during the meeting is MSVC’s non-conformance when binding an rvalue to a non-const reference. See the Twitter discussion here: https://twitter.com/kobi_ca/status/1573155334696628224?s=20&t=2LZMv2JdfxLcbE7piQ76tA
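The shape of the issue, roughly (not the exact snippet from the meeting):

#include <string>

std::string make() { return "temp"; }

int main() {
    // std::string& r = make();     // ill-formed in ISO C++ and rejected by GCC/Clang,
                                    // but MSVC has historically accepted it as an
                                    // extension unless /permissive- is used
    const std::string& cr = make(); // OK: a const reference extends the temporary's lifetime
    (void)cr;
}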

Enjoy the recording!

Kobi

inline – not what it used to be

UPDATE:
Thank you Peter Sommerlad for pointing out that the non-violation of ODR was and remains the primary purpose of inline, not the optimization hint. I’m updating the post to reflect this.

Today I want to talk about the C++ keyword inline; what it used to mean prior to C++17 and what it means today. Let’s start with the history.

A function declared inline could be defined in a header file (its body present in the .h file), then included in multiple compilation units (.cpp files), and the linker would not complain about seeing multiple definitions of the same symbol. This was a way of stating that the ODR was not being violated by the programmer. Without inline one had to provide the signature of a function in a header file and its implementation in a source file. Alternatively, an inline function could be defined multiple times across multiple source files and everything would be hunky-dory as long as the definitions were identical; otherwise… undefined behavior.

inline used to also apply to standalone and member functions (class methods declared and defined inside the body of a class or struct were implicitly inline) as a hint to the compiler to inline the function call: instead of outputting assembly code that would push parameters onto the stack and jump to the function’s address the compiler would instead emit the compiled function in place, skipping the jump and stack pushes/pops. This allowed for faster running code, sometimes at the cost of the size of the executable (if the same function’s assembly was emitted in many places across the executable).
A good example of a potential performance gain would be a tight loop making calls/jumps to a function; the call overhead in each iteration of the loop could have a significant impact on performance, and inline helped mitigate that.

I mentioned earlier that inline was a hint, meaning that declaring a function as inline did not guarantee that it would be assembled in place; compilers had the ultimate say in the matter and were free to ignore inline each and every time. The workaround to this powerlessness over the mighty compiler was to instead #define the function as a macro. Preprocessor macros are evaluated and replaced with actual code prior to compilation, effectively always resulting in a function (macro) call being replaced with its body in the source code.
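For example, compare an inline function with an equivalent function-like macro (a minimal pair consistent with the discussion below):

inline int Add(int x, int y) { return x + y; }  // inline is only a hint; the compiler may ignore it
#define MUL(x, y) ((x) * (y))                   // the preprocessor always expands this in place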

The compiler could refuse to inline Add but it had no choice but to compile MUL in place. Note the parentheses around x and y in the macro; they are there in case x and y are complex expressions that need to be fully evaluated before the final multiplication takes place. Without the parentheses this macro call would be very problematic: MUL(1 + 2, 3 + 4); would expand to 1 + 2 * 3 + 4, which is clearly not what’s expected (due to operator precedence) at the time of the macro call.

Enter the grand inline unification!

Since C++17, the multiple-definitions meaning applies equally to both functions and variables (while inline also remains an optimization hint for functions).

If we wanted to have a global variable shared across multiple compilation units (.cpp files) we had to first declare an extern variable in a header file:

extern int x; // Inside .h file

Then define it (and provide storage for it) in a source file:

int x = 98; // Inside .cpp file

A header-only workaround prior to C++17 was to use the Meyers Singleton approach, something along these lines:
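// Inside .h file only (function name is illustrative)
inline int& get_x()
{
    static int value = 98;  // one instance shared by every translation unit
    return value;
}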

Starting with C++17 the same can be accomplished by simply declaring and defining the variable as inline in a header file:

inline int x = 17; // Inside .h file only

Now the header file can be included by many source files and the linker will intelligently, despite seeing multiple symbols, pick only one and disregard all others, guaranteeing that the same variable at the same memory location is accessed or modified regardless of which compilation unit it happens in.

The same holds true for static member variables of a class or struct. In the past we had to do the following in a header file:
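// Inside .h file
struct S
{
    static int x;  // declaration only; the definition lives in a .cpp file
};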

And inside a source file:

int S::x = 98; // Inside .cpp file

C++17 requires only a header file to achieve the same result:
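// Inside .h file only
struct S
{
    inline static int x = 98;  // definition included; no .cpp file needed
};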

Worth noting is that constexpr functions are implicitly inline, as are constexpr static data members since C++17, and function templates may likewise be defined in multiple translation units without violating the ODR. You can read more about all the gory details here and here.

= delete; // not just for special member functions

During the 29th San Diego C++ Meetup fellow C++ enthusiast and contributor to my blog, Kobi, brought up something interesting about deleted functions and I wanted to share this little gem with you…

To my surprise, the = delete; postfix can be applied not only to special member functions like constructors or assignment operators, but to any free-standing or member function!

Why would you want such sorcery you ask? Imagine you have a function like this:

void foo(void*) {}

On its own foo can be called with a pointer to any type without the need for an explicit cast:

foo(new int); // LEGAL C++

If we want to disable the implicit pointer conversions we can do so by deleting all other overloads:

template<typename T> void foo(T*) = delete;

Or the short and sweet C++20 syntax:

void foo(auto*) = delete;

To cast a wider net and delete all overloads regardless of the type and number of parameters:

template<typename ...Ts> void foo(Ts&&...) = delete;
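Putting it together, a small example of what does and does not compile (my own sketch; the linked delete.cpp has more):

void foo(void*) {}
template<typename T> void foo(T*) = delete;

int main()
{
    int* p = nullptr;
    foo(static_cast<void*>(p));  // OK: the non-template foo(void*) is chosen
    // foo(p);                   // error: the deleted foo<int>(int*) is a better match
    // foo(new int);             // error: same reason, the implicit conversion is gone
}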

Kobi found this on stack overflow of course 🙂


Example program:
delete.cpp


C++20 Concepts

C++20 introduced the Concepts library and the corresponding language extensions to template metaprogramming. This post will be a brief introduction to the topic for people already well versed in C++ templates.

What is a concept? Besides being a new keyword in C++20, it is a mechanism to describe constraints or requirements on a typename T; it is a way of restricting which types a template class or function can work with. Imagine a simple template function that adds two numbers:

template<typename T> auto add(T a, T b) { return a + b; }

The way it is implemented doesn’t stop us from calling it with std::string as the parameters’ type. With concepts we can now restrict this function template to work only with integral types for example.

But first let’s define the two most basic concepts: one which will accept, or evaluate to true for, all types, and another which will reject, or evaluate to false for, all types:

template<typename T> concept always_true = true;
template<typename T> concept always_false = false;

Using those concepts we can now define two template functions: one which will accept, or compile with, any type as its parameter, and one which will reject, or not compile, regardless of the parameter’s type:

template<typename T> requires always_true<T> void good(T) {} // ALWAYS compiles
template<typename T> requires always_false<T> void bad(T) {} // NEVER compiles

Let’s now rewrite the function that adds two numbers using a standard concept std::integral found in the <concepts> header file:

template<typename T> requires std::integral<T> auto add(T a, T b) { return a + b; }

Now this template function will only work with integral types. But that’s not all! There are two other ways C++20 allows us to express the same definition. We can replace typename with the name of the concept and drop the requires keyword:

template<std::integral T> auto add(T a, T b) { return a + b; }

Or go with the C++20 abbreviated function template syntax where auto is used as a function’s parameter type together with the (optional) name of the concept we wish to use:

auto add(std::integral auto a, std::integral auto b) { return a + b; }

I don’t know about you but I really like this short new syntax!

Concepts can be easily combined. Imagine we have two concepts we wish to combine into a third one. Here’s a simple example of how to do it:

template<typename T> concept concept_1 = true;
template<typename T> concept concept_2 = false;
template<typename T> concept concept_3 = concept_1<T> and concept_2<T>;

Alternatively, a function or class template can be declared to require multiple concepts (which requires additional parentheses):

template<typename T> requires(concept_1<T> and concept_2<T>) void foo(T) {}

What follows the requires keyword must be an expression that evaluates to either true or false at compile time, so we are not limited to just concepts, for example:

template<typename T> requires(std::integral<T> and sizeof(T) >= 4) void foo(T) {}

The above function has been restricted to working only with integral types that are at least 4 bytes (32 bits) wide.

Let’s look at a more complex example and analyze it line by line:
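template<typename T> concept can_add = requires(T a)  // line #1
{                                                     // line #2
    requires std::integral<T>;                        // line #3
    requires sizeof(T) >= sizeof(int);                // line #4
    { a + a } noexcept -> std::convertible_to<T>;     // line #5
};                                                    // line #6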

In line #1 we define a concept called can_add and introduce an optional variable a of type T that we can use inside the requirement body. You may be wondering why the requires keyword appears multiple times. What follows requires(T a) and sits within the curly braces {} is the body of a requires-expression, and it can contain multiple requirements, each terminated by a semicolon ;. If a statement inside is not prefixed by the requires keyword, it only needs to be valid C++ code. However, what follows directly after a nested requires must instead evaluate to true at compile time. Therefore line #3 means that std::integral<T> must evaluate to true. If we removed requires from line #3 it would only mean that std::integral<T> is valid C++ code, without being evaluated further. Similarly, line #4 tells us that sizeof(T) must be greater than or equal to sizeof(int); without the requires keyword it would only check whether sizeof(T) >= sizeof(int) is a valid C++ expression. Line #5 is a compound requirement and means several things: a + a must be a valid expression, a + a must not throw any exceptions, and the result of a + a must be of type T (or a type implicitly convertible to T). Notice that a + a is surrounded by curly braces that must contain only one expression, without a trailing semicolon ; inside them.

We can apply the can_add concept to a template function as follows:

template<typename T> requires can_add<T> T add(T x, T y) noexcept { return x + y; }

Template function add can now only be invoked with types that satisfy the can_add concept.

So far I have limited the examples to standalone template functions, but the C++20 concepts can be applied to template classes, template member functions, and even variables.

Here’s an example of a template struct S with a template member function void func(U); the struct can only be instantiated with integral types and the member function can only be called with floating-point types as the parameter:
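// (one possible spelling of the constraints described above)
template<std::integral T>
struct S
{
    template<std::floating_point U>
    void func(U) {}
};

int main()
{
    S<int> s;        // OK: int satisfies std::integral
    s.func(3.14);    // OK: double satisfies std::floating_point
    // s.func(42);   // error: int does not satisfy std::floating_point
    // S<double> d;  // error: double does not satisfy std::integral
}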

See the source code below for more examples.


Example program:
concepts.cpp


C++ Lambda Story, a book review

C++ Lambda Story
Everything you need to know about Lambda Expressions in Modern C++!
by Bartłomiej Filipek


I thought I knew all (or most) there was to know about lambda expressions in modern C++ until I befriended a fellow coder (and countryman) Bartłomiej Filipek, found his book, and after reading it I realized there were gaps in my knowledge and understanding of lambdas as one of the major features of modern C++.

I don’t want to give away too much of the book’s content so I will make this review brief. Bartłomiej (Bartek) takes the reader through the entire history of callable objects in C++ starting with C++98 and function objects (or functors, objects which implement the function call operator), explains the usefulness as well as limitations of functors, then introduces us to lambda expressions as they appeared in C++11.

In the first and longest chapter of the book Bartek builds the foundation of knowledge by going over in great detail the syntax, types of lambdas, capture modes, return type, conversion to function pointer, IIFE (Immediately Invoked Function Expression), lambdas in containers, and even inheriting from a lambda!
This gives the reader a complete overview of the feature as it was first introduced in C++11, and prepares him/her for the following chapters, where Bartek goes over the lambda’s evolution through C++14, C++17, and all the way to the most recent iteration of the language, C++20.

I found the book to be well organized into small bite-sized nuggets of knowledge that could be read, reread, understood and absorbed in a matter of minutes (similar to how Scott Meyers presents information in Effective C++).

C++ Lambda Story is not a book for people who are just starting to learn C++. In other words, it is not a C++ book one should read first. The content within the 140 or so pages is at the intermediate and advanced level of what the language has to offer. But it is a must-read for a seasoned C++ programmer, as well as for someone who already has a good grasp of the language.

You can find the book in black and white print on Amazon, a full color version, or a digital version on LeanPub.

To find out more about the author visit his website C++ Stories.

How to synchronize data access, part 2

Yesterday I wrote about How to synchronize data access; you should read it before continuing with this post, as it explains in detail the technique I will expand upon here. I’ll wait…

Alright! Now that you understand how a temporary RAII object can lock a mutex associated with an instance of an object, effectively synchronizing across multiple threads every method call, let’s talk about reader/writer locks.
R/W locks allow for two levels of synchronization: shared and exclusive. A shared lock allows multiple threads to simultaneously access an object under the assumption that said threads will perform only read operations (or, to be more exact, operations which do not change the externally observable state of the object). An exclusive lock, on the other hand, can be held by only one thread, at which point said thread is allowed to modify the object’s state and any data associated with it.

In order to implement this functionality I had to create two types of the locker object (the RAII holder of the mutex). Both lockers hold a reference to a std::shared_mutex, but one locker uses std::shared_lock while the other uses std::unique_lock to acquire ownership of the mutex. This approach is still transparent to the user, with the following exception: a non-const instance of the shared_synchronized<T> class must use std::as_const in order to acquire shared ownership (const shared_synchronized<T>, shared_synchronized<const T>, and const shared_synchronized<const T> will always acquire a shared lock; note that std::as_const performs a const_cast).
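Here is a stripped-down sketch of the approach (illustrative only; the linked synchronized.hpp below is the complete implementation):

#include <shared_mutex>
#include <utility>
#include <vector>

template <typename T>
class shared_synchronized {
    template <typename U, typename Lock>
    struct locker {                 // RAII temporary guarding a single call
        U* obj;
        Lock lock;
        U* operator->() const { return obj; }
    };

public:
    explicit shared_synchronized(T value) : value_{std::move(value)} {}

    // non-const access: exclusive (writer) lock
    auto operator->() {
        return locker<T, std::unique_lock<std::shared_mutex>>{&value_, std::unique_lock{mutex_}};
    }
    // const access (e.g. via std::as_const): shared (reader) lock
    auto operator->() const {
        return locker<const T, std::shared_lock<std::shared_mutex>>{&value_, std::shared_lock{mutex_}};
    }

private:
    T value_;
    mutable std::shared_mutex mutex_;
};

int main() {
    shared_synchronized<std::vector<int>> sv{std::vector<int>{1, 2, 3}};
    sv->push_back(4);                        // exclusive lock for this call
    auto n = std::as_const(sv)->size();      // shared lock for this call
    (void)n;
}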


Implementation:
synchronized.hpp | synchronized.cpp


How to synchronize data access

I came across a post titled C++ Locking Wrapper shared by Meeting C++ on Twitter, and it reminded me of Synchronized Data Structures and boost::synchronized_value, so I decided to implement my own version as a learning exercise and a possible topic for a future video on my YT channel.

The idea is simple: synchronize across multiple threads every method call to an instance of an object; do it in the most transparent and unobtrusive way possible. Here’s a simple example illustrating the idea:

The trick comes down to two things really: 1) Wrap a value of type T and its lock (in my case std::mutex) in a containing class and 2) Override operator->() to return a RAII temporary responsible for guarding the value.
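A minimal sketch of those two pieces (illustrative only; the full implementation is in the linked synchronized.hpp):

#include <iostream>
#include <mutex>
#include <utility>
#include <vector>

template <typename T>
class synchronized {
    struct locker {                      // the RAII temporary from point 2)
        T* obj;
        std::unique_lock<std::mutex> lock;
        T* operator->() const { return obj; }
    };

public:
    explicit synchronized(T value) : value_{std::move(value)} {}

    // operator-> chains: synchronized -> locker -> T*, so the mutex is held
    // for the duration of every method call made through the wrapper.
    locker operator->() { return {&value_, std::unique_lock{mutex_}}; }

private:
    T value_;                            // the wrapped value from point 1)
    std::mutex mutex_;
};

int main() {
    synchronized<std::vector<int>> sv{std::vector<int>{1, 2, 3}};
    sv->push_back(4);                    // locked for this call
    std::cout << sv->size() << '\n';     // locked again for this call
}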

I will add that this is perhaps a heavy handed approach to guarding data since I do not know how to make it transparent while allowing reader/writer locks, or synchronization of only some select methods. Perhaps some type system hackery with const and volatile methods could help here…


Implementation:
synchronized.hpp | synchronized.cpp