Herb Sutter gave an interesting talk titled Machine Architecture: Things Your Programming Language Never Told You.
In it he gives a tiny yet very interesting example of code that illustrates hardware destructive interference: how the size of the L1 cache line and improper data layout can negatively affect the performance of your code.
The example program allocates two ints on the heap, one right next to the other. It then starts two threads; each thread reads and writes to one of the two int locations. Let's do just that 100'000'000 times and see how long it takes:
Duration: 4338.55 ms
Let us now do the exact same thing, except this time we'll space the ints apart by… you guessed it, the L1 cache line size (64 bytes on Intel and AMD x64 chips):
Duration: 1219.50 ms
The same code now runs roughly 3.5 times faster. What happens is that the cache line containing both ints no longer has to ping-pong between the L1 caches of the two CPU cores every time one of the threads writes to its memory location.
The lesson here is that data layout in memory matters. If you must run multiple threads that each write to a nearby memory location, make sure those locations are separated by at least the L1 cache line size. C++17 helps us with that: Hardware Constructive and Destructive Interference Size.
Complete listing:
```cpp
#include <iostream>
#include <thread>
#include <chrono>
#include <cstdlib>
#include <ctime>

using namespace std;
using namespace chrono;

const int CACHE_LINE_SIZE = 64; // sizeof(int);
const int SIZE = CACHE_LINE_SIZE / sizeof(int) + 1;
const int COUNT = 100'000'000;

int main(int argc, char** argv)
{
    srand((unsigned int)time(NULL));
    int* p = new int[SIZE];

    auto proc = [](int* data)
    {
        for (int i = 0; i < COUNT; ++i)
            *data = *data + rand();
    };

    auto start_time = high_resolution_clock::now();

    // With CACHE_LINE_SIZE = 64 the two locations land on different cache
    // lines; with CACHE_LINE_SIZE = sizeof(int) they share one.
    std::thread t1(proc, &p[0]);
    std::thread t2(proc, &p[SIZE - 1]);
    t1.join();
    t2.join();

    auto end_time = high_resolution_clock::now();

    cout << "Duration: "
         << duration_cast<microseconds>(end_time - start_time).count() / 1000.f
         << " ms" << endl;

    delete[] p;
    return 0;
}
```
I can reproduce your numbers in debug mode, but in release mode the difference is much smaller.
Debug
~270 ms – CACHE_LINE_SIZE = 64;
~630 ms – CACHE_LINE_SIZE = sizeof(int);
Release
~240 ms – CACHE_LINE_SIZE = 64;
~270 ms – CACHE_LINE_SIZE = sizeof(int);
Compiled with VS2017 on a Lenovo running Windows 7 64-bit, i7-3720QM @ 2.6 GHz.
Any idea why?
Yes. In release builds the loop got optimized away. I will update the listing to work better in release mode.
Doesn’t `alignas(64) int a, b;` do the job [instead of the array]?
Yes it does! I’ll make a post about it.