Hi there,
I’ve noticed that when compiling the 4.0 client through Yocto, memory usage seems to have doubled, and my build machine runs out of memory and ends up killing everything. Is this an intentional change? Going from around 7 GB of memory usage to the entire 16 GB plus swap once it starts compiling the mender client doesn’t seem right.
It’s an 8-core machine (i7 11700) with 16 GB of RAM at the moment.
Hi @pyxlwuff,
It’s definitely not “intentional”. A first guess would be that boost compilation runs into OOM, as that is a new dependency. Do you have any pointers on which compilation stage actually gets aborted?
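If it is an OOM kill, the kernel log usually says so. Something like the following should show it (the workdir path is just an example; the exact layout depends on your machine and release):

```
# Check whether the kernel's OOM killer terminated a compiler process
dmesg | grep -i "killed process"

# The aborted task's output is kept in the recipe's workdir, e.g.:
#   tmp/work/<arch>/mender/4.0.1-r0/temp/log.do_compile
```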
Greets,
Josef
Hi Josef,
It gets aborted during the do_compile stage of mender 4.0.1. I’ve also run bitbake with BB_NUMBER_THREADS=1, and it always locks up completely when it gets to the client. Oddly, this doesn’t happen if I compile the client myself, only through Yocto.
Hi @pyxlwuff,
Sorry for taking some time. This definitely should not happen. One possible thing to look into is the PARALLEL_MAKE variable. Can you maybe inspect it for the mender 4.0.1 recipe?
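One way to check (assuming the recipe name is `mender`) is to dump the recipe's expanded environment and grep for the variable:

```
# Print PARALLEL_MAKE as expanded for the mender recipe
bitbake -e mender | grep "^PARALLEL_MAKE="
```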
Greets,
Josef
Hi Josef,
I experimented by adding PARALLEL_MAKE to the recipe and got the following results:

- Set to 4, it stalled around 36% and froze completely at 42%.
- Set to 2, it froze immediately.
- Set to 1, it froze, then was killed by the out-of-memory killer.

This was all with `BB_NUMBER_THREADS=1 bitbake <image>`.
After giving myself a ridiculously large swap file (around 16 GB), I was able to get Mender 4.0.1 to compile successfully through Yocto. With htop open in another terminal window, I saw it spawn around 10-12 processes during compilation. At its peak, do_compile was using more than 22 GB of memory across RAM and swap. I’m not sure whether this has to do with the cmake bbclass being used, or something else entirely, but it doesn’t seem to be respecting PARALLEL_MAKE.
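In case it helps anyone hitting the same wall, one possible workaround (assuming the recipe is named `mender`) is to clamp the parallelism for just this recipe from local.conf, which should cap peak memory even if the global default is being ignored somewhere:

```
# local.conf: limit make/ninja parallelism for the mender recipe only
PARALLEL_MAKE:pn-mender = "-j 2"
# (older Yocto releases use the underscore form: PARALLEL_MAKE_pn-mender)
```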