Superfluous vm.drop_caches in vmm/kvm/*?

Hello,

when running onehost flush before a host reboot, I observe a huge load spike after the VMs are migrated away from the host-to-be-rebooted. The root cause is tens of CPU-eating sysctl processes, probably two for each VM that used to run on that host. grep sysctl vmm/kvm/* shows the following:

vmm/kvm/cancel:    (sudo -l | grep -q sysctl) && (sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 &>/dev/null &) || true
vmm/kvm/deploy:    (sudo -l | grep -q sysctl) && sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 >/dev/null
vmm/kvm/migrate:    ssh_exec_and_log "$DEST_HOST" "(sudo -l | grep -q sysctl) && sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 >/dev/null || true" \
vmm/kvm/migrate:    (sudo -l | grep -q sysctl) && sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 &>/dev/null &
vmm/kvm/migrate_local:    ssh_exec_and_log "$dest_host" "(sudo -l | grep -q sysctl) && sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 >/dev/null || true" \
vmm/kvm/migrate_local:    ssh_exec_and_log "$src_host" "(sudo -l | grep -q sysctl) && (sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 &>/dev/null &) || true" \
vmm/kvm/resize:        (sudo -l | grep -q sysctl) && sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 >/dev/null
vmm/kvm/resize:        (sudo -l | grep -q sysctl) && sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 &>/dev/null &
vmm/kvm/restore:    (sudo -l | grep -q sysctl) && sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 >/dev/null
vmm/kvm/save:    (sudo -l | grep -q sysctl) && sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 &>/dev/null &
vmm/kvm/shutdown:    (sudo -l | grep -q sysctl) && sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 &>/dev/null &

Is it really necessary to do this on every migration/deployment/etc.? On a host with 512 GB RAM and 19 running VMs, the above sysctl command takes 29 s real and 28 s system time, and it probably also affects the other running VMs negatively by dropping the page cache out from under them.

It might make sense to compact memory, for example, hourly or nightly, but definitely not after removing/migrating a single VM out of the tens that might be running on that host. And definitely not in parallel when multiple deployments/migrations are in progress.
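To illustrate the periodic alternative: a sketch of a nightly cron job (the schedule, lock file path, and cron file name are mine, purely for illustration), serialized with flock so that overlapping runs cannot pile up the way the parallel per-VM calls do:

```shell
# Hypothetical /etc/cron.d/compact-memory on each KVM host:
# drop caches and compact memory once per night; flock -n skips the
# run entirely if a previous invocation is still in progress.
30 3 * * * root flock -n /run/compact-memory.lock \
    sysctl vm.drop_caches=3 vm.compact_memory=1 >/dev/null
```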

(And of course, having several slightly different forms of the sysctl call scattered around is really ugly…)
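If the feature is kept, the duplicated variants could at least be collapsed into one shared helper sourced by all the driver scripts. A minimal sketch (the function name is mine, not from the OpenNebula source; the sudoers check mirrors the existing scripts):

```shell
#!/usr/bin/env bash

# cleanup_memory: drop caches and trigger memory compaction, but only
# if the sudoers configuration actually grants sysctl; otherwise it is
# a silent no-op. Pass "async" to run in the background so callers on
# the hot path (e.g. migration) are not blocked for tens of seconds.
cleanup_memory() {
    sudo -l 2>/dev/null | grep -q sysctl || return 0

    if [ "$1" = "async" ]; then
        (sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 >/dev/null 2>&1 &)
    else
        sudo -n sysctl vm.drop_caches=3 vm.compact_memory=1 >/dev/null 2>&1
    fi
}
```

Each call site would then reduce to `cleanup_memory` or `cleanup_memory async`, and the policy (or a global off switch) would live in exactly one place.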

Thanks in advance for considering a different approach to memory compaction in vmm/kvm/*.

-Yenya

Hello Yenya,

You can alter this behaviour via the remotes/etc/vmm/kvm/kvmrc file:

# Compact memory before running the VM
#CLEANUP_MEMORY_ON_START=yes

# Compact memory after VM stops
CLEANUP_MEMORY_ON_STOP=yes

It is just an on/off switch that disables the feature globally.
And yes, maybe the default behaviour could be different…
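For example, disabling both switches might look like the fragment below (assuming the standard front-end layout under /var/lib/one; adjust the path to your installation):

```shell
# /var/lib/one/remotes/etc/vmm/kvm/kvmrc on the front-end:
# leave the switches unset/commented, or set them to anything
# other than "yes", to skip the sysctl calls entirely.
CLEANUP_MEMORY_ON_START=no
CLEANUP_MEMORY_ON_STOP=no
```

After editing, the updated remotes have to be pushed out to the hypervisor hosts (e.g. with `onehost sync` as oneadmin) for the change to take effect.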

Best Regards,
Anton

Hi Anton,

thanks for the fast reply. Yes, this would solve my problem, but I agree that the default behaviour should be carefully considered :slight_smile:

-Yenya


This can indeed be improved, thanks for your feedback.

This issue can be used to track associated improvements.

Subscribed, thanks.

-Yenya