Hello,
I have OpenNebula 5.4.13 installed with two hosts. The problem I am seeing is that a VM I use as a repository server starts to lose packets when many requests come in. Running htop shows that one of the CPU cores goes to 100% while the others stay at around 20%.
I have increased the number of vCPUs, but the problem persists.
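In case it helps with the diagnosis, this is roughly how I have been checking inside the guest whether the load is being concentrated on one core by network interrupts (eth0 is just the interface name in my VM, adjust as needed):

# Per-core utilization, refreshed every second
mpstat -P ALL 1

# See which CPU is servicing the NIC's interrupts
grep eth0 /proc/interrupts

# NET_RX softirq counts per core
grep NET_RX /proc/softirqs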
Repo Server: CentOS Linux release 7.9.2009 (Core)
OpenNebula template:
CONTEXT = [
  NETWORK = "YES",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]" ]
CPU = "4"
DISK = [
  IMAGE = "HDD - XXXXXXX",
  IMAGE_UNAME = "XXXXX" ]
GRAPHICS = [
  LISTEN = "0.0.0.0",
  TYPE = "VNC" ]
HYPERVISOR = "kvm"
MEMORY = "4096"
MEMORY_UNIT_COST = "MB"
NIC = [
  MODEL = "e1000",
  NETWORK = "XXXXX",
  NETWORK_UNAME = "XXXXXX" ]
OS = [
  BOOT = "" ]
VCPU = "4"
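From what I have read, the emulated e1000 NIC has a single queue, so all packet processing may end up on one core no matter how many vCPUs the VM has. If I understand correctly, switching this VM's NIC to virtio in its own template would look something like this (assuming the guest has the virtio drivers, which CentOS 7 ships by default):

NIC = [
  MODEL = "virtio",
  NETWORK = "XXXXX",
  NETWORK_UNAME = "XXXXXX" ]

I believe the VM would need to be re-deployed (or the NIC detached and re-attached) for the new model to take effect.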
I have been looking for information, and the only lead I have found is to modify the following file on the host machines:
/etc/one/vmm_exec/vmm_exec_kvm.conf:
EMULATOR = /usr/libexec/qemu-kvm
#VCPU = 1
OS = [ arch = "x86_64" ]
FEATURES = [ PAE = "no", ACPI = "yes", APIC = "no", HYPERV = "no", GUEST_AGENT = "no",
  VIRTIO_SCSI_QUEUES = "0" ]
DISK = [ driver = "raw", cache = "none" ]
#NIC = [ filter = "clean-traffic", model = "virtio" ]
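If I understand the documentation correctly, the values in this file are only host-wide defaults that get merged into a VM's deployment file when its own template does not set them, so uncommenting the NIC line would make virtio the default model for every VM deployed on that host from then on, something like:

NIC = [ filter = "clean-traffic", model = "virtio" ]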
Still, I am not sure about this, since it would affect all VMs on the host. Can someone clarify this for me?
Regards