Live migration issue

I’m running OpenNebula 7.0.1 and I’ve run into an issue with live migration.

Whenever I perform a live migration, the VM becomes unusable afterward and I see “I/O ERROR”.
However, cold/offline migration works fine with no issues.

The error inside the VM (originally shared as a screenshot) is the “I/O ERROR” mentioned above.

It looks like the underlying iSCSI block device becomes unresponsive, but I’ve been monitoring it closely: the iSCSI sessions and multipath mappings remain stable, with no link flapping or disconnects.
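
For reference, the kind of check I kept running on both hosts while the migration was in flight looks roughly like this (a minimal sketch; the interval and grep pattern are just examples):

  # Repeatedly dump iSCSI sessions and multipath path states during the migration
  watch -n 2 'iscsiadm -m session; echo; multipath -ll; echo; multipathd show paths'

  # Follow the kernel log for SCSI / multipath / I/O errors at the same time
  journalctl -kf | grep -Ei 'iscsi|multipath|i/o error|blk_update_request'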

root@manager01:~# pvs
  PV                                   VG         Fmt  Attr PSize   PFree 
  /dev/mapper/one_system_data_ceph_11T vg-one-106 lvm2 a--  <11.00t 10.96t
root@manager01:~# vgs
  VG         #PV #LV #SN Attr   VSize   VFree 
  vg-one-106   1   2   0 wz--n- <11.00t 10.96t
root@manager01:~# lvs
  LV             VG         Attr       LSize  Pool           Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-one-23-0    vg-one-106 Vwi---tz-k 40.00g lv-one-23-pool                                               
  lv-one-23-pool vg-one-106 twi---tz-k 40.00g                                                              
root@manager01:~# multipathd show paths
hcil     dev dev_t pri dm_st  chk_st dev_st  next_check      
16:0:0:0 sdc 8:32  50  active ready  running XX........ 10/40
15:0:0:0 sdb 8:16  10  active ready  running XX........ 10/40
17:0:0:0 sdd 8:48  50  active ready  running XX........ 11/40
18:0:0:0 sde 8:64  10  active ready  running XX........ 10/40
Dec 26 16:07:37 compute-node01 kernel: br2310: port 2(one-23-0) entered disabled state
Dec 26 16:07:37 compute-node01 kernel: device one-23-0 left promiscuous mode
Dec 26 16:07:37 compute-node01 kernel: br2310: port 2(one-23-0) entered disabled state
Dec 26 16:07:38 compute-node01 kernel: kauditd_printk_skb: 2 callbacks suppressed
Dec 26 16:07:38 compute-node01 kernel: audit: type=1400 audit(1766736458.318:438): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=342340 comm="apparmor_parser"
Dec 26 16:07:38 compute-node01 kernel: br2310: port 1(trunk0.2310) entered disabled state
Dec 26 16:07:38 compute-node01 kernel: device trunk0.2310 left promiscuous mode
Dec 26 16:07:38 compute-node01 kernel: br2310: port 1(trunk0.2310) entered disabled state
Dec 26 16:07:32 compute-node2 kernel: br2310: port 1(trunk0.2310) entered blocking state
Dec 26 16:07:32 compute-node2 kernel: br2310: port 1(trunk0.2310) entered disabled state
Dec 26 16:07:32 compute-node2 kernel: device trunk0.2310 entered promiscuous mode
Dec 26 16:07:32 compute-node2 kernel: br2310: port 1(trunk0.2310) entered blocking state
Dec 26 16:07:32 compute-node2 kernel: br2310: port 1(trunk0.2310) entered forwarding state
Dec 26 16:07:32 compute-node2 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): br2310: link becomes ready
Dec 26 16:07:33 compute-node2 kernel: audit: type=1400 audit(1766736453.262:303): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251122 comm="apparmor_parser"
Dec 26 16:07:33 compute-node2 kernel: audit: type=1400 audit(1766736453.402:304): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251125 comm="apparmor_parser"
Dec 26 16:07:33 compute-node2 kernel: audit: type=1400 audit(1766736453.550:305): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251129 comm="apparmor_parser"
Dec 26 16:07:33 compute-node2 kernel: audit: type=1400 audit(1766736453.698:306): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251133 comm="apparmor_parser"
Dec 26 16:07:33 compute-node2 kernel: br2310: port 2(one-23-0) entered blocking state
Dec 26 16:07:33 compute-node2 kernel: br2310: port 2(one-23-0) entered disabled state
Dec 26 16:07:33 compute-node2 kernel: device one-23-0 entered promiscuous mode
Dec 26 16:07:33 compute-node2 kernel: br2310: port 2(one-23-0) entered blocking state
Dec 26 16:07:33 compute-node2 kernel: br2310: port 2(one-23-0) entered forwarding state
Dec 26 16:07:33 compute-node2 kernel: audit: type=1400 audit(1766736453.866:307): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251158 comm="apparmor_parser"
Dec 26 16:07:34 compute-node2 kernel: audit: type=1400 audit(1766736454.014:308): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251161 comm="apparmor_parser"
Dec 26 16:07:34 compute-node2 kernel: audit: type=1400 audit(1766736454.150:309): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251164 comm="apparmor_parser"
Dec 26 16:07:34 compute-node2 kernel: audit: type=1400 audit(1766736454.286:310): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251167 comm="apparmor_parser"
Dec 26 16:07:34 compute-node2 kernel: audit: type=1400 audit(1766736454.426:311): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251170 comm="apparmor_parser"
Dec 26 16:07:34 compute-node2 kernel: audit: type=1400 audit(1766736454.570:312): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="libvirt-5e011c1f-39bd-4a19-ab30-3d527c9d3f48" pid=1251173 comm="apparmor_parser"

I’ve checked and troubleshot many parts of the environment but still can’t find the root cause. I’d really appreciate any guidance or support.

Thanks a lot!

There are no errors in the VM logs.

Hello @kuz,

For future posts, please include logs as txt (or a similar text format) instead of images; that makes it much easier for us to check whether any strings are missing. The output does show an empty xpath, so our engineering team will probably have to see whether they can replicate that.

On the other hand, there are some things to check:

  • Verify Storage and Block Device Stability During Migration
  • Check libvirt and qemu Logs (see the example commands after this list)
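
For the log check, something along these lines on both the source and destination hypervisors is usually enough (a rough sketch; the paths are the standard libvirt defaults and may differ on your installation):

  # libvirt daemon messages around the migration
  journalctl -u libvirtd --since today | grep -Ei 'error|migrat'

  # Per-domain QEMU log (OpenNebula names the domain one-<VM_ID>)
  tail -n 100 /var/log/libvirt/qemu/one-23.log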

It would also be helpful to know the command you run to perform the live migration, and from which platform, so I can point you to the right page in the documentation.
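
For reference, from the CLI a live migration is normally triggered like this (the VM ID and target host below are placeholders):

  # Live-migrate VM 23 to the given host while it keeps running
  onevm migrate --live 23 <target_host>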

Regards,

Thanks for your reply. Based on your suggestions, I reran the tests and collected the logs. I’d really appreciate your support in helping me analyze them.

I created a new VM for testing, and all subsequent tests are based on this VM (test-one / one-33).

 root@manager01:/home/oneuser# onevm list
  ID USER     GROUP    NAME                                                                  STAT  CPU     MEM HOST                                                 TIME
  33 oneadmin oneadmin test-one                                                              runn    4      8G 10.3.230.1                                       0d 00h30


root@compute-node01:/opt/shell_code# pvs
  PV                                   VG         Fmt  Attr PSize   PFree  
  /dev/mapper/one_system_data_ceph_11T vg-one-106 lvm2 a--  <11.00t <11.00t
  /dev/mapper/one_system_data_ceph_12T vg-one-108 lvm2 a--  <12.00t <11.77t
  /dev/mapper/one_system_data_ceph_5T  vg-one-107 lvm2 a--   <5.00t   4.96t
root@compute-node01:/opt/shell_code# vgs
  VG         #PV #LV #SN Attr   VSize   VFree  
  vg-one-106   1   0   0 wz--n- <11.00t <11.00t
  vg-one-107   1   2   0 wz--n-  <5.00t   4.96t
  vg-one-108   1  12   0 wz--n- <12.00t <11.77t
root@compute-node01:/opt/shell_code# lvs
  LV             VG         Attr       LSize  Pool           Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv-one-27-0    vg-one-107 Vwi-aotz-k 40.00g lv-one-27-pool        6.89                                   
  lv-one-27-pool vg-one-107 twi---tz-k 40.00g                       6.89   12.29                           
  lv-one-28-0    vg-one-108 Vwi---tz-k 40.00g lv-one-28-pool                                               
  lv-one-28-pool vg-one-108 twi---tz-k 40.00g                                                              
  lv-one-29-0    vg-one-108 Vwi---tz-k 40.00g lv-one-29-pool                                               
  lv-one-29-pool vg-one-108 twi---tz-k 40.00g                                                              
  lv-one-30-0    vg-one-108 Vwi---tz-k 40.00g lv-one-30-pool                                               
  lv-one-30-pool vg-one-108 twi---tz-k 40.00g                                                              
  lv-one-31-0    vg-one-108 Vwi-aotz-k 40.00g lv-one-31-pool        6.96                                   
  lv-one-31-pool vg-one-108 twi---tz-k 40.00g                       6.96   12.31                           
  lv-one-32-0    vg-one-108 Vwi-aotz-k 40.00g lv-one-32-pool        6.94                                   
  lv-one-32-pool vg-one-108 twi---tz-k 40.00g                       6.94   12.31                           
  lv-one-33-0    vg-one-108 Vwi-aotz-k 40.00g lv-one-33-pool        6.85                                   
  lv-one-33-pool vg-one-108 twi---tz-k 40.00g                       6.85   12.29                           
root@compute-node01:/opt/shell_code# virsh domblklist one-33
 Target   Source
------------------------------------------------
 vda    /var/lib/one//datastores/108/33/disk.0
 hda    /var/lib/one//datastores/108/33/disk.1

The VM is currently running normally and I can execute any commands without issues.

Live migration test: compute-node2 → compute-node1
Migration result: After the live migration, the VM hangs and I can’t run any commands. Inside the VM, I see the same “I/O ERROR” as before.

Please refer to the attached log files for:
• the migration process (I don’t see any obvious error messages; everything looks normal)

oned.log (154.5 KB)

33.log (3.8 KB)

qemu-one-33.log (3.4 KB)

• the iSCSI status (after reviewing the logs, I didn’t see any iSCSI/multipath link drops or flapping, so I believe the storage connection is stable)

manager01-iscsi-status.log (69.9 KB)

compute-node2-iscsi-status.log (97.3 KB)

compute-node1-iscsi-status.log (123.9 KB)

Under the same iSCSI and multipath configuration (no changes made), cold migration is not affected. For VMs affected by the live migration, running undeploy followed by deploy restores normal operation (the VM can run commands again), and the data inside the VM remains intact.
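
For clarity, the recovery I’m describing is just the standard CLI cycle (the host name is an example):

  # Shut the VM down and release it from the current host; the disks/data are preserved
  onevm undeploy 33
  # Bring it back up on a host; after this the VM accepts commands normally again
  onevm deploy 33 compute-node1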

I’m not sure what’s causing this. The VM ends up hanging, and I’d appreciate any support or guidance.

Thank you for sharing all these details with us!
The issue might be related to CPU incompatibilities between the source and destination hosts. Do these hypervisors have the same CPU model?
Could you please share the output of the onevm show -j 33 command (assuming the test VM has ID 33, as mentioned in your previous post)?
Does a power off → on cycle help to resolve the migrated VM hang issue?
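
If you want to double-check, something like this on each hypervisor shows the CPU model as both the OS and libvirt report it (standard commands; nothing OpenNebula-specific):

  # Physical CPU model as the OS sees it
  lscpu | grep 'Model name'

  # Host CPU model/vendor as libvirt reports it
  virsh capabilities | grep -A 5 '<cpu>'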

Yes, both compute nodes have the same physical CPU model. They are both:

processor       : 103
vendor_id       : GenuineIntel
cpu family      : 6
model           : 85
model name      : Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz
stepping        : 7
microcode       : 0x5003901
cpu MHz         : 2100.000
cache size      : 36608 KB
physical id     : 1
siblings        : 52
core id         : 27
cpu cores       : 26
apicid          : 119
initial apicid  : 119
fpu             : yes
fpu_exception   : yes
cpuid level     : 22
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
vmx flags       : vnmi preemption_timer posted_intr invvpid ept_x_only ept_ad ept_1gb flexpriority apicv tsc_offset vtpr mtf vapic ept vpid unrestricted_guest vapic_reg vid ple shadow_vmcs pml ept_mode_based_exec tsc_scaling
bugs            : spectre_v1 spectre_v2 spec_store_bypass swapgs taa itlb_multihit mmio_stale_data retbleed eibrs_pbrsb gds bhi its
bogomips        : 4201.19
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual

The VM is using QEMU’s virtual CPU (no physical CPU pinning). Please refer to the output of onevm show -j 33:

show33.log (6.3 KB)
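
In case it is useful, the CPU definition the running domain actually received can also be inspected like this (the jq path is my assumption about the onevm JSON layout; adjust if needed):

  # CPU section of the live libvirt domain
  virsh dumpxml one-33 | grep -A 5 '<cpu'

  # CPU-related fields from the VM template in the JSON output
  onevm show -j 33 | jq '.VM.TEMPLATE | {CPU, VCPU, CPU_MODEL}'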

Even if I power off the VM and start it again, the issue persists. It only recovers after I run undeploy followed by deploy.

I’m out of ideas for further testing at this point, and I’d really appreciate your support.

Which OS do you use on the hypervisor nodes?

Debian 12 shipped a QEMU version with a bug that caused live migrations of VMs with virtio net devices to get stuck: Bug#1115484: bookworm-pu: package qemu/1:7.2+dfsg-7+deb12u17

We ran into this issue a couple of months ago on a test cluster. qemu/1:7.2+dfsg-7+deb12u17 should fix that.
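
To check which build you are on and pull in the fixed revision once it is in the repos, something like this should do it (package names from a standard Debian 12 KVM node; adjust to your setup):

  # Currently installed QEMU build
  dpkg -s qemu-system-x86 | grep '^Version'

  # Upgrade to the fixed packages
  apt-get update
  apt-get install --only-upgrade qemu-system-x86 qemu-system-common qemu-utils

  # Note: VMs that are already running keep the old QEMU binary until they are restarted or redeployed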

Thank you very much for your support. I’m indeed running Debian 12.12 with QEMU emulator version 7.2.16 (Debian 1:7.2+dfsg-7+deb12u16). Following your guidance, I upgraded to QEMU 7.2.20, and that resolved the live migration issue.

I really appreciate your help. :+1: :rose:

You’re welcome! It was a really frustrating one that caused some gray hair on my end.