Live migration problem

Hello,

We are facing a problem with live migration of VMs. We are running the recent stable version of OpenNebula, v5.0.2.

OpenNebula logs:
[Z0][VMM][E]: migrate: Command "virsh --connect qemu:///system migrate --live one-17 qemu+ssh://node1/system" failed: error: internal error: early end of file from monitor, possible problem: 2016-08-30T18:05:00.687981Z qemu-system-x86_64: load of migration failed: Input/output error

In the libvirt logs:
2016-08-30T18:05:00.687981Z qemu-system-x86_64: load of migration failed: Input/output error

Does anyone have an idea what's going on?

The system datastore is located on GlusterFS mounted with FUSE (a symlink from /var/lib/one/datastores/100 to /gluster/volume on all hosts).
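For reference, a minimal sketch of that layout. The Gluster server name "gluster1" and volume name "volume" are assumptions; the datastore ID 100 matches the paths above. Run on every hypervisor host:

```shell
# Sketch only: "gluster1" and "volume" are hypothetical names.
# FUSE-mount the Gluster volume on each host:
mkdir -p /gluster/volume
mount -t glusterfs gluster1:/volume /gluster/volume

# Let oneadmin own the datastore files, then point the system
# datastore (ID 100) at the shared mount:
chown oneadmin:oneadmin /gluster/volume
ln -s /gluster/volume /var/lib/one/datastores/100
```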

This is a manual virsh debug run from the CLI:
oneadmin@node2:~/datastores/100/17$ virsh --debug 0 --connect qemu:///system migrate --live one-17 qemu+ssh://node1/system
migrate: live(bool): (none)
migrate: domain(optdata): one-17
migrate: desturi(optdata): qemu+ssh://node1/system
migrate: found option : one-17
migrate: trying as domain NAME
migrate: found option : one-17
migrate: trying as domain NAME
error: internal error: early end of file from monitor, possible problem: 2016-08-30T18:11:23.807923Z qemu-system-x86_64: load of migration failed: Input/output error

Thanks.

Probably this:

I found the problem,

the managed switch for the internal network did not have the same MTU size as
the E5-2620 servers. After setting the same MTU everywhere, migrations
are now working.
https://www.redhat.com/archives/libvirt-users/2013-October/msg00107.html

The MTU on the internal network (the interfaces between node1 and node2 over which the SSH connection is established) is 1500 on the Linux side. The switch between the servers has an MTU of 9200 on all interfaces, so this should not be the problem. I will check the OVS configuration and let you know if I find anything.
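One way to rule the MTU out is to probe the actual path with the Don't Fragment bit set. The interface name "eth0" and host "node1" are placeholders; for a 1500-byte MTU the largest ICMP payload is 1500 - 28 = 1472 bytes (20-byte IPv4 header + 8-byte ICMP header):

```shell
# Placeholders: eth0, node1 -- adjust to your environment.
mtu=1500
# Largest ICMP payload that fits in one frame at this MTU:
payload=$((mtu - 28))

# Configured MTU on this node's migration interface:
ip link show eth0 2>/dev/null | grep -o 'mtu [0-9]*'

# Probe the path with the Don't Fragment bit set; silent drops at
# this size would indicate an MTU mismatch somewhere on the path:
ping -c 1 -W 1 -M do -s "$payload" node1 2>/dev/null \
    || echo "path does not pass ${payload}-byte payloads"
```

If frames at the full payload size pass but larger ones are dropped, the Linux-side MTU of 1500 is the effective limit end to end, regardless of the switch's 9200 setting.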

Thanks

I didn't find any problem. Is my datastore properly configured when I am using the Filesystem datastore in qcow2 mode? I FUSE-mount the Gluster volume to /var/lib/one/datastores/101/ (101 is the ID of that datastore). Is no additional configuration required for Gluster?

And what about the SYSTEM_DS, is it shared across nodes?
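For comparison, a hedged sketch of a shared system datastore definition (the datastore name is hypothetical; TM_MAD = qcow2 matches the Filesystem/qcow2 setup described above). With a TM_MAD of "shared" or "qcow2", OpenNebula expects every host to see the same files under /var/lib/one/datastores/&lt;ID&gt;, which is what the FUSE mount plus symlink provides:

```shell
# Sketch only: datastore name is hypothetical.
cat > system-ds.tmpl <<'EOF'
NAME   = gluster_system
TYPE   = SYSTEM_DS
TM_MAD = qcow2
EOF
onedatastore create system-ds.tmpl
```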