OpenNebula 5.4 + Ceph 12 + KVM: trouble with VM HA

Hi.
I am trying to deploy a cloud with OpenNebula 5.4 and Ceph as the storage backend.
I can't understand how to connect Ceph to OpenNebula to achieve VM HA. If I have understood the docs correctly, I have to use a shared image datastore and any type of system datastore. Do I have to connect the image datastore as Ceph LVM, or can I use CephFS? How can I connect CephFS to OpenNebula?

Use Ceph for IMAGE and SYSTEM DS.

http://docs.opennebula.org/5.4/deployment/open_cloud_storage_setup/ceph_ds.html
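For reference, this is roughly how the two Ceph-backed datastores from that guide can be registered with onedatastore create. This is only a sketch, assuming the pool is named "one", the Ceph/libvirt user is "libvirt", and the hosts are sp-1..sp-3 (values taken from the configs posted later in this thread); adjust them to your cluster:

# Image datastore template (DS_MAD/TM_MAD = ceph)
$ cat ceph_image_ds.txt
NAME        = ceph_images
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one
CEPH_HOST   = "sp-1 sp-2 sp-3"
CEPH_USER   = libvirt
CEPH_SECRET = "8d21e41b-b6c0-44cc-bd64-b7fb95c04c18"
BRIDGE_LIST = "sp-1"
$ onedatastore create ceph_image_ds.txt

# System datastore on the same pool (note TYPE = SYSTEM_DS, no DS_MAD)
$ cat ceph_system_ds.txt
NAME        = ceph_system
TYPE        = SYSTEM_DS
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one
CEPH_HOST   = "sp-1 sp-2 sp-3"
CEPH_USER   = libvirt
CEPH_SECRET = "8d21e41b-b6c0-44cc-bd64-b7fb95c04c18"
BRIDGE_LIST = "sp-1"
$ onedatastore create ceph_system_ds.txt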

I followed this instruction, but when I try to migrate a VM I receive this:
[one.vm.migrate] System datastore migration not supported by TM driver

If I shut down the host with a running VM, the host goes to the ERROR state and the state of the VM becomes UNKNOWN, but the automatic migration doesn't happen (nothing actually happens). If I then push the "LIVE Migrate" button everything is OK, so manual migration works.
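In case it helps, the manual live migration that works from Sunstone can also be triggered from the CLI; the VM ID and target host name here are placeholders, not values from my setup:

$ onevm migrate --live <vm_id> <target_host>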
My HOST_HOOK config:
HOST_HOOK = [
    NAME      = "error",
    ON        = "ERROR",
    COMMAND   = "ft/host_error.rb",
    ARGUMENTS = "$ID -m -p 3",
    REMOTE    = "no" ]

Hello,
You need to set up fencing, or use the -f key to disable fencing. Read the description in oned.conf.

I disabled fencing, but the VM is still in UNKNOWN status.

oned.log

Mon Mar 26 22:14:57 2018 [Z0][InM][I]: Command execution fail: 'if [ -x "/var/tmp/one/im/run_probes" ]; then /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 2 sp-3; else exit 42; fi'
Mon Mar 26 22:14:57 2018 [Z0][InM][I]: ssh: connect to host sp-3 port 22: No route to host
Mon Mar 26 22:14:57 2018 [Z0][InM][I]: ExitCode: 255
Mon Mar 26 22:15:00 2018 [Z0][InM][I]: Command execution fail: 'if [ -x "/var/tmp/one/im/run_probes" ]; then /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 2 sp-3; else exit 42; fi'

ARGUMENTS = "$ID -m -f -p 3"

It doesn't work either.

My datastore configurations:

Image datastore attributes:
ALLOW_ORPHANS YES
BRIDGE_LIST sp-1
CEPH_HOST sp-1 sp-2 sp-3
CEPH_SECRET 8d21e41b-b6c0-44cc-bd64-b7fb95c04c18
CEPH_USER libvirt
CLONE_TARGET SELF
DISK_TYPE RBD
DRIVER raw
DS_MAD ceph
LN_TARGET NONE
POOL_NAME one
RESTRICTED_DIRS /
SAFE_DIRS /var/tmp
TM_MAD ceph

System datastore attributes:
ALLOW_ORPHANS YES
BRIDGE_LIST sp-1
CEPH_HOST sp-1 sp-2 sp-3
CEPH_SECRET 8d21e41b-b6c0-44cc-bd64-b7fb95c04c18
CEPH_USER libvirt
DISK_TYPE RBD
DS_MIGRATE NO
POOL_NAME one
RESTRICTED_DIRS /
SAFE_DIRS /var/tmp
SHARED YES
TM_MAD ceph
TYPE SYSTEM_DS

Thank you for your help.
It finally works. Based on your recommendation, I disabled the fencing, but my main mistake was the bad images I used: I had just downloaded a CentOS qcow2.
ARGUMENTS = "$ID -m -u -p 1",
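For anyone hitting the same problem, the complete hook that ended up working for me is the original HOST_HOOK with that ARGUMENTS line; as far as I can tell from the host_error.rb option descriptions, -u is the switch that disables fencing:

HOST_HOOK = [
    NAME      = "error",
    ON        = "ERROR",
    COMMAND   = "ft/host_error.rb",
    ARGUMENTS = "$ID -m -u -p 1",
    REMOTE    = "no" ]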