Ceph Datastore creates VMs outside of Ceph

I have a Ceph system datastore and have verified that the oneadmin user can use the ceph and rbd tools with the key specified in the datastore configuration. The datastore also reports some disk space in OpenNebula, though the reported capacity is incorrect.

onedatastore show 108:

ALLOW_ORPHANS="YES"
BRIDGE_LIST="dfw1-x9srw-9828"
CEPH_HOST="ceph1-mon ceph2-mon ceph3-mon"
CEPH_KEY="/var/lib/one/client.opennebula.keyring"
CEPH_SECRET="c339f845-6614-4e76-a9bb-843ef24a5b0c"
CEPH_USER="opennebula"
DISK_TYPE="RBD"
DS_MIGRATE="NO"
HOST="dfw1-x9srw-9828"
POOL_NAME="rbd"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="ceph"
TYPE="SYSTEM_DS"
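
Note that the template above is only the system datastore. The transfer driver actually used for a persistent/non-volatile disk follows the datastore the image lives in, so the corresponding image datastore is worth checking as well. A matching Ceph image datastore template would look roughly like this (a sketch; the name is an assumption, the Ceph attributes are copied from the system DS above):

```
NAME="ceph_images"
TYPE="IMAGE_DS"
DS_MAD="ceph"
TM_MAD="ceph"
DISK_TYPE="RBD"
POOL_NAME="rbd"
CEPH_HOST="ceph1-mon ceph2-mon ceph3-mon"
CEPH_USER="opennebula"
CEPH_SECRET="c339f845-6614-4e76-a9bb-843ef24a5b0c"
BRIDGE_LIST="dfw1-x9srw-9828"
```

If the image was registered in a non-Ceph image datastore (e.g. the default filesystem one), the disk will be copied as a file even though the system DS is Ceph.
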
[oneadmin@dfw1-x9srw-9828 ~]$ rbd -c /etc/ceph/ceph.conf --user opennebula -k ~/client.opennebula.keyring list | wc -l
25

Listing images works.

[oneadmin@dfw1-x9srw-9828 ~]$ rbd -c /etc/ceph/ceph.conf --user opennebula -k ~/client.opennebula.keyring create rbd/testimage --size 1G
[oneadmin@dfw1-x9srw-9828 ~]$ rbd -c /etc/ceph/ceph.conf --user opennebula -k ~/client.opennebula.keyring rm rbd/testimage
Removing image: 100% complete...done.

Creating and removing images also works

/var/log/one/sched.log

Sat Jun 23 09:45:51 2018 [Z0][VM][D]: Found 1 pending/rescheduling VMs.
Sat Jun 23 09:45:51 2018 [Z0][HOST][D]: Discovered 2 enabled hosts.
Sat Jun 23 09:45:51 2018 [Z0][VM][D]: VMs in VMGroups:

Sat Jun 23 09:45:51 2018 [Z0][SCHED][D]: Dispatching VMs to hosts:
        VMID    Priority        Host    System DS
        --------------------------------------------------------------
        38      0               5       108

/var/log/one/38.log

Sat Jun 23 09:45:51 2018 [Z0][VM][I]: New state is ACTIVE
Sat Jun 23 09:45:51 2018 [Z0][VM][I]: New LCM state is PROLOG
Sat Jun 23 09:45:55 2018 [Z0][VM][I]: New LCM state is BOOT
Sat Jun 23 09:45:55 2018 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/38/deployment.0
Sat Jun 23 09:45:55 2018 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Sat Jun 23 09:45:55 2018 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Sat Jun 23 09:45:56 2018 [Z0][VMM][I]: ExitCode: 0
Sat Jun 23 09:45:56 2018 [Z0][VMM][I]: Successfully execute virtualization driver operation: deploy.
Sat Jun 23 09:45:56 2018 [Z0][VMM][I]: Successfully execute network driver operation: post.
Sat Jun 23 09:45:56 2018 [Z0][VM][I]: New LCM state is RUNNING
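
A quick way to confirm where the disk actually landed is to inspect the libvirt domain XML on the node. A minimal sketch, using a sample line in place of live `virsh dumpxml` output (the domain name `one-38` is an assumption):

```shell
# On a live node this line would come from:
#   virsh dumpxml one-38 | grep "<disk type"
disk_line="<disk type='file' device='disk'>"

# A Ceph-backed disk is type='network' (protocol='rbd');
# the bug shows type='file' instead.
case "$disk_line" in
  *"type='network'"*) echo "rbd-backed" ;;
  *"type='file'"*)    echo "file-backed" ;;
  *)                  echo "unknown" ;;
esac
```

For the XML shown in "Current results" below, this prints `file-backed`.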

Versions of the related components and OS (frontend, hypervisors, VMs):
CentOS 7 on both frontend and node. Ceph access on both frontend and node.
OpenNebula 5.4.6

Steps to reproduce:

  1. Create datastore according to config above
  2. Provision VM using the system datastore mentioned above

Current results:
VM is created as a file in the local filesystem, not an RBD image in the Ceph pool:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/one//datastores/108/39/disk.0'/>
  <backingStore/>
  <target dev='vda' bus='virtio'/>
  <alias name='virtio-disk0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</disk>

Expected results:
The VM gets an RBD disk inside the Ceph pool.
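
For comparison, a correctly Ceph-backed disk would appear in the domain XML roughly like this (a sketch; the RBD image name `one-XX-38-0` is a hypothetical placeholder for OpenNebula's usual `one-<image>-<vm>-<disk>` naming, and the monitor port is the Ceph default):

```
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='rbd/one-XX-38-0'>
    <host name='ceph1-mon' port='6789'/>
    <host name='ceph2-mon' port='6789'/>
    <host name='ceph3-mon' port='6789'/>
  </source>
  <auth username='opennebula'>
    <secret type='ceph' uuid='c339f845-6614-4e76-a9bb-843ef24a5b0c'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```
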

I'm having the same issue with OpenNebula 5.6. I don't see any RBD image created in Ceph's system datastore. The deployment file that goes to the system datastore actually ends up on the host's filesystem.