Image Datastore does not support transfer mode: ssh

Hi,
please help me solve the following error:
Image Datastore does not support transfer mode: ssh

Details:
OS = Debian 9.8
I use LVM on top of software RAID1 (mdadm).
Installed OpenNebula 5.8.0 and also installed a node on the same server, so I have a single server: front-end + node.
Created a new datastore, backend = Filesystem - qcow2 mode.
The datastore attributes are OK (DRIVER=qcow2, TM_MAD=qcow2).
Created an empty image in this datastore and assigned it to a VM template.
After instantiating the template, the VM is stuck in PENDING forever.
File /var/log/one/sched.log shows:
Error deploying virtual machine 36 to HID: 0. Reason: [one.vm.deploy] Image Datastore does not support transfer mode: ssh
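
The scheduler’s reason should also be visible on the VM itself (assuming the usual SCHED_MESSAGE attribute the scheduler sets):

    # show the scheduler's message for VM 36
    onevm show 36 | grep SCHED_MESSAGE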

Other datastores do not produce this error.
It is strange that the transfer mode for this datastore is qcow2, yet the error is about ssh.

Could you please point me in the right direction to solve this?
Thanks.

Hello, you should use TM_MAD=ssh.
Here is the config of my ssh image datastore:

ID             : 1                   
NAME           : ImDS
USER           : oneadmin            
GROUP          : oneadmin            
CLUSTERS       : 100,101,102,103,104 
TYPE           : IMAGE               
DS_MAD         : fs                  
TM_MAD         : ssh                 
BASE PATH      : /var/lib/one//datastores/1
DISK_TYPE      : FILE                
STATE          : READY
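
For completeness, a datastore like this can be created from a template file (a minimal sketch; the file name is an example and the values mirror the config above):

    # ssh_image_ds.txt - template for an ssh-mode image datastore
    NAME   = ImDS
    TYPE   = IMAGE_DS
    DS_MAD = fs
    TM_MAD = ssh

    onedatastore create ssh_image_ds.txt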

The point is to use a qcow2 datastore specifically, to avoid copying images.

OpenNebula 5.6.1 config:
ID : 100
NAME : img-qcow2
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : IMAGE
DS_MAD : fs
TM_MAD : qcow2
BASE PATH : /var/lib/one//datastores/100
DISK_TYPE : FILE
STATE : READY

OpenNebula 5.8.0 config:
ID : 108
NAME : img-qcow2
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : IMAGE
DS_MAD : fs
TM_MAD : qcow2
BASE PATH : /var/lib/one//datastores/108
DISK_TYPE : FILE
STATE : READY

There is no difference between the two configs. But during instantiation:
5.6.1 - no problems
5.8.0 - error

Could you please post your qcow2 datastore config, if you have such a DS?

I accidentally found a way to solve this:

  1. Create the qcow2 datastore.
  2. Change TM_MAD from qcow2 to shared -> the extra ssh-related attributes will be added.
  3. Change TM_MAD back to qcow2.

This sequence creates the following attributes:
CLONE_TARGET_SSH = SYSTEM
DISK_TYPE_SSH = FILE
LN_TARGET_SSH = SYSTEM
TM_MAD_SYSTEM = ssh
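
You can verify afterwards that they are really set (108 is the ID of my qcow2 datastore):

    # confirm the ssh-related attributes on the image datastore
    onedatastore show 108 | grep -i ssh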

Exactly the same approach is needed when creating storage for ‘Raw Device Mapping’ (and probably for other storage backends too).
The described issue was reproduced on a testing system as well.
It could be that version 5.8.0 contains a storage-related bug (some default values are not set). It would be nice if support could comment on that.

Thanks for your feedback. Since 5.8, each TM_MAD includes a list of compatible system datastores. This enables the use of different transfer modes with the same image datastore.

shared is compatible with system datastores of type shared and ssh; however, qcow2 is not. Although it can work with them, we are missing the following options in oned.conf:

TM_MAD_CONF = [
    NAME = "qcow2", LN_TARGET = "NONE", CLONE_TARGET = "SYSTEM", SHARED = "YES",
    DRIVER = "qcow2", DS_MIGRATE = "YES", TM_MAD_SYSTEM = "ssh",
    LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
    DISK_TYPE_SSH = "FILE"
]
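
Note that oned.conf changes only take effect after restarting the OpenNebula daemon, for example:

    # reload oned so the new TM_MAD_CONF is picked up
    systemctl restart opennebula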

We’ll update the configuration file and add a note in the documentation. Thanks again!

Hi Ruben,

I made the suggested changes, but it is still showing an error in sched.log and oned.log. I am using an NFS share. Below are my configuration and the error message.

TM_MAD_CONF = [
    NAME = "qcow2", LN_TARGET = "NONE", CLONE_TARGET = "SYSTEM", SHARED = "YES",
    DRIVER = "qcow2", DS_MIGRATE = "YES", TM_MAD_SYSTEM = "ssh",
    LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
    DISK_TYPE_SSH = "FILE"
]

Sat Oct 19 15:04:50 2019 [Z0][VNET][D]: Discovered 5 vnets.
Sat Oct 19 15:04:50 2019 [Z0][VM][E]: Error deploying virtual machine 217 to HID: 1. Reason: [one.vm.deploy] Image Datastore does not support transfer mode: qcow2
Sat Oct 19 15:04:50 2019 [Z0][SCHED][D]: Dispatching VMs to hosts:
VMID Priority Host System DS
--------------------------------------------------------------

Please suggest.

Hi Amit,

You’d need to update the configuration of the already created datastores. Just set the values in the datastore attributes, as they are not automatically propagated from oned.conf.
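
For example (a sketch; the datastore ID and the file name are placeholders), the missing attributes can be appended with onedatastore update:

    # attributes matching the TM_MAD_CONF entry above
    cat > /tmp/qcow2_ssh.txt <<'EOF'
    TM_MAD_SYSTEM = "ssh"
    LN_TARGET_SSH = "SYSTEM"
    CLONE_TARGET_SSH = "SYSTEM"
    DISK_TYPE_SSH = "FILE"
    EOF
    onedatastore update 100 /tmp/qcow2_ssh.txt --append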

Hope this helps.

Best Regards,
Anton Todorov

Thanks Anton,
It is resolved. Someone had created another system datastore by mistake; after removing it, everything works as expected.
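
For anyone hitting the same thing: listing the datastores makes a stray system datastore easy to spot.

    # look for unexpected SYSTEM-type entries
    onedatastore list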

Thanks @ruben. Do you think it could be compatible with a shared system datastore?

To explain my use case:

  • We have 2 clusters.
  • The first cluster
    • uses qcow2 because everything is on a SAN
    • exports the datastore over NFS to the front-end
  • Then we started a second cluster, which
    • reuses the image datastore exposed over NFS
    • uses local disks for the system datastore (actually a LizardFS)

So I wonder if adding the following could work:

TM_MAD_CONF = [
    NAME = "qcow2", LN_TARGET = "NONE", CLONE_TARGET = "SYSTEM", SHARED = "YES",
    DRIVER = "qcow2", DS_MIGRATE = "YES", TM_MAD_SYSTEM = "shared",
    LN_TARGET_SHARED = "SYSTEM", CLONE_TARGET_SHARED = "SYSTEM",
    DISK_TYPE_SHARED = "FILE"
]

More generally, qcow2, shared and ssh could work together:

  • the image datastore could store qcow2 files
  • a system datastore could use the qcow2 transfer mode (with backing store)
  • another system datastore could prefer ssh (the front-end copies files to the node over SSH)
  • a last one could use shared to copy files from a mount point (like our NFS case)

Regards.

Replying to myself, I found this in the source:

TM_MAD_CONF = [
    NAME = "ceph", LN_TARGET = "NONE", CLONE_TARGET = "SELF", SHARED = "YES",
    DS_MIGRATE = "NO", DRIVER = "raw", ALLOW_ORPHANS="mixed",
    TM_MAD_SYSTEM = "ssh,shared", LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM",
    DISK_TYPE_SSH = "FILE", LN_TARGET_SHARED = "NONE",
    CLONE_TARGET_SHARED = "SELF", DISK_TYPE_SHARED = "RBD"
]

and

TM_MAD_CONF = [
    NAME = "dev", LN_TARGET = "NONE", CLONE_TARGET = "NONE", SHARED = "YES",
    TM_MAD_SYSTEM = "ssh,shared",
    LN_TARGET_SSH = "SYSTEM", LN_TARGET_SHARED = "NONE",
    DISK_TYPE_SSH = "FILE", DISK_TYPE_SHARED = "FILE",
    CLONE_TARGET_SSH = "SYSTEM", CLONE_TARGET_SHARED =  "SELF"
]

I’ll run some tests when my colleagues stop using the infra tonight :wink:

Regards.

So, I configured oned.conf with the following

TM_MAD_CONF = [
    NAME = "qcow2", LN_TARGET = "NONE", CLONE_TARGET = "SYSTEM", SHARED = "YES",
    DRIVER = "qcow2", TM_MAD_SYSTEM = "ssh,shared",
    LN_TARGET_SSH = "SYSTEM", CLONE_TARGET_SSH = "SYSTEM", DISK_TYPE_SSH = "FILE",
    LN_TARGET_SHARED = "SYSTEM", CLONE_TARGET_SHARED = "SYSTEM", DISK_TYPE_SHARED = "FILE"
]

After updating my image DS I have:

ID             : 101                 
NAME           : image               
USER           : nebula              
GROUP          : oneadmin            
CLUSTERS       : 100,101,102         
TYPE           : IMAGE               
DS_MAD         : fs                  
TM_MAD         : qcow2               
BASE PATH      : /var/lib/one//datastores/101
DISK_TYPE      : FILE                
STATE          : READY               

[…]

DATASTORE TEMPLATE                                                              
ALLOW_ORPHANS="NO"
CLONE_TARGET="SYSTEM"
CLONE_TARGET_SHARED="SYSTEM"
CLONE_TARGET_SSH="SYSTEM"
COMPATIBLE_SYS_DS="100,107"
DISK_TYPE="FILE"
DISK_TYPE_SHARED="FILE"
DISK_TYPE_SSH="FILE"
DRIVER="qcow2"
DS_MAD="fs"
LN_TARGET="NONE"
LN_TARGET_SHARED="SYSTEM"
LN_TARGET_SSH="SYSTEM"
TM_MAD="qcow2"
TM_MAD_SYSTEM="ssh,shared"
TYPE="IMAGE_DS"

My system DS for the new cluster is:

DATASTORE 107 INFORMATION                                                       
ID             : 107                 
NAME           : test-cluster-system 
USER           : nebula              
GROUP          : oneadmin            
CLUSTERS       : 102                 
TYPE           : SYSTEM              
DS_MAD         : -                   
TM_MAD         : shared              
BASE PATH      : /var/lib/one//datastores/107
DISK_TYPE      : FILE                
STATE          : READY               

[…]

DATASTORE TEMPLATE                                                              
ALLOW_ORPHANS="NO"
DISK_TYPE="FILE"
DS_MIGRATE="YES"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="shared"
TYPE="SYSTEM_DS"

I thought that in this case:

  • when a VM starts on my old cluster, it will use the qcow2 CLONE script (since its system DS has TM_MAD=qcow2)
  • when a VM starts on my new cluster 102, the prolog will use the CLONE script of the shared TM

But that’s not the case as you can see from the logs:

CLONE qcow2 one-frontend:/var/lib/one//datastores/101/3904c5a65d0f9cc9467dc0411aa5706d nebula83:/var/lib/one//datastores/107/336626/disk.0 336626 101
CLONE qcow2 one-frontend:/var/lib/one//datastores/101/58764bcb6379098b2b5a4448b661073a nebula83:/var/lib/one//datastores/107/336626/disk.1 336626 101

What am I missing?