SSH datastore mvds does not copy back to frontend

Hi guys,

I have the following issue. I wanted to try SSH datastores (we only used Ceph and NFS until now) and therefore created two datastores. When the VM is instantiated, the image is transferred correctly via SSH to the system datastore on the host. When the machine is shut down, however, the image is not transferred back to the frontend. Instead, the TM tries to copy it locally on the KVM host into the corresponding image datastore directory (when I create that directory on the host, the image actually does get copied there).

In the log I see:

Command execution fail: /var/lib/one/remotes/tm/ssh/mvds kvm-host-name:/var/lib/one//datastores/113/2228/disk.0 /var/lib/one//datastores/112/fca4482948f631360567b1cdbc6fb45a 2228 112


+ SRC=kvm-host-name:/var/lib/one//datastores/113/2228/disk.0
Tue May 10 10:15:49 2016 [Z0][TM][I]: + DST=/var/lib/one//datastores/112/fca4482948f631360567b1cdbc6fb45a
Tue May 10 10:15:49 2016 [Z0][TM][I]: + VMID=2228
Tue May 10 10:15:49 2016 [Z0][TM][I]: + DSID=112
Tue May 10 10:15:49 2016 [Z0][TM][I]: + '[' -z '' ']'
Tue May 10 10:15:49 2016 [Z0][TM][I]: + TMCOMMON=/var/lib/one/remotes/tm/
Tue May 10 10:15:49 2016 [Z0][TM][I]: + . /var/lib/one/remotes/tm/
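For context, the SRC argument in that trace bundles a host and a path (`host:path`); tm_common.sh provides `arg_host`/`arg_path` helpers to split it. Here is a simplified stand-in sketch of that parsing (illustrative only, not the exact OpenNebula implementation, which also normalizes slashes):

```shell
# Simplified versions of arg_host/arg_path from tm_common.sh (assumption:
# the real helpers do more normalization; these just split on the first colon).
arg_host() { echo "$1" | sed -e 's/^\([^:]*\):.*$/\1/'; }
arg_path() { echo "$1" | sed -e 's/^[^:]*:\(.*\)$/\1/'; }

SRC="kvm-host-name:/var/lib/one//datastores/113/2228/disk.0"
echo "host: $(arg_host "$SRC")"
echo "path: $(arg_path "$SRC")"
```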

What did I do wrong here?

We are using OpenNebula 4.14.2 on Ubuntu 14.04.

This is the configuration of the datastores.

The image datastore (accessible from the frontend only) has the following configuration:


And the system datastore (accessible from the KVM host only) has the following configuration:

Seems like you're hitting a bug (no clue why they didn't fix this prior to releasing 4.14). You can use the patched script from that bug report, or just replace the content of the mvds script with the script below:

#!/bin/bash -x

# -------------------------------------------------------------------------- #
# Copyright 2002-2015, OpenNebula Project, OpenNebula Systems                #
#                                                                            #
# Licensed under the Apache License, Version 2.0 (the "License"); you may    #
# not use this file except in compliance with the License. You may obtain    #
# a copy of the License at                                                   #
#                                                                            #
#        http://www.apache.org/licenses/LICENSE-2.0                          #
#                                                                            #
# Unless required by applicable law or agreed to in writing, software        #
# distributed under the License is distributed on an "AS IS" BASIS,          #
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.   #
# See the License for the specific language governing permissions and        #
# limitations under the License.                                             #
#--------------------------------------------------------------------------- #

# mvds host:remote_system_ds/disk.i fe:SOURCE vmid dsid
#   - fe is the front-end hostname
#   - SOURCE is the path of the disk image in the form DS_BASE_PATH/disk
#   - host is the target host to deploy the VM
#   - remote_system_ds is the path for the system datastore in the host
#   - vmid is the id of the VM
#   - dsid is the target datastore (0 is the system datastore)



SRC=$1
DST=$2

VMID=$3
DSID=$4

if [ -z "${ONE_LOCATION}" ]; then
    TMCOMMON=/var/lib/one/remotes/tm/tm_common.sh
else
    TMCOMMON=$ONE_LOCATION/var/remotes/tm/tm_common.sh
fi

. $TMCOMMON

SRC_PATH="$(arg_path $SRC)"
SRC_HOST="$(arg_host $SRC)"

# Snapshot directories that live next to the disk image
SRC_PATH_SNAP="${SRC_PATH}.snap"
DST_SNAP="${DST}.snap"

# Move the image back to the datastore

log "Moving $SRC to datastore as $DST"
exec_and_log "$SCP -r $SRC $DST" "Error copying $SRC to $DST"

# Copy the snapshot directory back as well, if it exists on the host
if $SSH $SRC_HOST ls ${SRC_PATH_SNAP} >/dev/null 2>&1; then
    exec_and_log "rsync -r --delete ${SRC_HOST}:${SRC_PATH_SNAP}/ ${DST_SNAP}" \
        "Error copying snapshots to ${DST_SNAP}"
fi

exit 0

I'll try it and report back whether it works, but that does seem to be the issue.

It works! Thx!

Good to hear that this fixed your issue!