Hello
I have installed QEMU with Gluster support on the worker node.
When I try to create a VM, the logs show the following:
Wed Jun 22 15:20:33 2016 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/108/deployment.0
Wed Jun 22 15:20:34 2016 [Z0][VMM][I]: ExitCode: 0
Wed Jun 22 15:20:34 2016 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Wed Jun 22 15:20:36 2016 [Z0][VMM][I]: Command execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy '/var/lib/one//datastores/0/108/deployment.0' 'w1node.hoster.kg' 108 w1node.hoster.kg
Wed Jun 22 15:20:36 2016 [Z0][VMM][I]: error: Failed to create domain from /var/lib/one//datastores/0/108/deployment.0
Wed Jun 22 15:20:36 2016 [Z0][VMM][I]: error: internal error: process exited while connecting to monitor: [2016-06-22 09:20:41.431266] E [MSGID: 104007] [glfs-mgmt.c:632:glfs_mgmt_getspec_cbk] 0-glfs-mgmt: failed to fetch volume file (key:gv0) [Invalid argument]
Wed Jun 22 15:20:36 2016 [Z0][VMM][I]: [2016-06-22 09:20:41.431438] E [MSGID: 104024] [glfs-mgmt.c:734:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with remote-host: mcloud (Permission denied) [Permission denied]
Wed Jun 22 15:20:36 2016 [Z0][VMM][I]: qemu-system-x86_64: -drive file=gluster://mcloud:24007/gv0/b775e2239784da16b8ef9400b539f84a,if=none,id=drive-scsi0-0-0,format=qcow2,cache=none: could not open disk image gluster://mcloud:24007/gv0/b775e2239784da16b8ef9400b539f84a: Gluster connection failed for server=mcloud port=24007 volume=gv0 image=b775e2239784da16b8ef9400b539f84a transport=tcp: Permission denied
Wed Jun 22 15:20:36 2016 [Z0][VMM][I]:
Wed Jun 22 15:20:36 2016 [Z0][VMM][E]: Could not create domain from /var/lib/one//datastores/0/108/deployment.0
Wed Jun 22 15:20:36 2016 [Z0][VMM][I]: ExitCode: 255
Wed Jun 22 15:20:36 2016 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Wed Jun 22 15:20:36 2016 [Z0][VMM][E]: Error deploying virtual machine: Could not create domain from /var/lib/one//datastores/0/108/deployment.0
Wed Jun 22 15:20:36 2016 [Z0][VM][I]: New LCM state is BOOT_FAILURE
I have already done the following:
gluster volume set gv0 server.allow-insecure on
gluster volume set gv0 storage.owner-uid 9869
gluster volume set gv0 storage.owner-gid 9869
chown oneadmin:oneadmin /gluster
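As far as I understand, server.allow-insecure usually has to be paired with the matching option on the glusterd side, plus a glusterd restart; a minimal sketch of what I mean, assuming the default config path and a systemd host:
# /etc/glusterfs/glusterd.vol -- add this line inside the "volume management" block
option rpc-auth-allow-insecure on
# restart glusterd on the Gluster server so it takes effect
systemctl restart glusterd
# (changing server.allow-insecure may also need a volume stop/start before it applies)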
df -h
w1node:/gv0 280G 7.7G 272G 3% /gluster
ls -al /var/lib/one/datastores/
104 -> /gluster/storage/104
105 -> /gluster/storage/105
106 -> /gluster/storage/106
cat ds.conf
NAME = "SATA"
DS_MAD = fs
TM_MAD = shared
DISK_TYPE = GLUSTER
GLUSTER_HOST = mcloud:24007
GLUSTER_VOLUME = gv0
CLONE_TARGET="SYSTEM"
LN_TARGET="NONE"
But I'm still getting Permission denied.
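For reference, a direct libgfapi test as the oneadmin user, bypassing libvirt, would look roughly like this (a sketch; it assumes qemu-img on this node was also built with Gluster support, and the image name is taken from the log above):
sudo -u oneadmin qemu-img info gluster://mcloud:24007/gv0/b775e2239784da16b8ef9400b539f84a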
Does anyone have an idea how to fix this?
Thanks.
Daryl_Lee
(Daryl Lee)
June 22, 2016, 4:38pm
"Failed to create domain from /var/lib/one//datastores/0/108/deployment.0"
Check the permissions on /var/lib/one/datastores and everything below it to make sure oneadmin:oneadmin is set on it.
"error: internal error: process exited while connecting to monitor:"
This makes me think your permissions are messed up on the node.
"failed to connect with remote-host: mcloud (Permission denied) [Permission denied]"
Since it's two errors down this may be a red herring, but check that your Gluster node isn't firewalled off and that the volume isn't restricted.
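A couple of quick checks, as a rough sketch (gv0 and mcloud taken from your log, adjust as needed):
# on the Gluster server: confirm the options actually stuck and there is no auth.allow restriction
gluster volume info gv0 | grep -E 'allow-insecure|auth.allow|owner-uid|owner-gid'
# from the KVM node: confirm the glusterd management port is reachable
nc -zv mcloud 24007
# note: the brick ports (49152 and up by default) also need to be reachable from the node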
Hi, thanks for the reply.
Here is the output of ls -al:
root@w1node:/var/lib/one# ls -al
total 98208
drwxr-xr-x 12 oneadmin oneadmin 4096 Jun 23 09:32 .
drwxr-xr-x 48 root root 4096 Jun 20 15:44 ..
-rw------- 1 oneadmin oneadmin 7536 Jun 22 15:32 .bash_history
drwx------ 3 oneadmin oneadmin 4096 Jun 21 11:37 .cache
-rw-rw-r-- 1 oneadmin oneadmin 3339 Jun 21 11:42 config
drwx------ 3 oneadmin oneadmin 4096 Jun 21 11:37 .config
drwxr-xr-x 10 oneadmin oneadmin 4096 Jun 22 15:31 datastores
drwx------ 3 oneadmin oneadmin 4096 Jun 21 11:37 .local
drwx------ 2 root root 4096 Jun 3 11:01 lost+found
-rw-r--r-- 1 oneadmin oneadmin 50764800 Jun 21 11:42 old
drwx------ 2 oneadmin oneadmin 4096 Jun 2 11:06 .one
-rw-r--r-- 1 oneadmin oneadmin 49718272 Jun 23 09:32 one.db
drwxr-xr-x 9 oneadmin oneadmin 4096 Jun 2 13:10 remotes
-rw------- 1 oneadmin oneadmin 397 Jun 21 17:36 .sqlite_history
drwxr-xr-x 2 oneadmin root 4096 Jun 2 11:44 .ssh
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 22 17:43 sunstone_vnc_tokens
-rw------- 1 oneadmin oneadmin 5218 Jun 21 17:32 .viminfo
drwxr-xr-x 114 oneadmin oneadmin 4096 Jun 22 17:42 vms
root@w1node:/var/lib/one#
Inside the datastores directory:
root@w1node:/var/lib/one/datastores# ls -al
total 40
drwxr-xr-x 10 oneadmin oneadmin 4096 Jun 22 15:31 .
drwxr-xr-x 12 oneadmin oneadmin 4096 Jun 23 09:32 ..
lrwxrwxrwx 1 root root 18 Jun 16 17:36 0 -> /gluster/storage/0
drwxr-xr-x 4 oneadmin oneadmin 4096 Jun 15 16:27 0_0
lrwxrwxrwx 1 root root 18 Jun 16 17:36 1 -> /gluster/storage/1
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 21 15:50 100
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 22 10:44 101
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 22 15:15 102
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 22 10:44 103
lrwxrwxrwx 1 root root 9 Jun 22 15:31 104 -> /gluster/
lrwxrwxrwx 1 root root 9 Jun 22 15:31 105 -> /gluster/
lrwxrwxrwx 1 root root 9 Jun 22 15:31 106 -> /gluster/
drwxr-xr-x 2 oneadmin oneadmin 4096 Jun 16 14:16 1_1
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 2 11:06 2
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 22 17:42 .isofiles
root@w1node:/var/lib/one/datastores#
ls -al /
drwxr-xr-x 6 oneadmin oneadmin 4096 Jun 22 15:31 gluster
Inside /gluster, the Gluster-backed datastore directory:
root@w1node:/gluster/storage# ls -al
total 40
drwxr-xr-x 10 oneadmin oneadmin 4096 Jun 22 15:19 .
drwxr-xr-x 6 oneadmin oneadmin 4096 Jun 22 15:31 ..
drwxr-xr-x 5 oneadmin oneadmin 4096 Jun 22 17:42 0
drwxr-xr-x 6 oneadmin oneadmin 4096 Jun 17 14:10 0_0
drwxr-xr-x 2 oneadmin oneadmin 4096 Jun 22 17:42 1
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 22 15:16 104
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 22 15:18 105
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 22 15:20 106
drwxr-xr-x 2 oneadmin oneadmin 4096 Jun 17 09:59 1_1
drwxrwxr-x 2 oneadmin oneadmin 4096 Jun 2 11:06 2
root@w1node:/gluster/storage#
Permissions look good to me.
Daryl_Lee
(Daryl Lee)
June 23, 2016, 6:26pm
Okay, permissions look pretty good. Another time I've seen this error is when either the node or the frontend didn't have the GlusterFS volume mounted properly, due to troubleshooting or changing things around. Check your mounts and make sure both the OpenNebula node and the OpenNebula frontend are looking at the same thing.
Also check inside /gluster/storage/0 and see if there is a 108 folder in there on both the node and the frontend.
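Something like this on both the node and the frontend should show it, as a rough sketch (assuming the volume is fuse-mounted at /gluster, as in your df output):
# is the volume actually mounted, and from which server?
grep gluster /proc/mounts
df -h /gluster
# does the VM directory exist and is it owned by oneadmin?
ls -ld /var/lib/one/datastores/0/108 /gluster/storage/0/108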
Snowman
(Martin)
January 17, 2018, 6:07pm
Where was the problem? I have the exact same problem on my new cluster and I don't know how to fix it either.