Hi all,
I’ve been testing ONE with the LVM datastore back-end for the past month, and everything went great until I tested a host-failure scenario. After I restarted all the KVM host nodes, the previously deployed VMs came back fine, but any VM I instantiate afterwards cannot boot. The VNC console shows “Boot failed: not a bootable disk” or “no bootable device”.
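In case it’s useful, this is the kind of check I can run on the host where a failing VM was deployed (101 is my system datastore ID, detailed below; as far as I know fs_lvm names things vg-one-<system_ds_id> and lv-one-<vmid>-<diskid>, and <vmid> here is just a placeholder):

# On the KVM host running the failing VM:
lvs vg-one-101                                # list LVs in the system DS volume group
lvdisplay /dev/vg-one-101/lv-one-<vmid>-0     # confirm the disk LV exists and is active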
I’m still new to OpenNebula, so am I missing something? Your help is really appreciated.
Best regards,
Ryan
Versions of the related components and OS (frontend, hypervisors, VMs):
Current components:
OpenNebula version 5.8.1
frontend: CentOS 7
host-node1: CentOS 7 (KVM)
host-node2: CentOS 7 (KVM)
SAN node: using open-iscsi for the shared LUN
Image Datastore
DATASTORE 100 INFORMATION
ID : 100
NAME : image_ds-lvm
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : IMAGE
DS_MAD : fs
TM_MAD : fs_lvm
BASE PATH : /var/lib/one//datastores/100
DISK_TYPE : BLOCK
STATE : READY
DATASTORE CAPACITY
TOTAL: : 8G
FREE: : 3.5G
USED: : 4.5G
LIMIT: : -
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---
DATASTORE TEMPLATE
ALLOW_ORPHANS="NO"
CLONE_TARGET="SYSTEM"
DISK_TYPE="BLOCK"
DRIVER="raw"
DS_MAD="fs"
LN_TARGET="SYSTEM"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
TM_MAD="fs_lvm"
TYPE="IMAGE_DS"
IMAGES
42
43
44
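For context, my understanding of how fs_lvm uses this image datastore (please correct me if I’m wrong): registered images live as plain files on the frontend, and only get dumped into a logical volume on the host at deploy time. Roughly:

# On the frontend: images 42-44 are plain files in the image DS
ls -lh /var/lib/one/datastores/100/
# At deploy time the TM driver (as far as I understand) effectively does, on the host:
#   lvcreate -n lv-one-<vmid>-0 -L <disk size> vg-one-101
#   <copy/dd the image file's contents into /dev/vg-one-101/lv-one-<vmid>-0>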
System Datastore
DATASTORE 101 INFORMATION
ID : 101
NAME : lvm_system
USER : oneadmin
GROUP : oneadmin
CLUSTERS : 0
TYPE : SYSTEM
DS_MAD : -
TM_MAD : fs_lvm
BASE PATH : /var/lib/one//datastores/101
DISK_TYPE : FILE
STATE : READY
DATASTORE CAPACITY
TOTAL: : 20G
FREE: : 16.6G
USED: : 3.4G
LIMIT: : -
PERMISSIONS
OWNER : um-
GROUP : u--
OTHER : ---
DATASTORE TEMPLATE
ALLOW_ORPHANS="NO"
BRIDGE_LIST="node1 node2"
DISK_TYPE="FILE"
DS_MIGRATE="YES"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="fs_lvm"
TYPE="SYSTEM_DS"
IMAGES
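After the reboot I can also verify that both hosts still see the shared VG (vg-one-101, following the vg-one-<system_ds_id> convention) and that their LVM metadata is current:

# On each KVM host, after the reboot:
pvs
vgs vg-one-101
lvs vg-one-101
pvscan --cache    # refresh the lvmetad metadata cache, in case it is stale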
Steps to reproduce:
Power off/reboot all KVM host nodes, then instantiate a new VM.
Current results:
- All newly instantiated VMs cannot boot.
- The previously deployed guest VMs still run fine.
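A comparison that might narrow it down (VM IDs are placeholders, assuming the lv-one-<vmid>-0 naming above): a correctly provisioned LV should show the 55 aa MBR boot signature at the end of its first sector, while an empty LV will just read back zeros.

# First sector of a working (old) VM disk vs. a failing (new) one:
dd if=/dev/vg-one-101/lv-one-<old_vmid>-0 bs=512 count=1 2>/dev/null | hexdump -C | tail -n 2
dd if=/dev/vg-one-101/lv-one-<new_vmid>-0 bs=512 count=1 2>/dev/null | hexdump -C | tail -n 2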