OpenNebula Storage - Basic Question

I want to use iSCSI storage for my nodes. Can I mount the same LUN on both nodes?
Is it important to have an NFS share between the nodes? Is there any workaround for not using an NFS share?

I am not able to migrate VMs between the compute nodes without using an NFS share. :frowning:


Never used iSCSI myself, but it exposes a block device. This means you cannot mount the same LUN on more than one machine with a conventional (non-cluster) filesystem. iSCSI on its own is not shared storage. If you want shared storage (e.g. because you want to migrate machines between hosts) you will need something like NFS, Gluster, Ceph, …

Three things you could do with iSCSI and OpenNebula:

  1. Use iSCSI as a disk backend for an NFS server if your storage does not support NFS directly.
  2. If you do not need migration between hosts, use a dedicated iSCSI LUN for every KVM host and use it instead of a local disk. For this, just mount the LUN to the corresponding directory under /var/lib/one/datastores on the node.
  3. Expose a LUN directly to a VM. This document should explain everything you need ->
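For option 2, the attach-and-mount sequence could look roughly like the sketch below. The target IQN, portal address, device name, and datastore ID 100 are all placeholders for your own values:

```shell
# Discover and log in to the iSCSI target (IQN and portal are placeholders)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
iscsiadm -m node -T iqn.2024-01.com.example:lun1 -p 192.168.1.50:3260 --login

# Create a filesystem on the new block device
# (double-check the device name first, e.g. with lsblk!)
mkfs.ext4 /dev/sdb

# Mount it as a system datastore directory on this host
# (datastore ID 100 is just an example)
mkdir -p /var/lib/one/datastores/100
mount /dev/sdb /var/lib/one/datastores/100
chown oneadmin:oneadmin /var/lib/one/datastores/100
```

To survive reboots you would also add the mount to /etc/fstab with the `_netdev` option, so it is only mounted after the network (and the iSCSI session) is up.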

Hope that helps and I understood what you wanted to do …


Thank you for the quick reply! I definitely require migration between hosts.
I am going to explore GlusterFS and Ceph and see how they work. I have tried an NFS share but was having some problems with it.

On your 3rd point, will migration of VMs between hosts be possible with that?

No matter what you do: if you need migration between hosts, you need a shared system DS.

tbh: NFS would be the easiest thing by far. All other solutions will be much more effort and are much more complicated to use. What kind of issue did you have with NFS? Maybe I can help you with that.

Here you can see what datastore types you can use for what:

Some of the software our development team uses was not running properly on NFS storage.

I cannot find info on GlusterFS in the OpenNebula documentation, although I found Ceph. Is Ceph the recommended way to go? Or would GlusterFS work as well?

Never used Gluster and cannot tell you anything on that topic. Maybe someone else can help here?

Ceph is recommended for medium or big sized installations afaik. But be aware that setting up and maintaining a Ceph cluster needs a lot of resources (hardware), know-how and time. I would really recommend going for an NFS solution. We used NFS only for our production OpenNebula cluster until we reached a size of more than 2500 CPU cores and 14 TB of memory.

We used both enterprise storage and NFS servers built on top of Ubuntu on standard server hardware …

We’re trying out GlusterFS for now, and migration is working perfectly. Need to test a few more things. We have about 11,000 total threads and half that number of cores, and a very large amount of memory, in the petabyte range.

NFS would definitely not work: some internal software our development team made is not compatible and requires block-level access to storage, and NFS would also become a speed bottleneck as we expand our OpenNebula clusters.

Depending on circumstances, I’d throw LizardFS (or its former parent, MooseFS) into the ring. I need a scalable, shared FS which consists of the storage of my hosts, so for me it’s LizardFS. I considered Ceph, but it seemed too experimental to me at this time, especially considering the FS part. That said, I only played briefly with MooseFS about 10 years ago, but had no real use case back then. Using its fork LizardFS now for ~3 weeks in a production environment, I’m kind of happy. It doesn’t convert my GBit- and HDD-based infrastructure into a space-age unlimited-IOPS, unlimited-TB setup, but at least it tries to :wink: And I’m not alone in liking LizardFS for real use cases …

Never tried LizardFS, so I can’t say. But I would not call Ceph experimental. CephFS is, but you don’t need it in combination with OpenNebula, as OpenNebula uses RBD, which has been rock stable in my experience.
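For reference, an OpenNebula Ceph datastore is defined with a small template along these lines (a sketch; the pool name, monitor hosts, Ceph user, secret UUID, and bridge host list are placeholders you would replace with your own cluster’s values):

```shell
# Sketch of creating a Ceph image datastore (all values are placeholders)
cat > ceph_ds.conf <<'EOF'
NAME        = "ceph_images"
DS_MAD      = "ceph"
TM_MAD      = "ceph"
DISK_TYPE   = "RBD"
POOL_NAME   = "one"
CEPH_HOST   = "mon1.example.com mon2.example.com"
CEPH_USER   = "libvirt"
CEPH_SECRET = "00000000-0000-0000-0000-000000000000"
BRIDGE_LIST = "host1 host2"
EOF

onedatastore create ceph_ds.conf
```

The VM disks then live as RBD images in the pool, so no CephFS is involved at all.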

Anyways, a standalone NFS installation is the easiest, I would say. People have used it for ages. It is simple to set up and use, there are tons of tutorials out there, and you do not have to maintain a piece of distributed software …
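As a rough sketch of what that looks like (server name, export path, and network range are placeholders): the shared datastore is just an export on the NFS server plus the same mount on the front-end and on every KVM host:

```shell
# On the NFS server: export the datastore directory (placeholder network)
echo '/srv/one_datastores 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra

# On the front-end and every KVM host: mount it at the datastores path
mount -t nfs nfs-server.example.com:/srv/one_datastores /var/lib/one/datastores
```

Add the mount to /etc/fstab on each host so it comes back after a reboot, and make sure the oneadmin UID/GID match across all machines so file ownership lines up over NFS.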

I’m actually using iSCSI as a storage backend, accessed from 6 ONE hosts.
All hosts access the LUN at the same time.

For this you need a filesystem that understands this, and we decided to go with OCFS2.

Performance is pretty good, recovery from a dead node works without a problem, etc. We are very happy with iSCSI + OCFS2.

This will allow you to do live migration, etc.
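A rough outline of such a setup (device name, label, and node count are placeholders; OCFS2 also needs a cluster configuration in /etc/ocfs2/cluster.conf listing all participating nodes before the same LUN can be mounted on several hosts at once):

```shell
# On one host: format the shared LUN with OCFS2, sized for the number of
# hosts that will mount it concurrently (here: 6 node slots, placeholder label)
mkfs.ocfs2 -L one_datastores -N 6 /dev/sdb

# On every host: bring the o2cb cluster stack online and mount the same LUN
# (cluster.conf must already be in place and identical on all nodes)
/etc/init.d/o2cb online
mkdir -p /var/lib/one/datastores
mount -t ocfs2 /dev/sdb /var/lib/one/datastores
```

The cluster-aware locking in OCFS2 is what makes the concurrent mounts safe; mounting the same LUN from multiple hosts with a plain filesystem like ext4 would corrupt it.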