Trying to understand the datastore concept. How do I mount and use a datastore from a host (node)?

I’ve got a physical host, mdskvm-p01, that I’ve added to the OpenNebula Sunstone GUI. I NFS-mount the default /var/lib/one folder from the OpenNebula Sunstone frontend over to the physical mdskvm-p01. That works for some small VMs as a POC.
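For reference, the export and mount look roughly like this (the frontend hostname one-frontend and the NFS options are just my lab settings, nothing OpenNebula-specific):

On the Sunstone frontend, /etc/exports:

    /var/lib/one  *(rw,sync,no_subtree_check)

On mdskvm-p01, /etc/fstab:

    one-frontend:/var/lib/one  /var/lib/one  nfs  defaults,soft,intr  0  0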

However, now I want to add a 2TB LUN to the mdskvm-p01 host and have it recognized as an OpenNebula datastore. Reading the documentation, this seems less intuitive than I thought. The docs seem to indicate that I have to get the 2TB LUN mounted on the OpenNebula Sunstone side as well as on the node, but what I want to achieve is to have the 2TB mounted only on the node, so that when OpenNebula deploys a VM there, it goes onto this 2TB LUN. How would I go about doing this?

Cheers,
TK

I’ve read the documentation here:

http://docs.opennebula.org/4.10/administration/storage/fs_ds.html

But neither the first solution (shared) nor the second (SSH) seems scalable enough. If I have 100 hosts and need to scale this out with the whole stack managed by OpenNebula, I can see significant contention happening on the Fibre Channel or the NFS (NAS) side, which makes it hard to see this as a viable storage solution for my setup. I do see the value of assigning a LUN to each node and having OpenNebula deploy VMs to each node, with each VM running on that node independently. I’m just not able to see how this can be achieved through OpenNebula. Any hints? My OpenNebula frontend is running within a virtual machine.

Cheers,
TK

Depending on your shared storage server, you are right: a regular NFS export will simply not do. However, if you have a high-end NAS, you would be able to scale up considerably.

That said, SSH should scale well. You can mount your 2TB LUN on your node, and OpenNebula will SCP the image over to the node and store it on the 2TB mountpoint. That’s fine if you have a single node; if you have more than one, you can use the fs_lvm [1] driver, for example.

[1] http://docs.opennebula.org/4.14/administration/storage/fs_lvm_ds.html
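If you go the SSH route, a minimal sketch looks like this (the datastore name is arbitrary, and check the actual datastore ID that oned assigns before mounting):

    $ cat system_ssh.conf
    NAME   = local_system
    TM_MAD = ssh
    TYPE   = SYSTEM_DS
    $ onedatastore create system_ssh.conf

Then mount the 2TB LUN at /var/lib/one/datastores/<datastore_id> on the node, and VMs deployed to that system datastore will run from the local LUN.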

I decided to use GlusterFS. It seems to do what I need and can aggregate or replicate the SAN bricks off of each node. It appears that writes on each node will go directly to the XFS brick on which the particular VM resides. My underlying concern in asking this question was bandwidth: unless I have a 10Gb/s NIC, I will take a hit, and I am trying to ensure writes happen as fast as possible via SAN / NAS / DAS.
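For the record, what I’m trying is roughly the following (the second node mdskvm-p02, the brick paths, and the datastore ID 100 are just from my lab):

    # Create a replicated volume from the XFS bricks on each node
    $ gluster volume create one-ds replica 2 \
        mdskvm-p01:/bricks/one-ds mdskvm-p02:/bricks/one-ds
    $ gluster volume start one-ds

    # Mount it on the datastore directory on each node
    $ mount -t glusterfs mdskvm-p01:/one-ds /var/lib/one/datastores/100

From OpenNebula’s point of view it then behaves like a shared datastore, while Gluster handles replication between the node-local LUNs.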

Thank you very much for the feedback. I’ll try out the link above as well, since this is a lab and we are test-driving this.

Cheers,
TK