OpenNebula + FC datastore

Hi! I have a NetApp E2660 with FCoE. I have 4 OpenNebula nodes. All nodes see the block storage device (/dev/mapper/mpatha2), but only the first node works; the others can't see changes. How can I use the datastore on all nodes?

Thanks!
P.S. Sorry for my English.

Антон Зубков forum@opennebula.org writes:

Hi! I have a NetApp E2660 with FCoE. I have 4 OpenNebula nodes. All
nodes see the block storage device (/dev/mapper/mpatha2), but only the
first node works; the others can't see changes. How can I use the
datastore on all nodes?

Hello,

For file-based VM disks like qcow2, you need to use a shared filesystem
like OCFS2 or GFS2.

Those require the corosync/pacemaker/DLM/cLVM stack.

In your setup, using a SAN, we have:

  • physical nodes connected to the SAN using multipath; each has the
    corosync/pacemaker/DLM/cLVM/GFS2 stack

  • in pacemaker, we define an NFS export for the datastore to be used by
    the frontend

  • the frontend is a VM managed by pacemaker; it can't access the
    multipath devices directly, so it uses the NFS export from a node
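The resources described above could be sketched with `pcs` roughly as follows. This is only an illustration: the device path, mount point, client network and IP are example values I made up, not taken from the original post.

```shell
# Clone the GFS2 mount on every node (device/directory are examples).
pcs resource create datastore_fs ocf:heartbeat:Filesystem \
    device=/dev/mapper/mpatha2 directory=/var/lib/one/datastores \
    fstype=gfs2 op monitor interval=30s clone interleave=true

# NFS export of the datastore directory for the frontend VM.
pcs resource create datastore_export ocf:heartbeat:exportfs \
    clientspec=192.168.0.0/24 options=rw,no_root_squash \
    directory=/var/lib/one/datastores fsid=1

# Virtual IP the frontend mounts the export from.
pcs resource create datastore_vip ocf:heartbeat:IPaddr2 \
    ip=192.168.0.10 cidr_netmask=24
```

The frontend then simply mounts `192.168.0.10:/var/lib/one/datastores`, so it never touches the multipath devices itself.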

This configuration is quite touchy: sometimes a node drops out of the
corosync cluster for no obvious reason, and things start to get interesting.

Today, our GFS2 suffered corruption after a node crashed; we needed to
stop all the VMs and nodes and run an fsck on the 4TB storage, which
took 3 hours.
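The recovery procedure above amounts to something like this (a sketch; the device path is an example, and GFS2 must be unmounted on every node before the check can run):

```shell
# Shut down all VMs using the datastore, then on EVERY node:
umount /var/lib/one/datastores

# Run the repair from one node only; on a 4TB volume this can take hours.
fsck.gfs2 -y /dev/mapper/mpatha2
```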

Regards.


Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF

Hi, I have a similar setup with an FC LUN. You can set up:

  • Clustered Volume Group using DLM+cLVM
      • Pros: looks like a more mature technology than a shared VG
      • Cons: no thin volumes, no snapshotting
  • Shared Volume Group using lvmlockd+sanlock (lvmlockd also supports DLM if you need the corosync stack)
      • Pros: thin volumes, snapshotting
      • Cons: newer technology, needs testing
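The second option could look roughly like this (a sketch only; the VG name and device path are examples, and `use_lvmlockd = 1` must be set in `/etc/lvm/lvm.conf` on every node first):

```shell
# Start the lock daemons on every node.
systemctl enable --now sanlock lvmlockd

# Create the shared VG once, from any node.
vgcreate --shared vg_one /dev/mapper/mpatha2

# On every node, start the VG's lockspace before using it.
vgchange --lock-start vg_one

# Unlike cLVM, thin pools (and thus snapshots) are usable here,
# with exclusive activation of the thin volumes.
lvcreate --type thin-pool -L 100G -n pool0 vg_one
```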

Then you need to add an LVM block datastore.
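Registering the LVM datastore could look something like this (4.x-style attributes; the names are examples, so check the LVM datastore guide for your version):

```shell
# Hypothetical datastore template for the FS LVM drivers.
cat > lvm_ds.conf <<'EOF'
NAME   = "lvm_images"
DS_MAD = fs
TM_MAD = fs_lvm
EOF

onedatastore create lvm_ds.conf
```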

NetApp provides an API for managing LUNs and also supports NFS exports. So the easy ways are:

  • export an NFS share and use it as a datastore; no need for a clustered filesystem or LVM
  • write your own datastore MAD to manage NetApp LUNs
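For the second option, the rough shape of a custom datastore MAD is a directory of per-action scripts; a hypothetical "netapp" driver (name and layout here are illustrative, modelled on the stock drivers) would live alongside them:

```shell
# Datastore drivers live under the remotes directory, one script per action.
mkdir -p /var/lib/one/remotes/datastore/netapp

# The stock "fs" driver shows which actions must be implemented
# (cp, mkfs, rm, clone, stat, ...).
ls /var/lib/one/remotes/datastore/fs
```

The new driver name then has to be added to the `DATASTORE_MAD` arguments in `oned.conf` so oned loads it.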

Hi all!
Thank you very much for your answers! I will look through the manuals to set this up :slightly_smiling:
Thanks!

NetApp E2600 can’t export NFS :frowning:

Hi there,

we have an IBM PureFlex chassis with compute nodes and an IBM Storwize V7000 storage backend. The storage is wired via FCoE to the PureFlex chassis and is thus exposed to the nodes as multipath devices, just as in your case.

At first I tried GlusterFS on a single multipath device mounted on all nodes. But that didn't work out too well (3-4 out of 10 deployments failing). So I looked into CEPH and found that the latest release, version 9.0.2 Infernalis, had just added support for multipath devices (YEAH! :wink: ).

So I went with exporting a separate 500GB block device to each compute node and creating a CEPH cluster across those devices (I used the quick start guide, plus bits and pieces from the documentation: http://docs.ceph.com/docs/master/start/quick-ceph-deploy/).

I then combined this setup with system storage (another 2TB block device exported to each node) and used the SSH transfer driver.

So far this setup works very well and fast.
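Wiring a CEPH cluster like that into OpenNebula could be sketched as follows (pool name, user and host list are examples, not from this setup; see the Ceph datastore guide for your version):

```shell
# Create an RBD pool for images and a client key for it.
ceph osd pool create one 128
ceph auth get-or-create client.libvirt \
    mon 'allow r' osd 'allow rwx pool=one'

# Hypothetical datastore template for the Ceph drivers.
cat > ceph_ds.conf <<'EOF'
NAME        = "ceph_images"
DS_MAD      = ceph
TM_MAD      = ceph
POOL_NAME   = one
CEPH_USER   = libvirt
BRIDGE_LIST = "node1 node2 node3 node4"
EOF

onedatastore create ceph_ds.conf
```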

Hint:
I had the benefit of having a second OpenNebula cloud at my site, which I used to host a ceph-deploy node. That came in very handy, since it lets you set up all the CEPH nodes from that one machine.


Very strange to see that OpenNebula can't work well with FC! All the options have drawbacks (no snapshots, technology preview, etc.), and the official docs are too poor to start using OpenNebula in production!