Help with iscsi_libvirt datastores

Hello all

I am a little bit puzzled about the iscsi_libvirt datastore available in OpenNebula 5.2.

Prerequisites
I have two nodes with OpenNebula 5.2 (one also running Sunstone), and both have an iSCSI target configured using libvirt.

From both nodes I can list the two volumes available on the iSCSI target.
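For reference, verifying that from each node with open-iscsi might look roughly like this (the portal IP is a placeholder; output depends entirely on your target):

```shell
# Discover the targets exposed by the portal (10.11.12.13 is a placeholder)
iscsiadm -m discovery -t sendtargets -p 10.11.12.13

# After logging in, list sessions and the attached volumes
iscsiadm -m session -P 3
```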

Now, looking at the documentation, I have tried to define a new iSCSI-libvirt datastore, without success.

  • if I try the interface wizard, I get the error “[DatastoreAllocate] No DS_MAD in template.”
  • if using advanced mode, I can define the datastore using
    ===
    NAME = iscsi
    DISK_TYPE = "ISCSI"
    DS_MAD = "iscsi_libvirt"
    TM_MAD = "iscsi_libvirt"
    ISCSI_HOST = "10.11.12.13"
    ISCSI_USER = "iscsi_user"
    ISCSI_USAGE = "the_iscsi_usage"
    ===
    but the reported size is zero, and I cannot store any image in the datastore, as I get the error “insufficient space”
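For completeness, the same template can also be loaded from the CLI; a minimal sketch, assuming the template above is saved as `iscsi-ds.txt` (the filename is illustrative):

```shell
# Create the datastore from the template file
onedatastore create iscsi-ds.txt

# Check the reported capacity; a 0 MB size usually means the
# driver could not query the target
onedatastore show iscsi
```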

Any help appreciated

Hey,

@hangar.hosting did you find a solution for your problem?

I have the same problem with OpenNebula 5.4.6. Maybe I am doing something fundamentally wrong. I try the same as you, but in OpenNebula the size of the datastore is only 1 MB. Via open-iscsi (Ubuntu 16.04 LTS) I can find the iSCSI target.

Is there a guide on how to do this right? The OpenNebula documentation is not making me any wiser.

Many thanks

Hello Jack

as far as I remember, I gave up on iSCSI, as OpenNebula does not use it in a shared way (as VMware does, by formatting it with VMFS).

Therefore, I would have had to use the iSCSI repository for only one node, which was of little use.

Finally, I ended up using NFS.

Anyway, I would be interested in a shared ISCSI storage solution

Regards

Stefaniu Criste

Hangar Hosting

Hello,
my 2 cents:

  • I tried to use iSCSI in a shared way like VMware, using Oracle Linux and OCFS2, but I hit deadlocks in the filesystem twice, resolved only with a global reboot (all nodes in the cluster). And it is not very easy to scale.
  • Right now, I am using iSCSI with FS_LVM storage, and it is working correctly (no clvmd; see the OpenNebula 5.6 docs). The problem with this setup (or with my settings) is that live migration does not work the way I would like. My current solution is 100% iSCSI, and that is bad.

If I were to build a new cluster, I would have an FS_LVM datastore for system images, but a global /var/lib/one/datastores shared using NFS or something comparable (I am thinking about a LINBIT DRBD dual-primary volume…)

That way, the image datastore is NFS (tolerating temporary failures), /var/lib/one/datastores is NFS+HA or DRBD, and system images are linked to FS_LVM on iSCSI.
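If it helps, the split described above could be expressed as two datastore templates; a rough sketch only, with all names being illustrative assumptions (the NFS export itself would be mounted under /var/lib/one/datastores outside of OpenNebula):

===
# Image datastore on NFS, using the shared transfer driver
NAME = "images_nfs"
DS_MAD = "fs"
TM_MAD = "shared"
TYPE = "IMAGE_DS"
===

===
# System datastore on FS_LVM over iSCSI
NAME = "system_lvm"
TM_MAD = "fs_lvm"
TYPE = "SYSTEM_DS"
===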

I have not built this cluster yet, but I would like to test it one day :slight_smile:

That was my 2 cents.
Best regards,
Nicolas

Hey,

thank you two for your answers.

What a pity!
@hangar.hosting
NFS is what I wanted to try next.

@nicolas_belan
Your approach makes sense to me: use the image datastore via a filesystem in “shared mode”, and use LVM for the system datastore.
But what I do not quite understand about LVM: do the nodes also have to be directly connected to the storage?
My thought was to connect the storage via iSCSI (open-iscsi) to the host running the front-end and use it for LVM.

So do the nodes reach the storage through the front-end? Because iSCSI can be connected to only one device at a time without risking data loss, as far as I understand.
Or how did you use “iSCSI with FS_LVM storage”? Please describe how you set it up.

Many thanks
Best regards,
Jack

Hi,

I am connecting the nodes to iSCSI directly, yes. Or, if you prefer, I declare local storage devices using multipathd.
That way you get multipathing (and path failover) at the node level. KVM is not aware of the device location or device settings.
Right now, I have 2 paths for the EqualLogic and 4 for the Compellent, with different OS settings (sysctls).
The devices (storage) are declared in OpenNebula as local device storage. But the storage volumes (iSCSI) are not declared on the host running Sunstone (no need); I am using BRIDGE_LIST for that:

DATASTORE TEMPLATE
===
BRIDGE_LIST="gimli04"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="fs_lvm"
TYPE="SYSTEM_DS"
[..]
===

Using FS_LVM, I am attaching iSCSI volumes to different nodes at the same time (you have to allow this at the storage level).
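In case it helps later readers, the node-side setup Nicolas describes might look roughly like this on each KVM node (the IQN, portal IP, and device name are placeholders; the volume group name follows the fs_lvm convention of vg-one-&lt;system_datastore_id&gt;, with 100 as an assumed datastore ID):

```shell
# On every KVM node: log in to the shared iSCSI volume
iscsiadm -m node -T iqn.2001-05.com.example:shared-vol -p 10.11.12.13 --login

# Once, from any one node: create the volume group that the fs_lvm
# driver expects for the system datastore (vg-one-<system_ds_id>)
pvcreate /dev/sdb
vgcreate vg-one-100 /dev/sdb
```

As noted above, the array must permit simultaneous access from multiple initiators for this to work.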

Regards,
Nicolas.


Hi All,

Kind of late to the party, but I am facing a similar issue with an OpenNebula 5.12 cluster.
The nodes are KVM on Ubuntu 20.04.
If I mount the iSCSI datastore directly on Ubuntu or KVM it works fine, but in OpenNebula it shows as 1 MB capacity and does not work.

Nicolas’s workaround looks good, but FS_LVM and adding iSCSI volumes to each node will add a lot of management overhead.
I also have old EqualLogic units as storage.

Any other recommendations? I appreciate your help on this.

regards
Sagar