OpenNebula + ScaleIO

Hi! Is anybody using ScaleIO with OpenNebula? I tried Ceph, but the speed is very slow.

Hi, we don’t have experience with OpenNebula + ScaleIO, but we have nice implementations of OpenNebula + IBM Spectrum Scale. If you are interested in more info, let me know.
Bye, Pavel

Hi Anton,

I am replying here, quoting your message from the other thread.

As it is not clear what configuration you have (data replication; SSD, HDD, or both; CPU model; RAM usage), I am posting values from a POC setup that is close to yours. The tests are half a year old and were run on a previous (not the latest) version of the StorPool software.

  • 4x Linux nodes with kernel 3.14.29 (don’t ask; customer requirements)
  • 4x HDD (Hitachi HUA722010CLA330) on each node (16 HDDs total)
  • 2x 10Gb network interfaces on each node (data balanced across both interfaces; a mix of Mellanox and Intel cards)
  • 2x 10G switches (for redundancy)
  • On each node, the StorPool storage software uses ~20GB of the ECC RAM (internal structures and write-back cache) and 3 cores of an Intel(R) Xeon(R) E5-1620 v2 @ 3.70GHz. The RAM and the CPU cores (and their threads) are isolated in a separate cgroup (see the sketch after this list).
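
For reference, this kind of isolation can be done with the plain cgroup v1 controllers available on a 3.14-era kernel. The group name, core list, and PID variable below are illustrative assumptions, not StorPool’s actual tooling:

    # Reserve 3 CPU cores and ~20GB of RAM for the storage processes
    # (the "storage" group name and core numbers are made up).
    mkdir /sys/fs/cgroup/cpuset/storage
    echo 1-3 > /sys/fs/cgroup/cpuset/storage/cpuset.cpus
    echo 0   > /sys/fs/cgroup/cpuset/storage/cpuset.mems
    mkdir /sys/fs/cgroup/memory/storage
    echo 20G > /sys/fs/cgroup/memory/storage/memory.limit_in_bytes
    # Move the storage process (PID is a placeholder) into both groups.
    echo $STORAGE_PID > /sys/fs/cgroup/cpuset/storage/tasks
    echo $STORAGE_PID > /sys/fs/cgroup/memory/storage/tasks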

Here are the results from a single 400GB StorPool volume (block device) with replication 3 (each data block is saved on three separate servers). The values in brackets are from the “hybrid” pool, where one of the replicas is on SSD (a mix of data-center-grade Micron and Intel drives). The tests were done using FIO, along the lines of the sketch below.
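
The FIO invocations were roughly as follows; the device path is a hypothetical placeholder, and the exact job parameters may have differed slightly:

    # Hypothetical volume path; substitute the actual block device.
    DEV=/dev/storpool/testvol

    # 4k random read, queue depth 64 (latency and IOPS figures)
    fio --name=randread --filename=$DEV --direct=1 --ioengine=libaio \
        --rw=randread --bs=4k --iodepth=64 --runtime=60 --time_based

    # 4k random write, queue depth 64
    fio --name=randwrite --filename=$DEV --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=64 --runtime=60 --time_based

    # 1M sequential read (swap --rw=read for write), queue depth 64
    fio --name=seqread --filename=$DEV --direct=1 --ioengine=libaio \
        --rw=read --bs=1M --iodepth=64 --runtime=60 --time_based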

LATENCY (ms):

  • Random Read (block size 4k, qdepth 64): 26.5 ms (0.62)
  • Random Write (block size 4k, qdepth 64): 1.45 ms (2.5)

IOPS:

  • Random Read (block size 4k, qdepth 64): 2426 IOPS (104026)
  • Random Write (block size 4k, qdepth 64): 44180 IOPS (26757)

BW (MB/s):

  • Sequential Read (block size 1M, qdepth 64): 1605 MB/s (2084)
  • Sequential Write (block size 1M, qdepth 64): 428 MB/s (427)

Here are the results from the same tests, but run on several block devices in parallel, each block device being a separate pre-allocated volume (values are averaged). A parallel run can be expressed as a single FIO invocation with one job section per volume, as sketched below.
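
The volume paths and job names here are illustrative; options in the “global” section apply to every job:

    # One job per pre-allocated volume, all running concurrently;
    # fio reports per-job results, which can then be averaged.
    fio --name=global --direct=1 --ioengine=libaio --rw=randread \
        --bs=4k --iodepth=64 --runtime=60 --time_based \
        --name=vol1 --filename=/dev/storpool/vol1 \
        --name=vol2 --filename=/dev/storpool/vol2 \
        --name=vol3 --filename=/dev/storpool/vol3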

LATENCY (ms):

  • Random Read (block size 4k, qdepth 64): 90.4 ms (1.55)
  • Random Write (block size 4k, qdepth 64): 5.5 ms (7.3)

IOPS:

  • Random Read (block size 4k, qdepth 64): 2824 IOPS (165377)
  • Random Write (block size 4k, qdepth 64): 46379 IOPS (36790)

BW (MB/s):

  • Sequential Read (block size 1M, qdepth 64): 2464 MB/s (2072)
  • Sequential Write (block size 1M, qdepth 64): 527 MB/s (422)

The results from the same tests run on your hardware could be better or slightly worse, depending on the exact setup.

The results inside a VM depend highly on the configuration of the virtualization software.

Kind Regards,
Anton Todorov

Hi! Thanks for your answer. StorPool is a good solution, but my budget does not allow for such a sum.