LizardFS, anyone?

Hello,

Can I add CEPH to the mix? @gandalf, why did you rule out CEPH? I am also interested in a comparison between CEPH and LizardFS.

I use CEPH myself, and so far it looks good. On another project, we used GlusterFS, and in my opinion,
CEPH is much better than GlusterFS.

My setup is as follows: I have ~25 OpenNebula/CEPH nodes (5+ years old, so I don’t expect
extreme performance), each with two HDDs. I have a RAID-1 volume for / and swap, and the rest of the disk space is used without RAID as two CEPH OSDs.

Using 5+ year old disks, which had been powered off for at least two years, allowed me to assess the robustness of CEPH :-). So far I have not experienced any data loss, even though about 10 out of the original ~50 disks have failed and been replaced.

I use only CEPH RBD, not the CEPH filesystem, although I plan to use S3/Swift via radosgw as object storage for a different project.
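
In case it helps anyone, talking to RBD from a script looks roughly like the sketch below. It uses the official librbd/librados Python bindings (the python-rados and python-rbd packages); the pool name "rbd", the image name, and the ceph.conf path are just placeholders, not anything specific to my cluster:

```python
import rados
import rbd

# Connect using the cluster config; the path and the pool name ("rbd")
# are assumptions -- use whatever your deployment has.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')
    try:
        # Create a 4 GiB image, then write a few bytes at offset 0;
        # an RBD image behaves like a plain block device.
        rbd.RBD().create(ioctx, 'test-image', 4 * 1024**3)
        with rbd.Image(ioctx, 'test-image') as image:
            image.write(b'hello rbd', 0)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```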

Recently, I added an SSD-only pool and an SSD-first pool (the primary replica is kept on SSD, the other replicas on spinning rust), and I plan to do performance testing of ONe VMs soon (probably after upgrading to 5.2).
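
For the SSD-first pool, the idea is a CRUSH rule that picks the first (primary) replica from an SSD subtree and the remaining replicas from the HDD subtree. A rough sketch in the pre-Luminous CRUSH map syntax follows; the "ssd" and "hdd" root names are assumptions and depend on how your CRUSH map is laid out:

```
# Hybrid "primary on SSD" rule: one replica from the ssd root,
# the rest (pool size minus one) from the hdd root.
rule ssd_first {
    ruleset 1
    type replicated
    min_size 2
    max_size 10
    step take ssd
    step chooseleaf firstn 1 type host
    step emit
    step take hdd
    step chooseleaf firstn -1 type host
    step emit
}
```

The pool is then pointed at the rule with something like `ceph osd pool set <pool> crush_ruleset 1`.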

There is also an older thread on the same topic, which mentions even more storage systems.

So, can anybody compare CEPH to LizardFS?

Thanks,

-Yenya