OpenNebula Frontend Questions

Hello Everyone,

I am going to give OpenNebula a fair chance on this go-around. However, I have some basic questions.

  • Why is the NFS Share on the frontend?

  • How big should the NFS Share be?

  • I have 3 servers I would like to use as nodes. Can I install the frontend in a VM and then add the nodes later?

That's it for now; if I could get a detailed response, I would appreciate it very much.

Just an FYI: I have tried OpenStack, ownCloud, and oVirt. I have had 3 servers set up as KVM hosts and managed them with virt-manager on a workstation.

So far all were failures except the last one; however, I am looking for more functionality and stability. My goal, of course, is to be able to do virtualization and application provisioning in the cloud.

Thanks everyone,
Michael

Michael Cooper opennebula@discoursemail.com writes:


Hello,

  • Why is the NFS Share on the frontend?

The NFS share on the frontend is just an example setup. For example:

  • in one of our setups the NFS share is provided by a NAS

  • in another one, everything is on a SAN


I think the simplest is to start[1] by looking at the OpenNebula Cloud Reference Architecture[2].

Regards.

Footnotes:
[1] http://opennebula.systems/jumpstart/

[2] https://support.opennebula.pro/hc/en-us/articles/204210319

Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF

Hi Michael,

NFS is not required; it depends on your storage.

You can use shared storage (a SAN over FC or iSCSI) and also NFS, or you can put local disks in each server and set up a Ceph cluster.

I personally use a SAN over FC. On the compute nodes I have cLVM and GFS2: the Block LVM driver for VM images, and GFS2 for the system DS and VM qcow2 images. I plan to run the frontend in a VM, which I manage using Pacemaker.

I don't know what HW you have; the possibilities are endless.

Hello Kristian,

I have 2 Dell SC1435 servers, each with 16 GB of RAM, dual Quad-Core AMD Opteron™ 2354 processors, and 1 TB drives. I have 1 HP DL385 G5 with dual Quad-Core AMD Opteron™ 2356 processors. I also have 1 Intel® Core™ i7-875K @ 2.93 GHz with 3 × 2 TB drives and 1 × 1 TB drive. Each of the servers has 2 NICs. I am using 192.168.xxx.x for the 1st NIC, with access to the internet, VM-to-VM communication, and my network. The second NIC is set to 192.168.xx.x; this is a private network with no gateway, so I can do storage and management on it.

I have the Core i7 set up as a standalone VM host with Kimchi installed on it, and it is also set up as an NFS/iSCSI SAN.

I need help because I am confused about all of this. I am able to create instances, and I have created a network and an SSH exception in the default group rules.

Also, I was wondering if it has something like OpenStack for deploying applications.

Thanks,
Michael

Michael A Cooper
Linux & Zerto Certified Professional

Please disregard the line "I need help because I am confused about all of this…" above; it pertains to something else. Sorry about that.

Michael A Cooper
Linux & Zerto Certified Professional

Hi Michael, you can mount the NFS share on the compute nodes at /var/lib/one/datastores, and on the frontend node at /var/lib/one/datastores too. Set up passwordless SSH access from the frontend to the compute nodes under the oneadmin user, and that's all.
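
For example, assuming the NFS server is reachable as nfs-server and exports /srv/one-datastores (both names are placeholders for your setup), the /etc/fstab entry on the frontend and on each compute node could look like this:

    # /etc/fstab on the frontend and on every compute node
    # nfs-server and /srv/one-datastores are placeholders
    nfs-server:/srv/one-datastores  /var/lib/one/datastores  nfs  defaults  0  0

And a sketch of the passwordless SSH setup, run on the frontend (node1..node3 are placeholder hostnames; your packages may already have generated a key for oneadmin):

    su - oneadmin
    ssh-keygen -t rsa              # accept the defaults; skip if a key already exists
    ssh-copy-id oneadmin@node1     # repeat for node2 and node3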

I think that in your case it is best to use NFS.

I use a more complicated setup because I have shared Fibre Channel storage, with 2x4 Gbps links to each node and a 4x4 Gbps link to the storage. In your case you want to use Ethernet, so the easiest solution is NFS: you cannot get much more performance from iSCSI, so there is no need to run a complicated setup with a clustered filesystem like GFS, clustered LVM, etc.

You can put those 1 TB drives in that NFS server.

If you want to use the 1 TB drives in the compute nodes, you have to use the SSH transfer driver: http://docs.opennebula.org/5.0/deployment/open_cloud_storage_setup/fs_ds.html#ssh-transfer-mode

But you will not have the ability to live-migrate VMs, and deploying will take more time.
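
For reference, a minimal sketch of an image datastore that uses the SSH transfer mode (the name is a placeholder; see the fs_ds page linked above for the details):

    # ssh-images.txt -- images live on the frontend and are copied
    # to the nodes over SSH when a VM is deployed
    NAME   = "local-images"
    DS_MAD = fs
    TM_MAD = ssh

Then register it with: onedatastore create ssh-images.txt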

For deploying applications you can use the XML-RPC API and the OneFlow service:

http://docs.opennebula.org/5.0/advanced_components/application_flow_and_auto-scaling/overview.html
http://docs.opennebula.org/5.0/integration/system_interfaces/introapis.html
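
As a quick illustration of the XML-RPC side (a minimal sketch in Python; the endpoint and the user:password session string are placeholders), listing your VMs looks roughly like this:

    import xmlrpc.client

    # Placeholders: point this at your frontend and real credentials
    server = xmlrpc.client.ServerProxy("http://frontend:2633/RPC2")
    session = "oneadmin:password"

    # one.vmpool.info(session, filter, start_id, end_id, state)
    # filter -2 = all resources, -1/-1 = whole ID range, state -1 = any state
    resp = server.one.vmpool.info(session, -2, -1, -1, -1)
    ok, body = resp[0], resp[1]
    print(body if ok else "error: " + body)

OneFlow itself also ships CLI tools (oneflow, oneflow-template) on top of its API.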

That is what I was thinking as well; I figured that with only 2 × 1 Gb NICs, that would be enough to handle the traffic as well as the VM communication.

Thanks for your input

Michael A Cooper
Linux & Zerto Certified Professional

Hello Kristian,

I have the frontend up and running now, and so far I am excited about it. I will prepare the hosts tomorrow; this seems much easier than OpenStack. I also noticed there is a marketplace with some apps already there, which is awesome. So now I just have to learn some of the things and how to configure them.

Thanks again

Michael A Cooper
Linux & Zerto Certified Professional