Sorry for my ignorance, but I can't seem to understand how I can store images directly on the hosts, to avoid having to transfer them from the Sunstone server to the host every time I launch a VM.
I'm using the default datastores that come with an OpenNebula installation. How do I create a new datastore inside a host, so that images are transferred to it automatically when they are first launched and then left there for future VMs?
It seems that the default datastores are configured with TM_MAD="ssh". This implies that each time you deploy a VM, its disks are actually copied over SSH to the target host. See the Filesystem Datastore documentation. AFAIK (but I may be wrong!), the System Datastore is the central repository, and it must be kept on the Front-end (Sunstone) to allow deployment to different hosts. If you were using a shared transfer method (NFS, GlusterFS, etc.), then the DS could be virtually on all hosts and front-ends.
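For reference, switching to a shared transfer driver is a matter of the datastore's TM_MAD attribute. A system datastore template using it might look something like this (the name here is just an example; see the Filesystem Datastore docs for the attributes valid in your version):

```
NAME   = nfs_system
TYPE   = SYSTEM_DS
TM_MAD = shared
```

You would create it with `onedatastore create <template-file>` and can check the transfer driver of the existing datastores with `onedatastore list` (the TM column) or `onedatastore show <id>`.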
When a VM is started, OpenNebula has to create its disks from the image, either over SSH or with cp if shared storage is used.
On your NFS system, what is the output of:
If you are using NFS as shared storage, is the network traffic generated by cp over NFS really that much of a concern?
I think you may be able to achieve a setup limited to local storage using federated all-in-one front-end/node servers: http://docs.opennebula.org/5.4/advanced_components/data_center_federation/overview.html
Each OpenNebula server would use its local disk datastores (no NFS), but the disadvantage is that you would have to explicitly select the federation zone in which to start VMs, and you would lose the live migration feature; I'm not sure about offline migration. I have not tried such a setup, so a developer would have to confirm that this is feasible.
Even though the traffic generated is not primarily the issue, it's still wasted bandwidth. I think if OpenNebula added the following functionality, it would really improve the user experience.
When an image is launched, OpenNebula checks whether the image already exists on the host. If it doesn't exist, or its MD5 hash doesn't match, OpenNebula transfers the image to the host, then sends the instructions (template, etc.) for the disk to be created directly on the host, followed by launching the VM.
If the image already exists, it simply sends the instructions and the disk is created locally.
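The check I have in mind could be sketched roughly like this. This is purely illustrative pseudologic, not anything OpenNebula does today; the function name and paths are made up, and the final cp stands in for whatever remote transfer (scp/rsync) would actually be used:

```shell
# cache_image SRC CACHE_DIR
# Transfer the image into the host-side cache only when it is missing
# or its MD5 hash differs from the source copy.
cache_image() {
    src="$1"
    dir="$2"
    cached="$dir/$(basename "$src")"
    mkdir -p "$dir"

    # Hash of the authoritative copy on the front-end.
    src_md5=$(md5sum "$src" | cut -d' ' -f1)

    if [ -f "$cached" ] && \
       [ "$(md5sum "$cached" | cut -d' ' -f1)" = "$src_md5" ]; then
        echo "cache hit"          # reuse the local copy, no transfer
    else
        echo "cache miss"         # first launch or image was updated
        cp "$src" "$cached"       # stand-in for the real scp/rsync transfer
    fi
}
```

The second and later launches of the same image would hit the cache, so each image crosses the network only once per host.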
Thus only one transfer per host is required, meaning that deployments that lack a fast private network (or span multiple clusters in different regions) only have to transfer each image once.
This saves bandwidth, as well as time for the user.
What do you think? I'm assuming this functionality is not possible with OpenNebula at the moment, but I don't see any limitation that would stop it from becoming a reality.