jamdev12
(Jesus Malena)
April 8, 2015, 6:43pm
1
Hi all,
Thanks for reading my topic. Very simple question: is the latest version of OpenNebula (4.12) compatible with vSphere 6?
I would like to upgrade to the latest version of vSphere and want to know whether doing so will break my OpenNebula install.
Thanks,
Jesus
1 Like
tinova
(Tino Vázquez)
April 9, 2015, 2:24pm
2
Hi Jesus,
We are currently working on the integration with vSphere 6 and we are finding some issues; 4.12 is not compatible with it.
Having said this, we will make sure that the next version of OpenNebula (4.14) is fully compatible with vSphere 6.
Jose_Ramos
(Jose Ramos)
September 17, 2015, 9:34am
3
Hello Tino,
what is the latest on this issue?
Is OpenNebula 4.14 Beta 2 compatible with vSphere 6?
I am mostly interested in having full compatibility with the ESXi 6 hypervisor.
Thanks a lot.
Jose
tinova
(Tino Vázquez)
September 17, 2015, 9:38am
4
Hi,
One quick question: are you referring to the ESX drivers or to the vCenter drivers?
Jose_Ramos
(Jose Ramos)
September 17, 2015, 9:48am
5
Hi Tino,
ESX drivers. I am running my OpenNebula platform on top of ESXi hypervisors v5.5 and I need to upgrade them to ESXi 6.0, which is free to use (in contrast with v5.5, which was running on a demo license).
Thanks for your quick response.
Jose
tinova
(Tino Vázquez)
September 17, 2015, 10:08am
6
Hi,
Unfortunately we didn't have the resources to test the OpenNebula ESX drivers against 6.0. We would appreciate any feedback you may have from testing OpenNebula 4.14 with vSphere 6.0.
I'd like to take this opportunity to say that the future strategy of the OpenNebula project is to support vCenter directly rather than interacting directly with ESX. We suggest you consider migrating from the ESX drivers to the vCenter drivers. If you need help with the migration path, OpenNebula Systems (http://opennebula.systems/company/ ) can provide it.
jsepulve
(Jose Luis Sepúlveda)
November 20, 2015, 1:18pm
7
Hi
I'm testing OpenNebula 4.14.1 with ESXi 6.0.0.
I have found these problems:
When creating a CD image: error [cannot determine image size]
When creating an OS image: error "Not enough space in datastore"
I use the VMFS datastores on the ESXi host.
Creating a datablock image in the datastore works fine.
Maybe I have something configured wrong, or it is a problem with ESXi 6.
I'm going to try ESXi 5.5.
Jose
Hi,
I am focused on deploying CentOS 7, OpenNebula 4.14.2 and one ESXi 6 host licensed with the 'Essential' flavor, using the VMware driver.
After many attempts, I succeeded in deploying the 'ttylinux - VMware' template from the Marketplace to ESXi, using Sunstone or the CLI.
But I am facing a very strange problem: when I try to execute a simple SUSPEND command from Sunstone, the process runs, but suddenly, at the end of the event, the ESXi host receives a command to remove that same instance from the inventory.
The 'Poweroff hard' command triggers the same behavior.
At first I suspected a libvirt incompatibility, but I have tried the same commands many times with success. I am using the same user/pass (oneadmin) configured in OpenNebula, ESXi and the 'vmwarerc' file.
My logs for these two cases follow:
FIRST (suspend command and resume command):
- The instance 21 runs normally:
==========START=========
Fri Feb 26 16:19:36 2016 [Z0][VM][I]: New state is ACTIVE
Fri Feb 26 16:19:36 2016 [Z0][VM][I]: New LCM state is PROLOG
Fri Feb 26 16:19:38 2016 [Z0][VM][I]: New LCM state is BOOT
Fri Feb 26 16:19:38 2016 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/21/deployment.0
Fri Feb 26 16:19:42 2016 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Fri Feb 26 16:19:50 2016 [Z0][VMM][I]: Successfully execute virtualization driver operation: deploy.
Fri Feb 26 16:19:50 2016 [Z0][VMM][I]: Successfully execute network driver operation: post.
Fri Feb 26 16:19:50 2016 [Z0][VM][I]: New LCM state is RUNNING
Fri Feb 26 17:44:50 2016 [Z0][VM][I]: New LCM state is SAVE_SUSPEND
Fri Feb 26 17:45:07 2016 [Z0][VMM][I]: Successfully execute virtualization driver operation: save.
Fri Feb 26 17:45:07 2016 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Fri Feb 26 17:45:07 2016 [Z0][VM][I]: New state is SUSPENDED
Fri Feb 26 17:45:07 2016 [Z0][VM][I]: New LCM state is LCM_INIT
Fri Feb 26 18:00:43 2016 [Z0][LCM][I]: Restoring VM
Fri Feb 26 18:00:43 2016 [Z0][VM][I]: New state is ACTIVE
Fri Feb 26 18:00:43 2016 [Z0][VM][I]: New LCM state is BOOT_SUSPENDED
Fri Feb 26 18:00:43 2016 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Fri Feb 26 18:00:44 2016 [Z0][VMM][I]: Command execution fail: /var/lib/one/remotes/vmm/vmware/restore '/vmfs/volumes/104/21/checkpoint' '10.10.0.241' 'one-21' 21 10.10.0.241
Fri Feb 26 18:00:44 2016 [Z0][VMM][I]: /var/lib/one/remotes/vmm/vmware/vmware_driver.rb:168:in `restore': undefined local variable or method `id' for #<VMwareDriver:0x00000002b4df00> (NameError)
Fri Feb 26 18:00:44 2016 [Z0][VMM][I]: from /var/lib/one/remotes/vmm/vmware/restore:37:in `<main>'
Fri Feb 26 18:00:44 2016 [Z0][VMM][I]: ExitCode: 1
Fri Feb 26 18:00:44 2016 [Z0][VMM][I]: Failed to execute virtualization driver operation: restore.
Fri Feb 26 18:00:44 2016 [Z0][VMM][E]: Error restoring VM
Fri Feb 26 18:00:44 2016 [Z0][VM][I]: New state is SUSPENDED
Fri Feb 26 18:00:44 2016 [Z0][VM][I]: New LCM state is LCM_INIT
==============END============================
VMWARE VSPHERE CLIENT EVENT LOG
=====START==============
User oneadmin@10.10.0.73 logged out (login time: 26/02/2016 17:45:08, number of API invocations: 0, user agent: libvirt-esx)
info
26/02/2016 17:45:08
oneadmin
Removed one-21
info
26/02/2016 17:45:08
one-21
oneadmin
User oneadmin@10.10.0.73 logged in as libvirt-esx
info
26/02/2016 17:45:08
oneadmin
User oneadmin@10.10.0.73 logged out (login time: 26/02/2016 17:45:02, number of API invocations: 0, user agent: libvirt-esx)
info
26/02/2016 17:45:06
oneadmin
one-21 is suspended
info
26/02/2016 17:45:06
one-21
oneadmin
User oneadmin@10.10.0.73 logged in as Ruby
info
26/02/2016 17:45:05
oneadmin
one-21 is being suspended
info
26/02/2016 17:45:02
one-21
oneadmin
User oneadmin@10.10.0.73 logged in as libvirt-esx
info
26/02/2016 17:45:02
oneadmin
User oneadmin@10.10.0.73 logged out (login time: 26/02/2016 17:44:54, number of API invocations: 0, user agent: libvirt-esx)
info
26/02/2016 17:45:00
oneadmin
==========END============
vmware.log, last lines...
==================START=============================
2016-02-26T20:45:06.404Z| vcpu-0| I120: VigorTransport_ServerSendResponse opID=9a3f5788 seq=849: Completed PowerState request.
2016-02-26T20:45:06.404Z| vmx| I120: Stopping VCPU threads...
2016-02-26T20:45:06.404Z| vcpu-0| I120: VMMon_WaitForExit: vcpu-0: worldID=76644
2016-02-26T20:45:06.404Z| vcpu-1| I120: VMMon_WaitForExit: vcpu-1: worldID=76646
2016-02-26T20:45:06.405Z| svga| I120: SVGA thread is exiting
2016-02-26T20:45:06.405Z| vmx| I120:
2016-02-26T20:45:06.405Z| vmx| I120+ OvhdMem: Final (Power Off) Overheads
2016-02-26T20:45:06.405Z| vmx| I120: reserved | used
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem excluded cur max avg | cur max avg
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_MainMem : 131072 131072 - | - - -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_VmxText : 6400 6400 - | - - -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_VmxTextLibs : 15360 15360 - | - - -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem Total excluded : 152832 152832 - | - - -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem Actual maximum : 152832 | -
2016-02-26T20:45:06.405Z| vmx| I120:
2016-02-26T20:45:06.405Z| vmx| I120: reserved | used
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem paged cur max avg | cur max avg
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_KSTATS_vmm : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_STATS_vmm : 4 4 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_KSTATS_device : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_STATS_device : 2 2 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_KSTATS_migrate : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_DiskLibMemUsed : 3075 3075 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_SvgaSurfaceTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_SvgaRestoreBufferArray : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_SvgaSDirtyCache : 6 6 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_SvgaShaderTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_SvgaShaderText : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_SvgaContextData : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_SvgaContextCache : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_SvgaDXRTViewTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.405Z| vmx| I120: OvhdMem OvhdUser_SvgaDXDSViewTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXSRViewTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXBlendStateTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXElementLayoutTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXDepthStencilTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXRasterizerStateTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXSamplerTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXContextTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXShaderTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXStreamOutTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SvgaDXQueryTable : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxAllocTrack : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxCallStackProf : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_PEBS : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxGlobals : 1152 1152 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxGlobalsLibs : 3584 3584 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxHeap : 8704 8704 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxHeapFreeList : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMainMemCheck : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMks : 33 33 - | 2 11 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMks3d : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMksGLRenderer : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMksGLTransient : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMksLLVM : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMksScreenTemp : 6146 6146 - | 0 283 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMksVnc : 2158 2158 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMksScreen : 2049 2049 - | 1 376 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxMksSVGAVO : 4096 4096 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxPhysMemRingBuf : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxPhysMemErrPages : 10 10 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxReplayCheck : 0 4096 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxFTCptOutputBuf : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxSLEntryBuf : 128 128 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxThreadMks : 512 512 - | 512 512 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxThreadVmx : 512 512 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxThreadsVcpu : 512 512 - | 0 512 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxThreadsWorker : 4608 4608 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VmxThreadSvga : 512 512 - | 512 512 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem Total paged : 37803 41899 - | 1027 2206 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem Actual maximum : 41899 | 2174
2016-02-26T20:45:06.406Z| vmx| I120:
2016-02-26T20:45:06.406Z| vmx| I120: reserved | used
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem nonpaged cur max avg | cur max avg
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SharedArea : 141 141 - | 77 77 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_BusMemTraceBitmap : 7 7 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_PFrame : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VProbe : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VIDE_KSEG : 16 16 - | 16 16 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VGA : 64 64 - | 64 64 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_PShareMPN : 2 2 - | 1 1 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_BalloonMPN : 1 1 - | 1 1 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_P2MUpdateBuffer : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_ServicesMPN : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_LocalApic : 2 2 - | 2 2 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_BusError : 1 1 - | 1 1 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VBIOS : 8 8 - | 8 8 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VnicGuest : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VnicMmap : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_TestDev : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_LSIBIOS : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_LSIRings : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_PCIPBIOS : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_PVSCSIBIOS : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_PVSCSIKickReg : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SAS1068BIOS : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SBIOS : 16 16 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_AHCIBIOS : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_FlashRam : 128 128 - | 128 128 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_IOFilters : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_AsyncIO : 1024 1024 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SMM : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SVGAFB : 1024 1024 - | 1024 1024 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SVGAMEM : 64 512 - | 64 64 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_HDAudioReg : 3 3 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_EHCIRegister : 1 1 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_XhciRegister : 1 1 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_PhysMemDebug : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_HyperV : 2 2 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_HVIOBitmap : 3 3 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_HVMSRBitmap : 2 2 - | 1 1 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VHVGuestMSRBitmap : 2 2 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_vhvCachedVMCS : 2 2 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_vhvNestedAPIC : 2 2 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_StateLoggerLogBuf : 0 512 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_PCIP : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_VMsafe : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_MonWired : 2 2 - | 2 2 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_MonLow : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_MonWiredNuma : 0 58 - | 29 58 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_MonNuma : 0 339 - | 0 306 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_MonOther : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_SVMLowMem : 3 3 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_PAEShadow : 2 2 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdUser_FTCpt : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem Total nonpaged : 2523 3880 - | 1418 1753 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem Actual maximum : 3880 | 1753
2016-02-26T20:45:06.406Z| vmx| I120:
2016-02-26T20:45:06.406Z| vmx| I120: reserved | used
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem anonymous cur max avg | cur max avg
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_Alloc : 142 142 - | 12 122 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_BusMemFrame : 321 325 - | 201 201 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_BusMemFramePGAR : 2 2 - | 2 2 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_BusMemTracePGAR : 2 2 - | 2 2 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_BusMem2MRegionPGAR : 3 3 - | 3 3 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_BusMemZapListMPN : 1 1 - | 1 1 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_MMU : 110 1554 - | 30 34 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_ScratchAS : 24 324 - | 6 6 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_MonTLB : 12 12 - | 12 12 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_DT : 1 1 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_TCCoherency : 12 12 - | 12 12 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_TC : 1026 1924 - | 900 1800 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_ChainInfo : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_Island : 4 4 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_BusMemScratchAS : 4 4 - | 4 4 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_PlatformScratchAS : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_BackdoorHintsMPN : 3 3 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_HV : 4 4 - | 2 2 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_VHV : 6 6 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_VNPTShadow : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_VNPTShadowCache : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_VNPTBackmap : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_SVMIDT : 2 2 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_CallStackProfAnon : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_Numa : 30 30 - | 14 28 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_NumaTextRodata : 245 245 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_NumaDataBss : 94 94 - | 94 94 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_NumaLargeData : 1024 1024 - | 0 1012 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_WiredNuma : 58 58 - | 58 58 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_GPhysTraced : 80 80 - | 34 37 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_GPhysHWMMU : 409 409 - | 47 67 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_GPhysNoTrace : 40 40 - | 11 22 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_BTScratchPage : 1 1 - | 1 1 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_PhysMemGart : 72 72 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_PhysMemErr : 7 7 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_StateLoggerBufferPA : 1 1 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_TraceALot : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_VIDE : 4 4 - | 4 4 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_VMXNETWake : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_BusLogic : 8 8 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_Ahci : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_PVSCSIShadowRing : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_LSIRings : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_FTCptScratchAS : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_DebugStore : 0 0 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem OvhdMon_VProbe : 1 1 - | 0 0 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem Total anonymous : 3753 6399 - | 1450 3524 -
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem Actual maximum : 4651 | 2379
2016-02-26T20:45:06.406Z| vmx| I120:
2016-02-26T20:45:06.406Z| vmx| I120: OvhdMem: memsize 512 MB VMK fixed 705 pages var(mem) 513 pages var(cpu) 8 cbrcOverhead 0 pages total 979 pages
2016-02-26T20:45:06.406Z| vmx| I120: VMMEM: Maximum Reservation: 203MB (MainMem=512MB) VMK=3MB
2016-02-26T20:45:06.407Z| vmx| I120: Tools: ToolsRunningStatus_Exit, delayedRequest is 0x32591FB0
2016-02-26T20:45:06.407Z| vmx| I120: SVMotion_PowerOff: Not running Storage vMotion. Nothing to do
2016-02-26T20:45:06.407Z| vmx| I120: Virtual Device for ide0:0 was already successfully destroyed
2016-02-26T20:45:06.408Z| mks| I120: MKS-RemoteMgr: Stopping VNC server at 0.0.0.0:5921
2016-02-26T20:45:06.408Z| mks| I120: MKS PowerOff
2016-02-26T20:45:06.408Z| mks| I120: MKS thread is exiting
2016-02-26T20:45:06.408Z| vmx| I120: SVMotion_PowerOff: Not running Storage vMotion. Nothing to do
2016-02-26T20:45:06.410Z| vmx| I120: Vix: [76643 mainDispatch.c:1188]: VMAutomationPowerOff: Powering off.
2016-02-26T20:45:06.411Z| vmx| I120: WORKER: asyncOps=3 maxActiveOps=1 maxPending=0 maxCompleted=0
2016-02-26T20:45:06.460Z| vmx| I120: Vix: [76643 mainDispatch.c:4292]: VMAutomation_ReportPowerOpFinished: statevar=1, newAppState=1875, success=1 additionalError=0
2016-02-26T20:45:06.460Z| vmx| I120: Vix: [76643 mainDispatch.c:4311]: VMAutomation: Ignoring ReportPowerOpFinished because the VMX is shutting down.
2016-02-26T20:45:06.460Z| vmx| A115: ConfigDB: Setting cleanShutdown = "TRUE"
2016-02-26T20:45:06.461Z| vmx| I120: Vigor_ClientRequestCb: failed to do op=3 on unregistered device 'GuestInfo' (cmd=queryFields)
2016-02-26T20:45:06.461Z| vmx| I120: Vix: [76643 mainDispatch.c:4292]: VMAutomation_ReportPowerOpFinished: statevar=0, newAppState=1870, success=1 additionalError=0
2016-02-26T20:45:06.461Z| vmx| I120: Vix: [76643 mainDispatch.c:4311]: VMAutomation: Ignoring ReportPowerOpFinished because the VMX is shutting down.
2016-02-26T20:45:06.461Z| vmx| I120: Transitioned vmx/execState/val to suspended
2016-02-26T20:45:06.462Z| vmx| I120: VMXVmdbCbVmVmxExecStateVal: no more client connections.
2016-02-26T20:45:06.462Z| vmx| I120: VMX idle exit
2016-02-26T20:45:06.462Z| vmx| I120: VMIOP: Exit
2016-02-26T20:45:06.471Z| vmx| I120: Vix: [76643 mainDispatch.c:843]: VMAutomation_LateShutdown()
2016-02-26T20:45:06.471Z| vmx| I120: Vix: [76643 mainDispatch.c:793]: VMAutomationCloseListenerSocket. Closing listener socket.
2016-02-26T20:45:06.472Z| vmx| I120: Flushing VMX VMDB connections
2016-02-26T20:45:06.472Z| vmx| I120: VigorTransport_ServerCloseClient: Closing transport 32263570 (err = 0)
2016-02-26T20:45:06.472Z| vmx| I120: VigorTransport_ServerDestroy: server destroyed.
2016-02-26T20:45:06.478Z| vmx| I120: VMX exit (0).
2016-02-26T20:45:06.478Z| vmx| I120: AIOMGR-S : stat o=12 r=23 w=70168 i=4 br=361432 bw=2276569
2016-02-26T20:45:06.478Z| vmx| I120: OBJLIB-LIB: ObjLib cleanup done.
2016-02-26T20:45:06.478Z| vmx| W110: VMX has left the building: 0.
============================END===================================================
I will try ESXi 5.5 with the same configuration…
OK guys, same behavior with ESXi 5.5. Performing the 'suspend' action through Sunstone results in the VM (instance) being removed from the ESXi inventory at the end of the process. The instance remains in the Sunstone inventory with SUSPENDED status, but it will never wake up again…
jeovanevs
(Jeovanevs)
October 27, 2016, 11:56pm
10
I've had the same problem.
It seems to be a problem in the Ruby script /var/lib/one/remotes/vmm/vmware/vmware_driver.rb.
As you can see in your log:
At line 168 the script tries to define the domain based on the id variable, but the id variable has not been defined yet. So here is what I did:
I put this check before line 168:
if !defined?(id)
  id = vm_id
end
And now I can restore the suspended VMs.
I hope it works for you.
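For readers applying this, a self-contained sketch of the guard follows. The method name and body here are illustrative only, not the actual vmware_driver.rb code; only the `defined?` pattern matches the fix. Note that the one-line modifier form would not work, because of a Ruby parsing subtlety:

```ruby
# Illustrative sketch only -- not the real vmware_driver.rb restore code.
# It shows the guard pattern from the fix above: fall back to vm_id when
# the local variable `id` has not been assigned.
def domain_name_for(vm_id)
  # The block form is required. The one-liner `id = vm_id unless defined?(id)`
  # would NOT work: Ruby registers `id` as a local variable at parse time,
  # so defined?(id) is already truthy and the assignment never runs.
  if !defined?(id)
    id = vm_id
  end
  "one-#{id}" # the domain name the driver acts on, e.g. "one-21"
end
```

With a guard like this in place, restore should no longer raise the NameError shown in the log above.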
jeovanevs
(Jeovanevs)
October 28, 2016, 12:04am
11
Astor_Palmeira:
OK guys, same behavior with ESXi 5.5. Performing the 'suspend' action through Sunstone results in the VM (instance) being removed from the ESXi inventory at the end of the process. The instance remains in the Sunstone inventory with SUSPENDED status, but it will never wake up again…
The VM disks and files still exist in the datastore, but the VM was unregistered from the vSphere inventory. Because it was not deleted from the datastore, Sunstone still preserves its last status.
You can restore the VM using the vSphere CLI, the ESXi console, or the Sunstone interface after patching vmware_driver.rb as explained earlier.
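If you prefer the ESXi console route, here is a sketch of the commands involved. The datastore layout (/vmfs/volumes/104/21/) is taken from the logs above, and the assumption that the .vmx file is named after the one-<id> domain should be verified on your host before running anything:

```ruby
# Builds the vim-cmd lines to re-register an unregistered VM on an ESXi
# host. The path layout and .vmx naming are assumptions based on this
# thread's logs (checkpoint at /vmfs/volumes/104/21/); check them first.
def reregister_commands(datastore_id, vm_id)
  vmx = "/vmfs/volumes/#{datastore_id}/#{vm_id}/one-#{vm_id}.vmx"
  [
    "vim-cmd solo/registervm #{vmx}", # add the VM back to the inventory
    "vim-cmd vmsvc/getallvms"         # list VMs to confirm registration
  ]
end
```

Run the resulting lines on the ESXi shell; after re-registering, the VM should appear in the inventory again and can be resumed.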
Hi Jeovanevs,
I will try your fix as soon as I can and post the results.
Do you need to use standalone ESXi hosts too?
I do not understand why OpenNebula development decided not to maintain the driver.
mcabrerizo
(Miguel Ángel Alvarez Cabrerizo)
October 28, 2016, 9:10am
13
Hi guys,
Here is my feedback for those visiting this topic.
OpenNebula 5.2 and ESX 6 through vCenter look fine. A running VM can be suspended and resumed.
I'll update this if I can try with ESX 5.5.
Cheers!
jeovanevs
(Jeovanevs)
October 28, 2016, 6:12pm
14
Hi guys,
I'm using ONE 4.10.2 along with ESXi 5.0 with the latest updates.
Tip: the ESXi host needs to be on a trial version or a paid license, because the free version has some limitations when deploying VMs through the command line.
Unhappily, the newer versions of ESXi deprecated direct command-line management in favor of the (expensive) vCenter. Thus, it is not financially interesting ($$) to maintain support for the old drivers.
1 Like
Hi friend,
Your fix worked like a charm!
Thank you!
I will try to test with ESXi 6 U2!
sanyo
(sanyo)
July 26, 2017, 9:21pm
16
ESX drivers. I am running my OpenNebula platform on top of ESXi hypervisors v5.5 and I need to upgrade them to ESXi 6.0, which is free to use (in contrast with v5.5, which was running on a demo license).
Do you mean that ESXi v6 adds a free management API for libvirt/virsh, in contrast to ESXi v4.x-v5.x?
Can you post proof that it actually works the way your earlier statements suggest, at least if I understood them correctly?
So, if the ONE ESXi driver were still supported, would it work with a perpetual free license of ESXi v6?
Can a modern ONE v5.4 Medusa be used to manage an old ONE v4.1x (directly or via a cloud API like econe or an open API), which would in turn manage ESXi directly without vCenter?
ONE v5.x → econe → ONE v4.1x → direct libvirt ESXi API → ESXi v5.x