Don't see remote VM status

Hello,
I use OpenNebula 4.8.
I followed the instructions in Managing Hosts — OpenNebula 4.8 documentation.

I want to manage a VM on a remote host (one2) from the host one1, so here is what I did:

  • Added the oneadmin SSH public key to the one2 host → from one1: ssh oneadmin@one2 → no password asked

  • Added the IP address of one2 in /etc/hosts of one1

  • Uncommented in oned.conf (one1):
    IM_MAD = [
        name       = "kvm",
        executable = "one_im_ssh",
        arguments  = "-r 3 -t 15 kvm-probes" ]

  • Restarted OpenNebula and Sunstone (one1)

  • Added one2 as a host on one1 (an example command is shown just below)
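
For anyone reproducing this: adding the host from the CLI looks roughly like the command below; the MAD names just mirror what onehost show reports further down, so adjust them if yours differ (it can also be done from Sunstone):

root@one1:~# onehost create one2 --im kvm --vm kvm --net ovswitch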

So one2 has state MONITORED, but no VMs show up and there is 1 zombie!
root@one1:~# onehost show 12
HOST 12 INFORMATION
ID : 12
NAME : one2
CLUSTER : -
STATE : MONITORED
IM_MAD : kvm
VM_MAD : kvm
VN_MAD : ovswitch
LAST MONITORING TIME : 09/12 11:40:31

HOST SHARES
TOTAL MEM : 62.9G
USED MEM (REAL) : 2.6G
USED MEM (ALLOCATED) : 0K
TOTAL CPU : 1200
USED CPU (REAL) : 13
USED CPU (ALLOCATED) : 0
RUNNING VMS : 0

MONITORING INFORMATION
ARCH="x86_64"
CPUSPEED="2201"
HOSTNAME="one2"
HYPERVISOR="kvm"
MODELNAME="Intel(R) Xeon(R) CPU E5-2420 v2 @ 2.20GHz"
NETRX="1398517"
NETTX="469479"
RESERVED_CPU=""
RESERVED_MEM=""
TOTAL_ZOMBIES="1"
VERSION="4.8.0"
ZOMBIES="one-4"

VIRTUAL MACHINES

ID USER     GROUP    NAME            STAT UCPU    UMEM HOST             TIME

root@one1:~#

Of course, the “zombie” is my VM.

Now, when I run on one2:
root@one2:# cd /var/tmp/one/im/
root@one2:# ./run_probes kvm-probes
../../vmm/kvm/poll:121:in `get_all_vm_info': private method `split' called for nil:NilClass (NoMethodError)
        from ../../vmm/kvm/poll:116:in `each'
        from ../../vmm/kvm/poll:116:in `get_all_vm_info'
        from ../../vmm/kvm/poll:471:in `print_all_vm_template'
        from ../../vmm/kvm/poll:515
ERROR MESSAGE --8<------
Error executing poll.sh
ERROR MESSAGE ------>8--

I added ENV['LC_ALL'] = 'C' in poll.sh, and now I get no error, but I still don't see the VM information on the one1 host.
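
My guess is that the poll script was tripping over the French virsh output, since forcing the C locale made the error go away. A quick way to check this without editing any script is to force the locale only for a test run of the probes, for example:

root@one2:# cd /var/tmp/one/im/
root@one2:# LC_ALL=C ./run_probes kvm-probes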

Why don't I see the VM information?
Did I miss something in the setup?

Any help would be appreciated :slight_smile:

NB: I hope my English is clear.

Hi,

Did you create the VM manually, before adding the host to OpenNebula?

Yes, I did.

Then it makes sense that OpenNebula reports it as a zombie VM. If a VM is named one-<id> but was not created by OpenNebula, it is assumed that something went wrong.

If the VM had been named differently, you would see it as a “wild VM”, and those can be imported for basic life-cycle management.
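
If I remember correctly, the CLI side of that import is the onehost importvm subcommand, used roughly as below; the VM name is just a placeholder, and I am not completely sure the subcommand is available in 4.8, so check onehost --help for your version:

onehost importvm 12 my-wild-vm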

I don't think I understand you, but when I said I created the VM manually on the one2 host, I mean that I ran onetemplate instantiate 0 “scribe”.

This is what the one2 host has:
root@one2:~# virsh list
ID Nom État

2 one-4 en cours d’exécution

root@one2:~# onevm list
ID USER GROUP NAME STAT UCPU UMEM HOST TIME
4 oneadmin oneadmin SCRIBE runn 8 16G one2 0d 04h47

and on “one1” host:

root@one1:~# onevm list
ID USER GROUP NAME STAT UCPU UMEM HOST TIME
0 oneadmin oneadmin AMON runn 1 8G one 4d 22h05
1 oneadmin oneadmin HORUS runn 1 8G one 4d 22h04
2 oneadmin oneadmin SATIS runn 5 16G one 4d 22h04
3 oneadmin oneadmin PRONOTE runn 2 16G one 4d 22h03

root@one1:~# virsh list
ID Nom État

2 one-0 en cours d’exécution
3 one-1 en cours d’exécution
4 one-2 en cours d’exécution
5 one-3 en cours d’exécution

Does this explain what you said?

Wait a moment, how many OpenNebula installations do you have?

Two servers named one1 and one2, with OpenNebula installed on each.

Then what you are trying to do is currently not supported. We are working on OpenNebula-to-OpenNebula cloud bursting, but that will be part of the 5.2 release.

OK.
So I must stop/uninstall the OpenNebula server on the one2 host and just leave libvirtd running, is that right?
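
Assuming a Debian/Ubuntu-style setup (service names may differ elsewhere), I suppose that would be something like this on one2; the running VM keeps going because it belongs to libvirt, not to the OpenNebula services:

root@one2:~# service opennebula stop
root@one2:~# service opennebula-sunstone stop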