I have successfully created and mounted NFS shares on my nodes and front end. On the servers themselves they are fully functional and show the correct capacity and file structure.

However, when I try to create datastores in Sunstone using these shares, the datastore is created but shows no capacity. I checked the oned log and found this error:
```
Datastore files (2) successfully monitored.
Fri Mar 15 13:52:26 2024 [Z0][ImM][D]: Datastore default (1) successfully monitored.
Fri Mar 15 13:52:27 2024 [Z0][ReM][D]: Req:7152 UID:0 IP:127.0.0.1 one.hostpool.info invoked
Fri Mar 15 13:52:27 2024 [Z0][ReM][D]: Req:7152 UID:0 one.hostpool.info result SUCCESS, "<HOST_POOL><ID..."
Fri Mar 15 13:52:27 2024 [Z0][ImM][I]: Command execution failed (exit code: 255): /var/lib/one/remotes/datastore/fs/monitor 107
Fri Mar 15 13:52:27 2024 [Z0][ImM][E]: Error monitoring datastore 107: LQ==. Decoded info: -
Fri Mar 15 13:52:27 2024 [Z0][ReM][D]: Req:6608 UID:0 IP:127.0.0.1 one.hostpool.info invoked
Fri Mar 15 13:52:27 2024 [Z0][ReM][D]: Req:6608 UID:0 one.hostpool.info result SUCCESS, "<HOST_POOL><ID..."
Fri Mar 15 13:52:27 2024 [Z0][ReM][D]: Req:9488 UID:0 IP:127.0.0.1 one.hostpool.info invoked
Fri Mar 15 13:52:27 2024 [Z0][ReM][D]: Req:9488 UID:0 one.hostpool.info result SUCCESS, "<HOST_POOL><ID..."
Fri Mar 15 13:52:27 2024 [Z0][ImM][I]: Command execution failed (exit code: 255): /var/lib/one/remotes/datastore/fs/monitor 108
Fri Mar 15 13:52:27 2024 [Z0][ImM][E]: Error monitoring datastore 108: LQ==. Decoded info: -
Fri Mar 15 13:52:27 2024 [Z0][ImM][I]: Command execution failed (exit code: 255): /var/lib/one/remotes/tm/shared/monitor 109
Fri Mar 15 13:52:27 2024 [Z0][ImM][E]: Error monitoring datastore 109: LQ==. Decoded info: -
```
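For what it's worth, the `LQ==` payload in those error lines is base64, and decoding it confirms the driver returned no useful error message at all, just a dash (which matches the "Decoded info: -" part of the log):

```shell
# Decode the base64 error payload from the oned log
echo 'LQ==' | base64 -d
# prints: -
```

So the monitor script is failing before it produces any real output, which is why the log is so unhelpful.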
I can't find any info on the 255 exit code. I have checked the permissions, which seem fine; the password for the oneadmin account on my servers is the same as the one used to connect to the NAS, and using that account to manually map and mount the shares works just fine.
It's just Sunstone that can't seem to monitor the datastores. I looked for config info for the host nodes in case something there needed correcting, but host monitoring itself works fine.
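In case it helps, this is roughly how I verified the shares by hand as oneadmin (the path below assumes the default datastore location and uses my datastore ID 107; adjust for your setup):

```shell
# Check, as oneadmin, that the datastore path is mounted, readable,
# and that df reports the expected capacity.
# /var/lib/one/datastores/107 is my datastore path; substitute your own ID.
sudo -u oneadmin df -h /var/lib/one/datastores/107
sudo -u oneadmin ls -ld /var/lib/one/datastores/107
```

Both commands succeed and show the correct capacity and ownership, which is what makes the monitoring failure so confusing.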
Versions of the related components and OS (frontend, hypervisors, VMs):
All nodes, front end and hypervisors, are on AlmaLinux 9, all running the server install with KVM; I run a GUI on the front end for convenience. The OpenNebula version is 6.8.0.
Can anyone point me to a possible remedy, or where to look deeper for answers?
Any help gratefully received. I'm new to OpenNebula, so be kind…