DB upgrade, Community Edition, 6.0.0 to 6.2.0

After upgrading OpenNebula from 6.0.0 to 6.2.0 I'm not able to start oned. Here is the output of onedb upgrade -v --sqlite ./one.db:

oneadmin@one-mast-01:~$ onedb upgrade -v --sqlite ./one.db
Version read:
Shared tables 6.0.0 : Database migrated from 5.12.0 to 6.0.0 (OpenNebula 6.0.0) by onedb command.
Local tables  6.0.0 : Database migrated from 5.12.0 to 6.0.0 (OpenNebula 6.0.0) by onedb command.

Sqlite database backup stored in /var/lib/one/one.db_2021-11-6_0:24:44.bck
Use 'onedb restore' to restore the DB.

>>> Running migrators for shared tables
Database already uses version 6.0.0

>>> Running migrators for local tables
Database already uses version 6.0.0

Total time: 0.06s
ERROR: Database upgrade to the latest versions (local 6.0.0, shared 6.0.0)
wasn't successful due to missing migration descriptors. Migrators are
provided as part of Enterprise Edition for customers with active subscription.
For community with non-commercial deployments they are provided via a
dedicated migration package, which must be obtained separately.

The database will be restored
Sqlite database backup restored in ./one.db
----------------------------------------
Sat Nov  6 00:14:06 2021 [Z0][ONE][I]: Log level:3 [0=ERROR,1=WARNING,2=INFO,3=DEBUG]
Sat Nov  6 00:14:06 2021 [Z0][ONE][I]: Support for xmlrpc-c > 1.31: yes
Sat Nov  6 00:14:06 2021 [Z0][ONE][I]: Using hostname: one-mast-01
Sat Nov  6 00:14:06 2021 [Z0][ONE][I]: sqlite has enabled: SQLITE_ENABLE_UPDATE_DELETE_LIMIT
Sat Nov  6 00:14:06 2021 [Z0][ONE][I]: Checking database version.
Sat Nov  6 00:14:06 2021 [Z0][ONE][E]: Database version mismatch ( local_db_versioning). Installed OpenNebula 6.2.0 (c84c3303) needs DB version '6.2.0', and existing DB version is '6.0.0'.
Sat Nov  6 00:14:06 2021 [Z0][ONE][E]: Use onedb to upgrade DB.
The log excerpt above is from /var/log/one/oned.log.

I think you need to contact support to get the migration package.

Thanks for the reminder.
I emailed community-manager@opennebula.io to request the migration packages for OpenNebula 6.2 CE.

After installing the 6.2 migration deb and applying the onedb upgrade, OpenNebula 6.2 came up. Initially I was able to log in to a running VM via Sunstone, but since upgrading via apt update && apt upgrade I am not able to log in to the Sunstone console.
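For anyone else hitting this, the sequence that worked for me is sketched below. The package filename is a placeholder, not the real name; use whatever file you receive from the community manager.

```shell
# Stop OpenNebula before touching the database.
sudo systemctl stop opennebula

# Install the migration package received by email (placeholder filename).
sudo dpkg -i ./onedb-migration-community-6.2.deb

# Re-run the upgrade against the SQLite database, then start oned again.
onedb upgrade -v --sqlite /var/lib/one/one.db
sudo systemctl start opennebula
```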

How can I get Sunstone working?

The command line, however, seems to be working fine.

My next test is with Terraform.

oneadmin@one-mast-01:~$ onehost list
  ID NAME                                                                       CLUSTER    TVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   3 one-node-03                                                                default      0       0 / 800 (0%)      0K / 31G (0%) on  
   2 one-node-02                                                                default      0       0 / 800 (0%)      0K / 31G (0%) err 
   1 one-node-01                                                                default      0      0 / 1600 (0%)      0K / 31G (0%) on  
   0 one-mast-01                                                                default      1    200 / 3200 (6%)    4G / 61.8G (6%) on  
oneadmin@one-mast-01:~$ onevm list
  ID USER     GROUP    NAME                                               STAT  CPU     MEM HOST                                     TIME
 398 oneadmin oneadmin CentOS 7 cicd-398                                  poff    2      4G one-mast-01                          0d 20h16
oneadmin@one-mast-01:~$ onevnet list
  ID USER     GROUP    NAME                                                           CLUSTERS   BRIDGE                            LEASES
  93 oneadmin lab-ks-o lab-ks-one-node-vnet                                           0          br0                                    0
  92 oneadmin dev-ks-o dev-ks-one-node-vnet                                           0          br0                                    0
   0 oneadmin oneadmin public-net                                                     0          br0                                    1

As you can see from the following screenshots, the original Sunstone login page is rendered, but when I log in I get a Sinatra error page.

I can, however, see the OneProvision dashboard.

/var/log/one/

./
398.log
fireedge.error
fireedge.log
monitor.log
novnc.log
oned.log
oneflow.log
onegate.log
onehem.log
sched.log
sunstone.log
vcenter_monitor.log

/var/log/one/fireedge.log

Error: ENOENT: no such file or directory, open '/usr/lib/one/fireedge/etc/fireedge-server.conf'
(the line above is repeated eight times)
[HPM] Proxy created: /fireedge/vmrc  -> http://localhost:2616
[HPM] Subscribed to http-proxy events: [ 'error', 'close' ]
Server listen in 0.0.0.0:2616
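The ENOENT errors above suggest FireEdge could not find its server configuration at the path it expected. A quick way to check (the /etc/one path is an assumption based on the usual OpenNebula layout, not taken from the log):

```shell
# Report which of the two candidate config locations actually exists.
for f in /usr/lib/one/fireedge/etc/fireedge-server.conf /etc/one/fireedge-server.conf; do
  if [ -e "$f" ]; then echo "present: $f"; else echo "missing: $f"; fi
done
# If only the /etc/one copy exists, restarting the service may help:
#   sudo systemctl restart opennebula-fireedge
```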

/var/log/one/fireedge.error

Warning: Invalid argument supplied to oneOfType. Expected an array of check functions, but received undefined at index 0.



Hi all, my Sunstone problem has gone away after another apt update && apt upgrade. I can now log in and use the Sunstone dashboard.

@DennisTait Do you have anything set in the ONE_LOCATION variable?

Hi, after checking /etc/one/oned.conf I can confirm I have not defined ONE_LOCATION.

Looking at the following snippet, I am not changing the location; I am just using the default /etc/one/, and everything so far seems to be working fine. I can log in via Sunstone and OneProvision. I have not tested any of the OneProvision features available in OpenNebula 6.2.0 CE yet.

#*******************************************************************************
# Hook Manager Configuration
#*******************************************************************************
# The Driver (HM_MAD)
# -----------------------------------------------
#
# Used to execute the Hooks:
#   executable: path of the hook driver executable, can be an
#               absolute path or relative to $ONE_LOCATION/lib/mads (or
#               /usr/lib/one/mads/ if OpenNebula was installed in /)
#
#   arguments : for the driver executable, can be an absolute path or relative
#               to $ONE_LOCATION/etc (or /etc/one/ if OpenNebula was installed
#               in /)
#

I can also confirm that the OpenNebula Terraform provider 0.3.0 allows me to create VMs using init, plan, apply and destroy.
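For reference, the workflow I used is just the standard Terraform cycle; a minimal sketch (the plan filename is my own choice, not anything required by the provider):

```shell
terraform init                  # install the opennebula/opennebula provider
terraform plan -out=dev.tfplan  # preview and record the changes
terraform apply dev.tfplan      # create the VMs
terraform destroy               # tear everything down again
```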

I'll raise a separate ticket for the Terraform issue I have: when the Terraform VM creation loop completes, the VMs are running, but I get an error which stops the Ansible configuration scripts from running.

16:02 $ ./terraform-init-dev.sh

Providers required by configuration:
.
├── provider[registry.terraform.io/opennebula/opennebula] ~> 0.3.0
├── provider[registry.terraform.io/hashicorp/local]
├── provider[registry.terraform.io/hashicorp/template]
├── provider[registry.terraform.io/hashicorp/null]
└── module.core

Initializing modules...

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of opennebula/opennebula from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Using previously-installed hashicorp/template v2.2.0
- Using previously-installed hashicorp/null v3.1.0
- Using previously-installed opennebula/opennebula v0.3.0
- Using previously-installed hashicorp/local v2.1.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.