OpenNebula Frontend UI HA setup

We set up OpenNebula 5.2 on CentOS 7 with two nodes, and now we want to build an HA cluster between both frontend nodes for the Sunstone UI. I followed the official guide: http://docs.opennebula.org/5.2/advanced_components/ha/frontend_ha_setup.html

But we are running into an issue:

[root@node1 ~]# pcs status
Cluster name: opennebula
Stack: corosync
Current DC: node1 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Fri Jan 27 12:46:56 2017 Last change: Mon Jan 23 17:35:55 2017 by root via cibadmin on node1

2 nodes and 2 resources configured

Online: [ node1 node2 ]

Full list of resources:

fence_node1 (stonith:fence_ilo_ssh): Stopped
fence_node2 (stonith:fence_ilo_ssh): Stopped

Failed Actions:

* fence_node1_start_0 on node2 'unknown error' (1): call=19, status=Error, exitreason='none',
    last-rc-change='Tue Jan 24 14:21:28 2017', queued=0ms, exec=11646ms
* fence_node2_start_0 on node2 'unknown error' (1): call=20, status=Error, exitreason='none',
    last-rc-change='Tue Jan 24 14:21:28 2017', queued=0ms, exec=11315ms
* fence_node1_start_0 on node1 'unknown error' (1): call=27, status=Error, exitreason='none',
    last-rc-change='Mon Jan 23 17:35:46 2017', queued=0ms, exec=11574ms
* fence_node2_start_0 on node1 'unknown error' (1): call=33, status=Error, exitreason='none',
    last-rc-change='Mon Jan 23 17:35:59 2017', queued=0ms, exec=11592ms

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
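Both stonith resources fail to start on either node, which usually points at the fence agent itself being unable to reach or log in to the iLO rather than at Pacemaker. As a hedged suggestion, you could run the agent by hand outside the cluster with the same parameters, assuming the standard fence-agents option names (the iLO address is redacted in the post above; substitute your real address and credentials):

```shell
# Run the fence agent manually with the same parameters Pacemaker uses.
# 202.xx.xx.135 is redacted above -- substitute the real iLO address,
# login and password. --verbose prints the SSH conversation for debugging.
fence_ilo_ssh --ip=202.xx.xx.135 --username=Administrator \
              --password=secret --action=status --verbose
```

If this fails too, the problem is iLO connectivity or credentials, not the cluster configuration.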


I also set this up in httpd.conf on both nodes:

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
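Since the apache resource agent in the `pcs config` below monitors statusurl=http://127.0.0.1/server-status from the local node, this block has to be present and working on both nodes. A quick sanity check (just a sketch, nothing OpenNebula-specific):

```shell
# Must return the Apache status page on BOTH nodes, or the
# ocf:heartbeat:apache monitor will report the resource as failed.
curl -s http://127.0.0.1/server-status | head
```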

Please suggest how we can resolve this.

=======================================================
[root@node1 ~]# pcs config
Cluster Name: opennebula
Corosync Nodes:
node1 node2
Pacemaker Nodes:
node1 node2

Resources:
 Group: apache
  Resource: httpd_vip (class=ocf provider=heartbeat type=IPaddr2)
   Attributes: ip=192.168.40.100 cidr_netmask=24
   Operations: start interval=0s timeout=20s (httpd_vip-start-interval-0s)
               stop interval=0s timeout=20s (httpd_vip-stop-interval-0s)
               monitor interval=10s timeout=20s (httpd_vip-monitor-interval-10s)
  Resource: httpd_ser (class=ocf provider=heartbeat type=apache)
   Attributes: configfile=/etc/httpd/conf/httpd.conf statusurl=http://127.0.0.1/server-status
   Operations: start interval=0s timeout=40s (httpd_ser-start-interval-0s)
               stop interval=0s timeout=60s (httpd_ser-stop-interval-0s)
               monitor interval=10 timeout=20s (httpd_ser-monitor-interval-10)

Stonith Devices:
 Resource: fence_node1 (class=stonith type=fence_ilo_ssh)
  Attributes: pcmk_host_list=node1 ipaddr=202.xx.xx.135 login=… passwd=… action=reboot secure=yes delay=30
  Operations: monitor interval=20s (fence_node1-monitor-interval-20s)
 Resource: fence_node2 (class=stonith type=fence_ilo_ssh)
  Attributes: pcmk_host_list=node2 ipaddr=202.xx.xx.136 login=… passwd=… action=reboot secure=yes delay=30
  Operations: monitor interval=20s (fence_node2-monitor-interval-20s)
Fencing Levels:

Location Constraints:
Ordering Constraints:
Colocation Constraints:
Ticket Constraints:
Alerts:
No alerts defined

Resources Defaults:
No defaults set
Operations Defaults:
No defaults set

Cluster Properties:
cluster-infrastructure: corosync
cluster-name: opennebula
dc-version: 1.1.15-11.el7_3.2-e174ec8
have-watchdog: false
no-quorum-policy: ignore

Quorum:
Options:
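One more note: even after the fence agents work from the command line, Pacemaker still remembers the earlier start failures and will not retry on its own. A sketch of clearing them (pcs 0.9 syntax, as shipped with CentOS 7):

```shell
# Clear the recorded failures so Pacemaker attempts the start again.
pcs resource cleanup fence_node1
pcs resource cleanup fence_node2

# Then re-check the cluster state:
pcs status
```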

Hello, please paste your pcs config (e.g. via pastebin.com) and also your corosync.conf.


Thanks for your support!

I have now successfully configured the OpenNebula HA setup between two nodes:

[root@node5 drbd]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node5 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Thu Mar 2 14:03:42 2017 Last change: Thu Mar 2 14:03:30 2017 by root via crm_attribute on node5

2 nodes and 9 resources configured

Online: [ node4 node5 ]

Full list of resources:

ClusterIP (ocf::heartbeat:IPaddr2): Started node4
opennebula (systemd:opennebula): Started node4
opennebula-sunstone (systemd:opennebula-sunstone): Started node4
opennebula-gate (systemd:opennebula-gate): Started node4
opennebula-flow (systemd:opennebula-flow): Started node4
Master/Slave Set: WebDataClone [WebData]
    Masters: [ node4 node5 ]
WebSite (ocf::heartbeat:apache): Started node4
WebFS (ocf::heartbeat:Filesystem): Started node4

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
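One detail worth mentioning for anyone reproducing this: the working `pcs status` above does not show any constraints, but a setup like this usually needs colocation and ordering rules so the OpenNebula services always follow the virtual IP. The following is only a hypothetical sketch using the resource names from the output above:

```shell
# Keep the OpenNebula daemons on the node that holds the VIP,
# and start them after it (hypothetical -- adjust scores as needed):
pcs constraint colocation add opennebula with ClusterIP INFINITY
pcs constraint colocation add opennebula-sunstone with ClusterIP INFINITY
pcs constraint order ClusterIP then opennebula
pcs constraint order opennebula then opennebula-sunstone
```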