= PostgreSQL Cassandra Upgrade
:toc: left
:toclevels: 3
:sectnums:
:author: ETK
:revnumber: PA1
:revdate: 2020-11-27
:sectnumlevels: 5
:xrefstyle: short
:doctype: book
:idprefix:
:idseparator:
:imagesdir: images
== Introduction
{empty} +
{empty} +
=== Goals
{empty} +
=== Requirements/Prerequisites
It is important to have the correct repository configured, from which we can install the
new version of PostgreSQL, in our case PostgreSQL 11.10.
The easiest way to check this is to run `*_yum search postgresql11_*`; if the
response is 'No matches found', we need to install the repository from which
PostgreSQL 11 can be installed.
....
## Commands to install PostgreSQL 11.10 repo:
yum search postgresql11
yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
....
....
## Expected output:
root@CONS_a04pos18044:~# yum search postgresql11
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Warning: No matches found for: postgresql11
No matches found
root@CONS_a04pos18044:~# yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
pgdg-redhat-repo-latest.noarch.rpm                           | 6.8 kB  00:00:00
Examining /var/tmp/yum-root-9GuLj0/pgdg-redhat-repo-latest.noarch.rpm: pgdg-redhat-repo-42.0-14.noarch
Marking /var/tmp/yum-root-9GuLj0/pgdg-redhat-repo-latest.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package pgdg-redhat-repo.noarch 0:42.0-14 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package            Arch     Version    Repository                        Size
================================================================================
Installing:
 pgdg-redhat-repo   noarch   42.0-14    /pgdg-redhat-repo-latest.noarch   11 k
........
Installed:
  pgdg-redhat-repo.noarch 0:42.0-14

Complete!
....
Also, to preserve the same settings as before, we need to back up and check the last
few lines of postgresql.conf and pg_hba.conf.
NOTE: We will need to add the customized options from these configuration files to the
newly upgraded PostgreSQL configuration files.
....
## commands to run:
cp /srv/postgres/data/postgresql.conf /opt/postgresql.conf.pre95upgrade
cp /srv/postgres/data/pg_hba.conf /opt/pg_hba.conf.pre95upgrade
....
....
## Output from our pre95upgrade files with customized options:
root@CONS_a04pos18043:~# cat /opt/postgresql.conf.pre95upgrade | tail -20
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
....
{empty} +
NOTE: The environment shown below is used to test this upgrade procedure, and its
outputs appear in the steps as the expected output.
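Regarding the customized options preserved above: a minimal sketch (not part of the
captured procedure) of how they can later be reviewed and compared against the upgraded
configuration files:
....
## Sketch (not captured in the original procedure): review the preserved options
tail -20 /opt/postgresql.conf.pre95upgrade
tail -5 /opt/pg_hba.conf.pre95upgrade
## Once the new configuration files exist, compare them with the backups:
diff /opt/postgresql.conf.pre95upgrade /srv/postgres/data/postgresql.conf | less
....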
{empty} +
{empty} +
....
## Commands to check status of the cluster:
pcs status
drbdadm status
....
....
## Expected output:
root@CONS_a04pos18043:~# pcs status
Cluster name: pgcluster
Stack: corosync
Current DC: a04pos18043 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Fri Nov 27 12:17:05 2020
Last change: Thu Nov 26 17:39:37 2020 by root via cibadmin on a04pos18043
2 nodes configured
5 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
root@CONS_a04pos18043:~# drbdadm status
postgres role:Primary
disk:UpToDate
peer role:Secondary
replication:Established peer-disk:UpToDate
....
{empty} +
NOTE: In the expected output we can see that our `*_MASTER PG node_*` is
a04pos18043 and `*_SLAVE PG node_*` is a04pos18044.
{empty} +
NOTE: We will remove the slave from the cluster first and run the upgrade on it,
then proceed with the upgrade on the master.
{empty} +
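The exact commands used to remove the slave from the original cluster are not captured
in this document; a minimal sketch, assuming pcs is used on the `*_MASTER PG node_*`
(the later output showing '1 node configured' suggests the slave was removed):
....
## Sketch (not captured): remove the slave from the old cluster
pcs cluster node remove <slave_node_ID>
pcs status
....
{empty} +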
We will stop DRBD replication by changing the IP of the slave peer to the localhost IP
(127.0.0.1).
The file in which we are changing the IP is `*_/etc/drbd.d/postgres.res_*`.
The IP of the slave in our case is 192.168.18.44.
{empty} +
....
## Commands to stop DRBD replication:
drbdadm disconnect postgres
sed -i 's/<slave_IP>/127.0.0.1/g' /etc/drbd.d/postgres.res
drbdadm adjust postgres
drbdadm disconnect postgres
drbdadm status
....
....
## Expected output:
root@CONS_a04pos18043:~# drbdadm disconnect postgres
root@CONS_a04pos18043:~# sed -i 's/192.168.18.44/127.0.0.1/g' /etc/drbd.d/postgres.res
root@CONS_a04pos18043:~# drbdadm adjust postgres
root@CONS_a04pos18043:~# drbdadm disconnect postgres
root@CONS_a04pos18043:~# drbdadm status
postgres role:Primary
disk:UpToDate
## Check status (pcs status on the old cluster, after the slave has been removed):
1 node configured
5 resources configured
Online: [ a04pos18043 ]
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
....
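The commands that produced the installation output below were not captured in this
document. A minimal sketch, assuming the pgdg11 repository configured earlier is
available on the SLAVE PG node (a04pos18044 in the test environment):
....
## Sketch (not captured): install PostgreSQL 11.10 packages
yum install -y postgresql11-server
yum install -y postgresql11-plperl
....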
{empty} +
....
## Expected output:
Dependencies Resolved

================================================================================
 Package               Arch     Version             Repository          Size
================================================================================
Installing:
 postgresql11-server   x86_64   11.10-1PGDG.rhel7   pgdg11              4.7 M
Installing for dependencies:
 libicu                x86_64   50.2-3.el7          cent-os7            6.9 M
 postgresql11          x86_64   11.10-1PGDG.rhel7   pgdg11              1.7 M
 postgresql11-libs     x86_64   11.10-1PGDG.rhel7   pgdg11              363 k
........
Installed:
  postgresql11-server.x86_64 0:11.10-1PGDG.rhel7
Dependency Installed:
  libicu.x86_64 0:50.2-3.el7                      postgresql11.x86_64 0:11.10-1PGDG.rhel7
  postgresql11-libs.x86_64 0:11.10-1PGDG.rhel7
Complete!

Dependencies Resolved

================================================================================
 Package               Arch     Version             Repository          Size
================================================================================
Installing:
 postgresql11-plperl   x86_64   11.10-1PGDG.rhel7   pgdg11               64 k
........
Installed:
  postgresql11-plperl.x86_64 0:11.10-1PGDG.rhel7
Complete!
....
....
## Expected output:
root@CONS_a04pos18044:~# yum erase postgresql95
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package postgresql95.x86_64 0:9.5.7-1PGDG.rhel7 will be erased
--> Processing Dependency: postgresql95 = 9.5.7-1PGDG.rhel7 for package: postgresql95-server-9.5.7-1PGDG.rhel7.x86_64
--> Processing Dependency: postgresql95(x86-64) = 9.5.7-1PGDG.rhel7 for package: postgresql95-server-9.5.7-1PGDG.rhel7.x86_64
--> Running transaction check
---> Package postgresql95-server.x86_64 0:9.5.7-1PGDG.rhel7 will be erased
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package               Arch     Version             Repository          Size
================================================================================
Removing:
 postgresql95          x86_64   9.5.7-1PGDG.rhel7   @DB                 6.6 M
Removing for dependencies:
 postgresql95-server   x86_64   9.5.7-1PGDG.rhel7   @DB                  17 M
........
Removed:
  postgresql95.x86_64 0:9.5.7-1PGDG.rhel7
Dependency Removed:
  postgresql95-server.x86_64 0:9.5.7-1PGDG.rhel7
Complete!

Dependencies Resolved

================================================================================
 Package               Arch     Version             Repository          Size
================================================================================
Removing:
 postgresql95-libs     x86_64   9.5.7-1PGDG.rhel7   @DB                 688 k
........
Removed:
  postgresql95-libs.x86_64 0:9.5.7-1PGDG.rhel7
Complete!
root@CONS_a04pos18044:~#
....
{empty} +
NOTE: We are now creating a new cluster on what was the `*_SLAVE PG node_*` in the
original cluster.
{empty} +
NOTE: master_IP in our case is the IP of the master node in the original cluster (in the
test case it is 192.168.18.43).
{empty} +
NOTE: From here on, the `*_SLAVE PG node_*` from the original cluster will be referred
to as the `*_OLD SLAVE PG node_*` and the `*_MASTER PG node_*` as the `*_OLD MASTER PG
node_*`.
{empty} +
....
## Commands to configure new DRBD replication:
sed -i 's/<master_IP>/127.0.0.1/g' /etc/drbd.d/postgres.res
drbdadm create-md postgres
modprobe drbd
drbdadm up postgres
drbdadm -- --overwrite-data-of-peer primary postgres
mkfs.xfs -f /dev/drbd0
drbdadm disconnect postgres
....
....
## Expected output:
root@CONS_a04pos18044:~# sed -i 's/192.168.18.43/127.0.0.1/g' /etc/drbd.d/postgres.res
root@CONS_a04pos18044:~# drbdadm create-md postgres
You want me to create a v08 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/vdb at byte offset 10739314688
md_offset 10739314688
al_offset 10739281920
bm_offset 10738950144
Even though it looks like this would place the new meta data into
unused space, you still need to confirm, as this is only a guess.
## Check status:
root@CONS_a04pos18044:~# drbdadm status
postgres role:Primary
disk:UpToDate
....
{empty} +
NOTE: slave_node_ID is the hostname of the node that was originally the slave node.
{empty} +
....
## Commands to create and configure the new single-node cluster:
pcs cluster auth -u hacluster -p ******* <slave_node_ID>
pcs cluster setup --name pgcluster <slave_node_ID>
pcs cluster enable --all
pcs cluster start --all
pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
pcs resource defaults resource-stickiness=100
pcs status
....
....
## Expected output:
root@CONS_a04pos18044:~# pcs cluster auth -u hacluster -p Er1csson# a04pos18044
a04pos18044: Authorized
root@CONS_a04pos18044:~# pcs cluster setup --name pgcluster a04pos18044
Destroying cluster on nodes: a04pos18044...
a04pos18044: Stopping Cluster (pacemaker)...
a04pos18044: Successfully destroyed cluster
## Check status:
root@CONS_a04pos18044:~# pcs status
Cluster name: pgcluster
Stack: corosync
Current DC: a04pos18044 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Fri Nov 27 14:27:34 2020
Last change: Fri Nov 27 14:24:54 2020 by root via cibadmin on a04pos18044
1 node configured
0 resources configured
Online: [ a04pos18044 ]
No resources
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
....
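The commands that created the cluster resources on the new single-node cluster are not
shown in this capture (the status below reports 4 resources configured). A minimal
sketch, assuming a DRBD-backed layout like the original cluster; resource names and
option values are illustrative only:
....
## Illustrative sketch only - not the captured procedure:
pcs resource create pgclusterDrbd ocf:linbit:drbd drbd_resource=postgres op monitor interval=10s
pcs resource master pgclusterDrbdMaster pgclusterDrbd master-max=1 master-node-max=1 clone-max=1 clone-node-max=1 notify=true
pcs resource create pgclusterFs ocf:heartbeat:Filesystem device=/dev/drbd0 directory=/srv/postgres fstype=xfs --group pgclusterGroup
pcs resource create pgclusterIp ocf:heartbeat:IPaddr2 ip=<new_VIP> cidr_netmask=24 --group pgclusterGroup
....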
{empty} +
....
## Check status:
root@CONS_a04pos18044:~# pcs status
Cluster name: pgcluster
Stack: corosync
Current DC: a04pos18044 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Fri Nov 27 14:49:27 2020
Last change: Fri Nov 27 14:49:22 2020 by root via cibadmin on a04pos18044
1 node configured
4 resources configured
Online: [ a04pos18044 ]
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
....
{empty} +
....
## Commands to initialize the new PostgreSQL 11 data directory:
chown -R postgres:postgres /srv/postgres
su - postgres
/usr/pgsql-11/bin/initdb -D /srv/postgres/data
....
....
## Expected output:
root@CONS_a04pos18044:~# chown -R postgres:postgres /srv/postgres
root@CONS_a04pos18044:~# su - postgres
....
{empty} +
....
## Commands to set PGDATA and add the customized options to postgresql.conf:
sed -i "/Environment=PGDATA/c\Environment=PGDATA=/srv/postgres/data/" /usr/lib/systemd/system/postgresql-11.service
su - postgres
echo "## ETK ADD ON" >> /srv/postgres/data/postgresql.conf
echo "listen_addresses = '*'" >> /srv/postgres/data/postgresql.conf
echo "port = 5432" >> /srv/postgres/data/postgresql.conf
echo "wal_level = hot_standby" >> /srv/postgres/data/postgresql.conf
echo "synchronous_commit = on" >> /srv/postgres/data/postgresql.conf
echo "archive_mode = on" >> /srv/postgres/data/postgresql.conf
echo "archive_command = 'cp %p /nfsshare/pg_prod/pg_archive/%f'" >> /srv/postgres/data/postgresql.conf
echo "max_wal_senders = 5" >> /srv/postgres/data/postgresql.conf
echo "wal_keep_segments = 32" >> /srv/postgres/data/postgresql.conf
echo "hot_standby = on" >> /srv/postgres/data/postgresql.conf
echo "max_standby_archive_delay = -1" >> /srv/postgres/data/postgresql.conf
echo "max_standby_streaming_delay = -1" >> /srv/postgres/data/postgresql.conf
echo "wal_receiver_status_interval = 2" >> /srv/postgres/data/postgresql.conf
echo "hot_standby_feedback = on" >> /srv/postgres/data/postgresql.conf
echo "restart_after_crash = off" >> /srv/postgres/data/postgresql.conf
echo "max_connections = 1499" >> /srv/postgres/data/postgresql.conf
exit
....
....
## Expected output:
root@CONS_a04pos18044:~# sed -i
"/Environment=PGDATA/c\Environment=PGDATA=/srv/postgres/data/"
/usr/lib/systemd/system/postgresql-11.service
root@CONS_a04pos18044:~# su - postgres
-bash-4.2$ sed -i "/PGDATA=/c\PGDATA=/srv/postgres/data" .bash_profile
-bash-4.2$ echo "PATH=$PATH:/usr/pgsql-11/bin" >> .bash_profile
-bash-4.2$ echo "export PATH" >> .bash_profile
-bash-4.2$ echo "export PS1='[\u@\h \W]\$ '" >> .bash_profile
-bash-4.2$ exit
root@CONS_a04pos18044:~# sed -i
"/Environment=PGDATA/c\Environment=PGDATA=/srv/postgres/data/"
/usr/lib/systemd/system/postgresql-11.service
root@CONS_a04pos18044:~# su - postgres
[postgres@a04pos18044 ~]$ echo "## ETK ADD ON" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "listen_addresses = '*'" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "port = 5432" >> /srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "wal_level = hot_standby" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "synchronous_commit = on" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "archive_mode = on" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "archive_command = 'cp %p
/nfsshare/pg_prod/pg_archive/%f'" >> /srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "max_wal_senders = 5" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "wal_keep_segments = 32" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "hot_standby = on" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "max_standby_archive_delay = -1" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "max_standby_streaming_delay = -1" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "wal_receiver_status_interval = 2" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "hot_standby_feedback = on" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "restart_after_crash = off" >>
/srv/postgres/data/postgresql.conf
[postgres@a04pos18044 ~]$ echo "max_connections = 1499" >>
/srv/postgres/data/postgresql.conf
## Check status:
root@CONS_a04pos18044:~# grep Environment /usr/lib/systemd/system/postgresql-11.service
# Note: avoid inserting whitespace in these Environment= lines, or you may
Environment=PGDATA=/srv/postgres/data/
Environment=PG_OOM_ADJUST_FILE=/proc/self/oom_score_adj
Environment=PG_OOM_ADJUST_VALUE=0
[postgres@a04pos18044 ~]$ cat .bash_profile
[ -f /etc/profile ] && source /etc/profile
PGDATA=/srv/postgres/data
export PGDATA
# If you want to customize your settings,
# Use the file below. This is not overridden
# by the RPMS.
[ -f /var/lib/pgsql/.pgsql_profile ] && source /var/lib/pgsql/.pgsql_profile
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/pgsql-11/bin
export PATH
export PS1='[\u@\h \W]$ '
root@CONS_a04pos18044:~# su - postgres
[postgres@a04pos18044 ~]$ cat /srv/postgres/data/postgresql.conf | tail -21
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------
....
{empty} +
* *Step 10_PG_upgrade:* `*_On OLD SLAVE PG node in cluster_*` - start PostgreSQL
and initialize m2mdb
{empty} +
....
## Commands to start PostgreSQL and initialize m2mdb:
su - postgres
/usr/pgsql-11/bin/pg_ctl -D /srv/postgres/data -l logfile start
psql -c "ALTER USER postgres PASSWORD 'P0stgr3s';"
exit
/opt/tmp/consolidator-DB-objects/5.6.0/CXC1734724_P1A63_Postgres/01_install.bsh
....
....
## Expected output:
root@CONS_a04pos18044:~# su - postgres
Last login: Fri Nov 27 15:28:45 CET 2020 on pts/2
[postgres@a04pos18044 ~]$ /usr/pgsql-11/bin/pg_ctl -D /srv/postgres/data -l logfile
start
waiting for server to start.... done
server started
[postgres@a04pos18044 ~]$ psql -c "ALTER USER postgres PASSWORD 'P0stgr3s';"
ALTER ROLE
[postgres@a04pos18044 ~]$ exit
root@CONS_a04pos18044:~#
/opt/tmp/consolidator-DB-objects/5.6.0/CXC1734724_P1A63_Postgres/01_install.bsh
.........
-------------------------------------
Done - M2M DM DB successfully created
-------------------------------------
## Check status:
root@CONS_a04pos18044:~# su - postgres
Last login: Fri Nov 27 16:08:48 CET 2020 on pts/1
[postgres@a04pos18044 ~]$ psql
psql (11.10)
Type "help" for help.
postgres=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 m2mdb     | m2mdb    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(4 rows)
....
{empty} +
* *Step 11_PG_upgrade:* `*_On OLD SLAVE PG node in cluster_*` - configure and add
resource pgclusterDb to newly created cluster
{empty} +
....
## Commands to configure and add resource pgclusterDb:
cp /usr/lib/ocf/resource.d/heartbeat/pgsql /usr/lib/ocf/resource.d/heartbeat/pgsql.95
sed -i "/^OCF_RESKEY_pgctl_default/c\OCF_RESKEY_pgctl_default=/usr/pgsql-11/bin/pg_ctl" /usr/lib/ocf/resource.d/heartbeat/pgsql
sed -i "/^OCF_RESKEY_psql_default/c\OCF_RESKEY_psql_default=/usr/pgsql-11/bin/psql" /usr/lib/ocf/resource.d/heartbeat/pgsql
## Check status:
root@CONS_a04pos18044:~# grep OCF_RESKEY_psql_default /usr/lib/ocf/resource.d/heartbeat/pgsql
OCF_RESKEY_psql_default=/usr/pgsql-11/bin/psql
: ${OCF_RESKEY_psql=${OCF_RESKEY_psql_default}}
<content type="string" default="${OCF_RESKEY_psql_default}" />
root@CONS_a04pos18044:~# grep OCF_RESKEY_pgctl_default /usr/lib/ocf/resource.d/heartbeat/pgsql
OCF_RESKEY_pgctl_default=/usr/pgsql-11/bin/pg_ctl
: ${OCF_RESKEY_pgctl=${OCF_RESKEY_pgctl_default}}
<content type="string" default="${OCF_RESKEY_pgctl_default}" />
Online: [ a04pos18044 ]
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
....
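The `pcs resource create` command for pgclusterDb itself is not captured above; a
minimal sketch, assuming the ocf:heartbeat:pgsql agent that was just adjusted and
illustrative parameter values:
....
## Illustrative sketch only - not the captured command:
pcs resource create pgclusterDb ocf:heartbeat:pgsql pgctl=/usr/pgsql-11/bin/pg_ctl psql=/usr/pgsql-11/bin/psql pgdata=/srv/postgres/data --group pgclusterGroup
....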
{empty} +
* *Step 12_PG_upgrade:* `*_On OLD SLAVE node in old cluster_*` - Configure Bucardo
replication (see the PG_bucardo steps later in this document)
{empty} +
{empty} +
* *Step 13_PG_upgrade:* `*_On BOTH nodes_*` - make backup on old master node and
import it on old slave node
{empty} +
....
On OLD MASTER node (in our case a04pos18043):
cd /tmp
su - postgres -c "pg_dump m2mdb > /tmp/m2mdb.sql"
scp /tmp/m2mdb.sql <old_slave_IP>:/var/lib/pgsql/
....
{empty} +
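The import on the old slave node is not captured in this document; a minimal sketch,
assuming the dump copied above and the m2mdb database created in Step 10_PG_upgrade:
....
## Sketch (not captured): import the dump on the OLD SLAVE node
su - postgres -c "psql m2mdb < /var/lib/pgsql/m2mdb.sql"
....
{empty} +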
* *Step 14_PG_upgrade:* `*_On ALL CONSOLIDATOR nodes_*` - Change jdbc IP to new VIP
{empty} +
NOTE: The following steps need to be executed on one Consolidator after the other to
avoid service disruption.
{empty} +
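No commands are captured in this document for the JDBC change itself. A purely
illustrative sketch, where the configuration file and service names are placeholders
rather than actual Consolidator paths:
....
## Illustrative sketch only - <consolidator_jdbc_config_file> and <consolidator_service>
## are placeholders, not actual Consolidator paths:
sed -i 's/<old_VIP>/<new_VIP>/g' <consolidator_jdbc_config_file>
systemctl restart <consolidator_service>
....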
{empty} +
* *Step 15_PG_upgrade:* `*_On BOTH nodes_*` - Transfer GEO IP to the new cluster
{empty} +
....
## Commands to Transfer GEO IP to the new cluster:
On OLD MASTER node (in our case a04pos18043):
pcs resource disable pgclusterIpGeo
## Check status:
root@CONS_a04pos18043:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
default qlen 1000
link/ether 52:54:00:a2:38:bd brd ff:ff:ff:ff:ff:ff
inet 192.168.18.43/24 brd 192.168.18.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
root@CONS_a04pos18043:~# pcs status
Cluster name: pgcluster
Stack: corosync
Current DC: a04pos18043 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Fri Nov 27 17:04:03 2020
Last change: Fri Nov 27 17:03:10 2020 by root via cibadmin on a04pos18043
1 node configured
6 resources configured (2 DISABLED)
Online: [ a04pos18043 ]
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
root@CONS_a04pos18044:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 127.0.0.1/26 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
default qlen 1000
link/ether 52:54:00:00:1e:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.18.44/24 brd 192.168.18.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
root@CONS_a04pos18044:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group
default qlen 1000
link/ether 52:54:00:00:1e:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.18.44/24 brd 192.168.18.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 192.168.18.45/26 brd 192.168.18.255 scope global eth0
valid_lft forever preferred_lft forever
root@CONS_a04pos18044:~# pcs status
Cluster name: pgcluster
Stack: corosync
Current DC: a04pos18044 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Fri Nov 27 17:04:11 2020
Last change: Fri Nov 27 17:03:18 2020 by root via cibadmin on a04pos18044
1 node configured
6 resources configured
Online: [ a04pos18044 ]
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
....
{empty} +
* *Step 16_PG_upgrade:* `*_On OLD MASTER node_*` - stop and disable Bucardo
{empty} +
....
## Commands to stop and disable bucardo:
systemctl stop bucardo
systemctl disable bucardo
....
....
## Expected output:
root@CONS_a04pos18043:~# systemctl stop bucardo
Killed
root@CONS_a04pos18043:~# systemctl disable bucardo
Removed symlink /etc/systemd/system/multi-user.target.wants/bucardo.service.
Removed symlink /etc/systemd/system/bucardo.service.
## Check status:
root@CONS_a04pos18043:~# systemctl status bucardo
● bucardo.service - SYSV: Bucardo replication service
Loaded: loaded (/usr/lib/systemd/system/bucardo.service; disabled; vendor
preset: disabled)
Active: failed (Result: signal) since Fri 2020-11-27 17:42:28 CET; 23s ago
Docs: man:systemd-sysv-generator(8)
....
{empty} +
....
## Commands to destroy the old cluster on the OLD MASTER node:
pcs cluster destroy --all
....
....
## Expected output:
root@CONS_a04pos18043:~# pcs cluster destroy --all
a04pos18043: Stopping Cluster (pacemaker)...
a04pos18043: Successfully destroyed cluster
## Check status:
root@CONS_a04pos18043:~# pcs status
Error: cluster is not currently running on this node
....
{empty} +
* *Step 18_PG_upgrade:* `*_On OLD MASTER node_*` - install PostgreSQL 11.10 and
remove PostgreSQL 9.5
{empty} +
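The package commands for this step are not captured for the old master; they mirror
what was done on the old slave node earlier. A minimal sketch, assuming the same pgdg11
repository is configured on this node:
....
## Sketch (mirrors the OLD SLAVE node; exact output not captured here):
yum install -y postgresql11-server postgresql11-plperl
yum erase -y postgresql95 postgresql95-libs
....
{empty} +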
....
## Commands to setup DRBD replication:
sed -i 's/127.0.0.1/<old_master_ip>/g' /etc/drbd.d/postgres.res
drbdadm adjust postgres
....
....
## Expected output:
root@CONS_a04pos18043:~# sed -i 's/127.0.0.1/192.168.18.43/g'
/etc/drbd.d/postgres.res
root@CONS_a04pos18043:~# drbdadm adjust postgres
## Check status:
root@CONS_a04pos18043:~# grep address /etc/drbd.d/postgres.res
address 192.168.18.43:7791;
address 192.168.18.44:7791;
....
{empty} +
* *Step 20_PG_upgrade:* `*_On BOTH nodes_*` - set up DRBD replication between the old
master and old slave nodes and sync DRBD
{empty} +
....
## Commands to update DRBD replication on OLD MASTER node:
sed -i 's/127.0.0.1/<old_slave_ip>/g' /etc/drbd.d/postgres.res
drbdadm adjust postgres
drbdadm disconnect postgres
modprobe drbd
drbdsetup detach /dev/drbd0
drbdsetup del-minor /dev/drbd0
drbdadm create-md postgres
drbdadm up postgres
drbdadm disconnect postgres
drbdadm connect --discard-my-data postgres
On OLD SLAVE node (in our case 192.168.18.44):
drbdadm connect postgres
....
....
## Expected output:
root@CONS_a04pos18043:~# sed -i 's/127.0.0.1/192.168.18.44/g' /etc/drbd.d/postgres.res
root@CONS_a04pos18043:~# drbdadm adjust postgres
root@CONS_a04pos18043:~# drbdadm disconnect postgres
root@CONS_a04pos18043:~# modprobe drbd
root@CONS_a04pos18043:~# drbdsetup detach /dev/drbd0
root@CONS_a04pos18043:~# drbdsetup del-minor /dev/drbd0
root@CONS_a04pos18043:~# drbdadm create-md postgres
You want me to create a v08 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/vdb at byte offset 10739314688
md_offset 10739314688
al_offset 10739281920
bm_offset 10738950144
Even though it looks like this would place the new meta data into
unused space, you still need to confirm, as this is only a guess.
## Check status:
root@CONS_a04pos18043:~# drbdadm status
postgres role:Secondary
disk:Inconsistent
peer role:Primary
replication:SyncTarget peer-disk:UpToDate done:4.74
....
{empty} +
....
## Commands to configure the postgres user environment:
su - postgres
sed -i "/PGDATA=/c\PGDATA=/srv/postgres/data" .bash_profile
echo "PATH=$PATH:/usr/pgsql-11/bin" >> .bash_profile
echo "export PATH" >> .bash_profile
echo "export PS1='[\u@\h \W]\$ '" >> .bash_profile
exit
....
....
## Expected output:
root@CONS_a04pos18043:~# cp /usr/lib/ocf/resource.d/heartbeat/pgsql
/usr/lib/ocf/resource.d/heartbeat/pgsql.95
root@CONS_a04pos18043:~# sed -i "/^OCF_RESKEY_pgctl_default/c\
OCF_RESKEY_pgctl_default=/usr/pgsql-11/bin/pg_ctl"
/usr/lib/ocf/resource.d/heartbeat/pgsql
root@CONS_a04pos18043:~# sed -i "/^OCF_RESKEY_psql_default/c\
OCF_RESKEY_psql_default=/usr/pgsql-11/bin/psql"
/usr/lib/ocf/resource.d/heartbeat/pgsql
root@CONS_a04pos18043:~# su - postgres
-bash-4.2$ sed -i "/PGDATA=/c\PGDATA=/srv/postgres/data" .bash_profile
-bash-4.2$ echo "PATH=$PATH:/usr/pgsql-11/bin" >> .bash_profile
-bash-4.2$ echo "export PATH" >> .bash_profile
-bash-4.2$ echo "export PS1='[\u@\h \W]\$ '" >> .bash_profile
## Check status:
root@CONS_a04pos18043:~# grep OCF_RESKEY_pgctl_default
/usr/lib/ocf/resource.d/heartbeat/pgsql
OCF_RESKEY_pgctl_default=/usr/pgsql-11/bin/pg_ctl
: ${OCF_RESKEY_pgctl=${OCF_RESKEY_pgctl_default}}
<content type="string" default="${OCF_RESKEY_pgctl_default}" />
root@CONS_a04pos18043:~# grep OCF_RESKEY_psql_default
/usr/lib/ocf/resource.d/heartbeat/pgsql
OCF_RESKEY_psql_default=/usr/pgsql-11/bin/psql
: ${OCF_RESKEY_psql=${OCF_RESKEY_psql_default}}
<content type="string" default="${OCF_RESKEY_psql_default}" />
....
{empty} +
* *Step 22_PG_upgrade:* `*_On BOTH nodes_*` - add old master node to new cluster
configuration as slave
{empty} +
NOTE: This step can be started only after the DRBD sync has finished.
{empty} +
....
## Commands to check if the DRBD sync has been finished:
drbdadm status
....
....
## Expected output:
root@CONS_a04pos18044:~# drbdadm status
postgres role:Primary
disk:UpToDate
peer role:Secondary
replication:Established peer-disk:UpToDate
....
{empty} +
....
## Commands to add old master node to new cluster configuration as slave:
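## Sketch only - the exact commands were not captured in this document; assuming pcs
## is run from the new cluster node and <old_master_ID> is the old master's hostname:
pcs cluster auth -u hacluster -p ******* <old_master_ID>
pcs cluster node add <old_master_ID>
pcs cluster start <old_master_ID>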
## Check status:
root@CONS_a04pos18043:~# pcs status
Cluster name: pgcluster
Stack: corosync
Current DC: a04pos18044 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
Last updated: Fri Nov 27 18:39:40 2020
Last change: Fri Nov 27 18:34:44 2020 by hacluster via crmd on a04pos18044
2 nodes configured
5 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
....
{empty} +
* *Step 23_PG_upgrade:* `*_On MASTER node_*` - Check if new slave can be promoted
to master
{empty} +
....
## Command to check if new slave can be promoted to master:
pcs resource move pgclusterGroup <new_slave_ID>
....
....
## Expected output:
root@CONS_a04pos18044:~# pcs resource move pgclusterGroup a04pos18043
root@CONS_a04pos18044:~#
## Check status:
2 nodes configured
6 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
root@CONS_a04pos18044:~#
2 nodes configured
6 resources configured
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
....
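`pcs resource move` leaves a location constraint behind; a minimal sketch (not captured
in the original output) of clearing it once the failover check is done:
....
## Sketch (not captured): remove the constraint created by the move
pcs resource clear pgclusterGroup
....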
{empty} +
....
## Command to check if Bucardo is already installed:
rpm -qa | grep bucardo
....
....
## Expected output:
root@CONS_a04pos18043:~# rpm -qa | grep bucardo
root@CONS_a04pos18043:~#
root@CONS_a04pos18044:~# rpm -qa | grep bucardo
root@CONS_a04pos18044:~#
....
{empty} +
* *Step 02_PG_bucardo:* `*_On OLD SLAVE node_*` - Install Bucardo
{empty} +
....
## Command to install Bucardo:
cd /tmp/
wget http://<REPO_IP>/repositories/CONSOLIDATOR/5.6.0-RC4/Packages/consolidator-bucardo-5.5.0-1.noarch.rpm
yum install -y consolidator-bucardo-5.5.0-1.noarch.rpm
....
....
## Expected output:
root@CONS_a04pos18044:~# cd /tmp/
root@CONS_a04pos18044:/tmp# wget http://192.168.18.10/repositories/CONSOLIDATOR/5.6.0-RC4/Packages/consolidator-bucardo-5.5.0-1.noarch.rpm
--2020-11-27 16:39:55--  http://192.168.18.10/repositories/CONSOLIDATOR/5.6.0-RC4/Packages/consolidator-bucardo-5.5.0-1.noarch.rpm
Connecting to 192.168.18.10:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 242634 (237K) [application/x-rpm]
Saving to: ‘consolidator-bucardo-5.5.0-1.noarch.rpm’
100%[===================================================>] 242,634  --.-K/s  in 0.001s
root@CONS_a04pos18044:/tmp# ll
total 260
drwxr-xr-x 4 m2mdm m2mdm 30 Nov 26 17:47 consolidator
-rw-r--r-- 1 root root 242634 Oct 7 22:51 consolidator-bucardo-5.5.0-
1.noarch.rpm
drwxr-xr-x 4 root root 4096 Nov 26 17:47 CXC1734724_P1A63_Cassandra
drwxr-xr-x 19 postgres postgres 8192 Nov 26 17:48 CXC1734724_P1A63_Postgres
-rw-r--r-- 1 root root 26 Nov 26 17:47 Postgre_CXC_Path
root@CONS_a04pos18043:/tmp# yum install -y consolidator-bucardo-5.5.0-1.noarch.rpm
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-
manager to register.
Examining consolidator-bucardo-5.5.0-1.noarch.rpm: consolidator-bucardo-5.5.0-
1.noarch
Marking consolidator-bucardo-5.5.0-1.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package consolidator-bucardo.noarch 0:5.5.0-1 will be installed
--> Processing Dependency: perl-DBD-Pg for package: consolidator-bucardo-5.5.0-
1.noarch
.............
Installed:
consolidator-bucardo.noarch 0:5.5.0-1
Dependency Installed:
perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7 perl-Compress-Raw-Zlib.x86_64
1:2.061-4.el7 perl-DBD-Pg.x86_64 0:2.19.3-4.el7 perl-DBI.x86_64 0:1.627-4.el7
Complete!
....
{empty} +
* *Step 03_PG_bucardo:* `*_On BOTH nodes_*` - Check Bucardo replication in
pg_hba.conf
{empty} +
NOTE: If the first command doesn't return anything, run the commands shown in the
expected output below; otherwise, skip them.
....
## Command to check Bucardo access in pg_hba.conf:
grep bucardo /srv/postgres/data/pg_hba.conf
....
....
## Expected output:
root@CONS_a04pos18044:~# grep bucardo /srv/postgres/data/pg_hba.conf
root@CONS_a04pos18044:~#
root@CONS_a04pos18044:~# echo "## Bucardo access" >> /srv/postgres/data/pg_hba.conf
root@CONS_a04pos18044:~# echo "local all bucardo trust" >> /srv/postgres/data/pg_hba.conf
....
{empty} +
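If pg_hba.conf had to be modified, the change only takes effect after a configuration
reload; a minimal sketch (not part of the captured procedure), run on the node where
PostgreSQL is currently active:
....
## Sketch (not captured): reload the configuration after editing pg_hba.conf
su - postgres -c "psql -c 'SELECT pg_reload_conf();'"
....
{empty} +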
* *Step 04_PG_bucardo:* `*_On BOTH nodes_*` - Create Bucardo database
{empty} +
....
## Commands:
cd /tmp
runuser -l postgres -c "psql -c 'CREATE DATABASE bucardo;'"
sudo -u postgres psql -c "CREATE USER bucardo WITH LOGIN SUPERUSER ENCRYPTED
PASSWORD 'bucardo-runner';"
sudo -u postgres psql -c "ALTER USER bucardo PASSWORD 'bucardo-runner';"
runuser -l postgres -c "psql -c 'CREATE EXTENSION plperl;'"
....
....
## Expected output:
root@CONS_a04pos18044:~# cd /tmp
root@CONS_a04pos18044:/tmp# runuser -l postgres -c "psql -c 'CREATE DATABASE bucardo;'"
CREATE DATABASE
root@CONS_a04pos18044:/tmp# sudo -u postgres psql -c "CREATE USER bucardo WITH LOGIN SUPERUSER ENCRYPTED PASSWORD 'bucardo-runner';"
CREATE ROLE
root@CONS_a04pos18044:/tmp# sudo -u postgres psql -c "ALTER USER bucardo PASSWORD 'bucardo-runner';"
ALTER ROLE
root@CONS_a04pos18044:/tmp# runuser -l postgres -c "psql -c 'CREATE EXTENSION plperl;'"
CREATE EXTENSION
....
{empty} +
* *Step 05_PG_bucardo:* `*_On BOTH nodes_*` - Set up Bucardo configuration files
{empty} +
....
## Commands:
su - bucardo
cp /opt/consolidator-bucardo/dbschema/bucardo.schema .
echo "export DBPORT=5432" > /opt/consolidator-bucardo/conf/bucardo-env.sh
echo "export DBUSER=bucardo" >> /opt/consolidator-bucardo/conf/bucardo-env.sh
echo "export DBNAME=bucardo" >> /opt/consolidator-bucardo/conf/bucardo-env.sh
....
....
## Expected output:
root@CONS_a04pos18044:/tmp# su - bucardo
[bucardo@a04pos18044 ~]$ cp /opt/consolidator-bucardo/dbschema/bucardo.schema .
[bucardo@a04pos18044 ~]$ echo "export DBPORT=5432" >
/opt/consolidator-bucardo/conf/bucardo-env.sh
[bucardo@a04pos18044 ~]$ echo "export DBUSER=bucardo" >> /opt/consolidator-
bucardo/conf/bucardo-env.sh
[bucardo@a04pos18044 ~]$ echo "export DBNAME=bucardo" >> /opt/consolidator-
bucardo/conf/bucardo-env.sh
[bucardo@a04pos18044 ~]$
[bucardo@a04pos18044 ~]$ echo "dbport=5432" >> $HOME/.bucardorc
[bucardo@a04pos18044 ~]$ echo "dbname=bucardo" >> $HOME/.bucardorc
[bucardo@a04pos18044 ~]$ echo "dbuser=bucardo" >> $HOME/.bucardorc
[bucardo@a04pos18044 ~]$ echo "piddir=/opt/consolidator-bucardo/run" >>
$HOME/.bucardorc
[bucardo@a04pos18044 ~]$ echo "log_level=DEBUG" >> $HOME/.bucardorc
[bucardo@a04pos18044 ~]$ echo
"log_conflict_file=/opt/consolidator-bucardo/log/bucardo_conflict.log" >>
$HOME/.bucardorc
[bucardo@a04pos18044 ~]$ echo
"reason_fil=/opt/consolidator-bucardo/log/bucardo.restart.reason.log" >>
$HOME/.bucardorc
[bucardo@a04pos18044 ~]$ echo
"warning_file=/opt/consolidator-bucardo/log/bucardo.warning.log" >>
$HOME/.bucardorc
[bucardo@a04pos18044 ~]$
[bucardo@a04pos18044 ~]$ echo "192.168.18.44:5432:bucardo:bucardo:bucardo-runner" >
$HOME/.pgpass
[bucardo@a04pos18044 ~]$ chmod 0600 $HOME/.pgpass
[bucardo@a04pos18044 ~]$
[bucardo@a04pos18044 ~]$ echo "dbhost=192.168.18.44" >> $HOME/.bucardorc
[bucardo@a04pos18044 ~]$ echo "export DBHOST=192.168.18.44" >> /opt/consolidator-
bucardo/conf/bucardo-env.sh
[bucardo@a04pos18044 ~]$ echo "export DBPASW=bucardo-runner" > /opt/consolidator-
bucardo/conf/.bucardopwd
[bucardo@a04pos18044 ~]$ echo "dbhost=192.168.18.44" >> $HOME/.bucardorc
You may want to check over the configuration variables next, by running:
bucardo show all
Change any setting by using: bucardo set foo=bar
....
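The output above ends with the message printed by `bucardo install`, although that
command does not appear in the captured command list; a minimal sketch of the missing
step (run as the bucardo user, relying on the .bucardorc settings created above):
....
## Sketch (not captured): install the Bucardo schema into the bucardo database
bucardo install --batch
....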
{empty} +
* *Step 06_PG_bucardo:* `*_On OLD SLAVE node_*` - Set up Bucardo replication
{empty} +
....
## Commands:
bucardo add database <old_slave_ID_upgrade> dbname=m2mdb dbhost=<old_slave_ip> dbuser=bucardo dbpass=bucardo-runner
bucardo add database <old_master_ID_upgrade> dbname=m2mdb dbhost=<old_master_ip> dbuser=bucardo dbpass=bucardo-runner
bucardo add table m2mdb.% db=<old_slave_ID_upgrade> relgroup=m2mdb_relgroup_upgrade
bucardo add dbgroup m2mdb_dbgroup_upgrade <old_master_ID_upgrade>:source <old_slave_ID_upgrade>:target
bucardo add sync m2mdb_sync_upgrade relgroup=m2mdb_relgroup_upgrade dbs=m2mdb_dbgroup_upgrade conflict_strategy=bucardo_latest
....
....
## Expected output:
....
{empty} +
* *Step 01_CASS_upgrade:* `*_On FIRST Cassandra node_*` - Check Cassandra status
{empty} +
....
## Commands to check Cassandra status:
nodetool status
....
....
## Expected output:
root@CONS_a04sac18047:~# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.18.47  1.09 MiB   256     100.0%            9242c25e-e64f-4d1d-bde9-5c7e84fa26bf  rack1
UN  192.168.18.48  1.12 MiB   256     100.0%            d763b6cc-0518-461c-8ccd-e5b6ae6156a6  rack1
UN  192.168.18.49  869.1 KiB  256     100.0%            eec504f1-be03-451f-ac90-b503f115d1ac  rack1
....
{empty} +
* *Step 02_CASS_upgrade:* `*_On FIRST Cassandra node_*` - Check Cassandra rpm
{empty} +
....
## Commands to check Cassandra rpm:
rpm -qa | grep cassandra
....
....
## Expected output:
root@CONS_a04sac18049:~# rpm -qa | grep cassandra
apache-cassandra-3.11.0_E000-1.noarch
....
{empty} +
* *Step 03_CASS_upgrade:* `*_On FIRST Cassandra node_*` - Stop Cassandra service
and remove current rpm
{empty} +
....
## Commands to stop the Cassandra service and remove the current rpm:
systemctl stop cassandra
rpm -e <rpm_name>
....
....
## Expected output:
root@CONS_a04sac18047:~# systemctl stop cassandra
root@CONS_a04sac18047:~# rpm -e apache-cassandra-3.11.0_E000-1.noarch
warning: /etc/cassandra/default.conf/cassandra.yaml saved as
/etc/cassandra/default.conf/cassandra.yaml.rpmsave
## Check status:
root@CONS_a04sac18047:~# nodetool status
-bash: /usr/bin/nodetool: No such file or directory
root@CONS_a04sac18047:~# systemctl status cassandra
● cassandra.service - LSB: start and stop cassandra daemon
Loaded: loaded (/etc/rc.d/init.d/cassandra; bad; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-12-01 14:33:01 CET; 2min 10s
ago
Docs: man:systemd-sysv-generator(8)
Process: 20726 ExecStop=/etc/rc.d/init.d/cassandra stop (code=exited,
status=0/SUCCESS)
Main PID: 13736 (code=exited, status=143)
....
{empty} +
* *Step 04_CASS_upgrade:* `*_On FIRST Cassandra node_*` - Download new Cassandra
rpm from repository
{empty} +
....
## Commands to download new Cassandra rpm from repository:
_repository_IP=<REPO_IP>
cd /opt && wget -A "consolidator-cassandra*.rpm" -q -r -nd http://${_repository_IP}/repositories/DB/Cassandra/3.11.8/
cd /opt && wget -A "jdk-8u2*.rpm" -q -r -nd http://${_repository_IP}/repositories/DB/Cassandra/
....
....
## Expected output:
root@CONS_a04sac18047:~# _repository_IP=192.168.18.10
root@CONS_a04sac18047:~# cd /opt && wget -A "consolidator-cassandra*.rpm" -q -r -nd http://${_repository_IP}/repositories/DB/Cassandra/3.11.8/
root@CONS_a04sac18047:/opt# cd /opt && wget -A "jdk-8u2*.rpm" -q -r -nd http://${_repository_IP}/repositories/DB/Cassandra/
## Check files:
root@CONS_a04sac18047:/opt# ls -lah
total 341M
drwxr-xr-x. 7 root root 4.0K Dec 1 14:36 .
dr-xr-xr-x. 19 root root 267 Nov 30 14:31 ..
-rw-r--r-- 1 root root 57K Nov 30 14:16 cassandra.yaml
-rw-r--r-- 1 root root 57K Dec 1 14:31 cassandra.yaml.preupgrade
-rw-r--r-- 1 root root 37M Oct 23 09:30 consolidator-cassandra-3.11.8-
1.noarch.rpm
drwxr-xr-x 6 m2mdm m2mdm 199 Nov 30 14:31 consolidator-healthcheck-agent
drwxr-xr-x 20 spark spark 4.0K Nov 30 14:33 consolidator-spark-service
-rw-r--r--. 1 root root 12M Mar 12 2020 docker-compose
drwxr-x--- 8 root esa 81 Nov 30 14:32 ESA
-rw-r--r-- 1 root root 2.1K Nov 30 14:31
healthcheck_config.xml_20201130_143156.bkp
-rw-r--r-- 1 root root 2.1K Nov 30 14:31 healthcheck_config.xml.bkp
-rw-r--r-- 1 root root 2.1K Nov 30 14:30 healthcheck_spark.xml
drwxr-xr-x 8 root root 293 Nov 30 14:14 java
-rw-r--r-- 1 root root 172M May 3 2020 jdk-8u251-linux-x64.rpm
-rw-r--r-- 1 root root 122M Sep 15 11:19 jdk-8u261-linux-x64.rpm
drwxr-xr-x 7 root root 164 Nov 30 14:32 tmp
root@CONS_a04sac18047:/opt#
....
{empty} +
* *Step 05_CASS_upgrade:* `*_On FIRST Cassandra node_*` - Install new rpms
{empty} +
....
## Commands to install the new rpms:
rpm -ivh /opt/jdk-8u2*.rpm
rpm -ivh /opt/consolidator-cassandra-3.11.8*
....
....
## Expected output:
root@CONS_a04sac18047:/opt# rpm -ivh /opt/jdk-8u2*.rpm
warning: /opt/jdk-8u251-linux-x64.rpm: Header V3 RSA/SHA256 Signature, key ID
ec551f03: NOKEY
Preparing... ################################# [100%]
package jdk1.8-2000:1.8.0_261-fcs.x86_64 is already installed
package jdk1.8-2000:1.8.0_261-fcs.x86_64 (which is newer than jdk1.8-
2000:1.8.0_251-fcs.x86_64) is already installed
root@CONS_a04sac18047:/opt# rpm -ivh /opt/consolidator-cassandra-3.11.8*
Preparing... ################################# [100%]
Updating / installing...
1:consolidator-cassandra-3.11.8-1 ################################# [100%]
## Check rpms:
root@CONS_a04sac18047:/opt# rpm -qa | grep jdk
jdk1.8-1.8.0_261-fcs.x86_64
root@CONS_a04sac18047:/opt# rpm -qa | grep cassandra
consolidator-cassandra-3.11.8-1.noarch
....
{empty} +
* *Step 06_CASS_upgrade:* `*_On FIRST Cassandra node_*` - Adjust Cassandra
configuration
{empty} +
....
## Commands to adjust Cassandra configuration:
scp -q /opt/consolidator-cassandra/conf/cassandra.yaml.ETK /opt/consolidator-cassandra/conf/cassandra.yaml
....
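A quick way to confirm that the adjusted configuration still carries the site-specific
values is to compare it with the pre-upgrade backup seen earlier in /opt; a minimal
sketch (not part of the captured procedure):
....
## Sketch (not captured): compare the new configuration with the pre-upgrade backup
diff /opt/cassandra.yaml.preupgrade /opt/consolidator-cassandra/conf/cassandra.yaml | less
....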
{empty} +
* *Step 07_CASS_upgrade:* `*_On FIRST Cassandra node_*` - Start Cassandra service
{empty} +
....
## Commands to start the Cassandra service:
systemctl start cassandra
....
....
## Expected output:
root@CONS_a04sac18047:~# systemctl start cassandra
....
{empty} +
* *Step 09_CASS_upgrade:* `*_On FIRST Cassandra node_*` - Check Cassandra service
{empty} +
NOTE: Once you start the Cassandra service you need to log out of the current terminal
session and log in again, or run the following command: `sudo su -`.
....
## Commands to start new shell session and check Cassandra status
sudo su -
nodetool status
cqlsh <first_node_ip> -u cassandra -p ******
....
....
## Expected output:
root@CONS_a04sac18047:~# sudo su -
Last login: Fri Dec 4 11:20:41 CET 2020 from 172.17.71.252 on pts/0
root@CONS_a04sac18047:~# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load        Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.18.47  901.39 KiB  256     100.0%            6737b2ee-0854-41a2-9aae-191e017040ab  rack1
UN  192.168.18.48  946.54 KiB  256     100.0%            595ad5c5-db81-482f-b716-6362e14d6a40  rack1
UN  192.168.18.49  880.72 KiB  256     100.0%            ec1e1fd7-d9b6-4c3d-826f-e4c6682bf090  rack1
....
{empty} +
NOTE: Once the upgrade is successfully done on the first node, repeat the process
for each node in the cluster, one by one.
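{empty} +
An additional check that can be run before moving on to the next node (not part of the
captured procedure) is to verify that all nodes agree on the schema version:
....
## Sketch (not captured): verify schema agreement across the cluster
nodetool describecluster
....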