
Installation and Configuration of Oracle 19c RAC with ASM over Oracle ZFS Storage
Y V Ravi Kumar (OCM, Oracle ACE Director, Co-Author (x3) books and Technical Reviewer (x2) books)

Introduction
In this article, we are going to demonstrate the installation and configuration of Oracle 19c RAC with ASMLib over the Oracle ZFS Storage Simulator.

Download the Oracle ZFS Storage Simulator for Oracle VirtualBox:
https://www.oracle.com/downloads/server-storage/sun-unified-storage-sun-simulator-downloads.html

Download Oracle VirtualBox, Oracle Database 19c Grid Infrastructure (19.3) and Oracle Database 19c (19.3) for Linux x86-64:
https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html

Note: We have applied the July 2020 patch to Oracle GI (19.3). See Critical Patch Update (CPU) Program Jul 2020 Patch Availability Document (PAD) (Doc ID 2664876.1).

For the configuration of ZFS on Linux, follow the documentation:
http://www.oracle.com/technetwork/server-storage/sun-unified-storage/documentation/iscsi-lun-linux-2014-pdf-2120982.pdf

We will need the following media to perform the installation of the environment:
Oracle Linux: download the Oracle Linux 7 Update 2 (64-bit) media from:
https://edelivery.oracle.com
Environment Setup

Installing Oracle ZFS Storage

Using the file that was downloaded for the ZFS appliance, we must import it through Import Appliance in the VirtualBox menu.

After importing, we must perform the basic Oracle ZFS configuration. Click on the “START” button to start the ZFS virtual machine.

On this screen we will perform the Oracle ZFS configuration; fill in the information as shown on our screen to simulate the environment.

The password used was oracle. Then press ESC-1 (Done) to finish.

After the configuration is finished, we will access the console via a browser at https://192.168.2.150:215

The user we use is “root” and the password is “oracle”.

Oracle ZFS configuration - Startup Storage.
Now let's start the initial configuration of ZFS and prepare its disks.

Some information about ZFS is presented on this screen, click on “START”.

On this screen we have the ZFS network configuration; it is not necessary to make any changes. Click on the “COMMIT” button to save these settings and go to the next step.

On this screen we have the DNS configuration of ZFS; it is not necessary to make any changes. Click on the “COMMIT” button to save these settings and go to the next step. Please make sure the DNS Domain is “localdomain”.

On this screen we have the NTP configuration of ZFS; it is not necessary to make any changes. Click on the “COMMIT” button to save these settings and go to the next step.

On this screen we have the NFS configuration of ZFS; it is not necessary to make any changes. Click on the “COMMIT” button to save these settings and go to the next step.

On this screen we have the STORAGE configuration of ZFS; it is not necessary to make any changes. Click on the “COMMIT” button to save these settings and go to the next step. On the last step click on “LATER” and then confirm.

Configuring LUNs on ZFS
A project can be defined in the Oracle ZFS Storage Appliance to group related volumes. A project allows property inheritance for the file systems and LUNs presented from the project, and also allows quotas and reservations to be applied. Before creating the project, click on “Configuration” and then “Storage”, specify the size of the pool, import it and click on “COMMIT”. Then click on “SHARES”, click on “PROJECTS”, and select the “+” option next to the word Projects.

Note: By default, the complete volume is 74.5 GB and is based on the data pool. You can use the striped, mirrored and other layout options.

Now edit the project name to RAC and then click on “APPLY”.

Now that the project is ready, click on the word “LUNs” and then on the “+” next to the word LUNs, so that we can create the disk volumes.

Now let's start the creation of the first volume, ASM_DATA1. First select the project “RAC”. After selecting the project, set the name of the volume (in this case “ASM_DATA1”), set the volume size to 10 GB and the block size to 8k, and select the group “ISCI_RACS”. After this is done, click on “APPLY”.

We will now repeat the steps to create the second volume: click on the word “LUNs” and then on the “+” next to the word LUNs. After selecting the project, set the name of the volume to “ASM_DATA2”, set the volume size to 10 GB and the block size to 8k, and select the group “ISCI_RACS”. After this is done, click on “APPLY”.

We will now repeat the steps to create the third volume: click on the word “LUNs” and then on the “+” next to the word LUNs. After selecting the project, set the name of the volume to “ASM_DATA3”, set the volume size to 10 GB and the block size to 8k, and select the group “ISCI_RACS”. After this is done, click on “APPLY”.

We will now repeat the steps to create the fourth volume: click on the word “LUNs” and then on the “+” next to the word LUNs. After selecting the project, set the name of the volume to “ASM_DATA4”, set the volume size to 10 GB and the block size to 8k, and select the group “ISCI_RACS”. After this is done, click on “APPLY”.

We will now repeat the steps to create the fifth volume: click on the word “LUNs” and then on the “+” next to the word LUNs. After selecting the project, set the name of the volume to “ASM_RECO1”, set the volume size to 10 GB and the block size to 8k, and select the group “ISCI_RACS”. After this is done, click on “APPLY”.

We will now repeat the steps to create the sixth volume: click on the word “LUNs” and then on the “+” next to the word LUNs. After selecting the project, set the name of the volume to “ASM_RECO2”, set the volume size to 10 GB and the block size to 8k, and select the group “ISCI_RACS”. After this is done, click on “APPLY”.

We will now repeat the steps to create the seventh volume: click on the word “LUNs” and then on the “+” next to the word LUNs. After selecting the project, set the name of the volume to “ASM_RECO3”, set the volume size to 10 GB and the block size to 8k, and select the group “ISCI_RACS”. After this is done, click on “APPLY”.

We now have all the volumes created for use in Oracle RAC.

Installing the Oracle Linux iSCSI Initiator in cluster nodes (rac1 and rac2)

The Oracle Linux iSCSI initiator package is not installed by default, so it must be installed manually. Use the yum command as root to perform a text-based installation as shown:

Execute the following commands on both the nodes: rac1 and rac2

[root@rac1 ~]# yum install iscsi-initiator-utils


Loaded plugins: langpacks, ulninfo
ol7_UEKR3 2.5 kB 00:00:00
(1/5): ol7_UEKR3/x86_64/updateinfo 116 kB 00:00:00
ol7_latest 2.7 kB 00:00:00
(2/5): ol7_latest/x86_64/group 660 kB 00:00:00
(3/5): ol7_latest/x86_64/updateinfo 2.9 MB 00:00:02
(4/5): ol7_latest/x86_64/primary_db 35 MB 00:00:06
(5/5): ol7_UEKR3/x86_64/primary_db 66 MB 00:00:16
Resolving Dependencies
--> Running transaction check
---> Package iscsi-initiator-utils.x86_64 0:6.2.0.873-32.0.1.el7 will be updated
--> Processing Dependency: iscsi-initiator-utils = 6.2.0.873-32.0.1.el7 for package: iscsi-initiator-utils-
iscsiuio-6.2.0.873-32.0.1.el7.x86_64
---> Package iscsi-initiator-utils.x86_64 0:6.2.0.874-17.0.3.el7 will be an update
--> Running transaction check
---> Package iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.873-32.0.1.el7 will be updated
---> Package iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-17.0.3.el7 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================
Package Arch Version Repository Size
===================================================================================
Updating:
iscsi-initiator-utils x86_64 6.2.0.874-17.0.3.el7 ol7_latest 429 k
Updating for dependencies:
iscsi-initiator-utils-iscsiuio x86_64 6.2.0.874-17.0.3.el7 ol7_latest 95 k

Transaction Summary
===================================================================================
Upgrade 1 Package (+1 Dependent package)

Total download size: 524 k


Is this ok [y/d/N]: y
Downloading packages:
No Presto metadata available for ol7_latest
warning: /var/cache/yum/x86_64/7Server/ol7_latest/packages/iscsi-initiator-utils-iscsiuio-6.2.0.874-
17.0.3.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Public key for iscsi-initiator-utils-iscsiuio-6.2.0.874-17.0.3.el7.x86_64.rpm is not installed
(1/2): iscsi-initiator-utils-iscsiuio-6.2.0.874-17.0.3.el7.x86_64.rpm | 95 kB 00:00:00

(2/2): iscsi-initiator-utils-6.2.0.874-17.0.3.el7.x86_64.rpm | 429 kB 00:00:00
-------------------------------------------------------------------------------------------------------
Total 892 kB/s | 524 kB 00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
Userid : "Oracle OSS group (Open Source Software group) <[email protected]>"
Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
Package : 7:oraclelinux-release-7.2-1.0.5.el7.x86_64 (@anaconda/7.2)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : iscsi-initiator-utils-6.2.0.874-17.0.3.el7.x86_64 1/4
Updating : iscsi-initiator-utils-iscsiuio-6.2.0.874-17.0.3.el7.x86_64 2/4
Cleanup : iscsi-initiator-utils-6.2.0.873-32.0.1.el7.x86_64 3/4
Cleanup : iscsi-initiator-utils-iscsiuio-6.2.0.873-32.0.1.el7.x86_64 4/4
Verifying : iscsi-initiator-utils-iscsiuio-6.2.0.874-17.0.3.el7.x86_64 1/4
Verifying : iscsi-initiator-utils-6.2.0.874-17.0.3.el7.x86_64 2/4
Verifying : iscsi-initiator-utils-iscsiuio-6.2.0.873-32.0.1.el7.x86_64 3/4
Verifying : iscsi-initiator-utils-6.2.0.873-32.0.1.el7.x86_64 4/4

Updated:
iscsi-initiator-utils.x86_64 0:6.2.0.874-17.0.3.el7

Dependency Updated:
iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-17.0.3.el7

Complete!
[root@rac1 ~]#

[root@rac1 ~]# chkconfig iscsi on


Note: Forwarding request to 'systemctl enable iscsi.service'.
Created symlink from /etc/systemd/system/remote-fs.target.wants/iscsi.service to
/usr/lib/systemd/system/iscsi.service.
[root@rac1 ~]#

[root@rac1 ~]# chkconfig iscsid on


Note: Forwarding request to 'systemctl enable iscsid.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/iscsid.service to
/usr/lib/systemd/system/iscsid.service.

[root@rac1 ~]# systemctl list-dependencies iscsi


iscsi.service
● ├─iscsi-shutdown.service
● ├─iscsid.service
● ├─system.slice
● └─remote-fs-pre.target
[root@rac1 ~]#

[root@rac1 ~]# systemctl list-dependencies iscsid
iscsid.service
● └─system.slice
[root@rac1 ~]#

[root@rac1 ~]# service iscsi start


Redirecting to /bin/systemctl start iscsi.service

[root@rac1 ~]# service iscsid start


Redirecting to /bin/systemctl start iscsid.service

Note: Repeat the same commands in another cluster node (rac2)
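Note: The chkconfig and service calls above are simply forwarded to systemd, so on Oracle Linux 7 the same setup can be expressed directly with systemctl. A minimal equivalent sketch for either node:

[root@rac1 ~]# systemctl enable iscsi iscsid      # enable both services at boot
[root@rac1 ~]# systemctl start iscsi iscsid       # start them now
[root@rac1 ~]# systemctl is-active iscsi iscsid   # should report "active" for both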

Identifying the Host IQN for cluster nodes


Now we are going to identify the host IQN of each node, so that we can tell Oracle ZFS which machines are allowed to access the LUNs of the environment.
[root@rac1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:e7a57ff151db
[root@rac1 ~]#

[root@rac2 ~]# cat /etc/iscsi/initiatorname.iscsi


InitiatorName=iqn.1988-12.com.oracle:e7a57ff151db
[root@rac2 ~]#

Note: If both nodes report the same host IQN (as above), use the following method to regenerate it on one of the nodes:

[root@rac2 ~]# mv /etc/iscsi/initiatorname.iscsi /var/tmp/initiatorname.iscsi.backup


[root@rac2 ~]#
[root@rac2 ~]# echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
[root@rac2 ~]#
[root@rac2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:aaea2b108131
[root@rac2 ~]#
[root@rac2 ~]# service iscsi start
Redirecting to /bin/systemctl start iscsi.service
[root@rac2 ~]#
[root@rac2 ~]# service iscsid start
Redirecting to /bin/systemctl start iscsid.service
[root@rac2 ~]#
[root@rac2 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:aaea2b108131
[root@rac2 ~]#

In the installation that you perform, the InitiatorName values will differ from this documentation, as this identifier is generated during installation of the iscsi-initiator package.

Make a note of these identifiers, as they will be needed later.

Setting up CHAP Authentication

These settings must be made on both servers.

To configure CHAP authentication, edit the /etc/iscsi/iscsid.conf file to make the following changes:
To enable CHAP authentication, remove the # character at the beginning of the following line:
node.session.auth.authmethod = CHAP

To set the CHAP username and password, complete the following steps:
Edit the lines that define the CHAP username and password to remove the # character from the
beginning of these lines:
node.session.auth.username = username
node.session.auth.password = password

Change username to the IQN we found. For this example, the username is:
iqn.1988-12.com.oracle:e7a57ff151db - RAC1
iqn.1988-12.com.oracle:aaea2b108131 - RAC2

The configuration should look like this on RAC1.


node.session.auth.username = iqn.1988-12.com.oracle:e7a57ff151db
node.session.auth.password = CHAPsecret14

The configuration should look like this on RAC2.


node.session.auth.username = iqn.1988-12.com.oracle:aaea2b108131
node.session.auth.password = CHAPsecret14

After that we must also set the inbound CHAP username and password, so that Oracle ZFS can authenticate itself to the initiator (mutual CHAP).

Remove the # character in front of the following lines:


node.session.auth.username_in = username
node.session.auth.password_in = password

Define the username and password that will be used.


node.session.auth.username_in = chapuser
node.session.auth.password_in = CHAPsecret22

After making these changes on both servers, save the files.
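As an optional shortcut, the iscsid.conf changes described above can also be applied non-interactively. A minimal sketch using sed, assuming the default commented-out entries in /etc/iscsi/iscsid.conf (the IQN shown is rac1's; substitute rac2's IQN on the second node):

[root@rac1 ~]# cp /etc/iscsi/iscsid.conf /etc/iscsi/iscsid.conf.bak
[root@rac1 ~]# sed -i \
 -e 's|^#\?node.session.auth.authmethod = .*|node.session.auth.authmethod = CHAP|' \
 -e 's|^#\?node.session.auth.username = .*|node.session.auth.username = iqn.1988-12.com.oracle:e7a57ff151db|' \
 -e 's|^#\?node.session.auth.password = .*|node.session.auth.password = CHAPsecret14|' \
 -e 's|^#\?node.session.auth.username_in = .*|node.session.auth.username_in = chapuser|' \
 -e 's|^#\?node.session.auth.password_in = .*|node.session.auth.password_in = CHAPsecret22|' \
 /etc/iscsi/iscsid.conf
[root@rac1 ~]# grep '^node.session.auth' /etc/iscsi/iscsid.conf   # verify the five active lines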

Configuring the Oracle ZFS Storage Appliance Using the Browser User Interface

As a unified storage platform, the Oracle ZFS Storage Appliance supports access to block protocol LUNs
using iSCSI and Fibre Channel protocols. This section describes how to use the Oracle ZFS Storage
Appliance BUI to configure the Oracle ZFS Storage Appliance to recognize the Oracle Linux host and
present iSCSI LUNs to it.

A “Target Group” is created on the Oracle ZFS Storage Appliance to define the ports and the protocol by
which the LUN will be presented to the Oracle Linux server.

Log in to Oracle ZFS Storage as the administrator user (root) through a web browser using the following URL (obtained after the installation of Oracle ZFS Storage):

https://192.168.2.150:215

Enter the user “root” and the password “oracle” and click on “LOGIN”.

After logging in, click on “CONFIGURATION”, then click on “SAN”, select the option “ISCSI” and finally
click on “TARGET”.

After doing this, click on the “+” sign next to the word target to add the settings below.

In the ALIAS field, define an alias for this target; as shown in the image above, I defined it as “OL”. Then define the authentication method, which is the CHAP type.

In the Target CHAP name and Target CHAP secret fields, the information must be the same as that configured in iscsid.conf:
Target CHAP NAME = chapuser
Target CHAP SECRET = CHAPsecret22
After completing the configuration, click OK.

Now let's move the target we created into a Target Group. Place the cursor over the entry that was created under iSCSI Targets.

The move icon appears to the left of the entry, as shown in the image below.

Move the target we created into the Target Group, as shown below.

After making the move, click on the pencil button to edit the target and change its identification. Now edit the iSCSI initiator name for ZFS and click OK.

We now have our target configured so that the servers have access to Oracle ZFS storage. To save this configuration, click on “APPLY”.

Configuring Initiators on Oracle ZFS

An “iSCSI initiator” is defined to restrict which servers have access to a given volume. If more than one
host can write to a given volume simultaneously, inconsistency in the file system cache between hosts
can cause disk image corruption.

To identify the Oracle Linux server for Oracle ZFS Storage Appliance, the iSCSI initiator must be
registered, as we will now do.

Click on “CONFIGURATION”, then click on “SAN”, select the option “ISCSI” and finally click on
“INITIATORS”.

After doing this click on the “+” sign next to the word Initiators to add the settings below.

The configuration should look like this on RAC1.
Initiator CHAP name = iqn.1988-12.com.oracle:e7a57ff151db
Initiator CHAP secret = CHAPsecret14
After completing the configuration, click OK.

After doing this, click again on the “+” sign next to the word Initiators to add the second node with the
settings below

The configuration should look like this on RAC2.
Initiator CHAP name = iqn.1988-12.com.oracle:aaea2b108131
Initiator CHAP secret = CHAPsecret14
After completing the configuration, click OK.
We now have the initiators of the servers that will be part of Oracle RAC and will access the storage volumes.

Now let's move the initiators we created into an Initiator Group. Place the cursor over the entry that was created for the initiators.

The move icon appears to the left of the entry, as shown in the image below.

Move the RAC1 and then the RAC2 initiators to group them as shown below. You can see them under Initiator Groups (Initiators-0 and Initiators-1).

The result should look like the image below. Now let's edit the group and give it a name: after making the move, click on the pencil button to edit the group and change its identification.

Note: Please remove “Initiators-1” after creating the group “ISCI_RACS”.

We now have our initiators configured, so that the servers have access to the same set of disks. To save this configuration, click on “APPLY”.

Configuring volumes on servers

Now that the LUNs are prepared and available over iSCSI, they must be configured for use by the Oracle Linux servers by performing the following steps.
We must perform these steps on both Oracle RAC servers. First, we create an entry in /etc/hosts so that DNS is not needed. Edit the /etc/hosts file and include the following entry on both servers:
192.168.2.150 zfs.localdomain zfs

[root@rac1 ~]# cat /etc/hosts


127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

#rac1 - Public, Private and VIP


192.168.2.100 rac1.localdomain rac1
10.1.4.10 rac1-priv.localdomain rac1-priv
192.168.2.110 rac1-vip.localdomain rac1-vip

#rac2 - Public, Private and VIP


192.168.2.200 rac2.localdomain rac2
10.1.4.20 rac2-priv.localdomain rac2-priv
192.168.2.210 rac2-vip.localdomain rac2-vip

#SCAN IPs
192.168.2.10 racp-scan.localdomain racp-scan
192.168.2.20 racp-scan.localdomain racp-scan
192.168.2.30 racp-scan.localdomain racp-scan

#Oracle ZFS Storage


192.168.2.150 zfs.localdomain zfs
[root@rac1 ~]#

[root@rac2 ~]# cat /etc/hosts


127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

#rac1 - Public, Private and VIP


192.168.2.100 rac1.localdomain rac1
10.1.4.10 rac1-priv.localdomain rac1-priv
192.168.2.110 rac1-vip.localdomain rac1-vip

#rac2 - Public, Private and VIP


192.168.2.200 rac2.localdomain rac2
10.1.4.20 rac2-priv.localdomain rac2-priv
192.168.2.210 rac2-vip.localdomain rac2-vip

#SCAN IPs

192.168.2.10 racp-scan.localdomain racp-scan
192.168.2.20 racp-scan.localdomain racp-scan
192.168.2.30 racp-scan.localdomain racp-scan

#Oracle ZFS Storage


192.168.2.150 zfs.localdomain zfs
[root@rac2 ~]#

After adding these entries to the hosts file on both servers, save the files and execute the commands below.

Formatting the LUNs - RAC1

[root@rac1 ~]# service iscsi start


Redirecting to /bin/systemctl start iscsi.service
[root@rac1 ~]#

[root@rac1 ~]# iscsiadm -m discovery -t sendtargets -p zfs


192.168.2.150:3260,2 iqn.1986-03.com.sun:02:c8fe1586-0fb9-42e3-a603-fc5323a59181
[root@rac1 ~]#

[root@rac1 ~]# iscsiadm -m node -p zfs -l


Logging in to [iface: default, target: iqn.1986-03.com.sun:02:c8fe1586-0fb9-42e3-a603-fc5323a59181,
portal: 192.168.2.150,3260] (multiple)
Login to [iface: default, target: iqn.1986-03.com.sun:02:c8fe1586-0fb9-42e3-a603-fc5323a59181, portal:
192.168.2.150,3260] successful.
[root@rac1 ~]#

[root@rac1 ~]# ls -l /dev/sd?


brw-rw----. 1 root disk 8, 0 Aug 15 20:45 /dev/sda
brw-rw----. 1 root disk 8, 16 Aug 15 23:57 /dev/sdb
brw-rw----. 1 root disk 8, 32 Aug 15 23:57 /dev/sdc
brw-rw----. 1 root disk 8, 48 Aug 15 23:57 /dev/sdd
brw-rw----. 1 root disk 8, 64 Aug 15 23:57 /dev/sde
brw-rw----. 1 root disk 8, 80 Aug 15 23:57 /dev/sdf
brw-rw----. 1 root disk 8, 96 Aug 15 23:57 /dev/sdg
brw-rw----. 1 root disk 8, 112 Aug 15 23:57 /dev/sdh
[root@rac1 ~]#

[root@rac1 ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0002661e

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 209715199 104344576 8e Linux LVM

Disk /dev/mapper/ol_oel72-root: 53.7 GB, 53687091200 bytes, 104857600 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol_oel72-swap: 5301 MB, 5301600256 bytes, 10354688 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ol_oel72-home: 47.8 GB, 47789899776 bytes, 93339648 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes

Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes

Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes

Disk /dev/sde: 10.7 GB, 10737418240 bytes, 20971520 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes

Disk /dev/sdf: 10.7 GB, 10737418240 bytes, 20971520 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes

Disk /dev/sdg: 10.7 GB, 10737418240 bytes, 20971520 sectors


Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disk /dev/sdh: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes

[root@rac1 ~]#
[root@rac1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0xd28413a5.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
[root@rac1 ~]#
[root@rac1 ~]# fdisk /dev/sdc
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x6e30eb70.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p

Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
[root@rac1 ~]#
[root@rac1 ~]# fdisk /dev/sdd
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x36e02eeb.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
[root@rac1 ~]#
[root@rac1 ~]# fdisk /dev/sde
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x7cc1c380.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
[root@rac1 ~]#
[root@rac1 ~]# fdisk /dev/sdf
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x754bc898.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
[root@rac1 ~]#
[root@rac1 ~]# fdisk /dev/sdg
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x5e696479.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
[root@rac1 ~]#
[root@rac1 ~]# fdisk /dev/sdh
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table


Building a new DOS disklabel with disk identifier 0x12f5cf35.

Command (m for help): n


Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w


The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]#

[root@rac1 ~]# partprobe


[root@rac1 ~]#
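Note: The interactive fdisk dialogue above has to be repeated for each of the seven LUNs. As an alternative, the same single-partition layout can be created non-interactively; a sketch assuming the iSCSI LUNs appear as /dev/sdb through /dev/sdh (as in the listing above) and that parted is installed:

[root@rac1 ~]# for d in sdb sdc sdd sde sdf sdg sdh; do
>   parted -s /dev/$d mklabel msdos mkpart primary 2048s 100%   # one primary partition spanning each LUN
> done
[root@rac1 ~]# partprobe
[root@rac1 ~]# ls -l /dev/sd?1   # the new partitions sdb1 through sdh1 should be listed (alongside the OS disk's sda1)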

[root@rac1 ~]# yum install oracleasm-support oracleasmlib oracleasm


Loaded plugins: langpacks, ulninfo
No package oracleasmlib available.
Resolving Dependencies
--> Running transaction check
---> Package kernel-uek.x86_64 0:3.8.13-118.48.1.el7uek will be installed
--> Processing Dependency: kernel-firmware = 3.8.13-118.48.1.el7uek for package: kernel-uek-3.8.13-
118.48.1.el7uek.x86_64
---> Package oracleasm-support.x86_64 0:2.1.11-2.el7 will be installed
--> Running transaction check
---> Package kernel-uek-firmware.noarch 0:3.8.13-118.48.1.el7uek will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================
Package Arch Version Repository Size
=========================================================================
Installing:
kernel-uek x86_64 3.8.13-118.48.1.el7uek ol7_UEKR3 33 M
kernel-uek-firmware noarch 3.8.13-118.48.1.el7uek ol7_UEKR3 2.2 M
oracleasm-support x86_64 2.1.11-2.el7 ol7_latest 85 k

Transaction Summary
=========================================================================
Install 3 Packages

Total download size: 36 M


Installed size: 117 M
Is this ok [y/d/N]: y
Downloading packages:
No Presto metadata available for ol7_UEKR3
(1/3): oracleasm-support-2.1.11-2.el7.x86_64.rpm | 85 kB 00:00:01
(2/3): kernel-uek-firmware-3.8.13-118.48.1.el7uek.noarch.rpm | 2.2 MB 00:00:01
(3/3): kernel-uek-3.8.13-118.48.1.el7uek.x86_64.rpm | 33 MB 00:00:08
----------------------------------------------------------------------------------------------
Total 4.3 MB/s | 36 MB 00:00:08
Running transaction check
Running transaction test
Transaction test succeeded

Running transaction
Installing : kernel-uek-firmware-3.8.13-118.48.1.el7uek.noarch 1/3
Installing : kernel-uek-3.8.13-118.48.1.el7uek.x86_64 2/3
Installing : oracleasm-support-2.1.11-2.el7.x86_64 3/3
Note: Forwarding request to 'systemctl enable oracleasm.service'.
Created symlink from /etc/systemd/system/
multi-user.target.wants/oracleasm.service to
/usr/lib/systemd/system/oracleasm.service.
Verifying : kernel-uek-3.8.13-118.48.1.el7uek.x86_64 1/3
Verifying : oracleasm-support-2.1.11-2.el7.x86_64 2/3
Verifying : kernel-uek-firmware-3.8.13-118.48.1.el7uek.noarch 3/3

Installed:
kernel-uek.x86_64 0:3.8.13-118.48.1.el7uek
kernel-uek-firmware.noarch 0:3.8.13-118.48.1.el7uek
oracleasm-support.x86_64 0:2.1.11-2.el7
Complete!
[root@rac1 ~]#

Configuring oracleasm

[root@rac1 ~]# oracleasm configure -i


Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle


Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm init


Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@rac1 ~]#

Note: Repeat the same commands in another cluster node (rac2)

[root@rac1 ~]# rpm -qa | grep asm
oracleasm-support-2.1.11-2.el7.x86_64
objectweb-asm-3.3.1-9.el7.noarch
kde-plasma-networkmanagement-0.9.0.9-7.el7.x86_64
kde-plasma-networkmanagement-libs-0.9.0.9-7.el7.x86_64
kdeplasma-addons-libs-4.10.5-5.el7.x86_64
kdeplasma-addons-4.10.5-5.el7.x86_64
plasma-scriptengine-python-4.11.19-7.el7.x86_64
kde-settings-plasma-19-23.5.0.1.el7.noarch
libatasmart-0.19-6.el7.x86_64
[root@rac1 ~]#

[root@rac1 ~]# oracleasm status


Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm createdisk ASM_DATA1 /dev/sdb1


Writing disk header: done
Instantiating disk: done
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm createdisk ASM_DATA2 /dev/sdc1


Writing disk header: done
Instantiating disk: done
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm createdisk ASM_DATA3 /dev/sdd1


Writing disk header: done
Instantiating disk: done
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm createdisk ASM_DATA4 /dev/sde1


Writing disk header: done
Instantiating disk: done
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm createdisk ASM_RECO1 /dev/sdf1


Writing disk header: done
Instantiating disk: done
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm createdisk ASM_RECO2 /dev/sdg1


Writing disk header: done
Instantiating disk: done
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm createdisk ASM_RECO3 /dev/sdh1

Writing disk header: done
Instantiating disk: done
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm scandisks


Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@rac1 ~]#

[root@rac1 ~]# /usr/sbin/oracleasm listdisks


ASM_DATA1
ASM_DATA2
ASM_DATA3
ASM_DATA4
ASM_RECO1
ASM_RECO2
ASM_RECO3
[root@rac1 ~]#

Note: Formatting the LUNs and creating the disks with oracleasm can, if required, also be done on cluster node rac2; here we have formatted the LUNs on rac1 and created all the disks there using the oracleasm command. (A scripted form of these createdisk calls is sketched below.)
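For reference, the seven createdisk calls above follow a fixed device-to-label mapping and can be expressed as a short loop; a sketch using only the names and devices already shown:

[root@rac1 ~]# while read label dev; do
>   /usr/sbin/oracleasm createdisk $label $dev   # stamp each partition with its ASMLib label
> done <<EOF
ASM_DATA1 /dev/sdb1
ASM_DATA2 /dev/sdc1
ASM_DATA3 /dev/sdd1
ASM_DATA4 /dev/sde1
ASM_RECO1 /dev/sdf1
ASM_RECO2 /dev/sdg1
ASM_RECO3 /dev/sdh1
EOF
[root@rac1 ~]# /usr/sbin/oracleasm listdisks     # expect all seven labels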

Log in to RAC2 and scan the disks:

[root@rac2 ~]# iscsiadm -m discovery -t sendtargets -p zfs


192.168.2.150:3260,2 iqn.1986-03.com.sun:02:c8fe1586-0fb9-42e3-a603-fc5323a59181
[root@rac2 ~]#

[root@rac2 ~]# iscsiadm -m node -p zfs -l


Logging in to [iface: default, target: iqn.1986-03.com.sun:02:c8fe1586-0fb9-42e3-a603-fc5323a59181,
portal: 192.168.2.150,3260] (multiple)
iscsiadm: Could not login to [iface: default, target: iqn.1986-03.com.sun:02:c8fe1586-0fb9-42e3-a603-
fc5323a59181, portal: 192.168.2.150,3260].
iscsiadm: initiator reported error (24 - iSCSI login failed due to authorization failure)
iscsiadm: Could not log into all portals
[root@rac2 ~]#

[root@rac2 etc]# systemctl restart iscsid


[root@rac2 etc]#

[root@rac2 etc]# systemctl restart iscsi


[root@rac2 etc]#

[root@rac2 etc]# iscsiadm -m discovery -t sendtargets -p zfs


192.168.2.150:3260,2 iqn.1986-03.com.sun:02:c8fe1586-0fb9-42e3-a603-fc5323a59181

[root@rac2 etc]#

[root@rac2 etc]# iscsiadm -m node -p zfs -l


[root@rac2 etc]#

[root@rac2 etc]# ls -l /dev/sd?


brw-rw----. 1 root disk 8, 0 Aug 15 20:44 /dev/sda1
brw-rw----. 1 root disk 8, 16 Aug 16 00:38 /dev/sdb1
brw-rw----. 1 root disk 8, 32 Aug 16 00:38 /dev/sdc1
brw-rw----. 1 root disk 8, 48 Aug 16 00:38 /dev/sdd1
brw-rw----. 1 root disk 8, 64 Aug 16 00:38 /dev/sde1
brw-rw----. 1 root disk 8, 80 Aug 16 00:38 /dev/sdf1
brw-rw----. 1 root disk 8, 96 Aug 16 00:38 /dev/sdg1
brw-rw----. 1 root disk 8, 112 Aug 16 00:38 /dev/sdh1
[root@rac2 etc]#

[root@rac2 etc]# oracleasm status


Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root@rac2 etc]#

[root@rac2 etc]# /usr/sbin/oracleasm scandisks


Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASM_DATA1"
Instantiating disk "ASM_DATA3"
Instantiating disk "ASM_DATA2"
Instantiating disk "ASM_DATA4"
Instantiating disk "ASM_RECO1"
Instantiating disk "ASM_RECO2"
Instantiating disk "ASM_RECO3"
[root@rac2 etc]#

[root@rac2 etc]# /usr/sbin/oracleasm listdisks


ASM_DATA1
ASM_DATA2
ASM_DATA3
ASM_DATA4
ASM_RECO1
ASM_RECO2
ASM_RECO3
[root@rac2 etc]#

Oracle 19c GRID Installation (Applying July 2020 Patch):

Select the option – Set up Software Only

Select the option – Configure an Oracle Standalone Cluster

Select the option – Create Local SCAN

Specify the cluster nodes

Configure SSH Set up

Select Public and ASM & Private interfaces.

Select the option – Use Oracle Flex ASM for Storage

Select the option – Yes to configure GIMR

Select the option – No (you can decide based on storage size and chosen type)

Select the Disk Path – ASM_DATA1, ASM_DATA2, ASM_DATA3 and ASM_DATA4 for DATADG.

Execute the following scripts as the ‘root’ user on the cluster nodes:

[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh


Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 ~]#

[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh


Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 ~]#

[root@rac1 ~]# /u01/app/19.3.0/grid/root.sh


Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/crsdata/rac1/crsconfig/rootcrs_rac1_2020-08-16_02-23-52PM.log
2020/08/16 14:24:08 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/08/16 14:24:08 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/08/16 14:24:08 CLSRSC-363: User ignored prerequisites during installation
2020/08/16 14:24:08 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/08/16 14:24:11 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/08/16 14:24:12 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/08/16 14:24:12 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/08/16 14:24:12 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/08/16 14:24:27 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/08/16 14:24:34 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/08/16 14:24:58 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/08/16 14:24:58 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/08/16 14:25:05 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/08/16 14:25:05 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/08/16 14:25:27 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/08/16 14:25:29 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/08/16 14:25:29 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/08/16 14:25:35 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/08/16 14:25:41 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
Redirecting to /bin/systemctl restart rsyslog.service

ASM has been created and started successfully.

[DBT-30001] Disk groups created successfully. Check /u01/app/oracle/cfgtoollogs/asmca/asmca-200816PM022613.log for details.

2020/08/16 14:27:51 CLSRSC-482: Running command: '/u01/app/19.3.0/grid/bin/ocrconfig -upgrade oracle oinstall'
CRS-4256: Updating the profile

Successful addition of voting disk 2f9ebef3841f4f3abf13bd7f07d3fe68.
Successfully replaced voting disk group with +DATADG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 2f9ebef3841f4f3abf13bd7f07d3fe68 (/dev/oracleasm/disks/ASM_DATA1) [DATADG]
Located 1 voting disk(s).
2020/08/16 14:30:10 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/08/16 14:31:59 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/08/16 14:31:59 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/08/16 14:35:02 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/08/16 14:36:04 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]#

[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh


Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.


The execution of the script is complete.
[root@rac2 ~]# /u01/app/19.3.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/19.3.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:


Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...


Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.3.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/crsdata/rac2/crsconfig/rootcrs_rac2_2020-08-16_02-36-48PM.log
2020/08/16 14:37:06 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/08/16 14:37:06 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/08/16 14:37:06 CLSRSC-363: User ignored prerequisites during installation
2020/08/16 14:37:06 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/08/16 14:37:07 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.

2020/08/16 14:37:07 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/08/16 14:37:07 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/08/16 14:37:08 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/08/16 14:37:10 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/08/16 14:37:10 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/08/16 14:37:30 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/08/16 14:37:30 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/08/16 14:37:32 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/08/16 14:37:32 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/08/16 14:37:55 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/08/16 14:37:55 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/08/16 14:37:57 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/08/16 14:37:59 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
Redirecting to /bin/systemctl restart rsyslog.service
2020/08/16 14:38:16 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/08/16 14:38:34 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/08/16 14:40:58 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/08/16 14:40:58 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/08/16 14:42:24 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/08/16 14:45:20 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#

[oracle@rac1 grid]$ ./gridSetup.sh -applyPSU /home/oracle/31305339/


Preparing the home to patch...
Applying the patch /home/oracle/31305339/...
Successfully applied the patch.
The log can be found at: /tmp/GridSetupActions2020-08-16_07-40-54PM/installerPatchActions_2020-
08-16_07-40-54PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:


/u01/app/19.3.0/grid/install/response/grid_2020-08-16_07-40-54PM.rsp

You can find the log of this install session at:


/tmp/GridSetupActions2020-08-16_07-40-54PM/gridSetupActions2020-08-16_07-40-54PM.log
Moved the install session logs to:
/u01/app/oraInventory/logs/GridSetupActions2020-08-16_07-40-54PM
[oracle@rac1 grid]$

Check the cluster status using the following command

[oracle@rac1 ~]$ grid_env

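Note: grid_env is not an Oracle-supplied command; it is assumed here to be a small helper in the oracle user's ~/.bash_profile that points the environment at the Grid Infrastructure home. A minimal sketch under that assumption (paths follow this article's layout; a matching db_env helper would point at the database home instead):

grid_env () {
  export ORACLE_SID=+ASM1                    # +ASM2 on rac2
  export ORACLE_HOME=/u01/app/19.3.0/grid    # Grid Infrastructure home
  export ORACLE_BASE=/u01/app/oracle
  export PATH=$ORACLE_HOME/bin:$PATH
}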

[oracle@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------

ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.chad
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE rac1 STABLE
2 ONLINE ONLINE rac2 STABLE
ora.DATADG.dg(ora.asmgroup)
1 ONLINE ONLINE rac1 STABLE
2 ONLINE ONLINE rac2 STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac1 169.254.7.133 10.1.4
.10,STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE rac1 STABLE
2 ONLINE ONLINE rac2 STABLE
ora.cvu
1 ONLINE ONLINE rac1 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac1 Open,STABLE
ora.qosmserver
1 ONLINE ONLINE rac1 STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac2 STABLE

ora.scan2.vip
1 ONLINE ONLINE rac1 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac1 STABLE
--------------------------------------------------------------------------------
[oracle@rac1 ~]$

[oracle@rac1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Aug 16 21:12:42 2020


Version 19.8.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0

SQL> select instance_name,instance_number from gv$instance;

INSTANCE_NAME INSTANCE_NUMBER
---------------- ---------------
+ASM1 1
+ASM2 2

SQL>

[oracle@rac1 ~]$ ps -ef | grep pmon


oracle 2001 1 0 20:17 ? 00:00:00 asm_pmon_+ASM1
oracle 7455 3018 0 21:14 pts/0 00:00:00 grep --color=auto pmon
oracle 14843 1 0 20:54 ? 00:00:00 mdb_pmon_-MGMTDB
[oracle@rac1 ~]$

[oracle@rac2 ~]$ ps -ef | grep pmon


oracle 17758 3025 0 21:14 pts/0 00:00:00 grep --color=auto pmon
oracle 24658 1 0 20:27 ? 00:00:00 asm_pmon_+ASM2
[oracle@rac2 ~]$

Using ASMCA – creating diskgroup for Flash Recovery Area (RECODG)

Installation of Oracle 19c (19.3.0) RDBMS binaries

Select the option – Set Up Software Only

Select the option – “Oracle Real Application Clusters database installation” and click “NEXT”.

Check that the 2 nodes are selected, click on the "SSH Connectivity" button and enter the password for
the user "oracle". Click the "Setup" button to configure SSH connectivity, and the "Test" button to test it,
then press "Next".

At the “Database Edition” screen, select the Enterprise Edition option and click “NEXT”.

On the “Installation Location” screen, we recommend following the standard suggestion; if necessary, change the path, and after that click on “NEXT”.

Keep the defaults in “Operating System Groups” and press “Next”. Ignore the warning on the next screen.

In the Prerequisite Checks step, my installation passed without major problems; if anything is flagged for you, check the alerts and solve the problems before the installation. After everything is OK, click on Install.

Execute the following scripts as the ‘root’ user on the cluster nodes:

[root@rac1 ~]# /u01/app/oracle/product/19.3.0/db_1/root.sh


Performing root user operation.

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/19.3.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@rac1 ~]#

[root@rac2 ~]# /u01/app/oracle/product/19.3.0/db_1/root.sh
Performing root user operation.

The following environment variables are set as:


ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/19.3.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:


The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by


Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
[root@rac2 ~]#

[oracle@rac1 db_1]$ export ORACLE_BASE=/u01/app/oracle


[oracle@rac1 db_1]$ export ORACLE_HOME=/u01/app/oracle/product/19.3.0/db_1
[oracle@rac1 db_1]$ ./runInstaller &
[1] 15836
[oracle@rac1 db_1]$ Launching Oracle Database Setup Wizard...
The response file for this session can be found at:
/u01/app/oracle/product/19.3.0/db_1/install/response/db_2020-08-16_09-26-27PM.rsp
You can find the log of this install session at:
/u01/app/oraInventory/logs/InstallActions2020-08-16_09-26-27PM/installActions2020-08-16_09-26-
27PM.log
[1]+ Exit 6 ./runInstaller
[oracle@rac1 db_1]$

Creation and Configuration of Oracle 19c (19.3.0) RAC Database

[oracle@rac1 ~]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/19.3.0/db_1
Oracle user: oracle
Spfile: +DATADG/ORCL/PARAMETERFILE/spfile.299.1048632073
Password file: +DATADG/ORCL/PASSWORD/pwdorcl.283.1048630197
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: RECODG,DATADG
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: orcl1,orcl2
Configured nodes: rac1,rac2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed

[oracle@rac1 ~]$

[oracle@rac1 ~]$ srvctl status database -d orcl


Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2
[oracle@rac1 ~]$

[oracle@rac1 ~]$ sqlplus sys/oracle@orcl as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sun Aug 16 22:54:17 2020


Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select instance_name, instance_number from gv$instance;

INSTANCE_NAME INSTANCE_NUMBER
-------------------------------------------------------------
orcl2 2
orcl1 1

SQL>

SQL> show pdbs

CON_ID CON_NAME OPEN MODE RESTRICTED


---------------------------------------------------------------------------------------------
2 PDB$SEED READ ONLY NO
3 PDB READ WRITE NO

SQL>

Restarting complete stack:

Start Oracle ZFS Storage first, and then start both cluster nodes (rac1 and rac2). The sequence of steps is below.

Log in as the ‘root’ user and stop the cluster using the following command.

[root@rac2 ~]# crsctl stop cluster -all


CRS-2673: Attempting to stop 'ora.crsd' on 'rac2'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac2'
CRS-2673: Attempting to stop 'ora.chad' on 'rac2'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac2'
CRS-2673: Attempting to stop 'ora.qosmserver' on 'rac2'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac1'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac1'
CRS-2677: Stop of 'ora.orcl.db' on 'rac2' succeeded
CRS-33673: Attempting to stop resource group 'ora.asmgroup' on server 'rac2'
CRS-2673: Attempting to stop 'ora.RECODG.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
CRS-2677: Stop of 'ora.RECODG.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.DATADG.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'rac2' succeeded
CRS-2677: Stop of 'ora.chad' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.chad' on 'rac1'
CRS-2677: Stop of 'ora.chad' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac1'
CRS-2677: Stop of 'ora.mgmtdb' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac1'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
[root@rac2 ~]#

Cluster Node (rac1):

[root@rac1 ~]# systemctl restart iscsid


[root@rac1 ~]# systemctl restart iscsi
[root@rac1 ~]# iscsiadm -m discovery -t sendtargets -p zfs
192.168.2.150:3260,2 iqn.1986-03.com.sun:02:c8fe1586-0fb9-42e3-a603-fc5323a59181

[root@rac1 ~]# iscsiadm -m node -p zfs -l

[root@rac1 ~]# /usr/sbin/oracleasm scandisks


Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASM_DATA1"
Instantiating disk "ASM_DATA3"
Instantiating disk "ASM_DATA2"
Instantiating disk "ASM_DATA4"
Instantiating disk "ASM_RECO1"
Instantiating disk "ASM_RECO2"
Instantiating disk "ASM_RECO3"

[root@rac1 ~]# /usr/sbin/oracleasm listdisks


ASM_DATA1
ASM_DATA2
ASM_DATA3
ASM_DATA4
ASM_RECO1
ASM_RECO2
ASM_RECO3
[root@rac1 ~]#

Cluster Node (rac2):

[root@rac2 ~]# systemctl restart iscsid


[root@rac2 ~]# systemctl restart iscsi
[root@rac2 ~]# iscsiadm -m discovery -t sendtargets -p zfs
192.168.2.150:3260,2 iqn.1986-03.com.sun:02:c8fe1586-0fb9-42e3-a603-fc5323a59181
[root@rac2 ~]# iscsiadm -m node -p zfs -l
[root@rac2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "ASM_DATA1"
Instantiating disk "ASM_DATA2"
Instantiating disk "ASM_DATA3"
Instantiating disk "ASM_DATA4"
Instantiating disk "ASM_RECO1"
Instantiating disk "ASM_RECO2"
Instantiating disk "ASM_RECO3"
[root@rac2 ~]# /usr/sbin/oracleasm listdisks
ASM_DATA1
ASM_DATA2
ASM_DATA3
ASM_DATA4
ASM_RECO1

ASM_RECO2
ASM_RECO3
[root@rac2 ~]#
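The per-node storage re-attach steps above are identical on rac1 and rac2, so for convenience they can be collected into one small root script; a sketch using only commands already shown in this article:

#!/bin/bash
# Re-attach the Oracle ZFS iSCSI LUNs and rescan ASMLib disks after a node restart
systemctl restart iscsid iscsi
iscsiadm -m discovery -t sendtargets -p zfs   # "zfs" resolves to 192.168.2.150 via /etc/hosts
iscsiadm -m node -p zfs -l                    # log in to the discovered target
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks                 # expect the seven ASM_* disks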

Starting the cluster stack and Oracle database:

[oracle@rac1 ~]$ su - root


Password:
Last login: Mon Aug 17 09:23:50 CDT 2020 on pts/0

[root@rac1 ~]# . oraenv


ORACLE_SID = [root] ? +ASM1
ORACLE_HOME = [/home/oracle] ? /u01/app/19.3.0/grid
The Oracle base has been set to /u01/app/oracle

[root@rac1 ~]# crsctl start cluster -all


CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac2'
CRS-2676: Start of 'ora.storage' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
[root@rac1 ~]# exit
logout

[oracle@rac1 ~]$ . oraenv

ORACLE_SID = [+ASM1] ? orcl
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/19.3.0/db_1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1 ~]$

[oracle@rac1 ~]$ srvctl start database -d orcl

[oracle@rac1 ~]$ srvctl status database -d orcl


Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2
[oracle@rac1 ~]$

Summary: We now have Oracle 19c Grid Infrastructure (19.8.0, after applying the July 2020 patch) and an Oracle 19.3.0 database running on Oracle ZFS Storage (OS8.8). We hope we have covered all the steps needed to configure Oracle ZFS Storage for the cluster nodes. Good luck, and thanks for reading this article.
