Installation and Configuration of Oracle 19c RAC With ASM Over Oracle ZFS Storage
Y V Ravi Kumar (OCM, Oracle ACE Director, co-author of three books and technical reviewer of two books)
Introduction
In this article, we demonstrate the installation and configuration of Oracle 19c RAC with ASMLib over the Oracle ZFS Storage Simulator.
Download the Oracle ZFS Storage Simulator for Oracle VirtualBox:
https://www.oracle.com/downloads/server-storage/sun-unified-storage-sun-simulator-downloads.html
Download Oracle VirtualBox, Oracle Database 19c Grid Infrastructure (19.3), and Oracle Database 19c (19.3) for Linux x86-64:
https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html
ABSTRACT
In this article, we are going to demonstrate the installation and configuration of Oracle 19c RAC with ASMLib over the Oracle ZFS Storage Simulator.
Note: We have applied the July 2020 patch for Oracle GI (19.3). See Critical Patch Update (CPU) Program Jul 2020 Patch Availability Document (PAD) (Doc ID 2664876.1).
For the configuration of ZFS on Linux, follow the documentation:
http://www.oracle.com/technetwork/server-storage/sun-unified-storage/documentation/iscsi-lun-linux-2014-pdf-2120982.pdf
We will need the following media to perform the installation of the environment:
Oracle Linux: Download the Oracle Linux 7 Update 2 (64-bit) media from:
https://edelivery.oracle.com
Environment Setup
Having downloaded the ZFS appliance file, we must import it through Import Appliance in the VirtualBox menu.
After importing, we must start the basic Oracle ZFS configuration. Click the “START” button to start the ZFS virtual machine.
On this screen we perform the Oracle ZFS configuration; fill in the information as shown on our screen to simulate the environment.
The password used was oracle. When the entries are complete, press ESC-1 (Done).
After the configuration is finished, we access the console via a browser at https://192.168.2.150:215.
Oracle ZFS configuration - startup of the storage.
Now let's begin the initial configuration of ZFS and prepare it with the disks.
On this screen we have the ZFS network configuration. No changes are necessary; click the “COMMIT” button to save these settings and go to the next step.
On this screen we have the DNS configuration of ZFS. No changes are necessary; click the “COMMIT” button to save these settings and go to the next step. Please make sure the DNS domain is “localdomain”.
On this screen we have the NTP configuration of ZFS. No changes are necessary; click the “COMMIT” button to save these settings and go to the next step.
On this screen we have the NFS configuration of ZFS. No changes are necessary; click the “COMMIT” button to save these settings and go to the next step.
On this screen we have the storage configuration of ZFS. No changes are necessary; click the “COMMIT” button to save these settings and go to the next step. On the last step, click “LATER” and then confirm.
Configuring LUNs on ZFS
A project can be defined in the Oracle ZFS Storage Appliance to group related volumes. A project allows property inheritance for the file systems and LUNs presented from it, and also allows quotas and reservations to be applied. First, click “Configuration”, then “Storage”, specify the size of the pool, import it, and click “Commit”. Then click “SHARES”, then “PROJECTS”, and select the “+” option next to the word Projects.
Note: By default, the complete volume is 74.5 GB, based on the data pool. You can use the striped, mirrored, and other storage profiles.
Now edit the project name to RAC and then click APPLY.
Now that the project is ready, click on the word “LUNs” and then on the “+” next to the word Luns, so that we can create the disk volumes.
Let's start with the creation of the first volume, ASM_DATA1. First select the project “RAC”. After selecting the project, enter the volume name, in this case “ASM_DATA1”, set the volume size to 10 GB and the block size to 8k, and select the group “ISCI_RACS”. Once this is done, click “APPLY”.
We will now repeat the steps to create the second volume: click on the word “LUNs” and then on the “+” next to the word Luns. After selecting the project, enter the volume name “ASM_DATA2”, set the volume size to 10 GB and the block size to 8k, select the group “ISCI_RACS”, and click “APPLY”.
We will now repeat the steps to create the third volume: click on the word “LUNs” and then on the “+” next to the word Luns. After selecting the project, enter the volume name “ASM_DATA3”, set the volume size to 10 GB and the block size to 8k, select the group “ISCI_RACS”, and click “APPLY”.
We will now repeat the steps to create the fourth volume: click on the word “LUNs” and then on the “+” next to the word Luns. After selecting the project, enter the volume name “ASM_DATA4”, set the volume size to 10 GB and the block size to 8k, select the group “ISCI_RACS”, and click “APPLY”.
We will now repeat the steps to create the fifth volume: click on the word “LUNs” and then on the “+” next to the word Luns. After selecting the project, enter the volume name “ASM_RECO1”, set the volume size to 10 GB and the block size to 8k, select the group “ISCI_RACS”, and click “APPLY”.
We will now repeat the steps to create the sixth volume: click on the word “LUNs” and then on the “+” next to the word Luns. After selecting the project, enter the volume name “ASM_RECO2”, set the volume size to 10 GB and the block size to 8k, select the group “ISCI_RACS”, and click “APPLY”.
We will now repeat the steps to create the seventh volume: click on the word “LUNs” and then on the “+” next to the word Luns. After selecting the project, enter the volume name “ASM_RECO3”, set the volume size to 10 GB and the block size to 8k, select the group “ISCI_RACS”, and click “APPLY”.
All the volumes are now created and ready for use by Oracle RAC.
Installing the Oracle Linux iSCSI Initiator on the cluster nodes (rac1 and rac2)
The Oracle Linux iSCSI initiator package is not installed by default, so it must be installed manually. Use the yum command as root to perform a text-based installation, as shown.
Execute the following commands on both nodes, rac1 and rac2.
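The exact command was captured as a screenshot in the original; a minimal sketch, assuming the package name shown in the transaction below (note that yum reports an update rather than a fresh install, so an older version of the package was already present on this system):

[root@rac1 ~]# yum install -y iscsi-initiator-utils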
Dependencies Resolved
===================================================================================
Package Arch Version Repository Size
===================================================================================
Updating:
iscsi-initiator-utils x86_64 6.2.0.874-17.0.3.el7 ol7_latest 429 k
Updating for dependencies:
iscsi-initiator-utils-iscsiuio x86_64 6.2.0.874-17.0.3.el7 ol7_latest 95 k
Transaction Summary
===================================================================================
Upgrade 1 Package (+1 Dependent package)
(2/2): iscsi-initiator-utils-6.2.0.874-17.0.3.el7.x86_64.rpm | 429 kB 00:00:00
-------------------------------------------------------------------------------------------------------
Total 892 kB/s | 524 kB 00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Importing GPG key 0xEC551F03:
Userid : "Oracle OSS group (Open Source Software group) <[email protected]>"
Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
Package : 7:oraclelinux-release-7.2-1.0.5.el7.x86_64 (@anaconda/7.2)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : iscsi-initiator-utils-6.2.0.874-17.0.3.el7.x86_64 1/4
Updating : iscsi-initiator-utils-iscsiuio-6.2.0.874-17.0.3.el7.x86_64 2/4
Cleanup : iscsi-initiator-utils-6.2.0.873-32.0.1.el7.x86_64 3/4
Cleanup : iscsi-initiator-utils-iscsiuio-6.2.0.873-32.0.1.el7.x86_64 4/4
Verifying : iscsi-initiator-utils-iscsiuio-6.2.0.874-17.0.3.el7.x86_64 1/4
Verifying : iscsi-initiator-utils-6.2.0.874-17.0.3.el7.x86_64 2/4
Verifying : iscsi-initiator-utils-iscsiuio-6.2.0.873-32.0.1.el7.x86_64 3/4
Verifying : iscsi-initiator-utils-6.2.0.873-32.0.1.el7.x86_64 4/4
Updated:
iscsi-initiator-utils.x86_64 0:6.2.0.874-17.0.3.el7
Dependency Updated:
iscsi-initiator-utils-iscsiuio.x86_64 0:6.2.0.874-17.0.3.el7
Complete!
[root@rac1 ~]#
[root@rac1 ~]# systemctl list-dependencies iscsid
iscsid.service
● └─system.slice
[root@rac1 ~]#
Note that in your installation the InitiatorName will be different from this documentation, as the identifier is generated during the installation of the iscsi-initiator package.
Make a note of these identifiers, as they will be necessary for our use.
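The identifiers shown in the original screenshots come from the standard initiator name file of iscsi-initiator-utils; you can read them on each node as follows:

[root@rac1 ~]# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1988-12.com.oracle:e7a57ff151db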
To configure CHAP authentication, edit the /etc/iscsi/iscsid.conf file to make the following changes:
To enable CHAP authentication, remove the # character at the beginning of the following line:
node.session.auth.authmethod = CHAP
To set the CHAP username and password, complete the following steps:
Edit the lines that define the CHAP username and password to remove the # character from the
beginning of these lines:
node.session.auth.username = username
node.session.auth.password = password
Change username to the IQN we found. For this example, the username is:
iqn.1988-12.com.oracle:e7a57ff151db - RAC1
iqn.1988-12.com.oracle:aaea2b108131 - RAC2
After that, we must set the username and password so that we can start the transactions with Oracle ZFS.
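Putting it together, the relevant lines of /etc/iscsi/iscsid.conf on rac1 would look like the sketch below; the secret CHAPsecret14 is the initiator CHAP secret that we will register on the appliance later in this article:

node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1988-12.com.oracle:e7a57ff151db
node.session.auth.password = CHAPsecret14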
Configuring the Oracle ZFS Storage Appliance Using the Browser User Interface
As a unified storage platform, the Oracle ZFS Storage Appliance supports access to block protocol LUNs
using iSCSI and Fibre Channel protocols. This section describes how to use the Oracle ZFS Storage
Appliance BUI to configure the Oracle ZFS Storage Appliance to recognize the Oracle Linux host and
present iSCSI LUNs to it.
A “Target Group” is created on the Oracle ZFS Storage Appliance to define the ports and the protocol by
which the LUN will be presented to the Oracle Linux server.
Log in to the Oracle ZFS Storage appliance as the administrator user (root) through a web browser using the following URL (we obtained this URL after the installation of Oracle ZFS Storage).
https://192.168.2.150:215
Enter the user root with the password oracle and click “LOGIN”.
After logging in, click on “CONFIGURATION”, then click on “SAN”, select the option “ISCSI” and finally
click on “TARGET”.
After doing this, click on the “+” sign next to the word target to add the settings below.
In the ALIAS field, define an alias for this TARGET; as shown in the image above, I defined it as “OL”.
Then define the authentication method, which is the CHAP type.
In the Target CHAP name and Target CHAP secret items, the information must match what was configured in iscsid.conf:
Target CHAP NAME = chapuser
Target CHAP SECRET = CHAPsecret22
After completing the configuration, click OK.
Now let's move the created target into the Target Group. Place the cursor on the entry that was created under iSCSI Targets; the move icon appears to the left of the entry, as shown in the image below.
Drag the created target into the Target Group, as shown below.
After making the move, click the pencil icon to edit the target group and change its identification. Now edit the iSCSI target group name for ZFS and click OK.
Our target is now configured so that the servers have access to the Oracle ZFS storage. To save this configuration, click “APPLY”.
An “iSCSI initiator” is defined to restrict which servers have access to a given volume. If more than one
host can write to a given volume simultaneously, inconsistency in the file system cache between hosts
can cause disk image corruption.
To identify the Oracle Linux servers to the Oracle ZFS Storage Appliance, the iSCSI initiators must be registered, as we will now do.
Click on “CONFIGURATION”, then click on “SAN”, select the option “ISCSI” and finally click on
“INITIATORS”.
After doing this click on the “+” sign next to the word Initiators to add the settings below.
The configuration should look like this on RAC1.
Initiator CHAP name = iqn.1988-12.com.oracle:e7a57ff151db
Initiator CHAP secret = CHAPsecret14
After completing the configuration, click OK.
After doing this, click again on the “+” sign next to the word Initiators to add the second node with the
settings below
The configuration should look like this on RAC2.
Initiator CHAP name = iqn.1988-12.com.oracle:aaea2b108131
Initiator CHAP secret = CHAPsecret14
After completing the configuration, click OK.
We now have the initiators for the servers that will be part of the Oracle RAC and will access the storage volumes.
Next, let's move the created initiators into the Initiator Groups. Place the cursor over the entry that was created under Initiators; the move icon appears to the left of the entry, as shown in the image below.
Move initiator RAC1 and then RAC2 to group the initiators, as shown below. You can see them under Initiator Groups (initiators-0 and initiators-1).
The result should look like the image below. Now let's edit the group and give it a name: after making the move, click the pencil icon to edit the group name and change its identification.
Our initiators are now configured so that the servers have access to the same disk volumes. To save this configuration, click “APPLY”.
Configuring volumes on servers
Now that the LUNs are prepared and available over iSCSI, they must be configured for use by the Oracle Linux servers by performing the following steps.
We must perform these steps on both Oracle RAC servers. First, we create entries in /etc/hosts so that DNS is not needed. Edit the /etc/hosts file and include the following entries on both servers.
192.168.2.150 zfs.localdomain zfs
#SCAN IPs
192.168.2.10 racp-scan.localdomain racp-scan
192.168.2.20 racp-scan.localdomain racp-scan
192.168.2.30 racp-scan.localdomain racp-scan
After updating the hosts file on both servers, save the files and execute the commands below (shown in red in the original screenshots).
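The commands themselves were captured as screenshots; a sketch of the usual sequence, using the zfs host alias from /etc/hosts (the login form matches the one used again later in this article):

[root@rac1 ~]# iscsiadm -m discovery -t sendtargets -p zfs
[root@rac1 ~]# iscsiadm -m node -p zfs -l

After the login, fdisk -l shows the new iSCSI disks (/dev/sdb through /dev/sdh), as in the listing below.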
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 209715199 104344576 8e Linux LVM
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disk /dev/sdh: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
[root@rac1 ~]#
[root@rac1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Partition 1 of type Linux and of size 10 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@rac1 ~]#

The same partitioning is repeated for each of the remaining LUNs (/dev/sdc through /dev/sdh).
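Next, we install the ASMLib support package. The yum command was shown as a screenshot; a sketch, assuming that oracleasm-support pulls in the UEK kernel packages listed in the transaction summary below:

[root@rac1 ~]# yum install -y oracleasm-support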
Dependencies Resolved
=========================================================================
Package Arch Version Repository Size
=========================================================================
Installing:
kernel-uek x86_64 3.8.13-118.48.1.el7uek ol7_UEKR3 33 M
kernel-uek-firmware noarch 3.8.13-118.48.1.el7uek ol7_UEKR3 2.2 M
oracleasm-support x86_64 2.1.11-2.el7 ol7_latest 85 k
Transaction Summary
=========================================================================
Install 3 Packages
Running transaction
Installing : kernel-uek-firmware-3.8.13-118.48.1.el7uek.noarch 1/3
Installing : kernel-uek-3.8.13-118.48.1.el7uek.x86_64 2/3
Installing : oracleasm-support-2.1.11-2.el7.x86_64 3/3
Note: Forwarding request to 'systemctl enable oracleasm.service'.
Created symlink from /etc/systemd/system/
multi-user.target.wants/oracleasm.service to
/usr/lib/systemd/system/oracleasm.service.
Verifying : kernel-uek-3.8.13-118.48.1.el7uek.x86_64 1/3
Verifying : oracleasm-support-2.1.11-2.el7.x86_64 2/3
Verifying : kernel-uek-firmware-3.8.13-118.48.1.el7uek.noarch 3/3
Installed:
kernel-uek.x86_64 0:3.8.13-118.48.1.el7uek
kernel-uek-firmware.noarch 0:3.8.13-118.48.1.el7uek
oracleasm-support.x86_64 0:2.1.11-2.el7
Complete!
[root@rac1 ~]#
Configuring oracleasm
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
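The text above is the banner printed by oracleasm configure -i; the interactive answers were captured as a screenshot. A sketch of the dialog, assuming the oracle user and dba group used elsewhere in this article (run it as root on both nodes):

[root@rac1 ~]# oracleasm configure -i
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done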
[root@rac1 ~]# rpm -qa | grep asm
oracleasm-support-2.1.11-2.el7.x86_64
objectweb-asm-3.3.1-9.el7.noarch
kde-plasma-networkmanagement-0.9.0.9-7.el7.x86_64
kde-plasma-networkmanagement-libs-0.9.0.9-7.el7.x86_64
kdeplasma-addons-libs-4.10.5-5.el7.x86_64
kdeplasma-addons-4.10.5-5.el7.x86_64
plasma-scriptengine-python-4.11.19-7.el7.x86_64
kde-settings-plasma-19-23.5.0.1.el7.noarch
libatasmart-0.19-6.el7.x86_64
[root@rac1 ~]#
Writing disk header: done
Instantiating disk: done
[root@rac1 ~]#
Note: If required, also format the LUNs and create the disks using oracleasm on cluster node rac2. Here we formatted the LUNs on rac1 and created all the disks with the oracleasm command.
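The createdisk commands were shown as screenshots; the fragment above ("Writing disk header: done") is the tail of their output. A sketch for the first and last disks, assuming the partitions map to the LUNs in the order they were presented (verify the device-to-LUN mapping on your system before labeling):

[root@rac1 ~]# oracleasm createdisk ASM_DATA1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
...
[root@rac1 ~]# oracleasm createdisk ASM_RECO3 /dev/sdh1
Writing disk header: done
Instantiating disk: done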
[root@rac2 etc]#
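On rac2 it is enough to rescan and list the disks created on rac1; a sketch:

[root@rac2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@rac2 ~]# oracleasm listdisks
ASM_DATA1
ASM_DATA2
ASM_DATA3
ASM_DATA4
ASM_RECO1
ASM_RECO2
ASM_RECO3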
Oracle 19c GRID Installation (Applying July 2020 Patch):
Select the option – Create Local SCAN
Configure SSH Set up
Select the option – Use Oracle Flex ASM for Storage
Select the option – No (you can decide based on storage size and chosen type)
Select the Disk Path – ASM_DATA1, ASM_DATA2, ASM_DATA3 and ASM_DATA4 for DATADG.
Execute the following scripts as the root user on the cluster nodes.
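The installer shows the exact paths; for a Grid home of /u01/app/19.3.0/grid they are normally the two scripts below (the oraInventory location is an assumption):

[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@rac1 ~]# /u01/app/19.3.0/grid/root.sh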
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/19.3.0/grid
Successful addition of voting disk 2f9ebef3841f4f3abf13bd7f07d3fe68.
Successfully replaced voting disk group with +DATADG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 2f9ebef3841f4f3abf13bd7f07d3fe68 (/dev/oracleasm/disks/ASM_DATA1) [DATADG]
Located 1 voting disk(s).
2020/08/16 14:30:10 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/08/16 14:31:59 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/08/16 14:31:59 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/08/16 14:35:02 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/08/16 14:36:04 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]#
2020/08/16 14:37:07 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/08/16 14:37:07 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/08/16 14:37:08 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/08/16 14:37:10 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/08/16 14:37:10 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2020/08/16 14:37:30 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2020/08/16 14:37:30 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2020/08/16 14:37:32 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2020/08/16 14:37:32 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2020/08/16 14:37:55 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2020/08/16 14:37:55 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2020/08/16 14:37:57 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2020/08/16 14:37:59 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
Redirecting to /bin/systemctl restart rsyslog.service
2020/08/16 14:38:16 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2020/08/16 14:38:34 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/08/16 14:40:58 CLSRSC-343: Successfully started Oracle Clusterware stack
2020/08/16 14:40:58 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2020/08/16 14:42:24 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2020/08/16 14:45:20 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 ~]#
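With the root scripts complete on both nodes, we can check the cluster resources. The command was part of a screenshot, and the header rows of its output were lost with it; the listing below is the typical output of (a sketch, assuming the Grid environment is set for the oracle user):

[oracle@rac1 ~]$ crsctl stat res -t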
ora.LISTENER.lsnr
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.chad
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.net1.network
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
ora.ons
ONLINE ONLINE rac1 STABLE
ONLINE ONLINE rac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE rac1 STABLE
2 ONLINE ONLINE rac2 STABLE
ora.DATADG.dg(ora.asmgroup)
1 ONLINE ONLINE rac1 STABLE
2 ONLINE ONLINE rac2 STABLE
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE rac1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE rac1 169.254.7.133 10.1.4.10,STABLE
ora.asm(ora.asmgroup)
1 ONLINE ONLINE rac1 Started,STABLE
2 ONLINE ONLINE rac2 Started,STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE rac1 STABLE
2 ONLINE ONLINE rac2 STABLE
ora.cvu
1 ONLINE ONLINE rac1 STABLE
ora.mgmtdb
1 ONLINE ONLINE rac1 Open,STABLE
ora.qosmserver
1 ONLINE ONLINE rac1 STABLE
ora.rac1.vip
1 ONLINE ONLINE rac1 STABLE
ora.rac2.vip
1 ONLINE ONLINE rac2 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac2 STABLE
ora.scan2.vip
1 ONLINE ONLINE rac1 STABLE
ora.scan3.vip
1 ONLINE ONLINE rac1 STABLE
--------------------------------------------------------------------------------
[oracle@rac1 ~]$
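The SQL*Plus session was captured as a screenshot; the ASM instance listing below is consistent with a query like this sketch:

SQL> SELECT instance_name, instance_number FROM gv$instance;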
INSTANCE_NAME INSTANCE_NUMBER
---------------- ---------------
+ASM1 1
+ASM2 2
SQL>
Installation of Oracle 19c (19.3.0) RDBMS binaries
Select the option – “Oracle Real Application Clusters database installation” and click “NEXT”.
Check that the two nodes are selected, click the "SSH Connectivity" button, and enter the password for the "oracle" user. Click the "Setup" button to configure SSH connectivity and the "Test" button to test it, then press "Next".
At the “Database Edition” screen, select the Enterprise Edition option and click “NEXT”.
On the “Installation Location” screen, we recommend keeping the suggested defaults; change the path if necessary, and then click “NEXT”.
Keep the defaults in "Operating System Groups" and press "Next". Ignore the warning on the next screen.
In the Prerequisite Checks step, my installation passed without major problems. If anything is flagged for you, review the alerts and resolve the issues before the installation. Once everything is OK, click Install.
Execute the following scripts as the root user on the cluster nodes.
[root@rac2 ~]# /u01/app/oracle/product/19.3.0/db_1/root.sh
Performing root user operation.
Creation and Configuration of Oracle 19c (19.3.0) RAC Database
[oracle@rac1 ~]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/19.3.0/db_1
Oracle user: oracle
Spfile: +DATADG/ORCL/PARAMETERFILE/spfile.299.1048632073
Password file: +DATADG/ORCL/PASSWORD/pwdorcl.283.1048630197
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: RECODG,DATADG
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: oper
Database instances: orcl1,orcl2
Configured nodes: rac1,rac2
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
[oracle@rac1 ~]$
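As with the ASM instances earlier, the query behind the listing below was captured as a screenshot; a sketch:

SQL> SELECT instance_name, instance_number FROM gv$instance;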
INSTANCE_NAME INSTANCE_NUMBER
---------------- ---------------
orcl2 2
orcl1 1
SQL>
SQL>
When restarting the environment, always start the Oracle ZFS Storage first and then start both cluster nodes (rac1 and rac2). The sequence of steps is below.
To stop, log in as the root user and stop the cluster using the following command.
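The stop command itself was shown as a screenshot; since the shutdown log below was captured on rac2 and stops resources on both nodes, it was most likely (a sketch):

[root@rac2 ~]# /u01/app/19.3.0/grid/bin/crsctl stop cluster -all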
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac1'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac1'
CRS-2677: Stop of 'ora.orcl.db' on 'rac2' succeeded
CRS-33673: Attempting to stop resource group 'ora.asmgroup' on server 'rac2'
CRS-2673: Attempting to stop 'ora.RECODG.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.DATADG.dg' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'rac2'
CRS-2673: Attempting to stop 'ora.cvu' on 'rac2'
CRS-2677: Stop of 'ora.RECODG.dg' on 'rac2' succeeded
CRS-2677: Stop of 'ora.orcl.db' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.DATADG.dg' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN3.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2'
CRS-2673: Attempting to stop 'ora.scan3.vip' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cvu' on 'rac2' succeeded
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.scan3.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.qosmserver' on 'rac2' succeeded
CRS-2677: Stop of 'ora.chad' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.chad' on 'rac1'
CRS-2677: Stop of 'ora.chad' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.mgmtdb' on 'rac1'
CRS-2677: Stop of 'ora.mgmtdb' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.MGMTLSNR' on 'rac1'
CRS-2677: Stop of 'ora.MGMTLSNR' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
[root@rac2 ~]#
[root@rac1 ~]# iscsiadm -m node -p zfs -l
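After logging back in to the iSCSI targets on each node, rescan and verify the ASM disks; a sketch of the commands (output elided down to the tail preserved below):

[root@rac2 ~]# oracleasm scandisks
[root@rac2 ~]# oracleasm listdisks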
ASM_RECO2
ASM_RECO3
[root@rac2 ~]#
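To connect to the database, set the environment with oraenv; the prompts below correspond to (a sketch):

[oracle@rac1 ~]$ . oraenv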
ORACLE_SID = [+ASM1] ? orcl
ORACLE_HOME = [/home/oracle] ? /u01/app/oracle/product/19.3.0/db_1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@rac1 ~]$
Summary: We have installed Oracle 19c GI (19.8.0, after applying the July 2020 patch) and Oracle Database (19.3.0) on Oracle ZFS Storage (OS8.8). We hope we have covered all the steps needed to configure Oracle ZFS Storage for the cluster nodes. Good luck, and thanks for reading this article.