Oracle 11g R2 Grid Infrastructure Installation on a 2-Node Cluster Using VirtualBox
An Oracle 11g R2 RAC system runs on a grid architecture. The master RAC node must have the grid
infrastructure installed, with the remaining nodes replicated from it. The Oracle 11gR2 grid
infrastructure installation integrates Oracle Clusterware and Oracle ASM. Oracle Clusterware is
responsible for the high availability framework, process monitoring, event management, and group
membership. Oracle ASM, in turn, meets the needs of a conventional file system while offering
additional features such as online disk manipulation, automatic I/O load balancing, striping and
mirroring of data, and simplified storage management.
RAC customers and users often report difficulties during the installation phase. Before the grid
infrastructure installation is planned, the system has to be prepared appropriately so that it satisfies
all the clusterware and memory management requirements. Several areas must be reviewed before the
grid installation, such as the network, system kernel parameters, NTPD settings, node connectivity,
and ASM disk setup. This document describes the prerequisites for a system being planned for grid
installation. Please note that the document is not a formal guide for RAC installation but a hands-on
reference for RAC installation on Oracle VirtualBox.
Approach
This document illustrates the installation of Oracle 11gR2 grid infrastructure and a 2-node RAC
database on VirtualBox. Two VirtualBox machine images installed with OEL serve as the cluster
nodes. Initially, we create only one virtual machine image and perform all the configuration
required for the grid installation. The master node is then cloned to create the second node
participating in the cluster.
Memory requirements
The host system running the two virtual machine images must have sufficient memory to run both
images simultaneously. More than 4 GB of RAM is recommended for this demo.
Software Requirements
The following software can be procured from Oracle Technology Network in order to follow this
demonstration.
System setup
Create a VirtualBox machine image and install Oracle Enterprise Linux on it. The recommended
base memory (RAM) and startup disk size are shown in the screenshots below.
a) RAM sizing
b) Virtual disk sizing
c) Network adapter settings - Enable two adapters for bridged networking and one for NAT. The two
bridged adapters will serve as the public and private network interfaces, respectively.
Once the virtual machine image has been created, start the OEL installation using the Enterprise-R5-
U5-Server-x86_64-dvd.iso disk file.
Select the default settings in all the wizards. Disable SELinux and the firewall. Do not create
additional accounts, as user creation and permissions are handled in a separate step. Install the
VirtualBox Guest Additions to enable sharing of the grid and database software files from the host
OS location.
Note that the package kernel-uek-devel-2.6.32-300.32.2.el5uek.rpm must be updated for the
VirtualBox Guest Additions to build and work properly.
a) Mount the shared memory filesystem (tmpfs, mounted at /dev/shm) and ensure that it is large
enough for automatic memory management.
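A minimal sketch of the corresponding /etc/fstab entry, assuming a 2 GB allocation (size the tmpfs to at least the intended MEMORY_TARGET):
# /etc/fstab - shared memory filesystem sizing (the 2g size is an assumption)
tmpfs   /dev/shm   tmpfs   defaults,size=2g   0 0
# Apply without a reboot:
mount -o remount,size=2g /dev/shm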
b) Install the oracle-validated package using yum; the transaction summary reported by yum is shown below.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
oracle-validated x86_64 1.1.0-15.el5 el5_latest 24 k
Updating:
udev x86_64 095-14.27.0.3.el5_7.1 el5_latest 2.4 M
Installing for dependencies:
device-mapper-multipath-libs x86_64 0.4.9-46.0.5.el5 el5_latest 168 k
iscsi-initiator-utils x86_64 6.2.0.872-13.0.1.el5 el5_latest 1.0 M
kernel-uek x86_64 2.6.32-300.32.2.el5uek el5_latest 26 M
kernel-uek-firmware noarch 2.6.32-300.32.2.el5uek el5_latest 3.7 M
libXp i386 1.0.0-8.1.el5 el5_latest 22 k
libaio-devel i386 0.3.106-5 el5_latest 12 k
libaio-devel x86_64 0.3.106-5 el5_latest 11 k
oraclelinux-release x86_64 5-8.0.2 el5_latest 2.7 k
ql2xxx-firmware noarch 1.01.01-0.2.el5 el5_latest 442 k
sysstat x86_64 7.0.2-11.el5 el5_latest 187 k
unixODBC x86_64 2.2.11-10.el5 el5_latest 291 k
unixODBC-devel i386 2.2.11-10.el5 el5_latest 738 k
unixODBC-devel x86_64 2.2.11-10.el5 el5_latest 793 k
unixODBC-libs i386 2.2.11-10.el5 el5_latest 551 k
unixODBC-libs x86_64 2.2.11-10.el5 el5_latest 554 k
Updating for dependencies:
device-mapper-multipath x86_64 0.4.9-46.0.5.el5 el5_latest 97 k
irqbalance x86_64 2:0.55-17.el5 el5_latest 21 k
kexec-tools x86_64 1.102pre-154.0.3.el5_8.1 el5_latest 602 k
kpartx x86_64 0.4.9-46.0.5.el5 el5_latest 465 k
libbdevid-python x86_64 5.1.19.6-75.0.9.el5 el5_latest 69 k
mkinitrd i386 5.1.19.6-75.0.9.el5 el5_latest 482 k
mkinitrd x86_64 5.1.19.6-75.0.9.el5 el5_latest 471 k
nash x86_64 5.1.19.6-75.0.9.el5 el5_latest 1.4 M
util-linux x86_64 2.13-0.59.0.1.el5 el5_latest 1.9 M
Transaction Summary
================================================================================
Install 16 Package(s)
Upgrade 10 Package(s)
c) Edit the /etc/hosts file to add the public, private, and virtual IP addresses for the two proposed
nodes. Note that we include the network configuration for the second node as well; this
pre-configuration keeps the network settings in sync on both nodes once the node is cloned.
#Public
192.168.1.100 rac1.oracle.com rac1
192.168.1.200 rac2.oracle.com rac2
#Private
10.177.240.100 rac1-priv.oracle.com rac1-priv
10.177.240.200 rac2-priv.oracle.com rac2-priv
#Virtual
192.168.1.10 rac1-vip.oracle.com rac1-vip
192.168.1.11 rac2-vip.oracle.com rac2-vip
In addition, manually set the public and private IPs for the eth0 and eth1 network interfaces as shown
in the screen dump below.
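For reference, the interface settings for rac1 can also be placed directly in the ifcfg files; a sketch, assuming a /24 netmask:
# /etc/sysconfig/network-scripts/ifcfg-eth0 (public interface on rac1)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.100
NETMASK=255.255.255.0
ONBOOT=yes
# /etc/sysconfig/network-scripts/ifcfg-eth1 (private interconnect on rac1)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.177.240.100
NETMASK=255.255.255.0
ONBOOT=yes
Restart the network service (service network restart) after editing the files.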
d) Verify the system kernel parameters. The oracle-validated installation resets the parameters as
required for the Oracle database installation.
e) Verify the limits.conf values. Like the kernel parameters, these values are also set by the
oracle-validated package.
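For reference, the entries should be at least the minimums from the Oracle installation guide (the oracle-validated package may set higher values):
# /etc/security/limits.conf - minimum values for the oracle user
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536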
f) Verify the /etc/pam.d/login values and include the entry shown below.
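The entry in question is the pam_limits module, which enforces the limits.conf settings at login:
# /etc/pam.d/login
session required pam_limits.so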
h) Set up NTPD for synchronization and restart it. Include “-x” in the OPTIONS so that the clock is
slewed rather than stepped, as required for cluster time synchronization.
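A sketch of the change in /etc/sysconfig/ntpd followed by the restart (keep any flags already present):
# /etc/sysconfig/ntpd - add -x so the clock slews instead of stepping
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Restart the service to pick up the new options
service ntpd restart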
i) Create the groups and users. Note that the “oracle” user is part of the oinstall and dba groups, as sketched below.
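A minimal sketch of the group and user creation (group and user IDs are left at the defaults here):
groupadd oinstall
groupadd dba
useradd -g oinstall -G dba oracle   # primary group oinstall, secondary group dba
passwd oracle                       # set a password; needed later for the SSH setup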
Add the following environment settings for the “oracle” user in /home/oracle/.bash_profile:
#ORACLE SETTINGS
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=rac1.oracle.com; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC1; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1; export ORACLE_HOME
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_TERM=term; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
PATH=$PATH:$HOME/bin
export PATH
umask 0022
l) Note that the “umask” of the “oracle” user is set to the default “0022”. If not, add the default
umask to .bash_profile and .bashrc as well.
n) Create the shareable disks under “Storage”. Create three new hard disks of fixed size. By default,
they are created as “Normal”. Edit the disk properties from the Virtual Media Manager and make them
“Shareable”. A command-line equivalent is sketched below.
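Equivalently, the disks can be created and marked shareable from the host command line; a sketch with assumed file names and a 5 GB size:
# Create three fixed-size disks (sizes in MB; names and sizes are assumptions)
VBoxManage createhd --filename asm1.vdi --size 5120 --variant Fixed
VBoxManage createhd --filename asm2.vdi --size 5120 --variant Fixed
VBoxManage createhd --filename asm3.vdi --size 5120 --variant Fixed
# Mark them shareable so both nodes can attach them
VBoxManage modifyhd asm1.vdi --type shareable
VBoxManage modifyhd asm2.vdi --type shareable
VBoxManage modifyhd asm3.vdi --type shareable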
o) Restart the virtual machine RAC1 and check the installation of the ASM packages. The rpm
packages “oracleasmlib”, “oracleasm-support” and “oracleasm” are mandatory for the ASM
configuration. The packages “oracleasm-support” and “oracleasm” come with the oracle-validated
installation, while “oracleasmlib” has to be downloaded from OTN for the matching kernel version.
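A quick check for the presence of the three packages:
rpm -qa | grep oracleasm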
p) Configure oracleasm and mark the shared disks as ASM disks, as sketched below.
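A sketch of the configuration, assuming the three shared disks have been partitioned as /dev/sdb1, /dev/sdc1 and /dev/sdd1 (the device names may differ on your system):
# Interactive configuration: answer oracle / dba and enable the driver on boot
/etc/init.d/oracleasm configure
# Label the shared partitions as ASM disks (device names are assumptions)
/etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
/etc/init.d/oracleasm createdisk DISK2 /dev/sdc1
/etc/init.d/oracleasm createdisk DISK3 /dev/sdd1
# Verify the labeled disks
/etc/init.d/oracleasm listdisks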
r) Unzip the Grid Infrastructure archive and change the ownership of the “grid” folder to
“oracle:oinstall”. Install the RPM package “cvuqdisk” from the /install/grid/rpm folder, as sketched below.
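A sketch of the commands as “root”, assuming the software is staged under /install (the zip name varies by release):
cd /install
unzip linux.x64_11gR2_grid.zip            # extracts to /install/grid
chown -R oracle:oinstall /install/grid
# cvuqdisk needs to know the inventory group before installation
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
rpm -Uvh /install/grid/rpm/cvuqdisk-*.rpm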
t) Clone RAC1 to RAC2. This might take some time. Once the cloned image RAC2 is ready, remove
the additional shareable disks from the RAC2 image under the “Storage” attribute and add the existing
shareable disks (from the RAC1 image).
u) Start the RAC2 node. Edit the eth0 and eth1 IP addresses, change the hostname, and edit
/home/oracle/.bash_profile accordingly.
v) Go to node RAC1 and establish the SSH connectivity. The user “oracle” must have a password in
order to establish passwordless connectivity between the two nodes. The script asks for the “oracle”
user passwords and updates the “authorized_keys”, “id_rsa” keys, and “known_hosts” files in
/home/oracle/.ssh.
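The grid media ships a helper for this under grid/sshsetup; a sketch, assuming the staging location used above:
# Run as "oracle" on RAC1; prompts for the oracle password on both nodes
/install/grid/sshsetup/sshUserSetup.sh -user oracle \
  -hosts "rac1 rac2" -noPromptPassphrase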
w) Verify the cluster setup using the cluster verification utility (runcluvfy.sh). It verifies the nodes
for SSH connectivity, user equivalence, free memory, kernel parameters, package existence, and NTPD
clock synchronization. Note that the cluster verification utility is employed here in the pre-cluster
installation stage (-pre crsinst).
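A typical invocation, run as “oracle” from the grid staging area (the path is an assumption):
/install/grid/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose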
<<Output truncated>>
Pre-check for cluster services setup was successful.
Oracle Grid Infrastructure Installation
Launch the Oracle Universal Installer by executing runInstaller from the grid software location as the “oracle” user. Sample output:
Checking Temp space: must be greater than 120 MB. Actual 8213 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3999 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-09-23_05-16-40PM. Please wait ...
c) The installer opens the wizard to select the “Installation Option”. Select “Install and Configure Grid
Infrastructure for a Cluster” and click “Next”.
d) The next wizard prompts for the “Installation Type”. Select “Typical Installation” to use the default
settings and click “Next”. The “Advanced Installation” option lets you select the language, different
passwords, and the GNS settings for the SCAN cluster name.
e) The third wizard performs the “Cluster Configuration”. It shows a default, but modifiable, SCAN
name and only the master RAC node, i.e. RAC1 here.
Click the “Add” button to add the other participating nodes (public hostname and virtual hostname) to
the cluster. Here, rac2.oracle.com (public) and rac2-vip.oracle.com (virtual hostname) are added.
Click “SSH Connectivity” to set up and test the SSH connectivity between the cluster nodes. Provide
the “oracle” user password and click “Test”/”Setup”.
f) The “Install Location” wizard lets you select the “Oracle Base”, the “Software Location” (grid
home), the “Cluster Registry Storage Type”, the SYSASM password, and the OSASM group. Select
the appropriate values and click “Next”.
Note that selecting the correct OSASM group is required for the ASM disks to be listed in the next
wizards.
g) The “ASM Disk Group” wizard lists the configured ASM disks. “DATA” is the default disk group
name. Select “External” redundancy to avoid mirroring of disks in failure groups during the
demonstration (though redundancy is required in production RAC setups). Check all the participating
disks and click “Next”.
If the ASM disks are not listed, specify the correct location in “Change Discovery Path”.
h) The next wizard “Create Inventory” specifies the default location for the storage of installation files.
It is required for the first installation only.
i) The next wizard, “Prerequisite Checks”, checks the system setup for the grid installation. The
checks are the same as the ones we validated in the previous section. After the verification, the
installer proceeds to the “Summary” wizard.
j) The “Summary” wizard lists the summary of the prerequisite checks performed by the installer. If
the checks are successful, the screen below appears. If any checks fail, the page lists the failed checks,
whether each is fixable, and an ‘ignore’ option. Failed checks that can be fixed must be resolved
before proceeding; failed checks that are ignorable can be skipped by selecting the ‘ignore’ option.
Click “Finish” to move ahead.
k) The “Setup” wizard shows the installation tasks and the progress bar.
l) Partway through the installation (at the stage “Execute Root Scripts for Install Grid Infrastructure
for a Cluster”), the dialog box below pops up and prompts for the execution of two scripts on all the
nodes in the cluster. The scripts start the Oracle High Availability Services and cluster processes, and
configure the Oracle grid infrastructure for the cluster. In addition, they create (and start) the ASM
service and the DATA disk group.
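The two scripts are typically the following (the grid home path is an assumption; the dialog box shows the exact paths to use):
# Run as "root", on RAC1 first, then on the remaining nodes
/u01/app/oraInventory/orainstRoot.sh
/u01/app/11.2.0/grid/root.sh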
Execute both scripts on the master node RAC1 first. Upon their successful execution, run the scripts
on the remaining nodes. Sample output of the script execution is shown below.
Checking swap space: must be greater than 500 MB. Actual 3928 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
Similar output can be observed while executing the scripts on the other nodes. The message
“'UpdateNodeList' was successful” confirms the successful execution of the script.
n) The “Finish” wizard confirms the successful installation of the Oracle Grid Infrastructure software.
Oracle Database Software Installation
Once the Oracle grid infrastructure is installed successfully, the Oracle database software installation
can be initiated. The step-by-step listing with related descriptions follows.
a) Unzip the database software zip files as the “root” user, then modify the owner and group of the
“database” folder, as sketched below.
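A sketch of the commands as “root”, assuming the same /install staging area (the zip names vary by release):
cd /install
unzip linux.x64_11gR2_database_1of2.zip   # extracts to /install/database
unzip linux.x64_11gR2_database_2of2.zip
chown -R oracle:oinstall /install/database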
b) Launch the installer by executing runInstaller from the “database” folder as the “oracle” user. Sample output:
Checking Temp space: must be greater than 120 MB. Actual 4870 MB Passed
Checking swap space: must be greater than 150 MB. Actual 3999 MB Passed
Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-09-24_05-28-58AM. Please wait ...
c) The “Configure Security Updates” wizard asks for an email address to receive security updates.
Since it is not mandatory, uncheck the option and click “Next”.
d) In the “Installation Option” wizard, select “Install database software only” and click “Next”. This
implies that we will create the database separately using the “dbca” utility.
e) In the “Grid Options” wizard, “Real Application Clusters database installation” is selected by
default because the installer detects the clusterware. Note that both nodes, i.e. RAC1 and RAC2, are
listed and checked by default.
The “Single instance database installation” option, by contrast, creates a conventional database with a
single instance connecting to a single database.
f) The “Product Languages” wizard allows you to select the languages supported by the database
product.
g) The “Database Edition” wizard allows you to select the appropriate edition to be installed. For
demonstration purposes, we select “Enterprise Edition” and click “Next” to move further.
h) The “Installation Location” wizard allows the user to select the “Oracle Base” and “Software
Location” on the server.
i) In the “Operating System Groups” wizard, select the OSDBA and OSOPER groups.
j) In the “Prerequisite Checks” wizard, the installer performs the prerequisite checks.
k) Once the prerequisite checks are performed, the “Summary” wizard lists the summary of validations
and verifications. Click “Finish” to kick off the installation.
l) The “Install Product” wizard shows the stepwise installation and a progress bar.
m) Similar to the grid installation, a dialog pops up prompting the execution of the root scripts for the
database installation. Execute the scripts on both nodes, RAC1 and RAC2, respectively.
n) The “Finish” wizard confirms the successful installation of Oracle database software.