Practice 14 Creating A Multitenant RAC Database
Practice Overview
In this practice, you will perform basic tasks to manage a multitenant RAC database. Specifically, you
will perform the following:
• Create a Multitenant RAC database
• Clone a PDB
• Drop a PDB
Practice Assumptions
• The practice assumes that you have the virtual machines srv1 and srv2 up and running.
2. Start the dbca utility and use it to create a new CDB RAC database. Respond to the utility
windows as follows:
Storage Options:
   Database files storage type: select Automatic Storage Management (ASM).
   Database File Locations: enter +DATA/{DB_UNIQUE_NAME} (can be selected by
   clicking on the Browse button).
   Select Oracle-Managed Files.
Fast Recovery Option:
   Select Specify Fast Recovery Area.
   Storage Type: Automatic Storage Management (ASM)
   Fast Recovery Area: +FRA
   Fast Recovery Area Size: 10240 MB
   Mark Enable Archiving.
4. Using the srvctl utility, check the status of the database and its configuration.
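The following commands are one way to perform this check, assuming the database name mtdb used throughout this practice:
srvctl status database -d mtdb
srvctl config database -d mtdb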
5. Switch the current user to grid and check the services registered in the listener.
Observe that both the CDB database (mtdb) and the pluggable database (pdb1) are registered in
the listener.
su - grid
lsnrctl services
In this section of the practices, you will examine the different ways of connecting to the CDB root and
the pluggable database.
8. Connect to the CDB container (mtdb) using the Easy Connect method and verify that the database
is a multitenant database.
Observe that when you connect to the CDB container, you are technically connected to the root
container. Typically, this container should not have any application data in it.
sqlplus system/oracle@//srv1/mtdb.localdomain
SHOW CON_NAME
SELECT NAME, CDB, CON_ID FROM V$DATABASE;
9. Connect to the pluggable database container (pdb1) using the Easy Connect method.
A PDB is the database that applications connect to. From the application's perspective, a PDB
looks and behaves exactly like a traditional non-CDB Oracle database.
sqlplus system/oracle@//srv1/pdb1.localdomain
SHOW CON_NAME
10. Login to the local root as sysdba then obtain the instance name and CON_ID of the current
instance.
export ORACLE_SID=mtdb1
sqlplus / as sysdba
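The instance name and container ID can be obtained with queries such as the following (one possible form; the container ID can also be displayed with SHOW CON_ID):
SELECT INSTANCE_NAME FROM V$INSTANCE;
SELECT SYS_CONTEXT('USERENV','CON_ID') AS CON_ID FROM DUAL;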
12. Close PDB1 in the current instance, check on which instance the PDB was closed then open it
again.
Observe that the statement by default opens/closes the PDB in the current instance.
ALTER PLUGGABLE DATABASE PDB1 CLOSE;
SELECT INST_ID,CON_ID,NAME,OPEN_MODE FROM GV$PDBS WHERE NAME='PDB1';
ALTER PLUGGABLE DATABASE PDB1 OPEN;
14. Try connecting to pdb1 using Easy Connect, first in the first node, then in the second node.
Observe that connecting through the second node fails.
conn system/oracle@//srv1/pdb1.localdomain
conn system/oracle@//srv2/pdb1.localdomain
16. Switch the current container to the root and then retrieve information about the redo log groups.
You will notice that each instance in the CDB has four multiplexed redo log groups (16 log file
members in total), all managed by the root container. Redo log groups are always associated with
the CDB as a whole and are used by all the PDBs within it. You cannot create a redo log group for
a specific PDB.
ALTER SESSION SET CONTAINER=CDB$ROOT;
SELECT INST_ID, GROUP#, CON_ID FROM GV$LOGFILE ORDER BY 1,2,3;
17. Execute the following command to retrieve information about the undo tablespaces in the CDB.
-- List all the undo tablespaces in the CDB.
-- In a CDB, a tablespace name is not unique; tablespaces are uniquely
-- identified by the combination of their name and their CON_ID.
SELECT TABLESPACE_NAME, CON_ID
FROM CDB_TABLESPACES
WHERE CONTENTS = 'UNDO';
-- Switch the current container to PDB1 and run the same query above.
-- Because the current container is no longer the root, the views return
-- information about the current container only.
ALTER SESSION SET CONTAINER=PDB1;
In this section of the practices, you will create a new container named pdb2 by cloning pdb1.
19. Issue the following command to create a new container named pdb2 by cloning pdb1.
Observe that with a single SQL command, you created a new pluggable database by copying
another pluggable database while it was online. Achieving the same result with non-CDB
databases was far more laborious.
CREATE PLUGGABLE DATABASE pdb2 FROM pdb1;
20. Open pdb2 by issuing the command ALTER PLUGGABLE DATABASE ... OPEN
When the OPEN_MODE is MOUNTED for a PDB, it means it is closed.
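A minimal form of the command, followed by a check of the open mode using GV$PDBS as in the earlier steps:
ALTER PLUGGABLE DATABASE pdb2 OPEN;
SELECT INST_ID, NAME, OPEN_MODE FROM GV$PDBS WHERE NAME='PDB2' ORDER BY 1;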
21. Verify that the new PDB is registered in the listener then try connecting to it.
host lsnrctl services | grep pdb2
conn system/oracle@//srv1/pdb2.localdomain
conn system/oracle@//srv2/pdb2.localdomain
In this section of the practices, you will create a service for pdb2 that will be managed by clusterware.
22. Connect to the local instance as sysdba then switch the current container to pdb2.
sqlplus / as sysdba
ALTER SESSION SET CONTAINER=pdb2;
23. Verify that a service exists that has the same name as the PDB.
col name format a30
col pdb format a10
SELECT NAME, PDB FROM DBA_SERVICES ORDER BY 1;
24. Restart the CDB and observe the open mode of pdb2.
Observe that pdb2 is not started. This is because the pdb2 service is not registered with the
clusterware, so nothing opens the PDB when the database restarts.
srvctl stop database -d mtdb
srvctl start database -d mtdb
sqlplus / as sysdba
SELECT INST_ID, OPEN_MODE FROM GV$PDBS WHERE NAME='PDB2' ORDER BY 1;
25. Verify that the retrieved service name is not managed by the clusterware.
If the command displays nothing, it means no service for this database is managed by the
clusterware.
srvctl status service -db mtdb
27. Create and start a PDB service for pdb2. Set the first node as the preferred node for the service.
srvctl add service -db mtdb -pdb pdb2 -s pdb2srv -preferred mtdb1 -available mtdb2
srvctl start service -db mtdb -s pdb2srv
29. Verify that the service will start automatically when you restart the system.
srvctl config service -db mtdb -s pdb2srv | grep "Management policy"
30. Try connecting to pdb2 via the pdb2srv service, first in the first instance and then in the second
instance.
The service is available in the first instance only; therefore, connecting to the second instance
fails.
sqlplus system/oracle@//srv1/pdb2srv.localdomain
conn system/oracle@//srv2/pdb2srv.localdomain
31. Verify that restarting the CDB starts the pdb service.
srvctl stop database -d mtdb
srvctl start database -d mtdb
srvctl status service -db mtdb -s pdb2srv
In this section of the practice, you will drop pdb2 including its datafiles.
32. Issue the following command to delete the service associated with pdb2. The service must be
stopped before you can delete it.
srvctl stop service -db mtdb -s pdb2srv
srvctl remove service -db mtdb -s pdb2srv
33. Connect to the local instance as sysdba, stop pdb2, then drop it including its datafiles.
Observe that while you are dropping pdb2, the other pdbs are still online and in operation.
sqlplus / as sysdba
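One valid sequence of statements for this step is sketched below; CLOSE IMMEDIATE with INSTANCES=ALL closes the PDB across all RAC instances before the drop:
ALTER PLUGGABLE DATABASE pdb2 CLOSE IMMEDIATE INSTANCES=ALL;
DROP PLUGGABLE DATABASE pdb2 INCLUDING DATAFILES;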
Cleanup
34. Run the dbca utility and drop the mtdb database.
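If you prefer the command line, dbca can also drop the database in silent mode; the SYSDBA credentials below are assumed to match the ones used in this practice:
dbca -silent -deleteDatabase -sourceDB mtdb -sysDBAUserName sys -sysDBAPassword oracle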
Summary
The multitenant architecture is a significant solution for database consolidation and is central to
the direction of Oracle Database development. In this practice, you learned some fundamentals of
the Oracle multitenant database. Specifically, you performed the following:
• Create a Multitenant RAC database
• Clone a PDB
• Drop a PDB