DB Monitoring & Performance Script
Database monitoring means watching predefined events and generating a message or warning when a certain threshold has been exceeded. This is done to ensure that an issue does not grow into a problem. Database monitoring is required for the following reasons:
– Smooth running of production
– Keeping an eye on development
– Database performance
– In support of an SLA (service level agreement)
Types of DB Monitoring
1. Status
2. Performance
3. Trend Analysis
Status Monitoring:
Monitors the current status of an event and reports when it exceeds a defined threshold.
Database:
– Database/Listener status
– Monitor alert.log messages on a regular basis
– Check that all of last night's backups were successful
– Check for full or fragmented tablespaces/datafiles
– Identify segments with abnormal growth
– Identify at least one top resource-consuming query
– Monitor locking (a blocking-session sketch follows the OS list below)
– Check for segments about to reach their maximum extents
– Redo log tracking
– UNDO and temp segment free space
– Monitor running jobs
– Track DB user/session information
– Important object information
OS:
– SGA/PGA information
– CPU usage information
– Memory utilization
– Disk utilization
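Since lock monitoring is listed above but no locking script appears later in this post, here is a minimal sketch using V$LOCK to spot blocking and waiting sessions. Treat it as a starting point and adapt it to your environment.
-- Sketch: sessions holding a lock (BLOCK = 1) and the sessions waiting on the same resource
SELECT blocker.sid blocking_sid, waiter.sid waiting_sid,
       blocker.type lock_type, blocker.id1, blocker.id2
FROM   v$lock blocker, v$lock waiter
WHERE  blocker.block = 1
AND    waiter.request > 0
AND    blocker.id1 = waiter.id1
AND    blocker.id2 = waiter.id2;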
Performance Monitoring:
Monitors a defined set of performance statistics. This is done in an effort to maintain the best possible DB performance.
Trend Analysis Monitoring:
Collects historical data for specified events and analyzes it on a scheduled basis to reveal potential problems. For example: watching the growth of data in a tablespace and predicting when it will fill.
Apart from the above checklist, DBAs use other checklists depending on their requirements. Below I am listing some related queries and scripts that are useful for DB monitoring.
Note: Keep everyone informed, especially your senior or junior DBA, system admin, and manager, and do not forget to document every important update.
Database Information:
****************************************************************************************************
Track OS Reboot Time:
net statistics server
systeminfo | find "Up Time" -- to find system last uptime
systeminfo | find "System Boot Time" -- to find system boot time
net statistics workstation | find "Statistics" -- particular workstation statistics (e.g. "Workstation Statistics for \\A5541TAG-WKS")
Database and Instance Last start time:
SELECT to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') "DB Startup Time"
FROM sys.v_$instance;
SELECT SYSDATE-logon_time "Days", (SYSDATE-logon_time)*24 "Hours"
from sys.v_$session where sid=1;
Track Database Version:
SELECT * from v$version;
Track Database Name and ID information:
SELECT DBID, NAME FROM V$DATABASE;
Track Database Global Name information:
SELECT * FROM GLOBAL_NAME;
Track Database Instance name:
SELECT INSTANCE_NAME FROM V$INSTANCE;
Track Database Host Details:
SELECT UTL_INADDR.GET_HOST_ADDRESS, UTL_INADDR.GET_HOST_NAME FROM DUAL;
Display information about database services
SELECT name, network_name FROM dba_services ORDER BY name;
Track Database Present Status:
SELECT created, RESETLOGS_TIME, Log_mode FROM V$DATABASE;
DB Character Set Information:
Select * from nls_database_parameters;
Track Database default information:
Select username, profile, default_tablespace, temporary_tablespace from dba_users;
Track Total Size of Database:
select a.data_size+b.temp_size+c.redo_size "Total_Size (GB)"
from ( select sum(bytes/1024/1024/1024) data_size
from dba_data_files ) a, ( select nvl(sum(bytes/1024/1024/1024),0) temp_size
from dba_temp_files ) b, ( select sum(bytes/1024/1024/1024) redo_size
from sys.v_$log ) c;
Total Size of Database with free space:
Select round(sum(used.bytes) / 1024 / 1024/1024 ) || ' GB' "Database Size", round(free.p / 1024 / 1024/1024) || ' GB' "Free space"
from (select bytes from v$datafile
union all
select bytes from v$tempfile
union all
select bytes from v$log) used, (select sum(bytes) as p from dba_free_space) free group by free.p;
Track Database Structure:
select name from sys.v_$controlfile;
select group#,member from sys.v_$logfile;
Select F.file_id Id, F.file_name name, F.bytes/(1024*1024) Mbyte,
decode(F.status,'AVAILABLE','OK',F.status) status, F.tablespace_name Tspace
from sys.dba_data_files F
order by tablespace_name;
Tablespace/Datafile/Temp/UNDO Information:
****************************************************************************************************
Track Tablespace Used/Free Space:
SELECT /*+ RULE */ df.tablespace_name "Tablespace", df.bytes / (1024 * 1024) "Size (MB)",
SUM(fs.bytes) / (1024 * 1024) "Free (MB)", Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
FROM dba_free_space fs, (SELECT tablespace_name,SUM(bytes) bytes
FROM dba_data_files
GROUP BY tablespace_name) df
WHERE fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name,df.bytes
UNION ALL
SELECT /*+ RULE */ df.tablespace_name tspace,
fs.bytes / (1024 * 1024), SUM(df.bytes_free) / (1024 * 1024), Nvl(Round((SUM(fs.bytes) - df.bytes_used) *
100 / fs.bytes), 1), Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
FROM dba_temp_files fs, (SELECT tablespace_name,bytes_free,bytes_used
FROM v$temp_space_header
GROUP BY tablespace_name,bytes_free,bytes_used) df
WHERE fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
ORDER BY 4 DESC;
Track all Tablespaces with free space < 10%
Select a.tablespace_name,sum(a.tots/1048576) Tot_Size, sum(a.sumb/1024)
Tot_Free, sum(a.sumb)*100/sum(a.tots) Pct_Free, ceil((((sum(a.tots) * 15) - (sum(a.sumb)*100))/85 )/1048576)
Min_Add
from (select tablespace_name,0 tots,sum(bytes) sumb
from dba_free_space a
group by tablespace_name
union
Select tablespace_name,sum(bytes) tots,0 from dba_data_files
group by tablespace_name) a group by a.tablespace_name
having sum(a.sumb)*100/sum(a.tots) < 10
order by pct_free;
Track Tablespace Fragmentation Details:
Select a.tablespace_name,sum(a.tots/1048576) Tot_Size,
sum(a.sumb/1048576) Tot_Free, sum(a.sumb)*100/sum(a.tots) Pct_Free,
sum(a.largest/1024) Max_Free,sum(a.chunks) Chunks_Free
from ( select tablespace_name,0 tots,sum(bytes) sumb,
max(bytes) largest,count(*) chunks
from dba_free_space a
group by tablespace_name
union
select tablespace_name,sum(bytes) tots,0,0,0 from dba_data_files
group by tablespace_name) a group by a.tablespace_name
order by pct_free;
Track Non-Sys owned tables in SYSTEM Tablespace:
SELECT owner, table_name, tablespace_name FROM dba_tables WHERE tablespace_name = 'SYSTEM'
AND owner NOT IN ('SYSTEM', 'SYS', 'OUTLN');
Track Default and Temporary Tablespace:
SELECT * FROM DATABASE_PROPERTIES where PROPERTY_NAME like '%DEFAULT%';
select username,temporary_tablespace,default_tablespace from dba_users where username='HRMS'; --for a particular user
Select default_tablespace,temporary_tablespace,username from dba_users; --for All Users
Track DB datafile used and free space:
SELECT SUBSTR (df.NAME, 1, 40) file_name,dfs.tablespace_name, df.bytes / 1024 / 1024
allocated_mb, ((df.bytes / 1024 / 1024) - NVL (SUM (dfs.bytes) / 1024 / 1024, 0)) used_mb,
NVL (SUM (dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes,dfs.tablespace_name
ORDER BY file_name;
Track Datafile with Archive Details:
SELECT NAME, a.status, DECODE (b.status, 'ACTIVE', 'Backup', 'Normal') arc, enabled, bytes, change#, TIME archive
FROM sys.v_$datafile a, sys.v_$backup b WHERE a.file# = b.file#;
Track Datafiles with highest I/O activity:
Select * from (select name,phyrds, phywrts,readtim,writetim
from v$filestat a, v$datafile b
where a.file#=b.file#
order by readtim desc) where rownum <6;
Track Datafile as per the Physical Read/Write Percentage:
WITH totreadwrite AS (SELECT SUM (phyrds) phys_reads, SUM (phywrts) phys_wrts FROM v$filestat)
SELECT NAME, phyrds, phyrds * 100 / trw.phys_reads read_pct, phywrts, phywrts * 100 / trw.phys_wrts
write_pct FROM totreadwrite trw, v$datafile df, v$filestat fs WHERE df.file# = fs.file# ORDER BY phyrds DESC;
Checking Autoextend ON/OFF for Datafile:
select substr(file_name,1,50), AUTOEXTENSIBLE from dba_data_files;
select tablespace_name,AUTOEXTENSIBLE from dba_data_files;
For more on tablespace/datafile sizing, see the post: DB Tablespace/Datafile Details
Temp Segment:
Track Temp Segment Free space:
SELECT tablespace_name, SUM(bytes_used/1024/1024) USED, SUM(bytes_free/1024/1024) FREE
FROM V$temp_space_header
GROUP BY tablespace_name;
SELECT A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM v$sort_segment A, (SELECT B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM v$tablespace B, v$tempfile C
WHERE B.ts#= C.ts#
GROUP BY B.name, C.block_size ) D
WHERE A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;
Track Who is Currently using the Temp:
SELECT b.tablespace, ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
a.sid||','||a.serial# SID_SERIAL, a.username, a.program
FROM sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
WHERE p.name = 'db_block_size' AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;
Undo & Rollback Segment:
Monitor UNDO information:
select to_char(begin_time,'hh24:mi:ss'),to_char(end_time,'hh24:mi:ss'),
maxquerylen,ssolderrcnt,nospaceerrcnt,undoblks,txncount from v$undostat
order by undoblks;
Track Active Rollback Segment:
SELECT r.NAME, s.sid, p.spid, NVL (s.username, 'no transaction') "Transaction",
s.terminal "Terminal"
FROM v$rollname r, v$lock l, v$session s, v$process p
WHERE r.usn = TRUNC (l.id1(+) / 65536) AND l.TYPE(+) = 'TX' AND l.lmode(+) = 6
AND l.sid = s.sid(+) AND s.paddr = p.addr(+)
ORDER BY r.NAME;
Track who is currently using UNDO:
SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
NVL(s.username, 'None') orauser, s.program, r.name undoseg,
t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r, sys.v_$session s, sys.v_$transaction t, sys.v_$parameter x
WHERE s.taddr = t.addr AND r.usn = t.xidusn(+) AND x.name = 'db_block_size';
Redolog Information:
****************************************************************************************************
Track Redo Log Switches per Hour of the Day:
select to_char(first_time,'mm.DD.rrrr') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23"
from v$log_history group by to_char(first_time,'mm.DD.rrrr')
order by day;
Track Redo generation by day:
select trunc(completion_time) logdate, count(*) logswitch, round((sum(blocks*block_size)/1024/1024)) "REDO PER DAY (MB)"
from v$archived_log
group by trunc(completion_time) order by 1;
Track How much full is the current redo log file:
SELECT le.leseq "Current log sequence No", 100*cp.cpodr_bno/le.lesiz "Percent Full",
cp.cpodr_bno "Current Block No", le.lesiz "Size of Log in Blocks"
FROM x$kcccp cp, x$kccle le
WHERE le.leseq =CP.cpodr_seq
AND bitand(le.leflg,24) = 8;
Monitor Running Jobs:
****************************************************************************************************
Completed Long Operations (time_remaining = 0):
Select username,to_char(start_time, 'hh24:mi:ss dd/mm/yy') started, time_remaining remaining, message
from v$session_longops
where time_remaining = 0 order by time_remaining desc;
Monitor Long running Job:
SELECT SID, SERIAL#, opname, SOFAR, TOTALWORK,
ROUND(SOFAR/TOTALWORK*100,2) COMPLETE
FROM V$SESSION_LONGOPS
WHERE TOTALWORK != 0 AND SOFAR != TOTALWORK order by 1;
Track Long Query Progress in database:
SELECT a.sid, a.serial#, b.username , opname OPERATION, target OBJECT,
TRUNC(elapsed_seconds, 5) "ET (s)", TO_CHAR(start_time, 'HH24:MI:SS') start_time,
ROUND((sofar/totalwork)*100, 2) "COMPLETE (%)"
FROM v$session_longops a, v$session b
WHERE a.sid = b.sid AND b.username not IN ('SYS', 'SYSTEM') AND totalwork > 0
ORDER BY elapsed_seconds;
Track Running RMAN backup status:
SELECT SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK,
ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE"
FROM V$SESSION_LONGOPS
WHERE OPNAME LIKE 'RMAN%' AND OPNAME NOT LIKE '%aggregate%'
AND TOTALWORK != 0 AND SOFAR != TOTALWORK;
Monitor Import Rate:
The Oracle import utility usually takes hours for very large tables, so we need to track the execution of the import process. The query below can help you monitor the rate at which rows are being imported by a running import job.
select substr(sql_text,instr(sql_text,'into "'),30) table_name,
rows_processed, round((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes,
trunc(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60)) rows_per_minute
from sys.v_$sqlarea
where sql_text like 'insert %into "%' and command_type = 2 and open_versions > 0;
Display SQL statements for the current database sessions:
SELECT s.sid, s.status, s.process, s.schemaname, s.osuser, a.sql_text, p.program
FROM v$session s, v$sqlarea a, v$process p
WHERE s.sql_hash_value = a.hash_value AND s.paddr = p.addr;
Note: I am not responsible if any of these scripts harms your database, so do not run them directly on a production DB. Please check them in a test environment first, make sure they behave as expected, and only then go for it.
Please send your corrections, suggestions, and feedback to me. I may credit your contribution.
Thank you.
------------------------------------------------------------------------------------------------------------
Daily Checks:
Verify the success of archive log backups, based on the backup interval.
Check the space usage of the archive log file system for both primary and standby DB.
Check the space usage and verify that all tablespace usage is below the critical level once a day.
Verify rollback segments.
Check database performance on a periodic basis, usually in the first hour of the morning after the scheduled night backup has completed.
Check the sync between the primary database and standby database every 20 minutes (a gap-check sketch follows this list).
Make it a habit to check new alert.log entries hourly, especially if you are getting any errors.
Clear the trace files in the udump and bdump directories as per the policy.
Verify all monitoring agents, including the OEM agent and third-party monitoring agents.
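For the primary/standby sync check mentioned above, a minimal sketch is to compare the highest archived and applied log sequences in V$ARCHIVED_LOG. This assumes a Data Guard physical standby where the APPLIED column is populated; adjust for your configuration.
-- Sketch: gap between redo generated and redo applied, per thread
SELECT arch.thread#,
       arch.last_archived_seq,
       appl.last_applied_seq,
       arch.last_archived_seq - appl.last_applied_seq AS gap
FROM  (SELECT thread#, MAX(sequence#) last_archived_seq
       FROM v$archived_log GROUP BY thread#) arch,
      (SELECT thread#, MAX(sequence#) last_applied_seq
       FROM v$archived_log WHERE applied = 'YES' GROUP BY thread#) appl
WHERE arch.thread# = appl.thread#;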
Weekly Checks:
Perform level 0 or cold backup as per the backup policy. Note the backup policy can be changed
as per the requirement. Don’t forget to check out the space on disk or tape before performing
level 0 or cold backup.
Check the database statistics collection (a DBMS_STATS sketch follows this list). On some databases this needs to be done every day depending upon the requirement.
Verify the scheduled jobs and clear the output directory. You can also automate this.
Archive the alert logs (if possible) so you can reference similar errors in the future.
Visit the home pages of key vendors.
Check for critical patch updates from Oracle and make sure that your systems are in compliance with CPU patches.
Verify the accuracy of the DR mechanism by performing a database switchover test. This can be done once every six months, based on the business requirements.
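For the statistics collection item above, a minimal DBMS_STATS sketch is shown below; the schema name is a placeholder and the sampling options should follow your own standards.
BEGIN
  -- Sketch: gather statistics for one schema (placeholder owner, automatic sample size)
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => '&OWNER',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/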
------------------------------------------------------------------------------------------------------------
Below is a brief description of some of the important concepts, including important SQL scripts. You can find more scripts in my other posts by using the blog search option.
Make sure the database is available. Log into each instance and run daily reports or test scripts (a quick availability check is sketched below). You can also automate this procedure, but it is better to do it manually. Optional implementation: use Oracle Enterprise Manager's 'probe' event.
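A minimal availability check, assuming you only need the instance status rather than a full test script, could be:
-- Sketch: confirm the instance is OPEN and note when it started
SELECT instance_name, status, database_status,
       TO_CHAR(startup_time, 'DD-MON-YYYY HH24:MI:SS') started
FROM   v$instance;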
Verify DBSNMP is running:
Log on to each managed machine to check for the 'dbsnmp' process. For Unix: at the command line, type ps -ef | grep dbsnmp. There should be two dbsnmp processes running. If not, restart DBSNMP.
Each morning, one of your prime tasks is to check the backup log and the backup drive where your actual backup is stored, to verify the night backup (a hedged RMAN status check is sketched below). Next, check the location where the daily archive logs are stored and verify the archive backup on disk or tape.
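A hedged sketch for verifying last night's backup from inside the database (10g and later) is to query V$RMAN_BACKUP_JOB_DETAILS; the column names below are from my recollection of that view, so confirm them on your release.
-- Sketch: RMAN jobs started in the last 24 hours and their completion status
SELECT start_time, end_time, input_type, status
FROM   v$rman_backup_job_details
WHERE  start_time > SYSDATE - 1
ORDER  BY start_time;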
For each instance, verify that enough free space exists in each tablespace to handle the day's expected growth. As of <date>, the minimum free space for <repeat for each tablespace>: [ <tablespace> is <amount> ]. When incoming data is stable and average daily growth can be calculated, the minimum free space should be at least <time to order, get, and install more disks> days' data growth. Go to each instance and run the query to check free MB in tablespaces/datafiles. Compare to the minimum free MB for that tablespace. Note any low-space conditions and correct them.
Status should be ONLINE, not OFFLINE or FULL, except in some cases where you may have a special rollback segment for large batch jobs whose normal status is OFFLINE. Optional: each database may have a list of rollback segment names and their expected statuses. For the current status of each ONLINE or FULL rollback segment (by ID, not by name), query V$ROLLSTAT. For storage parameters and names of ALL rollback segments, query DBA_ROLLBACK_SEGS. That view's STATUS field is less accurate than V$ROLLSTAT, however, as it lacks the PENDING OFFLINE and FULL statuses, showing these as OFFLINE and ONLINE respectively (a combined query is sketched below).
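A combined sketch that lists every rollback/undo segment from DBA_ROLLBACK_SEGS alongside its live status in V$ROLLSTAT (only ONLINE segments appear in the latter):
-- Sketch: dictionary status vs. runtime status of rollback segments
SELECT d.segment_name, d.tablespace_name,
       d.status dictionary_status, v.status runtime_status,
       v.extents, v.rssize
FROM   dba_rollback_segs d, v$rollstat v
WHERE  d.segment_id = v.usn(+)
ORDER  BY d.segment_name;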
Connect to each managed system. Use 'telnet' or comparable program. For each managed
instance, go to the background dump destination, usually $ORACLE_BASE/<SID>/bdump. Make
sure to look under each managed database's SID. At the prompt, use the Unix ‘tail’ command to
see the alert_<SID>.log, or otherwise examine the most recent entries in the file. If any ORA-
errors have appeared since the previous time you looked, note them in the Database Recovery
Log and investigate each one. The recovery log is in <file>.
Look for segments in the database that are running out of resources (e.g. extents) or growing at an excessive rate. The storage parameters of these segments may need to be adjusted. For example, if any object has reached 200 current extents, upgrade its max_extents to unlimited. To do that, run queries to gather daily sizing information: check current extents, current table sizing information, and current index sizing information, and identify growth trends.
Space-bound objects' next_extents are bigger than the largest extent that the tablespace can offer. Space-bound objects can harm database operation. If we find such an object, we first need to investigate the situation. Then we can use ALTER TABLESPACE <tablespace> COALESCE, or add another datafile. Run spacebound.sql. If all is well, zero rows will be returned.
To check CPU utilization, go to the =>system metrics=>CPU utilization page (a V$OSSTAT sketch follows this paragraph). 400 is the maximum CPU utilization because there are 4 CPUs on the phxdev and phxprd machines. We need to investigate if CPU utilization stays above 350 for a while.
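If you are not using the OEM page, a rough sketch of host CPU usage from inside the database (10g and later) uses V$OSSTAT; the statistic names below are the common ones, but they vary slightly by platform and release.
-- Sketch: CPU count and cumulative busy/idle time (centiseconds) as seen by the instance
SELECT stat_name, value
FROM   v$osstat
WHERE  stat_name IN ('NUM_CPUS', 'BUSY_TIME', 'IDLE_TIME');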
Nothing is more valuable in the long run than that the DBA be as widely experienced, and as
widely read, as possible. Readings should include DBA manuals, trade journals, and possibly
newsgroups or mailing lists.
For each object-creation policy (naming convention, storage parameters, etc.) have an automated check to verify that the policy is being followed. Every object in a given tablespace should have the exact same size for NEXT_EXTENT, which should match the tablespace default for NEXT_EXTENT. As of 10/03/2012, the default NEXT_EXTENT for DATAHI is 1 GB (1048576 KB), DATALO is 512 MB (524288 KB), and INDEXES is 256 MB (262144 KB). To check settings for NEXT_EXTENT, run nextext.sql. To check existing extents, run existext.sql.
To check missing PK, run no_pk.sql. To check disabled PK, run disPK.sql. All primary key indexes
should be unique. Run nonuPK.sql to check. All indexes should use INDEXES tablespace. Run
mkrebuild_idx.sql. Schemas should look identical between environments, especially test and
production. To check data type consistency, run datatype.sql. To check other object consistency,
run obj_coord.sql.
Look in the SQL*Net logs for errors and issues, covering both client-side and server-side logs, and archive all alert logs to history.
For new update information, make it a habit to visit the home pages of key vendors, such as Oracle Corporation: http://www.oracle.com, http://technet.oracle.com, http://www.oracle.com/support, http://www.oramag.com
Review changes in segment growth when compared to previous reports to identify segments with
a harmful growth rate.
Review common Oracle tuning points such as cache hit ratio, latch contention, and other points dealing with memory management (a buffer cache hit ratio sketch follows this paragraph). Compare with past reports to identify harmful trends or determine the impact of recent tuning adjustments. Make the adjustments necessary to avoid contention for system resources. This may include scheduled downtime or a request for additional resources.
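As one example of the tuning points mentioned above, a buffer cache hit ratio sketch from V$SYSSTAT is shown below. Treat the ratio as a trend indicator to compare across reports, not as an absolute target.
-- Sketch: buffer cache hit ratio since instance startup
SELECT ROUND((1 - (phy.value / (cur.value + con.value))) * 100, 2) AS buffer_cache_hit_pct
FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
WHERE  cur.name = 'db block gets'
AND    con.name = 'consistent gets'
AND    phy.name = 'physical reads';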
Review database file activity. Compare to past output to identify trends that could lead to
possible contention.
Review Fragmentation:
Investigate fragmentation (e.g. row chaining, etc.); a chained-rows sketch follows below.
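A minimal sketch for spotting row chaining uses the CHAIN_CNT column of DBA_TABLES, which is only populated after the tables have been analyzed or had statistics gathered with the relevant options.
-- Sketch: tables with chained or migrated rows recorded in the dictionary
SELECT owner, table_name, chain_cnt, num_rows
FROM   dba_tables
WHERE  chain_cnt > 0
AND    owner NOT IN ('SYS', 'SYSTEM')
ORDER  BY chain_cnt DESC;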
Project Performance into the Future:
Compare reports on CPU, memory, network, and disk utilization from both Oracle and the operating system to identify trends that could lead to contention for any of these resources in the near future. Compare performance trends to the Service Level Agreement to see when the system will go out of bounds.
--------------------------------------------------------------------------------------------
Useful Scripts:
--------------------------------------------------------------------------------------------
Script: To check free, pct_free, and allocated space within a tablespace
SELECT tablespace_name, largest_free_chunk, nr_free_chunks, sum_alloc_blocks,
sum_free_blocks
, to_char(100*sum_free_blocks/sum_alloc_blocks, '09.99') || '%' AS pct_free
FROM ( SELECT tablespace_name, sum(blocks) AS sum_alloc_blocks
FROM dba_data_files
GROUP BY tablespace_name),
( SELECT tablespace_name AS fs_ts_name, max(blocks) AS largest_free_chunk
, count(blocks) AS nr_free_chunks, sum(blocks) AS sum_free_blocks
FROM dba_free_space
GROUP BY tablespace_name )
WHERE tablespace_name = fs_ts_name;
Script: To analyze tables and indexes
BEGIN
dbms_utility.analyze_schema ( '&OWNER', 'ESTIMATE', NULL, 5 ) ;
END;
/
Script: To find out any object reaching <threshold>
SELECT e.owner, e.segment_type , e.segment_name , count(*) as nr_extents , s.max_extents
, to_char ( sum ( e.bytes ) / ( 1024 * 1024 ) , '999,999.90') as MB
FROM dba_extents e , dba_segments s
WHERE e.segment_name = s.segment_name AND e.owner = s.owner
GROUP BY e.owner, e.segment_type , e.segment_name , s.max_extents
HAVING count(*) > &THRESHOLD
OR ( ( s.max_extents - count(*) ) < &&THRESHOLD )
ORDER BY count(*) desc;
The above query will find any object reaching <threshold> extents; you then have to manually upgrade it to allow unlimited max_extents (thus only objects we expect to be big are allowed to become big).
Script: To identify space-bound objects. If all is well, no rows are returned.
SELECT a.table_name, a.next_extent, a.tablespace_name
FROM all_tables a,( SELECT tablespace_name, max(bytes) as big_chunk
FROM dba_free_space
GROUP BY tablespace_name ) f
WHERE f.tablespace_name = a.tablespace_name AND a.next_extent > f.big_chunk;
Run the above query to find space-bound objects. If all is well, no rows are returned. If something is found, look at the value of the next extent and check to find out what happened; then use coalesce (ALTER TABLESPACE <foo> COALESCE;) and, finally, add another datafile to the tablespace if needed.
Script: To find tables that don't match the tablespace default for NEXT extent.
SELECT segment_name, segment_type, ds.next_extent as Actual_Next
, dt.tablespace_name, dt.next_extent as Default_Next
FROM dba_tablespaces dt, dba_segments ds
WHERE dt.tablespace_name = ds.tablespace_name
AND dt.next_extent !=ds.next_extent AND ds.owner = UPPER ( '&OWNER' )
ORDER BY tablespace_name, segment_type, segment_name;
Script: To check existing extents
SELECT segment_name, segment_type, count(*) as nr_exts
, sum ( DECODE ( dx.bytes,dt.next_extent,0,1) ) as nr_illsized_exts
, dt.tablespace_name, dt.next_extent as dflt_ext_size
FROM dba_tablespaces dt, dba_extents dx
WHERE dt.tablespace_name = dx.tablespace_name
AND dx.owner = '&OWNER'
GROUP BY segment_name, segment_type, dt.tablespace_name, dt.next_extent;
The above query will show how many of each object's extents differ in size from the tablespace's default size. If it shows a lot of differently sized extents, your free space is likely to become fragmented. If so, you need to reorganize this tablespace.
Script: To find tables without PK constraint
SELECT table_name FROM all_tables
WHERE owner = '&OWNER'
MINUS
SELECT table_name FROM all_constraints
WHERE owner = '&&OWNER' AND constraint_type = 'P';
Script: To find out which primary keys are disabled
SELECT owner, constraint_name, table_name, status
FROM all_constraints
WHERE owner = '&OWNER' AND status = 'DISABLED' AND constraint_type = 'P';
Script: To find tables with nonunique PK indexes.
SELECT index_name, table_name, uniqueness
FROM all_indexes
WHERE index_name like '&PKNAME%'
AND owner = '&OWNER' AND uniqueness = 'NONUNIQUE';
SELECT c.constraint_name, i.tablespace_name, i.uniqueness
FROM all_constraints c , all_indexes i
WHERE c.owner = UPPER ( '&OWNER' ) AND i.uniqueness = 'NONUNIQUE'
AND c.constraint_type = 'P' AND i.index_name = c.constraint_name;
Script: To check datatype consistency between two environments
SELECT table_name, column_name, data_type, data_length,data_precision,data_scale,nullable
FROM all_tab_columns -- first environment
WHERE owner = '&OWNER'
MINUS
SELECT table_name,column_name,data_type,data_length,data_precision,data_scale,nullable
FROM all_tab_columns@&my_db_link -- second environment
WHERE owner = '&OWNER2'
order by table_name, column_name;
Script: To find out any difference in objects between two instances