UAT Database Observations
1) A negative flash cache hit value is noticed in a few AWR reports. This appears to be a bug. Please
raise an issue with Oracle Support to obtain a fix, and also check with Oracle Support whether the
MOS (Metalink) note referenced below relates to this bug.
2) The indexes of tables ACTB_DAILY_LOG, ACTB_HISTORY, ICTB_BACK_DATED_EVENTS,
ICTB_ACC_PR and ICTB_ITM_TOV have their DEGREE set to DEFAULT (see the sketch
below for how to list and reset them).
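To confirm which indexes have DEGREE left at DEFAULT and to generate the fix, something like the following can be used (a sketch; resetting to NOPARALLEL, i.e. degree 1, is an assumption to be agreed with the application team):

-- list indexes on the affected tables whose parallel degree is DEFAULT
SELECT owner, table_name, index_name, degree
FROM   dba_indexes
WHERE  table_name IN ('ACTB_DAILY_LOG', 'ACTB_HISTORY',
                      'ICTB_BACK_DATED_EVENTS', 'ICTB_ACC_PR', 'ICTB_ITM_TOV')
AND    degree = 'DEFAULT';

-- generate one ALTER statement per index returned above
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' NOPARALLEL;'
FROM   dba_indexes
WHERE  table_name IN ('ACTB_DAILY_LOG', 'ACTB_HISTORY',
                      'ICTB_BACK_DATED_EVENTS', 'ICTB_ACC_PR', 'ICTB_ITM_TOV')
AND    degree = 'DEFAULT';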
Refer MOS doc: High Wait Counts For "ges generic event" Wait Caused by RMV Processes (Doc ID 2638402.1)
Observation – The high wait counts on the "ges generic event" wait event for the RMV*
processes can be ignored when seen in an AWR report.
Action – Check with Oracle Support whether any action is required to
suppress this wait event.
5) SQL_ID - cc9x1qm1js2su
INSERT INTO fbtb_txnlog_details_hist
  (BranchCode, FunctionId, XrefId, UserId, TxnStageId, TxnStatus, Timestamp,
   CheckerId, ErrorCode, ReqXML, RespXML, OnlineStatus, STAGESTARTDATE,
   STAGEENDDATE, ADVICE, ADVICEXML, STAGESTATUS, SEQUENCE_NO)
  (SELECT BranchCode, FunctionId, XrefId, UserId, TxnStageId, TxnStatus,
          Timestamp, CheckerId, ErrorCode, ReqXML, RespXML, OnlineStatus,
          STAGESTARTDATE, STAGEENDDATE, ADVICE, ADVICEXML, STAGESTATUS, SEQUENCE_NO
   FROM   fbtb_txnlog_details
   WHERE  BRANCHCODE = :1
   AND    TO_DATE(to_char(timestamp, :"SYS_B_0"), :"SYS_B_1") <= :2)
SQL statement with SQL_ID "cc9x1qm1js2su" was executed 10 times and had
an average elapsed time of 22 seconds.
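The TO_DATE(TO_CHAR(timestamp, ...)) wrapper applies functions to the timestamp column, which defeats any index on that column and adds per-row conversion cost. A possible rewrite of the inner query that compares the column directly is sketched below; it assumes the format masks behind the :SYS_B_* binds truncate the value to the day, which must be confirmed against the application before any change:

-- sketch: replace the function-wrapped predicate with an indexable range predicate;
-- equivalent to TRUNC(timestamp) <= :2 when :2 carries a date with no time part
SELECT BranchCode, FunctionId, XrefId, UserId, TxnStageId, TxnStatus, Timestamp,
       CheckerId, ErrorCode, ReqXML, RespXML, OnlineStatus, STAGESTARTDATE,
       STAGEENDDATE, ADVICE, ADVICEXML, STAGESTATUS, SEQUENCE_NO
FROM   fbtb_txnlog_details
WHERE  branchcode = :1
AND    timestamp < TRUNC(:2) + 1;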
6) SQL_ID - 2b8731qgfmzc4
UPDATE ACTBS_DAILY_LOG A SET A.VDBAL_UPDATE_FLAG = 'I' WHERE ROWID = :B1
Action: Gather statistics for table ACTB_DAILY_LOG and rebuild all of its indexes
online. This should be done when the volume in this table is at its highest; then
lock the statistics (see the sketch below).
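A sketch of that sequence follows; the owning schema FCUBS is an assumption, and the gathering options should be aligned with the Bank's standards:

-- gather table statistics (cascade also gathers index statistics)
BEGIN
  dbms_stats.gather_table_stats(
    ownname => 'FCUBS',           -- owning schema is an assumption
    tabname => 'ACTB_DAILY_LOG',
    cascade => TRUE);
END;
/

-- generate one online rebuild per index on the table
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD ONLINE;'
FROM   dba_indexes
WHERE  table_name = 'ACTB_DAILY_LOG';

-- once the statistics reflect peak volume, lock them so later jobs do not overwrite them
EXEC dbms_stats.lock_table_stats('FCUBS', 'ACTB_DAILY_LOG');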
7) SQL_ID - gjbaqh71p95ay
SQL statement with SQL_ID "gjbaqh71p95ay" was executed 745 times and
had an average elapsed time of 1.6 seconds.
8) SQL_ID - 4417kf5xa92dh
SQL statement with SQL_ID "4417kf5xa92dh" was executed 1197 times and
had an average elapsed time of 4.7 seconds.
9) SQL_ID - cdkwffa8jjafa
SQL statement with SQL_ID "cdkwffa8jjafa" was executed 39 times and had
an average elapsed time of 56 seconds.
Action: Gather statistics for table ACTB_DAILY_LOG and rebuild all of its indexes
online when the volume in this table is at its highest; then lock the statistics
(as in the sketch under item 6).
10) SQL_ID - gf923sxqjmv7t
SQL statement with SQL_ID "gf923sxqjmv7t" was executed 39 times and had
an average elapsed time of 2.7 seconds.
Action: Accept the below SQL profile for better performance of the query.
execute dbms_sqltune.accept_sql_profile(task_name => 'sql_tuning_task_gf923sxqjmv7t', task_owner => 'SYS', replace => TRUE);
11) SQL_ID - a32jn7kgk891t
SQL statement with SQL_ID "a32jn7kgk891t" was executed 161 times and had
an average elapsed time of 6.1 seconds.
Action: Gather statistics for table ACTB_HISTORY and rebuild its indexes.
12) SQL_ID - 3498fun4v64d7
SQL statement with SQL_ID "3498fun4v64d7" was executed 171 times and had
an average elapsed time of 1.9 seconds.
Action: Gather statistics for table ACTB_DAILY_LOG and rebuild all of its indexes
online when the volume in this table is at its highest; then lock the statistics
(as in the sketch under item 6).
13) SQL_ID - 7yd1csaj61tsb
SQL statement with SQL_ID "7yd1csaj61tsb" was executed 2824 times and
had an average elapsed time of 1.5 seconds.
14) SQL_ID - 1q78sym88q5d3
SQL statement with SQL_ID "1q78sym88q5d3" was executed 1106 times and
had an average elapsed time of 0.94 seconds.
Action: Accept the below profile for better performance of the query.
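The profile-acceptance command itself is missing here. If a tuning task exists that follows the same naming convention as item 10, the command would look like the sketch below; the task name is hypothetical and must be verified before running:

-- hypothetical task name, assumed to follow the convention in item 10; verify first
execute dbms_sqltune.accept_sql_profile(task_name => 'sql_tuning_task_1q78sym88q5d3', task_owner => 'SYS', replace => TRUE);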
16) _lm_drm_disable, _gc_policy_time: As per Oracle Support, the recommended value of
_lm_drm_disable for disabling DRM is 7. The current values set by the Bank are
_lm_drm_disable = 0 and _gc_policy_time = 4, and we still see many DRM-related wait
events in multiple reports.
Action: Please change the values as per the Oracle Support recommendation (see the sketch below) -
_lm_drm_disable = 7
_gc_policy_time = 0
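A sketch of the change, assuming an spfile is in use and an instance restart is acceptable (underscore parameters must only be changed under Oracle Support's direction):

-- both parameters are static, so set them in the spfile for all RAC instances
ALTER SYSTEM SET "_lm_drm_disable" = 7 SCOPE = SPFILE SID = '*';
ALTER SYSTEM SET "_gc_policy_time" = 0 SCOPE = SPFILE SID = '*';
-- restart the instances for the new values to take effect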
17) enq: IV - contention: This wait event is appearing in some reports. The number of LMD
processes is the same on both nodes, and the number of CPUs is also the same on both nodes.
Action: Check with MOS whether this is a bug or whether the hidden parameter
_ges_server_processes needs to be set; the current value can be checked as sketched below.
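The current value of the hidden parameter can be read with the standard x$ query below (must be run as SYS, since x$ views are not exposed to other users):

SELECT i.ksppinm AS parameter_name, v.ksppstvl AS current_value
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_ges_server_processes';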
OS-level checks (CPU run queue, top processes and load average) were captured in screenshots at the times below; the screenshots themselves are not reproduced here.
4th Aug - Node 1: 11:00 AM, 2:20 PM, 3:05 PM, 4:00 PM, 5:00 PM; Node 2: 9:00 AM, 11:00 AM, 2:45 PM, 3:45 PM, 4:50 PM
7th Aug - Node 1: 9:30 AM, 11:30 AM, 1:20 PM, 3:45 PM, 5:15 PM; Node 2: 10:00 AM, 12:15 PM, 2:40 PM, 4:30 PM
8th Aug - Node 1: 9:00 AM, 12:00 PM, 4:33 PM; Node 2: 9:20 AM, 12:20 PM, 4:51 PM
CPU run queue is fine on Node 1 and Node 2, and the load average shown in the top-process captures is also fine on both nodes.