Essential Shell Scripts to Automate Routine Tasks
Asfaw Gedamu
Introduction
This document provides a collection of shell scripts for monitoring various aspects of an Oracle
database server. These scripts can be configured to run periodically using crontab and send email
alerts in case of any issues.
Scripts
1. GoldenGate process monitoring
The script below checks the status of all GoldenGate processes via ggsci and mails the relevant report file whenever a process is STOPPED or ABENDED.
SCRIPT PREPARATION:
cat gg_alert.sh
#!/bin/bash
EMAIL_LIST="[email protected]"
export GG_HOME=/goldengate/install/software/gghome_1
export ORACLE_HOME=/oracle/app/oracle/product/12.1.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib

# Split ggsci output on newlines only
OIFS=$IFS
IFS="
"

function status {
OUTPUT=`$GG_HOME/ggsci << EOF
info all
exit
EOF`
}

function alert {
for line in $OUTPUT
do
    if [[ $(echo "${line}" | egrep 'STOP|ABEND' > /dev/null; echo $?) = 0 ]]
    then
        GNAME=$(echo "${line}" | awk '{print $3}')
        GSTAT=$(echo "${line}" | awk '{print $2}')
        GTYPE=$(echo "${line}" | awk '{print $1}')
        case $GTYPE in
        "MANAGER")
            cat $GG_HOME/dirrpt/MGR.rpt | mailx -s "${HOSTNAME} - GoldenGate ${GTYPE} ${GSTAT}" $EMAIL_LIST ;;
        "EXTRACT"|"REPLICAT")
            cat $GG_HOME/dirrpt/"${GNAME}".rpt | mailx -s "${HOSTNAME} - GoldenGate ${GTYPE} ${GNAME} ${GSTAT}" $EMAIL_LIST ;;
        esac
    fi
done
}

status
alert
IFS=$OIFS
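To run this check periodically, a crontab entry along the following lines can be used (the script path and interval here are assumptions, not from the original):

```shell
# Check GoldenGate process status every 15 minutes (path is an assumption)
0,15,30,45 * * * * /home/oracle/scripts/gg_alert.sh > /tmp/gg_alert.out 2>&1
```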
The script below monitors the apply lag on a standby database and mails the DBAs if the lag is increasing. For the script to work, make sure the Data Guard broker is enabled between the primary and standby databases.
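To confirm the broker is actually enabled before scheduling the script, a quick check can be run from the command line; a sketch, using the same credentials as the script below:

```shell
# "SUCCESS" in the output indicates a healthy broker configuration
dgmgrl sys/orcl1234 "show configuration"
```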
SCRIPT PREPARATION:
cat /home/oracle/dgmgrl_standby_lag.sh
#!/bin/bash
export ORACLE_HOME=/oracle/app/oracle/product/12.1.0/dbhome_1
export ORACLE_SID=primdb
export PATH=$ORACLE_HOME/bin:$PATH

echo "show database stydb" | ${ORACLE_HOME}/bin/dgmgrl sys/orcl1234 > DB_DG_DATABASE.log
grep "Apply Lag" /home/oracle/DB_DG_DATABASE.log > FILTERED_DB_DG_DATABASE.log
time_value=`cut -d " " -f 14 FILTERED_DB_DG_DATABASE.log`
time_param=`cut -d " " -f 15 FILTERED_DB_DG_DATABASE.log`

if [[ "$time_param" == "minutes" && "$time_value" -ge 1 ]]
then
    mailx -s "DREAIDB LAG is in minutes" [email protected] < DB_DG_DATABASE.log
elif [[ "$time_param" == "seconds" && "$time_value" -ge 30 ]]
then
    mailx -s "DREAIDB LAG is in seconds" [email protected] < DB_DG_DATABASE.log
elif [[ "$time_param" == "hour(s)" && "$time_value" -ge 1 ]]
then
    mailx -s "DREAIDB LAG is in hours" [email protected] < DB_DG_DATABASE.log
fi
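The chain of unit checks above can also be collapsed into one comparison by normalizing the reported lag to seconds first; a minimal stand-alone sketch (the helper name is ours, not part of the original script):

```shell
#!/bin/sh
# Convert a "<value> <unit>" apply-lag reading (as parsed from the dgmgrl
# output above) into seconds, so one threshold test covers all units.
lag_in_seconds() {
  value=$1
  unit=$2
  case $unit in
    "seconds")  echo "$value" ;;
    "minutes")  echo $((value * 60)) ;;
    "hour(s)")  echo $((value * 3600)) ;;
    *)          echo 0 ;;   # unknown unit: treat as no measurable lag
  esac
}

# Example: a 2-minute lag normalizes to 120 seconds
lag=$(lag_in_seconds 2 "minutes")
if [ "$lag" -ge 30 ]; then
  echo "LAG threshold exceeded: ${lag}s"
fi
```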
The following script uses RMAN to delete archive logs older than one day, after crosschecking and removing expired entries.
cat rman_arch_del.sh
#!/bin/bash
export ORACLE_HOME=/oracle/app/oracle/product/12.1.0.2.0
export ORACLE_SID=PARIS12C
export PATH=$ORACLE_HOME/bin:$PATH
delBackup () {
rman log=/home/oracle/arch_del.log << EOF
connect target /
DELETE noprompt ARCHIVELOG ALL COMPLETED BEFORE 'sysdate-1';
CROSSCHECK ARCHIVELOG ALL;
DELETE EXPIRED ARCHIVELOG ALL;
exit
EOF
}
# Main
delBackup
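A nightly crontab entry for this cleanup might look like the following (the time and path are assumptions):

```shell
# Purge archive logs older than one day, every night at 23:00
00 23 * * * /home/oracle/rman_arch_del.sh > /dev/null 2>&1
```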
1. Prepare the blocker.sql file (reporting sessions blocked for more than 10 seconds).
2. Shell script (/home/oracle/monitor/blocker.sh):
export ORACLE_HOME=/oracle/app/oracle/product/12.1.0/dbhome_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH
logfile=/home/oracle/monitor/block_alert.log
sqlplus -s "/as sysdba" > /dev/null << EOF
spool $logfile
@/home/oracle/monitor/blocker.sql
spool off
exit
EOF
count=`cat $logfile | wc -l`
if [ $count -ge 1 ];
then
mailx -s "BLOCKING SESSION REPORTED IN PROD DB ( > 10 SEC)" [email protected] < $logfile
fi
The following shell script triggers a mail alert if utilization of an ASM diskgroup reaches 90 percent.
cat /export/home/oracle/asm_dg.sh
export ORACLE_HOME=/oracle/app/oracle/product/12.1.0.2/dbhome_1
export ORACLE_SID=PRODDB1
export PATH=$ORACLE_HOME/bin:$PATH
logfile=/export/home/oracle/asm_dg.log
sqlplus -s "/as sysdba" > /dev/null << EOF
spool $logfile
SET LINESIZE 150
SET PAGESIZE 9999
SET VERIFY off
COLUMN group_name
FORMAT a25 HEAD 'DISKGROUP_NAME'
COLUMN state FORMAT a11 HEAD 'STATE'
COLUMN type FORMAT a6 HEAD 'TYPE'
COLUMN total_mb FORMAT 999,999,999 HEAD 'TOTAL SIZE(GB)'
COLUMN free_mb FORMAT 999,999,999 HEAD 'FREE SIZE (GB)'
COLUMN used_mb FORMAT 999,999,999 HEAD 'USED SIZE (GB)'
COLUMN pct_used FORMAT 999.99 HEAD 'PERCENTAGE USED'
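The listing above is truncated before the query itself, but whatever its exact form, the alert reduces to the percent-used arithmetic on v$asm_diskgroup's total_mb and free_mb columns; a stand-alone sketch (the function name is ours):

```shell
#!/bin/sh
# Percent used for a diskgroup, from total_mb and free_mb
pct_used() {
  total_mb=$1
  free_mb=$2
  awk -v t="$total_mb" -v f="$free_mb" 'BEGIN { printf "%d\n", (t - f) * 100 / t }'
}

# Example: a 100 GB diskgroup with 8 GB free is 92% used
usage=$(pct_used 102400 8192)
if [ "$usage" -ge 90 ]; then
  echo "ASM diskgroup usage at ${usage}%"
fi
```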
3. Configure in crontab:
0,15,30,45 * * * * /export/home/oracle/asm_dg.sh
The next script reports failed login attempts. It reads dba_audit_session, so session auditing must be enabled first:
SQL> audit session whenever not successful;
Audit succeeded.
cat /export/home/oracle/invalid_log.sh
export ORACLE_HOME=/oracle/app/oracle/product/12.1.0/dbhome_1
export ORACLE_SID=SBIP18DB
export PATH=$ORACLE_HOME/bin:$PATH
logfile=/export/home/oracle/test.log
sqlplus -s "/as sysdba" > /dev/null << EOF
spool $logfile
set pagesize 1299
set lines 299
col username for a15
col userhost for a13
col timestamp for a39
col terminal for a23
SELECT username,userhost,terminal,to_char(timestamp,'DD/MM/YY
HH24:MI:SS' ) "TIMESTAMP" ,
CASE
when returncode=1017 then 'INVALID-attempt'
when returncode=28000 then 'account locked'
end "FAILED LOGIN ACTION"
FROM dba_audit_session WHERE timestamp > sysdate-1/9 and returncode in (1017,28000);
spool off
exit
EOF
count=`cat $logfile | wc -l`
#echo $count
if [ $count -ge 4 ];
then
mailx -s "INVALID ATTEMPTS IN DB" [email protected] < $logfile
fi
4. Configure in crontab:
0,15,30,45 * * * * /export/home/oracle/invalid_log.sh
7. A script for file system alert
The script below sends a notification when a mount point or filesystem usage crosses a threshold value.
For Solaris:
#!/bin/sh
# Walk the mounted filesystems; column 5 is capacity (%), column 6 the mount point
df -h | grep -v Filesystem | awk '{print $5 " " $6}' | tr -d '%' | while read val fs
do
if [ $val -ge 90 ]
then
echo "The $fs usage high $val% \n \n \n `df -h $fs`" | mailx -s "Filesystem $fs Usage high on Server `hostname`" [email protected]
fi
done
Put in crontab:
00 * * * * /usr/local/scripts/diskalert.sh
A variant for ZFS pools (zpoolusage.sh), alerting at 80 percent:
#!/bin/sh
# Walk the ZFS pools; capacity comes from zpool list
zpool list -H -o name,capacity | tr -d '%' | while read fs val
do
if [ $val -ge 80 ]
then
echo "The $fs usage high $val% \n \n \n `zpool list $fs`" | mailx -s "Zpool $fs Usage high on Server `hostname`" [email protected]
fi
done
Put in crontab:
00 * * * * /usr/local/scripts/zpoolusage.sh
The alert log grows day by day in an Oracle database. For housekeeping, we move the existing alert log to a backup location and compress it there; upon moving the alert log, the database automatically creates a fresh one.
ORACLE_HOME is defined inside the script, and ORACLE_SID is passed as an argument when running it.
#!/bin/bash
# $Header: rotatealertlog.sh
echo ========================
echo Set Oracle Database Env
echo ========================
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1   # adjust to your ORACLE_HOME
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=$1
TO_DATE=$(date +%Y%m%d)
echo =======
echo Extract Alert log location
echo =======
export VAL_DUMP=$(${ORACLE_HOME}/bin/sqlplus -S /nolog <<EOF
conn /as sysdba
set pages 0 feedback off;
prompt
SELECT value from v\$parameter where NAME='core_dump_dest';
exit;
EOF
)
export LOCATION=`echo ${VAL_DUMP} | perl -lpe'$_ = reverse' | awk '{print $1}' | perl -lpe'$_ = reverse'`
export ALERTDB=${LOCATION}/alert_$ORACLE_SID.log
export ELOG=$( echo ${ALERTDB} | sed s/cdump/trace/)
echo =======
echo Compress current alert log
echo =======
if [ -e "$ELOG" ] ; then
mv ${ELOG} ${ELOG}_${TO_DATE};
gzip ${ELOG}_${TO_DATE};
> ${ELOG}
else
echo "alert log not found"
fi
exit
2. Configure in crontab:
00 22 * * 5 /u01/app/oracle/dbscripts/rotatealertlog.sh PRODDB
9. Monitoring Tablespace
Below script can be configured in crontab to send a notification to the support DBAs in case
tablespace usage crosses a threshold.
1. First, make the below .sql file, which will be used inside the shell script.
In this script we have defined the threshold as 90%. You can change it as per your requirement.
cat /export/home/oracle/Housekeeping/scripts/tablespace_alert.sql
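The contents of tablespace_alert.sql are not reproduced above. As a hedged sketch of what such a query commonly looks like (built on dba_data_files and dba_free_space; the original file may differ), the helper below simply prints it:

```shell
#!/bin/sh
# Emit a sketch of tablespace_alert.sql: list tablespaces above 90% used.
# This is an assumed shape, not the original file's exact query.
tablespace_alert_sql() {
cat <<'SQL'
SELECT df.tablespace_name,
       ROUND((df.bytes - NVL(fs.bytes, 0)) * 100 / df.bytes, 2) pct_used
FROM   (SELECT tablespace_name, SUM(bytes) bytes
        FROM dba_data_files GROUP BY tablespace_name) df,
       (SELECT tablespace_name, SUM(bytes) bytes
        FROM dba_free_space GROUP BY tablespace_name) fs
WHERE  df.tablespace_name = fs.tablespace_name (+)
AND    (df.bytes - NVL(fs.bytes, 0)) * 100 / df.bytes > 90;
SQL
}

tablespace_alert_sql
```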
At the beginning of the script, we need to define env variables like ORACLE_HOME, PATH, LD_LIBRARY_PATH, and ORACLE_SID.
cat /export/home/oracle/Housekeeping/scripts/tablespace_threshold.ksh
#!/bin/sh
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_SID=PRODDB
cd /export/home/oracle/Housekeeping/scripts
logfile=/export/home/oracle/Housekeeping/scripts/Tablespace_alert.log
cnt1=`ps -ef|grep pmon|grep $ORACLE_SID|wc -l`
if [ $cnt1 -eq 1 ];
then
sqlplus -s "/as sysdba" > /dev/null << EOF
spool $logfile
@/export/home/oracle/Housekeeping/scripts/tablespace_alert.sql
spool off
exit
EOF
# If there are more than these two lines in the output file, mail it.
count=`cat $logfile | wc -l`
#echo $count
if [ $count -ge 4 ];
then
mailx -s "TABLESPACE ALERT FOR PROD DB" [email protected] < $logfile
fi
fi
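As with the other checks, the script can be scheduled in crontab; the interval below is an assumption:

```shell
# Check tablespace usage every 30 minutes
0,30 * * * * /export/home/oracle/Housekeeping/scripts/tablespace_threshold.ksh
```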
Configure a shell script to monitor the alert logs of all databases on a server once every 15 minutes, and in case of any ORA- error, mail the DBA team.
The script below uses the ADRCI utility of Oracle 11g and monitors the alert logs of all databases under the same Oracle base.
SCRIPT: (Adrci_alert_log.ksh)
LOG_DIR=/export/home/oracle/Housekeeping/logs/alert_log_check_daily.txt
adrci_homes=( $(adrci exec="show homes" | egrep -e rdbms) )
echo '##############################' > $LOG_DIR
echo '########################### ALERT LOG OUTPUT FOR LAST 15 MINUTES ###########################' >> $LOG_DIR
echo '##############################' >> $LOG_DIR
for adrci_home in ${adrci_homes[@]}
do
echo "Home: ${adrci_home}" >> $LOG_DIR
# 1/96 of a day = 15 minutes
adrci exec="set home ${adrci_home}; show alert -p \"originating_timestamp > systimestamp-1/96\" -term" >> $LOG_DIR
done
num_errors=`grep -c 'ORA' $LOG_DIR`
if [ $num_errors != 0 ]
then
mailx -s "ORA- error found in alert log on `hostname`" [email protected] < $LOG_DIR
fi
0,15,30,45 * * * * /export/home/oracle/Housekeeping/scripts/Adrci_alert_log.ksh > /export/home/oracle/Housekeeping/logs/error_alert.log 2>&1
The next script monitors the IP addresses behind a load-balanced URL and alerts when new ones appear. (An optional, commented-out line in the script — echo "$current_ips" > $filename — updates the saved file with the new IP addresses.) A breakdown of its functionality:
1. Initialization:
• Sets the url variable to the load-balanced HTTP link you want to monitor.
• Specifies files to store current and new IP addresses (current_ips.txt and new_ips.txt).
• Sets the maillist variable to the email address for notifications.
2. Fetching Current IPs:
• Uses nslookup $url to query DNS for the IP addresses associated with the URL.
• Filters the output using grep Address and extracts IPs using awk '{print $2}'.
• Stores the current IPs in the current_ips variable.
4. Comparing IPs:
• Splits both current_ips and saved_ips into separate arrays for comparison.
• Loops through each IP in the current_ips_array.
• For each IP, checks if it's not present in the saved_ips_array.
• If a new IP is found, adds it to new_ips.txt and increments the new_ips_found counter.
5. Notification:
• If any new IPs were found, mails the list in new_ips.txt to the maillist address.
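The comparison in steps 4 and 5 boils down to a membership test between the two IP lists; a stand-alone sketch (the function name is ours, not from the original script):

```shell
#!/bin/sh
# Print each IP in $1 (current lookup) that is absent from $2 (saved list).
find_new_ips() {
  current=$1
  saved=$2
  for ip in $current; do
    case " $saved " in
      *" $ip "*) ;;        # already known, skip
      *) echo "$ip" ;;     # new IP found
    esac
  done
}

# Example: only 10.0.0.3 is reported as new
find_new_ips "10.0.0.1 10.0.0.3" "10.0.0.1 10.0.0.2"
```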
The next script performs Oracle database backups (full, incremental, archive, or cold) with RMAN, with optional compression and parallel channels. Its usage function:
usage () {
echo "Usage : SID BACKUP_TYPE COMPRESSION PARALLELISM
SID : SID, comma separated list of databases, or ALL for all (running) databases
BACKUP_TYPE : INCR, FULL, COLD or ARCH
COMPRESS : COMPRESS or NOCOMPRESS to compress the backup or not
PARALLEL : defines the number of channels to use"
}
##Variables definition
BASEDIR=$(dirname "$0")
BACKUP_BASE=/Data_Domain/oracle/prod/
LOGDIR=${BASEDIR}/log
[email protected]
export NLS_DATE_FORMAT='dd/mm/yyyy hh24:mi:ss'
DATE=`date +"%Y%m%d_%H%M%S"`
PATH=$PATH:/usr/local/bin
# Parameters provided
DB_LIST=$1
BACKUP_TYPE=$2
PARALLEL=$4
# Argument validation (the script expects exactly 4 parameters)
if [ $# -ne 4 ]; then
usage
exit 1
fi
# Compression validation
if [ $3 = 'COMPRESS' ]; then
COMPRESS='AS COMPRESSED BACKUPSET'
elif [ $3 = 'NOCOMPRESS' ]; then
COMPRESS=''
else
usage
exit 1
fi
##backup function
function backup_database() {
# Set Oracle Environment for database
ORACLE_SID=$1
ORAENV_ASK=NO
. oraenv
OUTPUT_SID=${ORACLE_SID}
BACKUP_DIR=$BACKUP_BASE/${ORACLE_SID}
LOGFILE=$LOGDIR/rman_backup_${ORACLE_SID}_${BACKUP_TYPE}_${DATE}.log
else
if [ $BACKUP_TYPE = 'ARCH' ]; then
rman target / << EOF >> $LOGFILE
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR
DEVICE TYPE DISK TO ${CF_BACKUP};
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO
BACKUPSET PARALLELISM ${PARALLEL};
run {
backup ${COMPRESS} archivelog all
$FORMAT_ARCHIVE delete input filesperset 10;
delete noprompt obsolete;
}
exit
EOF
else
rman target / << EOF >> $LOGFILE 2>&1
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR
DEVICE TYPE DISK TO ${CF_BACKUP};
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO
BACKUPSET PARALLELISM ${PARALLEL};
run {
backup ${COMPRESS} archivelog all
$FORMAT_ARCHIVE delete input filesperset 10;
backup ${COMPRESS} ${LEVEL} database
$FORMAT_DATA include current controlfile;
backup ${COMPRESS} archivelog all
$FORMAT_ARCHIVE delete input filesperset 10;
delete noprompt obsolete;
}
exit
EOF
fi
fi
}
# Main
if [ $1 = 'ALL' ]; then
for database in `ps -ef | grep pmon | egrep -v 'ASM|grep' | awk '{print $8}' | cut -d_ -f3`
do
backup_database $database
done
else
for database in $(echo $1 | sed "s/,/ /g")
do
backup_database $database
done
fi
This script is an RMAN backup script designed to automate backing up Oracle databases. Here's
a breakdown of its functionalities:
1. Setting Up:
• Defines variables for paths, logging, email notification, date format, and adds helper
functions to the system path.
• Creates directories for backups, logs, and automatic backups if they don't exist.
• Checks if the number of arguments provided when running the script is correct (should be
4).
2. Processing Arguments:
3. Validating Input:
4. backup_database function:
5. Backup Logic:
• If DB_LIST is "ALL", the script iterates through all running databases identified using
process listing (ps -ef) and extracts the database names.
• Otherwise, it loops through each database name provided in the comma-separated
DB_LIST.
• For each database, the backup_database function is called to perform the backup process.
Overall, this script automates RMAN backups for Oracle databases based on user-provided
parameters and sends email notifications with log details for success or failure.
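For example, a full compressed backup of PRODDB over four channels might be launched as follows (the script filename is an assumption):

```shell
./rman_backup.sh PRODDB FULL COMPRESS 4
```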
The following script exports a table, or a single partition of it, with Data Pump:
BASE_SCHEMA=$1
BASE_TABLE=$2
PARALLEL=$3
PARTITION=$4
function usage(){
echo "USAGE:
Parameter 1 is the SCHEMA
Parameter 2 is the TABLE NAME
Parameter 3 is the DEGREE of parallelism
Parameter 4 (optional) is the partition (if any)"
}
if [ $# -lt 3 ]; then
usage
exit 1
fi
if [ $# -eq 4 ]; then
PARFILE=${BASE_SCHEMA}_${BASE_TABLE}_${PARTITION}.par
echo "tables=${BASE_SCHEMA}.${BASE_TABLE}:${PARTITION}" > $PARFILE
START_MESSAGE="Beginning export of partition : ${BASE_SCHEMA}.${BASE_TABLE}:${PARTITION}"
END_MESSAGE="Finished export of partition: ${BASE_SCHEMA}.${BASE_TABLE}:${PARTITION}"
DUMPFILE_BASE=${BASE_SCHEMA}_${BASE_TABLE}_${PARTITION}
LOGFILE_BASE=${BASE_SCHEMA}_${BASE_TABLE}_${PARTITION}
else
PARFILE=${BASE_SCHEMA}_${BASE_TABLE}.par
echo "tables=${BASE_SCHEMA}.${BASE_TABLE}" > $PARFILE
START_MESSAGE="# Beginning export of table : ${BASE_SCHEMA}.${BASE_TABLE}"
END_MESSAGE="# Finished export of table: ${BASE_SCHEMA}.${BASE_TABLE}"
DUMPFILE_BASE=${BASE_SCHEMA}_${BASE_TABLE}
LOGFILE_BASE=${BASE_SCHEMA}_${BASE_TABLE}
fi
echo "#############################################################"
echo $START_MESSAGE
echo "#############################################################"
echo " "
LIMIT=$(expr $PARALLEL - 1)
START_TIME=`date`
echo "##############################################################"
echo $END_MESSAGE
echo "# Start time : $START_TIME "
echo "# End time is: `date`"
echo "##############################################################"
The companion import script takes the target SID, table name, and an optional partition:
export ORAENV_ASK=NO
export ORACLE_SID=$1
. oraenv
TABLE_NAME=$2
PARTITION=$3
function usage(){
echo "USAGE:
Parameter 1 is the SID of the database where you want
to import
Parameter 2 is the TABLE you want to import
Parameter 3 (optional) is the PARTITION name you want
to import (if any)"
}
if [ $# -lt 2 ]; then
usage
exit 1
fi
if [ $# -eq 3 ]; then
PARFILE=${TABLE_NAME}_${PARTITION}.par
START_MESSAGE="Beginning import of partition : ${TABLE_NAME}:${PARTITION}"
END_MESSAGE="Finished import of partition: ${TABLE_NAME}:${PARTITION}"
SEARCH_PATTERN=${TABLE_NAME}_${PARTITION}
SUCCESS_MESSAGE="partition: ${TABLE_NAME}:${PARTITION} successfully imported, started at"
ERROR_MESSAGE="partition: ${TABLE_NAME}:${PARTITION} failed to import, check logfile for more info"
MAIL_OBJECT="Successfully imported partition ${TABLE_NAME}:${PARTITION}"
else
PARFILE=${TABLE_NAME}.par
START_MESSAGE="Beginning import of table : ${TABLE_NAME}"
END_MESSAGE="Finished import of table : ${TABLE_NAME}"
SEARCH_PATTERN=${TABLE_NAME}
SUCCESS_MESSAGE="Table ${TABLE_NAME} successfully imported, started at"
ERROR_MESSAGE="Table ${TABLE_NAME} failed to import, check logfile for more info"
MAIL_OBJECT="Successfully imported table ${TABLE_NAME}"
fi
#directories
BASEDIR=/u10/
DUMPDIR=$BASEDIR/DUMP
PARFILEDIR=$BASEDIR/parfiles
mkdir -p $PARFILEDIR
START_TIME=`date`
echo "##############################################################"
echo $END_MESSAGE
echo "# Start time : $START_TIME "
echo "# End time : `date`"
echo "##############################################################"
# Verifying errors
errors_count=`grep ORA- *${SEARCH_PATTERN}*.log | wc -l`
Conclusion
These shell scripts can be a valuable tool for automating database monitoring tasks and ensuring
the smooth operation of your Oracle environment. By customizing the scripts and configuring
crontab, you can receive timely notifications about potential issues and take necessary actions.
Note:
• Replace placeholders like <URL>, <PUT YOUR EMAIL>, etc. with your specific
values.
• Adjust threshold values and email recipients as needed.