DOYENSYS Knowledge Portal








Friday, July 31, 2015

How to check archivelog generation size on an hourly basis

alter session set nls_date_format='DD-MON-YYYY:HH24';

set pagesize 60
set linesize 90
set echo off;
set verify off;
set feedback off;
column sysdate  new_value headdate noprint;
column value    new_value dbname   noprint;
ttitle off
select sysdate,value from v$parameter
 where name = 'db_name';

ttitle skip 1 left "(redo_rate  )" -
       center "State Street Corp ITD/DBA" -
       right "page: " sql.pno " " -
skip 1 left   headdate   -
       center "Redo File Volumes by Day    " -
skip 1 left   "database: " dbname  " " -
       center dbname " " -
       right dbname " " -
skip 2;

ttitle on
spool redo_rate.&dbname
col log_size new_value logsize format 999,999,999,999 Heading "LogSize|KBytes";
col logsize format 999,999,999  
col "Total KBytes" format 999,999,999,999,999
rem col "Day" format a18
ttitle off
select distinct bytes/1024 log_size
from   v$log;
ttitle on;
select count(first_time)*&logsize "Total KBytes",
       to_date(to_char(first_time,'DD-MON-YYYY:HH24') ,'DD-MON-YYYY:HH24')
from   v$loghist
group by to_date(to_char(first_time,'DD-MON-YYYY:HH24') ,'DD-MON-YYYY:HH24')
order by 2
/

ttitle off
select distinct bytes/1024 log_size
from   v$log;
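On 10g and later, a simpler cross-check of the hourly figures (not part of the original script) is to sum the archived-log sizes directly from v$archived_log; a sketch:

```sql
-- Sketch: hourly archived-log volume straight from v$archived_log.
-- blocks * block_size gives each log's size in bytes.
select to_char(first_time, 'DD-MON-YYYY:HH24')   "Hour",
       round(sum(blocks * block_size) / 1024)    "KBytes"
from   v$archived_log
group  by to_char(first_time, 'DD-MON-YYYY:HH24')
order  by 1;
```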

How to check archivelog generation size on a daily basis.




alter session set nls_date_format='DD-MON-YYYY:HH24';

set pagesize 60
set linesize 90
set echo off;
set verify off;
set feedback off;
column sysdate  new_value headdate noprint;
column value    new_value dbname   noprint;
ttitle off 
select sysdate,value from v$parameter
 where name = 'db_name';
ttitle skip 1 left "(redo_rate  )" -
       center "State Street Corp ITD/DBA" -
       right "page: " sql.pno " " -
skip 1 left   headdate   -
       center "Redo File Volumes by Day    " -
skip 1 left   "database: " dbname  " " -
       center dbname " " -
       right dbname " " -
skip 2;
ttitle on
spool redo_rate.&dbname 
col log_size new_value logsize format 999,999,999,999 Heading "LogSize|KBytes";
col logsize format 999,999,999   
col "Total KBytes" format 999,999,999,999,999
col "Day" format a15
ttitle off
select distinct bytes/1024 log_size 
from   v$log;
ttitle on;
select count(first_time)*&logsize "Total KBytes",
       to_date(to_char(first_time,'DD-MON-YYYY') ,'DD-MON-YYYY') "Day"
from   v$loghist
group by to_date(to_char(first_time,'DD-MON-YYYY') ,'DD-MON-YYYY')
order by 2
/

ttitle off
select distinct bytes/1024 log_size 
from   v$log;
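As a quick complementary check, the raw log-switch count per day can be read straight from v$log_history; a sketch:

```sql
-- Sketch: log switches per day (a count, not a size; multiply by the
-- log size from v$log above for an approximate daily volume).
select trunc(first_time) "Day",
       count(*)          "Log Switches"
from   v$log_history
group  by trunc(first_time)
order  by 1;
```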

Problem while Workflow Agent Listener Service startup.

Scenario:
-----------


When we add a responsibility to a user and save the changes, the change does not take effect because the Workflow Agent Listener Service, Workflow Mailer Service, and Document Web Services Service are not up. If we try to restart the service from Concurrent --> Administer --> Workflow Agent Listener Service --> Restart, it keeps saying it is restarting but never starts.

Cause :
---------


The path for appsborg2.zip may not have been added to AF_CLASSPATH in the XML file.


Solution:
-----------

1. Check whether GSM is up and running in the System --> Profile option, or by using the following query:

select DECODE(b.profile_option_value, 'Y', 'Enabled', 'Disabled') DETAILS
from fnd_profile_options a, fnd_profile_option_values b
where a.APPLICATION_ID = b.APPLICATION_ID
and a.PROFILE_OPTION_ID = b.PROFILE_OPTION_ID
and a.PROFILE_OPTION_NAME = 'CONC_GSM_ENABLED';

2. Check whether the Service Manager is up and running with the following query:


select CONCURRENT_QUEUE_NAME, ENABLED_FLAG, MAX_PROCESSES, RUNNING_PROCESSES
from FND_CONCURRENT_QUEUES
where CONCURRENT_QUEUE_NAME like 'FNDSM%';

3. Check the value of AF_CLASSPATH:

echo $AF_CLASSPATH

Verify that the output includes appsborg2.zip.

Note: It may show appsborg.zip but not appsborg2.zip.

4. Check AF_CLASSPATH in the XML file. If appsborg2.zip is not there, add its path before appsborg.zip in the XML file.

For Example :

AF_CLASSPATH=/DBA/oracle/dbacomn/util/java/1.5/jdk1.5.0_11/lib/rt.jar:/DBA/oracle/dbacomn/util/java/1.5/jdk1.5.0_11/lib/i18n.jar:/DBA/oracle/dbacomn/java/appsborg2.zip:/DBA/oracle/dbacomn/java/appsborg.zip:/DBA/oracle/dbacomn/java/apps.zip:/DBA/oracle/dbaora/8.0.6/forms60/java:/DBA/oracle/dbacomn/java

Note: Back up the XML file before editing.

5. Now run Autoconfig.

6. If possible, bounce the apps tier and check the Workflow Agent Listener Service; it should now start.


Referenced Note ID:

How to Troubleshoot when the Workflow Services Go Down
  Doc ID:  564394.1
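The check in step 3 can also be scripted. The sketch below uses a made-up AF_CLASSPATH value to show the missing case; on a real system you would use the environment's actual value:

```shell
# Hypothetical sanity check for step 3: is appsborg2.zip on AF_CLASSPATH?
# The AF_CLASSPATH value below is a made-up example (it lacks appsborg2.zip).
AF_CLASSPATH=/u01/java/rt.jar:/u01/java/appsborg.zip:/u01/java/apps.zip

if echo "$AF_CLASSPATH" | grep -q 'appsborg2\.zip'; then
  status="ok"
else
  status="missing"
fi
echo "appsborg2.zip: $status"
```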

Wednesday, July 29, 2015

Identification of long-running reports in 10g Oracle Application Reports.

The script below identifies long-running reports in 10g Oracle Application Reports.


#!/bin/bash
###############################################################################
# File Name   : long_runn_rpt.sh
# Author      : Vimal Raju
# Date        : 05/12/2015   --- MM/DD/YYYY
# Description : Long running reports
#
###############################################################################

hostName=`hostname`
LOG_LOC=/u01/app/oracle/scripts/log
logfile=${LOG_LOC}/pid.log
tmpfile=/u01/app/oracle/scripts/tmp/tmp.log
mail=dbadesk@test.com
> ${logfile}
time=$1                              # threshold in hours (first argument)
ltime=`expr ${time} \\* 60 \\* 60`   # convert hours to seconds

ps -ef|grep engine|grep -v "grep"| awk '{print $2}' > ${tmpfile}

tmp=`cat ${tmpfile}|wc -l`
tmp1=1

while [ ${tmp1} -le ${tmp} ]
do
  ct=`cat ${tmpfile}|awk "NR=="${tmp1}""`            # PID on line ${tmp1}
  test1=`stat -t /proc/${ct} | awk '{print $14}'`    # /proc/<pid> timestamp (epoch)
  test2=`date +%s`                                   # current time (epoch)
  test3=`expr ${test2} - ${test1}`                   # process age in seconds

  if [ ${test3} -gt ${ltime} ]
  then
      echo " This process PID = ${ct} running long time in ${hostName} " >> ${logfile}
  else
      echo " "
  fi
  tmp1=`expr ${tmp1} + 1`
done

trig=`cat ${logfile}|wc -l`
check_tmp=`cat ${LOG_LOC}/report_trigger_tmp.log`

if [ "${trig}" -gt 0 ]
then
    echo C > ${LOG_LOC}/report_trigger.log
elif [ "${check_tmp}" = 'T' ]
then
    echo N > ${LOG_LOC}/report_trigger.log
fi

check_crit=`cat ${LOG_LOC}/report_trigger.log`
if [ "${check_crit}" = 'C' ]
then
    echo " ${trig} process running long time in ${hostName} " | mailx -s "Critical : Long running reports in ${hostName}" $mail
    echo T > ${LOG_LOC}/report_trigger_tmp.log
elif [ "${check_crit}" = 'N' ]
then
    echo " No long running reports in ${hostName} " | mailx -s "Normal : Long running reports in ${hostName}" $mail
    echo "" > ${LOG_LOC}/report_trigger.log
    echo "" > ${LOG_LOC}/report_trigger_tmp.log
fi
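The threshold arithmetic the script relies on can be sketched in isolation (all values below are hypothetical):

```shell
# Sketch of the script's threshold arithmetic (all values hypothetical).
time=4                               # threshold in hours, normally $1
ltime=`expr ${time} \* 60 \* 60`     # hours -> seconds

now=`date +%s`                       # current epoch seconds
start=`expr ${now} - 20000`          # pretend the process began 20000s ago
age=`expr ${now} - ${start}`         # age in seconds, as the script computes it

if [ ${age} -gt ${ltime} ]; then
  echo "flag as long running"        # fires because age exceeds the threshold
fi
```

The full script is run with the threshold in hours as its first argument, e.g. ./long_runn_rpt.sh 4, typically from cron.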

Standby Archivelog is wrongly being shipped to Oracle home dbs location after database switchover

After the switchover of the database from primary to standby, archived logs from the new primary were shipped to the Oracle home dbs directory on the new standby, which is the default location for the initialization parameter. The issue was traced to the following parameter changing after the switchover.

The STANDBY_ARCHIVE_DEST parameter had been set to "Auto".

Once we changed this parameter to the actual archive log destination, the issue was fixed.

So this parameter should be checked after every switchover.
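A sketch of the post-switchover check and fix (the destination path below is hypothetical):

```sql
-- Verify where the standby expects archived logs to land:
show parameter standby_archive_dest

-- Point it at the real archive destination (path is hypothetical):
alter system set standby_archive_dest='/u01/app/oracle/arch' scope=both;
```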

Saturday, July 18, 2015

Function not available to this responsibility. Change responsibilities or contact your system administrator

Solution:

Register your custom top directory in the default.env file as follows:

1. Go to the $INST_TOP/ora/10.1.2/forms/server directory.

2. There you will find the default.env file.

3. Add the custom top directory at the end of the file, e.g.:

XX_TOP=/home/applmgr/xxhn/

4. Then restart the OPMN services; the issue will be resolved.
 

Thank You

Get the SPID for an RMAN restore channel



COLUMN CLIENT_INFO FORMAT a30
COLUMN SID FORMAT 999
COLUMN SPID FORMAT 9999

SELECT s.SID, p.SPID, s.CLIENT_INFO
FROM   V$PROCESS p, V$SESSION s
WHERE  p.ADDR = s.PADDR
AND    CLIENT_INFO LIKE 'rman%';
SID SPID         CLIENT_INFO
---- ------------ ------------------------------
  14 8374         rman channel=ORA_SBT_TAPE_1

Thursday, July 16, 2015

ORA-02085: database link DOYEN_REMOTEDB.DOYENSYS@USERNAME connects to DOYEN_REMOTEDB

ERROR:
=====

ORA-02085: database link DOYEN_REMOTEDB.DOYENSYS@USERNAME connects to DOYEN_REMOTEDB

SCENARIO:
========

SQL> select * from dual@DOYEN_REMOTEDB@USERNAME;
select * from dual@DOYEN_REMOTEDB@USERNAME

ERROR at line 1:

ORA-02085: database link DOYEN_REMOTEDB.DOYENSYS@USERNAME connects to DOYEN_REMOTEDB

REASON :
=======

The GLOBAL_NAMES initialization parameter is set to TRUE on the database that connects to the remote.

select name, value from v$parameter where name in ('db_name', 'db_domain', 'global_names');
NAME                 VALUE
-------------------- ----------------------------------------
db_domain
global_names         TRUE
db_name              mydb

SOLUTION :
=========

At the local database, simply set:

alter system set global_names = FALSE;

Now the query will work fine.
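If GLOBAL_NAMES must stay TRUE (for example, by site policy), the alternative is to name the link after the remote database's global name; a sketch with hypothetical credentials and TNS alias:

```sql
-- Alternative that keeps global_names=TRUE: name the link after the remote
-- database's global name (credentials and TNS alias here are hypothetical).
create database link DOYEN_REMOTEDB.DOYENSYS
  connect to scott identified by tiger
  using 'DOYEN_REMOTEDB';

select * from dual@DOYEN_REMOTEDB.DOYENSYS;
```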

Wednesday, July 8, 2015

Can Not Login To The Applications: oracle.apps.fnd.framework.OAException: FND_NO_TRANSACTION_ID error

Error while logging in with a username in EBS:



Exception Details.
oracle.apps.fnd.framework.OAException: Could not load application module
'oracle.apps.fnd.sso.login.server.MainLoginPageAM'
...
## Detail 0 ##
oracle.apps.fnd.framework.OAException: Application: FND, Message Name: FND_NO_TRANSACTION_ID.
at oracle.apps.fnd.framework.webui.OAJSPHelper.getTransactionId(OAJSPHelper.java:350)
at oracle.apps.fnd.framework.webui.OAJSPApplicationRegistry.registerApplicationModule(OAJSPApplicationRegistry.java:133) 
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:1279) 
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:532) 
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:420) 
at _OA._jspService(_OA.java:216) 
at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59) 
at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:335) 
at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:478)
Solution 1:

1. Check for any invalid objects in the database instance by running, for instance, the following SQL statements as a DBA user in SQL*Plus:
select owner, object_type, count(*)
from dba_objects
where status='INVALID'
group by owner, object_type;

select owner, object_name
from dba_objects
where status='INVALID'
order by owner;
2. Check tablespace free space by running, for instance, the following SQL statement as a DBA user in SQL*Plus:

select Total.name "Tablespace Name", nvl(Free_space, 0) Free_space,
nvl(total_space-Free_space, 0) Used_space, total_space
from
 (select tablespace_name, sum(bytes/1024/1024) Free_Space
  from sys.dba_free_space
  group by tablespace_name) Free,
  (select b.name, sum(bytes/1024/1024) TOTAL_SPACE
   from sys.v_$datafile a, sys.v_$tablespace B
   where a.ts# = b.ts#
   group by b.name) Total
where Free.Tablespace_name(+) = Total.name
order by Total.name;
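If step 1 reports invalid objects, they can be recompiled in bulk with the standard utlrp.sql script; a sketch:

```sql
-- Recompile invalid objects in bulk with the standard script (run as SYS
-- from SQL*Plus on the database tier), then re-check:
@?/rdbms/admin/utlrp.sql

select count(*) from dba_objects where status = 'INVALID';
```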

Monday, July 6, 2015

Datapump fails with ORA-27086 on NFS


Error :
====

Export: Release 10.2.0.4.0 - Production on Tuesday, 2 June, 2015 11:27:50
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Release 10.2.0.4.0 - Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/backup-nfs/testdb/export/exportfull.dmp"
ORA-27086: unable to lock file - already in use
Linux Error: 37: No locks available
Additional information: 10

Solution:
========

The problem was with the NFS locking daemons.

It is most likely due to file locking on the Unix system; restarting the locking service resolves it.

Check whether the 'nfslock' service is running:

# service nfslock status
rpc.statd is stopped

Start the 'nfslock' service:

# /etc/init.d/nfslock start
Starting NFS statd:                                        [  OK  ]

Status:
======

# service nfslock status
rpc.statd (pid  4943) is running...
Starting the 'nfslock' service resolved the problem, and the export to NFS then ran successfully.
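To keep 'nfslock' from being stopped again after a reboot, the service can also be enabled at boot time (RHEL/CentOS sysvinit syntax; adjust for your platform):

```
# chkconfig nfslock on
```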