DOYENSYS Knowledge Portal








Monday, July 30, 2018

Convert Oracle 12c database from Non-ASM to ASM (standalone)

Starting with 12c, you can move or relocate datafiles ONLINE, meaning users can still access the database and its
objects while the move/relocate is in progress; prior to 12c this was not possible.


Prerequisites:

1. Oracle Grid Infrastructure must be installed on the server (required for ASM).

2. Create the required ASM disk groups.

EXAMPLE:

Currently the datafiles are in /u01 mount point. We will move them to ASM DISK GROUP +DATA.

Database Name: TESTDB

ASM disk group: +DATA
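Before starting, it is worth confirming that the target disk group is mounted and has enough free space. A minimal check (the disk group name matches the example above):

```sql
-- Confirms the DATA disk group is mounted and shows its free space in MB.
SELECT name, state, total_mb, free_mb
FROM   v$asm_diskgroup
WHERE  name = 'DATA';
```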


oracle $ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Wed Jul 28 20:45:03 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TESTDB (DBID=2123458733)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name TESTDB

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    1024     SYSTEM               YES     /u01/TESTDB/u01/oradata/TESTDB/system01.dbf
2    1114     SYSAUX               NO      /u01/TESTDB/u02/oradata/TESTDB/sysaux01.dbf
3    31974    UNDOTBS1             YES     /u01/TESTDB/u04/oradata/TESTDB/undotbs01.dbf
4    50       USERS                NO      /u01/TESTDB/u03/oradata/TESTDB/users01.dbf
5    100      STAGING_TS           NO      /u01/TESTDB/u01/oradata/TESTDB/staging_01.dbf

……………..Trimmed …………….


186  100      COMP_INDEX         NO      /u01/TESTDB/u02/oradata/TESTDB/COMP_INDEX_05.dbf
187  100      COMP_INDEX         NO      /u01/TESTDB/u01/oradata/TESTDB/COMP_INDEX_04.dbf
188  100      COMP_INDEX         NO      /u01/TESTDB/u03/oradata/TESTDB/COMP_INDEX_03.dbf
189  100      COMP_INDEX         NO      /u01/TESTDB/u01/oradata/TESTDB/COMP_INDEX_02.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    1177     TEMP                 32767       /u01/TESTDB/u03/oradata/TESTDB/temp01.dbf
2    2554     TEMP                 32767       /u01/TESTDB/u03/oradata/TESTDB/temp02.dbf
3    2554     TEMP                 32767       /u01/TESTDB/u03/oradata/TESTDB/temp03.dbf

***********Trimmed ********

29   2554     TEMP                 32767       /u01/TESTDB/u04/oradata/TESTDB/temp29.dbf
30   2554     TEMP                 32767       /u01/TESTDB/u04/oradata/TESTDB/temp30.dbf
31   2554     TEMP                 32767       /u01/TESTDB/u04/oradata/TESTDB/temp31.dbf
32   2554     TEMP                 32767       /u01/TESTDB/u04/oradata/TESTDB/temp32.dbf

RMAN> exit


Datafile Move:
==============
Generate a script of ALTER DATABASE MOVE DATAFILE statements, then execute it:

SQL> set heading off pages 999 line 280
SQL> spool move_dbfiles.sql
SQL> select 'ALTER DATABASE MOVE DATAFILE '''||name||''' TO ''+DATA'';' from v$datafile order by con_id;
SQL> spool off;

SQL> @move_dbfiles.sql
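Each generated line should look like the example below (the path is illustrative). Once the script finishes, a quick check confirms nothing was left behind on the filesystem:

```sql
-- Example of one generated statement (illustrative path):
-- ALTER DATABASE MOVE DATAFILE '/u01/TESTDB/u01/oradata/TESTDB/system01.dbf' TO '+DATA';

-- Any datafile still outside the +DATA disk group shows up here:
SELECT name FROM v$datafile WHERE name NOT LIKE '+DATA/%';
```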

Tempfile:
=========

SELECT FILE_NAME,BYTES/1024/1024 FROM DBA_TEMP_FILES;

create temporary tablespace TEMP1 tempfile '+DATA' SIZE 100m reuse AUTOEXTEND ON NEXT 10M MAXSIZE 500M;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP1;

Check the sessions that are using the temp tablespace:
==============================================
col hash_value for a40
col tablespace for a10
col username for a15
set linesize 132 pagesize 1000

SQL>SELECT s.sid,s.serial#, s.username, u.tablespace, s.sql_hash_value||'/'||u.sqlhash hash_value, u.segtype, u.contents, u.blocks
FROM v$session s, v$tempseg_usage u
WHERE s.saddr=u.session_addr
order by u.blocks;

SQL>alter system kill session 'sid,serial#' immediate;

SQL>drop tablespace temp including contents and datafiles;


SQL>create temporary tablespace TEMP tempfile '+DATA' SIZE 100m AUTOEXTEND ON NEXT 100M MAXSIZE 2G;

SQL>ALTER TABLESPACE temp ADD TEMPFILE '+DATA' SIZE 100m  AUTOEXTEND ON NEXT 10M MAXSIZE 1G;


SQL>ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;

SQL>drop tablespace temp1 including contents and datafiles;
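After TEMP has been recreated in ASM and TEMP1 dropped, a quick sanity check (a sketch; every tempfile should now report a +DATA path):

```sql
SELECT file_name, bytes/1024/1024 AS size_mb
FROM   dba_temp_files;
```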



Logfile:
==========
SQL> select member from v$logfile;

MEMBER
---------------------------------------------------------
/u01/TESTDB/u70/redo/redo01a.log
/u01/TESTDB/u72/redo/redo02b.log
/u01/TESTDB/u70/redo/redo02a.log
/u01/TESTDB/u72/redo/redo01b.log
/u01/TESTDB/u70/redo/redo03a.log
/u01/TESTDB/u72/redo/redo03b.log

SQL>  select group#, status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 INACTIVE
         3 INACTIVE
         4 CURRENT
         5 INACTIVE
         6 INACTIVE

SQL> alter database drop logfile group 1;

Statement processed

SQL> alter database add logfile group 1 ('+DATA') size 200M;

Statement processed

SQL> alter database drop logfile group 2;

Statement processed

SQL> alter database drop logfile group 3;

Statement processed

SQL> alter database add logfile group 2 ('+DATA') size 200M;

Statement processed

SQL> alter database add logfile group 3 ('+DATA') size 200M;

Statement processed

SQL> select group#, status from v$log;

    GROUP# STATUS
---------- ----------------
         1 UNUSED
         2 UNUSED
         3 UNUSED
         4 CURRENT
         5 INACTIVE
         6 INACTIVE

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DATA/TESTDB/ONLINELOG/group_1.260.947891785
+DATA/TESTDB/ONLINELOG/group_2.261.947891801
+DATA/TESTDB/ONLINELOG/group_3.256.947890863
/u01/TESTDB/u72/redo/redo01b.log
/u01/TESTDB/u70/redo/redo03a.log
/u01/TESTDB/u72/redo/redo03b.log


SQL> alter system switch logfile;

Statement processed

SQL>  select group#, status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         2 UNUSED
         3 UNUSED
         4 ACTIVE
         5 INACTIVE
         6 INACTIVE

SQL> alter system checkpoint;

Statement processed

SQL>  select group#, status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         2 UNUSED
         3 UNUSED
         4 INACTIVE
         5 INACTIVE
         6 INACTIVE


SQL> alter database drop logfile group 4;

Statement processed

SQL> alter database drop logfile group 5;

Statement processed

SQL> alter database drop logfile group 6;

Statement processed

SQL> alter database add logfile group 4 ('+DATA') size 200M;

Statement processed

SQL> alter database add logfile group 5 ('+DATA') size 200M;

Statement processed

SQL> alter database add logfile group 6 ('+DATA') size 200M;

Statement processed

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
+DATA/TESTDB/ONLINELOG/group_1.260.947891785
+DATA/TESTDB/ONLINELOG/group_2.261.947891801
+DATA/TESTDB/ONLINELOG/group_3.256.947890863
+DATA/TESTDB/ONLINELOG/group_4.257.947891047
+DATA/TESTDB/ONLINELOG/group_5.258.947891057
+DATA/TESTDB/ONLINELOG/group_6.259.947891065
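At this point no redo member should remain on the filesystem; the following query returns no rows once the migration is complete:

```sql
SELECT group#, member
FROM   v$logfile
WHERE  member NOT LIKE '+%';
```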


Controlfile:
==============
SQL> sho parameter control

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_file_record_keep_time        integer     10
control_files                        string      /u01/TESTDB/u02/oradata/TESTDB/control01.ctl, /u01/TESTDB/u03/oradata/TESTDB/control02.ctl
control_management_pack_access       string      DIAGNOSTIC+TUNING

SQL> ALTER SYSTEM SET control_files='+DATA' scope=spfile;

SQL> select name from v$controlfile;

SQL>shut immediate;

SQL>startup nomount;

$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Wed Jul 28 23:20:32 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TESTDB (not mounted)

RMAN> restore controlfile to '+DATA' from '/u01/TESTDB/u02/oradata/TESTDB/control01.ctl';

Starting restore at 28-Jul-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1861 device type=DISK

channel ORA_DISK_1: copied control file copy
Finished restore at 28-Jul-17

RMAN> restore controlfile to '+DATA'  from  '/u01/TESTDB/u03/oradata/TESTDB/control02.ctl';

Starting restore at 28-Jul-17
using channel ORA_DISK_1

channel ORA_DISK_1: copied control file copy
Finished restore at 28-Jul-17


RMAN> exit

Recovery Manager complete.

Check the newly created control files in ASM

:+ASM:/home/oracle $ asmcmd -p
ASMCMD [+] > lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N        4096   4096  1048576   5722044  5680486                0         5680486              0             N  DATA/

ASMCMD [+DATA/TESTDB] > ls
CONTROLFILE/
DATAFILE/
TEMPFILE/
ASMCMD [+DATA/TESTDB] > cd CONTROLFILE/
ASMCMD [+DATA/TESTDB/CONTROLFILE] > ls
current.448.947892061
current.262.947892111

/home/oracle$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jul 28 23:27:16 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> sho parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /opt/oracle/product/12.1.0.2.64/dbs/spfileTESTDB.ora

SQL> alter system set control_files='+DATA/TESTDB/CONTROLFILE/current.448.947892061','+DATA/TESTDB/CONTROLFILE/current.262.947892111' scope=spfile;

System altered.

SQL> alter database mount ;

Database altered.

SQL>select name from v$controlfile;

SQL> shut immediate;

SQL> startup;
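As a final sanity check, the following confirms that controlfiles, datafiles, tempfiles and redo logs are all inside ASM (every row returned should start with +DATA):

```sql
SELECT name   FROM v$controlfile
UNION ALL
SELECT name   FROM v$datafile
UNION ALL
SELECT name   FROM v$tempfile
UNION ALL
SELECT member FROM v$logfile;
```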

Tuesday, July 24, 2018

Move the central Inventory (oraInventory) to another location

Solution On Unix:

Find the current location of the central inventory (normally $ORACLE_BASE/oraInventory):

For example:

find /u01/app/oracle -name oraInventory -print

 /u01/app/oracle/oraInventory



Open the oraInst.loc file in /var/opt/oracle and check the value of inventory_loc

cat /var/opt/oracle/oraInst.loc

inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall


Remark: The oraInst.loc file is simply a pointer to the location of the central inventory (oraInventory)



Copy the oraInventory directory to the destination directory

cp -Rp /u01/app/oracle/oraInventory /app/oracle



Edit the oraInst.loc file to point to the new location

For example:

vi /var/opt/oracle/oraInst.loc

inventory_loc=/app/oracle/oraInventory
inst_group=dba
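The edit can also be scripted. A minimal sketch, run here against a throwaway copy of the file so nothing real is touched (all paths are illustrative):

```shell
# Work on a temporary copy of oraInst.loc (illustrative paths and contents).
tmp=$(mktemp -d)
cat > "$tmp/oraInst.loc" <<'EOF'
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
EOF

# Point inventory_loc at the new directory.
sed -i 's|^inventory_loc=.*|inventory_loc=/app/oracle/oraInventory|' "$tmp/oraInst.loc"

grep '^inventory_loc' "$tmp/oraInst.loc"   # prints the updated line
```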

Saturday, July 21, 2018

The referenced database doesn't contain a valid management Repository

OMS 13c agent installation failed with "The referenced database doesn't contain a valid management Repository."

Cause :-

Generally this error occurs when the database being used for the OMS was already used for a previous OMS installation, or when a failed OMS installation has already been attempted against it.

Solution :-

We have to properly clean the database before using it again as an OMS database.

Below are the steps I used to clean my database:

1. Drop sysman related schemas.

[oracle@ ~]$ . .bash_profile
[oracle@ ~]$ sqlplus / as sysdba

SQL> DROP USER SYSMAN CASCADE;

User dropped.
SQL> DROP USER SYSMAN_OPSS CASCADE;

User dropped.

SQL> DROP USER SYSMAN_MDS CASCADE;

User dropped.

SQL> DROP USER SYSMAN_STB CASCADE;

User dropped.

SQL> DROP USER SYSMAN_BIPLATFORM CASCADE;

User dropped.


SQL> DROP USER SYSMAN_RO CASCADE;

User dropped.

2. Remove Synonyms related to sysman accounts :-

DECLARE
  CURSOR l_syn_csr IS
    SELECT 'DROP ' ||
      CASE owner
        WHEN 'PUBLIC'
          THEN 'PUBLIC SYNONYM '
        ELSE 'SYNONYM ' || owner || '.'
      END ||
      synonym_name AS cmd
    FROM
      dba_synonyms
    WHERE
      table_owner IN (
        'SYSMAN',
        'SYSMAN_MDS',
        'MGMT_VIEW',
        'SYSMAN_BIP',
        'SYSMAN_APM',
        'BIP',
        'SYSMAN_OPSS',
        'SYSMAN_RO'
      );
BEGIN
  FOR l_syn_rec IN l_syn_csr LOOP
    BEGIN
      EXECUTE IMMEDIATE l_syn_rec.cmd;
    EXCEPTION
      WHEN OTHERS THEN
        dbms_output.put_line( '===> ' || l_syn_rec.cmd );
        dbms_output.put_line( sqlerrm );
    END;
  END LOOP;
END;
/


PL/SQL procedure successfully completed.


3. Removing remaining Objects and tablespaces :-


SQL> DROP USER mgmt_view CASCADE;

User dropped.

SQL> DROP TABLESPACE mgmt_ecm_depot_ts INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;

Tablespace dropped.

SQL> DROP TABLESPACE mgmt_tablespace   INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;

Tablespace dropped.

SQL> DROP TABLESPACE mgmt_ad4j_ts      INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;

Tablespace dropped.
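After the drops, it is worth verifying that no SYSMAN-related accounts or MGMT tablespaces remain (a sketch; both queries should return no rows):

```sql
SELECT username        FROM dba_users       WHERE username LIKE 'SYSMAN%' OR username = 'MGMT_VIEW';
SELECT tablespace_name FROM dba_tablespaces WHERE tablespace_name LIKE 'MGMT%';
```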



4. Since proper database cleaning using RepManager dropall did not happen, we have to clean up the registry details manually:

DELETE
  FROM
    schema_version_registry
  WHERE
    (comp_name,owner) IN (
      ('Authorization Policy Manager','SYSMAN_APM'),
      ('Metadata Services','SYSMAN_MDS'),
      ('Oracle Platform Security Services','SYSMAN_OPSS')
    );

2 rows deleted.
SQL> commit;

Commit complete.

SQL> ALTER SYSTEM SET optimizer_dynamic_sampling = 0 SCOPE=both;

System altered.

Wednesday, July 18, 2018

ORA-02020: Too Many Database Links In Use


PROBLEM:

While querying against a database link, got below error.
select sysdate from dual@DBLINK
*
ERROR at line 1:
ORA-02020: too many database links in use


CAUSE & SOLUTION:

The open_links parameter controls the number of database links each session can have open concurrently.
Once you access a database link in a session, the link remains open until you close the session.

SYS@ORCL> show parameter open_links

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
open_links                           integer     20
open_links_per_instance              integer     20

Here open_links is set to 20, i.e. a session can have at most 20 database links open at once. When the number of open links in a session reaches the limit (open_links), Oracle throws ORA-02020: too many database links in use.
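Before raising the limit, you can also check which links the current session is holding open and close any that are no longer needed (commit or roll back first if a distributed transaction is open). The link name below is illustrative:

```sql
-- Links currently open in this session:
SELECT db_link, in_transaction FROM v$dblink;

-- Close a link that is no longer needed:
ALTER SESSION CLOSE DATABASE LINK dblink_name;
```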

Solution:
1. Increase the open_links parameter (bounce required)

alter system set open_links=50 scope=spfile;

shutdown immediate;

startup

SYS@ORCL> show parameter open_links

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
open_links                           integer     50
open_links_per_instance              integer     20

Monday, July 16, 2018

Script to kill db session and cancel running current request

set lines 140;
set serveroutput on;
declare

trace_cmd               varchar2(200);
trace_sid               varchar2(100);
trace_serial    varchar2(100);
req_id                  varchar2(100);
sess_status     varchar2(100);
sess_osuser     varchar2(100);
sess_process    varchar2(100);
sess_spid               varchar2(100);


cursor trace_reqid is SELECT a.request_id,d.sid, d.serial#
FROM apps.fnd_concurrent_requests a,
apps.fnd_concurrent_processes b,
v$process c,v$session d
WHERE a.controlling_manager = b.concurrent_process_id
AND c.pid = b.oracle_process_id
AND b.session_id=d.audsid
AND a.request_id in (SELECT fcr.request_id rqst_id
FROM apps.fnd_concurrent_requests fcr, apps.fnd_user fu, apps.fnd_responsibility_tl fr, apps.fnd_concurrent_programs_tl fcp
WHERE fcr.status_code LIKE 'R'
AND fu.user_id = fcr.requested_by
AND fr.responsibility_id = fcr.responsibility_id
AND fcr.concurrent_program_id = fcp.concurrent_program_id
AND fcr.program_application_id = fcp.application_id
AND fcp.user_concurrent_program_name='&a'
AND ROUND (((SYSDATE - fcr.actual_start_date) * 60 * 24), 2) > 10);

begin

open trace_reqid;
loop
fetch trace_reqid into req_id,trace_sid,trace_serial;
exit when trace_reqid%notfound;
dbms_output.put_line('alter system kill session '''||trace_sid||','||trace_serial||''';');
end loop;
close trace_reqid;

open trace_reqid;
loop
fetch trace_reqid into req_id,trace_sid,trace_serial;
exit when trace_reqid%notfound;
dbms_output.put_line('update apps.fnd_concurrent_requests set phase_code=''C'' , status_code=''X'' where request_id='''||req_id||''';');
end loop;
close trace_reqid;

end;
/

Script to kill db session of a running concurrent program

Provide the concurrent program name when prompted.
set serveroutput on;
declare

trace_cmd               varchar2(200);
trace_sid               varchar2(100);
trace_serial    varchar2(100);
req_id                  varchar2(100);
sess_status     varchar2(100);
sess_osuser     varchar2(100);
sess_process    varchar2(100);
sess_spid               varchar2(100);


cursor trace_reqid is SELECT a.request_id,d.sid, d.serial#
FROM apps.fnd_concurrent_requests a,
apps.fnd_concurrent_processes b,
v$process c,v$session d
WHERE a.controlling_manager = b.concurrent_process_id
AND c.pid = b.oracle_process_id
AND b.session_id=d.audsid
AND a.request_id in (SELECT fcr.request_id rqst_id
FROM apps.fnd_concurrent_requests fcr, apps.fnd_user fu, apps.fnd_responsibility_tl fr, apps.fnd_concurrent_programs_tl fcp
WHERE fcr.status_code LIKE 'R'
AND fu.user_id = fcr.requested_by
AND fr.responsibility_id = fcr.responsibility_id
AND fcr.concurrent_program_id = fcp.concurrent_program_id
AND fcr.program_application_id = fcp.application_id
AND fcp.user_concurrent_program_name='&a'
AND ROUND (((SYSDATE - fcr.actual_start_date) * 60 * 24), 2) > 10);

begin

open trace_reqid;
loop
fetch trace_reqid into req_id,trace_sid,trace_serial;
exit when trace_reqid%notfound;
dbms_output.put_line('alter system kill session '''||trace_sid||','||trace_serial||''';');
end loop;
close trace_reqid;

end;
/

Queries to monitor FND_LOBS attachments

1. To determine the total number of attachments per program,

SQL> select distinct program_name, count(*) from fnd_lobs group by program_name order by 2 desc;

2. To determine the total number of attachments with an Expiration Date, per program,

SQL> select program_name,count(*) from FND_LOBS where expiration_date is not NULL group by program_name order by 2 desc;

3. To determine the total number of attachments with no Expiration Date, per program,

SQL> select program_name,count(*) from FND_LOBS where expiration_date is NULL group by program_name order by 2 desc;

4. To find the size of the attachments per program,

SQL> select program_name, count(0), round(sum((dbms_lob.getlength(FILE_DATA)-1)/1024/1024/1024),1) size_gb
from fnd_lobs group by program_name order by size_gb desc;

5. To find the size of the attachments per application,

SQL>  select fa.application_name, count(0), round(sum(dbms_lob.getlength (FILE_DATA)/1024/1024/1024),1) size_gb
from fnd_lobs fl,
 fnd_document_entities fde,
 fnd_attached_documents fad,
 fnd_documents_vl fd,
 fnd_application_vl fa
where fd.media_id = fl.file_id
 and fd.document_id = fad.document_id
 and fl.program_name = 'FNDATTCH'    ------Change this for other programs
 and fad.entity_name = fde.data_object_code
 and fa.application_id = fde.application_id
group by fa.application_name
order by size_gb desc;

6. To find the attachment size as per the Expiration Date,

SQL> select program_name,round(sum(dbms_lob.getlength (FILE_DATA))/1024/1024/1024,0) "Size(G)"
from APPS.fnd_LOBS where expiration_date is NULL group by program_name order by 2 desc;

SQL> select program_name,round(sum(dbms_lob.getlength (FILE_DATA))/1024/1024/1024,0) "Size(G)"
from APPS.fnd_LOBS where expiration_date is not NULL group by program_name order by 2 desc;


7. To find PCTVERSION of the segment,

SQL> select SEGMENT_NAME, PCTVERSION from dba_lobs where TABLE_NAME ='FND_LOBS';

8. To find which user has uploaded the most attachments,

SQL> select fu.user_name, round(sum(dbms_lob.getlength(file_data)/1024/1024),0) size_mb
from fnd_documents fd, fnd_lobs fl, fnd_user fu
where fd.media_id = fl.file_id
and fl.program_name ='FNDATTCH'  ------Change this for other programs
and fd.datatype_id = 6
and fu.user_id=fd.created_by
group by fu.user_name
order by 2 desc;

Few queries for OPP

1. How to monitor the workload of OPP

In general, the "Administer Concurrent Manager" screen shows pending and running requests against each manager, but it does not show pending requests against the OPP. The OPP uses an Advanced Queue to find the pending requests it needs to process, so it is difficult to configure or size the OPP without knowing its workload. For example, the screen shows how many requests are pending or running against the Standard Managers; using this information, we can size the Standard Manager (increase its processes) accordingly.

However, when the OPP begins to process a concurrent request, it updates the processor_id column of fnd_conc_pp_actions with its concurrent_process_id. We can use the following query to find pending requests against the OPP:

SQL> select REQUEST_ID,PHASE_CODE,STATUS_CODE from fnd_concurrent_requests
where request_id in (select concurrent_request_id from fnd_conc_pp_actions where action_type >= 6 and processor_id is null)
and PHASE_CODE!='C';

Get the processor id from Concurrent -> Manager -> Administer -> OPP -> Processes.


2. Determine OPP based concurrent requests

The following query lists all OPP based concurrent requests,

select distinct b.user_concurrent_program_name
from apps.fnd_concurrent_requests a, apps.fnd_concurrent_programs_tl b
where a.concurrent_program_id = b.concurrent_program_id
and a.request_id in (select concurrent_request_id from apps.fnd_conc_pp_actions where action_type >= 6 and processor_id is null) order by 1;

3. Determine the current heap size per OPP process:

select DEVELOPER_PARAMETERS from FND_CP_SERVICES
where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

4. Update the heap size,

update FND_CP_SERVICES set DEVELOPER_PARAMETERS ='J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx1024m'
where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

OPP "outofmemory" best practices

The "outofmemory" error occurs
- when the Output Post Processor does not have sufficient heap size per process defined to handle the large report
- when processing large reports during peak utilization times and the environment is not tuned
- when there are coding errors in the data template

Please find the best practices of OPP process as suggested by Oracle Support as follows,

a. Templates generated using BI Publisher (11g or 12c) are not supported by the EBS. Please use the BI Publisher Desktop version meant for the EBS application and recreate your template using it.

Details to get BIP for EBS:
The BI Publisher Desktop version meant for the EBS application (xdo 5.6.3) is obtained from patch 12395372.
When installed it will show as version 10.1.3.2.1 in MS Word.

Reference:
Section 3. Latest XML Publisher Desktop Patch from:
Note 1138602.1 - Overview of Available Patches for Oracle XML Publisher embedded in the Oracle E-Business Suite

10.1.3.2.1 is certified only with MS Office 2003 and 2007.


b. The scalability setting SHOULD NOT be enabled at the site level. Set the scalability settings to False at the site level, and only set them to True at the template level of large reports.
1. Sign on as the XML Publisher Administrator
2. Navigate to the Administration page
3. Click on the 'FO Processing' plus sign and set the following to 'False' or leave blank:
   - Use XML Publisher's XSLT processor
   - Enable scalable feature of XSLT processor
   - Enable XSLT runtime optimization
4. Save and bounce the managers

c. Set the scalability settings to True at the template level of large reports (including for the template of this particular report - Template code: XXAARPTOBJ)
In the XML Publisher Administration responsibility:
1. Query up the template for the concurrent program that produces a large data file
2. Click on the Edit Configurations button
3. Under the Configuration subtab, click on the 'FO Processing' plus sign and set:
   - Use XML Publisher's XSLT processor - True
   - Enable scalable feature of XSLT processor - True
   - Enable XSLT runtime optimization - True
4. Save and bounce the managers


d. Increase the maximum Java heap size for the OPP.

1. Determine what the heap size per OPP process is currently,

select DEVELOPER_PARAMETERS from FND_CP_SERVICES
where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

-The default should be:
J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m

2. Increase the Heap Space
   
update FND_CP_SERVICES
set DEVELOPER_PARAMETERS =
'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx1024m'
where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

OR

1. Log into applications with the System Administrator responsibility.
2. Navigate to Concurrent -> Program -> Define
3. Query the XML Publisher Template Re-Generator program
4. Set the following value for the Executable Options : -Xmx1024m
5. Save changes.
6. Retest the program.
The OPP queue can be recreated using the $FND_TOP/patch/115/sql/afopp002.sql file as the 'APPLSYS' user. On running the script you will be prompted for a username and password.

e. Check timeout settings.
There are two new profile options that can be used to control the timeouts:

Profile Option: Concurrent:OPP Response Timeout
Internal Name: CONC_PP_RESPONSE_TIMEOUT
Description: Specifies the amount of time a manager waits for the OPP to respond to its request for post-processing.

Profile Option: Concurrent:OPP Process Timeout
Internal Name: CONC_PP_PROCESS_TIMEOUT
Description: Specifies the amount of time the manager waits for the OPP to actually process the request.

Sunday, July 15, 2018

Security parameters in 11G and 12C

There are 5 parameters that are all prefixed with 'sec' in an 11g and 12c database. Actually that is a lie, because one is now deprecated in 12c. They are all, as you might guess, related to security. This post is about changes in the default values and some thoughts about whether or not each default is appropriate.

  • SEC_CASE_SENSITIVE_LOGON     TRUE in 11GR1 , 11GR2, DEPRECATED IN 12C
  • SEC_MAX_FAILED_LOGIN_ATTEMPTS     default is 10 in 11GR1 and 11GR2, 3 in 12c
  • SEC_PROTOCOL_ERROR_FURTHER_ACTION     default is CONTINUE in 11GR1 and 11GR2, DROP,3 in 12c
  • SEC_PROTOCOL_ERROR_TRACE_ACTION     default is TRACE in 11GR1, 11GR2 and 12c
  • SEC_RETURN_SERVER_RELEASE_BANNER     default is FALSE in all versions (the 12c documentation says TRUE)


  • SEC_CASE_SENSITIVE_LOGON
Let's cover the deprecated one first. This parameter came along in 11g (as all five did) and is now deprecated, as 12c enforces case-sensitive passwords. In parallel, the orapwd utility in 12c no longer has the ignorecase option either.
The one application that I know does not support case-sensitive passwords is EBS R12.1.1, but there is a patch (12964564) if you wish to upgrade to 12c (or even to continue running on 11GR1).

  • SEC_MAX_FAILED_LOGIN_ATTEMPTS
Now defaults to 3 in 12c, down from the 11g default of 10, which is not unreasonable. This allows a client to attempt a connection 3 times (across multiple accounts) before the connection is dropped.

  • SEC_PROTOCOL_ERROR_TRACE_ACTION
This takes action if bad packets are identified as coming in from a client. The default is TRACE, and again I would challenge that position. The other viable options are LOG, which produces a minimal log file plus an alert in the alert log; just ALERT; or NONE, which does nothing. TRACE has the possibility of filling the disk, which could have the same effect as the DoS attack the parameter is trying to stop, so I prefer the LOG option over the plain ALERT option. However, if you are alerting, ensure that you have written a trap in whatever you use to parse your alert logs so the message reaches your monitoring screens. A nice by-product of this alert is that you can formally pass the issue to the network team with sufficient evidence.
 
  • SEC_PROTOCOL_ERROR_FURTHER_ACTION
This is where it gets a bit tricky, and we have another change in 12c. In 11g the default was CONTINUE, so if you had tracing set in the previous parameter you could end up with a lot of logging going on. I think CONTINUE was the correct option in 11g, as you do not want to stop valid connections into a production system just because a packet might look bad - not without some degree of authorization at least.
From 12c the default has changed to DROP,3. This means the connection is dropped after 3 bad packets have arrived from a client, which sounds good, as a trace file will not grow too big. However, there is nothing stopping a client from attempting many such connections, all with bad packets, which could potentially cause a DoS - not by using all your processes, but by filling your log area.
With this change of default I think it is even more important to know when connections are being dropped via the SEC_PROTOCOL_ERROR_TRACE_ACTION parameter, and that is why I would suggest setting SEC_PROTOCOL_ERROR_FURTHER_ACTION to CONTINUE.
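If you agree with that position, reverting to the 11g behaviour is a one-liner each (a sketch; the values are those discussed above):

```sql
-- Keep accepting connections after bad packets, but log them:
ALTER SYSTEM SET sec_protocol_error_further_action = 'CONTINUE' SCOPE = BOTH;
ALTER SYSTEM SET sec_protocol_error_trace_action   = 'LOG'      SCOPE = BOTH;
```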

  • SEC_RETURN_SERVER_RELEASE_BANNER
This parameter specifies whether the server returns complete database software version information to unauthenticated clients. The default is FALSE in all versions, despite the 12c documentation stating that the default is now TRUE. Oracle has chosen FALSE as the default because the whole point of security is to not give away any more information than you have to, and that cannot be argued with.

Saturday, July 14, 2018

Steps to deinstall an 11g home

export ORACLE_HOME=/home/oracle/product/11.2.0.4/db_1
## detach ORACLE_HOME
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
## confirm $ORACLE_HOME is removed from central inventory:
/home/grid/12.2.0.1/OPatch/opatch lsinventory -all 
## remove files in ORACLE_HOME manually on all nodes
/bin/rm -rf $ORACLE_HOME             
unset ORACLE_HOME

Script to generate DDL of db-links

set lines 800
set echo off
--set feedb off
set termout off
set heading off
SET VERIFY off
set trimspool on
set long 32766
set longchunksize 120

select '--------------------------'||owner||'----'||db_link||'---------------------------------------------------------'||
chr(10)||
chr(10)||
'CREATE OR REPLACE PROCEDURE '||OWNER||'.drop_db_link AS '||
chr(10)||
'BEGIN '||
chr(10)||
'   EXECUTE IMMEDIATE ''DROP DATABASE LINK '||db_link || ' '' '||';'||
chr(10)||
'END drop_db_link ;'||
CHR(10)||
'/'||
chr(10)||
chr(10)||
'exec '||owner||'.DROP_DB_LINK;'||
CHR(10)||
'DROP PROCEDURE '||OWNER||'.drop_db_link ;'||
chr(10)||
chr(10)||
'CREATE OR REPLACE PROCEDURE '||OWNER||'.create_db_link as ' ||
chr(10)||
'BEGIN'||
chr(10)||
'   EXECUTE IMMEDIATE ''CREATE DATABASE LINK '||db_link|| ' '' '||chr(10)||'||''CONNECT TO '||USERNAME||' IDENTIFIED BY xxxx '''||
chr(10)||
'||'||'''USING '''''||HOST||''''''''||';'||
chr(10)||
'END create_db_link;'||
chr(10)||'/' ||
chr(10)||
chr(10)||
'exec '||OWNER||'.create_db_link ;'||
chr(10)||'drop procedure '||OWNER||'.create_db_link ;'||
chr(10) as script
FROM dba_db_links
ORDER BY 1; 

Script to get the DDL of an user

set long 20000 longchunksize 20000 pagesize 0 linesize 1000 feedback off verify off trimspool on
column ddl format a1000

begin
   dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'SQLTERMINATOR', true);
   dbms_metadata.set_transform_param (dbms_metadata.session_transform, 'PRETTY', true);
end;
/

variable v_username VARCHAR2(30);

exec :v_username := upper('&1');

select dbms_metadata.get_ddl('USER', u.username) AS ddl
from   dba_users u
where  u.username = :v_username
union all
select dbms_metadata.get_granted_ddl('TABLESPACE_QUOTA', tq.username) AS ddl
from   dba_ts_quotas tq
where  tq.username = :v_username
and    rownum = 1
union all
select dbms_metadata.get_granted_ddl('ROLE_GRANT', rp.grantee) AS ddl
from   dba_role_privs rp
where  rp.grantee = :v_username
and    rownum = 1
union all
select dbms_metadata.get_granted_ddl('SYSTEM_GRANT', sp.grantee) AS ddl
from   dba_sys_privs sp
where  sp.grantee = :v_username
and    rownum = 1
union all
select dbms_metadata.get_granted_ddl('OBJECT_GRANT', tp.grantee) AS ddl
from   dba_tab_privs tp
where  tp.grantee = :v_username
and    rownum = 1
union all
select dbms_metadata.get_granted_ddl('DEFAULT_ROLE', rp.grantee) AS ddl
from   dba_role_privs rp
where  rp.grantee = :v_username
and    rp.default_role = 'YES'
and    rownum = 1
union all
select to_clob('/* Start profile creation script in case they are missing') AS ddl
from   dba_users u
where  u.username = :v_username
and    u.profile <> 'DEFAULT'
and    rownum = 1
union all
select dbms_metadata.get_ddl('PROFILE', u.profile) AS ddl
from   dba_users u
where  u.username = :v_username
and    u.profile <> 'DEFAULT'
union all
select to_clob('End profile creation script */') AS ddl
from   dba_users u
where  u.username = :v_username
and    u.profile <> 'DEFAULT'
and    rownum = 1
/

set linesize 80 pagesize 14 feedback on trimspool on verify on
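Assuming the script above is saved as user_ddl.sql (a hypothetical file name), it takes the username as its first positional argument, and the output can be spooled for replay on another database:

```sql
SQL> spool scott_ddl.sql
SQL> @user_ddl.sql SCOTT
SQL> spool off
```

The CREATE USER statement comes out with IDENTIFIED BY VALUES and the password hash, so it can be replayed without knowing the plain-text password.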

Steps to Deinstall the 12c Agent


/home/oracle/Oem12CAgent/core/12.1.0.3.0/perl/bin/perl /home/oracle/Oem12CAgent/core/12.1.0.3.0/sysman/install/AgentDeinstall.pl -agentHome /home/oracle/Oem12CAgent/core/12.1.0.3.0/

Steps to Apply a Grid Patch in a 12cR2 Environment

Grid PSU
=======================
Set Proper Path
=====================
export ORACLE_HOME=/home/oracle/product/12.2.0.1/grid
which opatch

Download and copy the latest OPatch binaries to the Oracle home
=============================================
cd $ORACLE_HOME
cp /backup/oracle/p6880880_122010_Linux-x86-64.zip .
mv OPatch OPatch_bkp
mkdir OPatch
chown oracle:oinstall OPatch
unzip p6880880_122010_Linux-x86-64.zip
rm -rf p6880880_122010_Linux-x86-64.zip

Check prerequisites
====================
   For Grid Infrastructure Home, as home user:

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/oracle/12.2.0.1/27100009/27105253
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/oracle/12.2.0.1/27100009/27335416
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/oracle/12.2.0.1/27100009/27128906
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/oracle/12.2.0.1/27100009/26839277
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /backup/oracle/12.2.0.1/27100009/27144050



Run the OPatch SystemSpace check
================================

Create /tmp/patch_list_gihome.txt listing the patch directories, one per line:

cat /tmp/patch_list_gihome.txt
/backup/oracle/12.2.0.1/27100009/27105253
/backup/oracle/12.2.0.1/27100009/27335416
/backup/oracle/12.2.0.1/27100009/27128906
/backup/oracle/12.2.0.1/27100009/26839277
/backup/oracle/12.2.0.1/27100009/27144050


Run the opatch command to check if enough free space is available in the Grid Infrastructure Home:

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt

Analyze if there are any conflicts
==================================
sudo $ORACLE_HOME/OPatch/opatchauto apply /backup/oracle/12.2.0.1/27100009 -analyze

Apply Patch
====================
sudo $ORACLE_HOME/OPatch/opatchauto apply /backup/oracle/12.2.0.1/27100009 -oh /home/oracle/product/12.2.0.1/grid

opatch lsinventory


Transfer partition statistics to another partition




Summary

It is very common to have a partitioned table and to want to calculate or estimate statistics for its partitions. Partitions usually hold millions of records, so gathering statistics is a time-consuming effort. But if the partitions hold approximately the same number of records, there is no need to calculate statistics for every partition.

One good idea is to calculate statistics for one partition and then to "copy" them to the other partitions.

How to export statistics from one partition and import them to another
In this example we will use the table TST.TST_TABLE, which is partitioned (PART001, PART002, PART003, ...), and we want the partition PART002 to have the same statistics as PART001.

First you have to set up a table to hold the exported partition statistics:
BEGIN
DBMS_STATS.CREATE_STAT_TABLE(ownname=>'TST', stattab=>'TST_TABLE_STATS', tblspace=>'USERS');
END;
/
Second, export the PART001 partition statistics into the table TST_TABLE_STATS:
BEGIN
DBMS_STATS.EXPORT_TABLE_STATS( 'TST', 'TST_TABLE','PART001','TST_TABLE_STATS','MY_TST_TABLE_PART001',FALSE,'TST');
END;
/
Third, import the statistics into partition PART002:
BEGIN
DBMS_STATS.IMPORT_TABLE_STATS( 'TST', 'TST_TABLE','PART002','TST_TABLE_STATS','MY_TST_TABLE_PART001',FALSE, 'TST');
END;
/
But if you query the dictionary, you will notice that the statistics of partition PART002 are still empty.

Tip: the C2 column of the statistics table holds the partition name, so you have to update it to the target partition:
UPDATE TST.TST_TABLE_STATS SET C2='PART002';
COMMIT;
Then import the statistics again:
BEGIN
DBMS_STATS.IMPORT_TABLE_STATS( 'TST', 'TST_TABLE','PART002','TST_TABLE_STATS','MY_TST_TABLE_PART001',FALSE, 'TST');
END;
/
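To confirm the copy worked, compare the two partitions in the dictionary (using the example names above):

```sql
SELECT partition_name, num_rows, blocks, last_analyzed
FROM   dba_tab_partitions
WHERE  table_owner = 'TST'
AND    table_name  = 'TST_TABLE'
AND    partition_name IN ('PART001', 'PART002');
```

After the second import, PART002 should show the same num_rows and blocks as PART001.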

SGA resizing history



Summary
You can find the full resizing history of the SGA by querying the view V$SGA_RESIZE_OPS. If you have set the parameter sga_target, Oracle automatically shrinks and grows the underlying pools.

If you want to change how frequently resizing happens, you have to set the hidden parameter _memory_broker_stat_interval (the default is 30 seconds; a value of 600 means 10 minutes).
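A minimal sketch of setting it, assuming you have Oracle Support's blessing for changing a hidden parameter (the double quotes are required for underscore parameters):

```sql
-- set the memory broker tuning interval to 10 minutes; takes effect after a restart
ALTER SYSTEM SET "_memory_broker_stat_interval" = 600 SCOPE = SPFILE;
```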

The example below shows how resizing happens:
SELECT component, oper_type, oper_mode, parameter,
initial_size/1024/1024 init_MB,
target_size/1024/1024 target_MB,
final_size/1024/1024 final_MB,
status, start_time, end_time
FROM V$SGA_RESIZE_OPS;

COMPONENT            OPERATION  OPER_MODE PARAMETER            INIT_MB  TARGET_MB   FINAL_MB STATUS    START_TIME    END_TIME
-------------------- ---------- --------- ----------------- ---------- ---------- ---------- -------
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20256      20240      20240 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        3888       3904       3904 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20240      20224      20224 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        3904       3920       3920 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20224      20208      20208 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        3920       3936       3936 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20208      20192      20192 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        3936       3952       3952 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20192      20176      20176 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        3952       3968       3968 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20176      20160      20160 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        3968       3984       3984 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20160      20144      20144 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        3984       4000       4000 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20144      20128      20128 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4000       4016       4016 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20128      20112      20112 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4016       4032       4032 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20112      20096      20096 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4032       4048       4048 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20096      20080      20080 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4048       4064       4064 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20080      20064      20080 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4064       4080       4064 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20080      20064      20064 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4064       4080       4080 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20064      20048      20048 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4080       4096       4096 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20048      20032      20048 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4096       4112       4096 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20048      20032      20048 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4096       4112       4096 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20048      20032      20032 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4096       4112       4112 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20032      20016      20032 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4112       4128       4112 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20032      20016      20032 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        4112       4128       4112 COMPLETE  10/09/2017 10/09/20
shared pool          SHRINK     DEFERRED  shared_pool_size        4112       4016       4016 COMPLETE  10/09/2017 10/09/
DEFAULT buffer cache GROW       DEFERRED  db_cache_size          20032      20128      20128 COMPLETE  10/09/2017
shared pool          SHRINK     DEFERRED  shared_pool_size        4016       3920       3920 COMPLETE  10/09/2017 10/09/
DEFAULT buffer cache GROW       DEFERRED  db_cache_size          20128      20224      20224 COMPLETE  10/09/2017
shared pool          SHRINK     DEFERRED  shared_pool_size        3920       3824       3824 COMPLETE  10/09/2017 10/09/
DEFAULT buffer cache GROW       DEFERRED  db_cache_size          20224      20320      20320 COMPLETE  10/09/2017
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20320      20304      20304 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        3824       3840       3840 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20304      20288      20288 COMPLETE  10/09/201
shared pool          GROW       IMMEDIATE shared_pool_size        3840       3856       3856 COMPLETE  10/09/2017 10/09/20
DEFAULT buffer cache SHRINK     IMMEDIATE db_cache_size          20288      20272      20272 COMPLETE  10/09/201
Frequent pool resizing (for example every second) is something that should be avoided. If you notice the SGA constantly resizing, you can check for the wait event: SGA: allocation forcing component growth

This wait event means that a process is waiting on an immediate-mode memory transfer (because the SGA is auto-tuned): after an ORA-04031: unable to allocate x bytes of shared memory error, it waits for MMAN to get the memory and post it.

If you want to monitor how much memory is available for resizing, use the query:
select * from v$sga_dynamic_free_memory;

Rollback statistics



Summary

By default the database keeps a history of a table's statistics for 31 days.

The following SQL returns the number of days for which statistics are currently retained.
SELECT DBMS_STATS.GET_STATS_HISTORY_RETENTION FROM dual;
The following SQL shows the dates on which statistics were gathered for a given table.
SELECT TABLE_NAME, STATS_UPDATE_TIME
FROM dba_tab_stats_history
WHERE table_name LIKE 'TEST_ORDER%'
ORDER BY STATS_UPDATE_TIME DESC;
In case you want to roll back a table's statistics to a point in the past, use the procedure:
BEGIN
DBMS_STATS.restore_table_stats('TESTSCHEMA','TEST_ORDER', TO_DATE('18/12/2018','dd/mm/rrrr'));
END;
/
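You can then confirm the restore from the dictionary (using the example names above); num_rows and last_analyzed should reflect the restored statistics:

```sql
SELECT num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'TESTSCHEMA'
AND    table_name = 'TEST_ORDER';
```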

Find unused indexes with MONITORING USAGE clause


Summary

Creating an index doesn't mean that the index will be used.

To find indexes that go unused over a period of time in Oracle, you must first enable index MONITORING USAGE.

Example


Create a table TESTTABLE with an index.
CREATE TABLE TESTTABLE AS SELECT * FROM ALL_OBJECTS;
CREATE INDEX TESTTABLE_IDX ON TESTTABLE(OBJECT_TYPE);

Query the view to see if the index is being monitored.
SELECT * FROM V$OBJECT_USAGE;

No monitoring is enabled yet. Enable index monitoring and query the view again to confirm that monitoring has started.

ALTER INDEX TESTTABLE_IDX MONITORING USAGE;

SELECT * FROM V$OBJECT_USAGE;

Now use the index with the query:
SELECT COUNT(*) FROM TESTTABLE
WHERE object_type = 'TABLE';

Query again to see the monitoring status.
SQL> SELECT * FROM V$OBJECT_USAGE;

INDEX_NAME                     TABLE_NAME                     MON USE START_MONITORING    END_MONITORING
------------------------------ ------------------------------ --- --- ------------------- -------------------
TESTTABLE_IDX                    TESTTABLE                        YES YES 03/05/2018 11:43:55

To stop monitoring, execute

ALTER INDEX TESTTABLE_IDX NOMONITORING USAGE;

To enable or disable index monitoring for all indexes of a specific schema, use the following two queries to generate the statements:
SELECT 'alter index ' || owner || '.' || index_name || ' MONITORING USAGE;' monitor_index_sql
FROM dba_indexes
WHERE owner = 'AP';

SELECT 'alter index ' || owner || '.' || index_name || ' NOMONITORING USAGE;' monitor_index_sql
FROM dba_indexes
WHERE owner = 'AP'; 
To drop the unused indexes (why maintain something useless?), execute:
SELECT 'drop index ' || I.OWNER || '.' || U.index_name || ';' drop_sql
FROM V$OBJECT_USAGE U, DBA_INDEXES I
WHERE USED = 'NO'
AND U.INDEX_NAME = I.INDEX_NAME
AND U.TABLE_NAME = I.TABLE_NAME;

Transfer sql profiles from one database to another



Summary

Here is a simple guide to transfer SQL profiles from one database to another.

For better understanding, assume that you want to transfer one SQL profile from the 'source' database to the 'destination' database.

1. Connect to the source database as sysdba and grant the required privilege to the user that will own the staging table.
For this example the user is testuser.
$ sqlplus "/ as sysdba"
GRANT ADMINISTER SQL MANAGEMENT OBJECT TO testuser;
2. Connect as user testuser to the source db and create the staging table:
BEGIN
DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (table_name => 'SQL_PROFILES_STGTABLE');
END;
/
3. Check the existing SQL profiles at the source db and copy the desired one to the staging table:
SELECT * FROM dba_sql_profiles ORDER BY created DESC;

BEGIN
DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (profile_name => 'SYS_SQLPROF_025c8ac7a1c50001', staging_table_name => 'SQL_PROFILES_STGTABLE');
END;
/

SELECT * FROM testuser.SQL_PROFILES_STGTABLE;
4. Copy the staging table SQL_PROFILES_STGTABLE from the source db to the destination db.
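One way to do step 4 is with Data Pump; a sketch assuming a directory object named DATA_PUMP_DIR exists on both servers and that you copy the dump file between them:

```shell
# on the source server: export the staging table
expdp testuser tables=TESTUSER.SQL_PROFILES_STGTABLE \
      directory=DATA_PUMP_DIR dumpfile=sql_profiles.dmp

# copy sql_profiles.dmp to the destination server's DATA_PUMP_DIR, then:
impdp testuser tables=TESTUSER.SQL_PROFILES_STGTABLE \
      directory=DATA_PUMP_DIR dumpfile=sql_profiles.dmp
```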
5. At the destination db, grant the required privilege again.
GRANT ADMINISTER SQL MANAGEMENT OBJECT TO testuser;
6. Add the sql profiles from the staging table to the destination db.
BEGIN
DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(REPLACE => TRUE, staging_table_name => 'SQL_PROFILES_STGTABLE');
END;
/

SELECT * FROM dba_sql_profiles ORDER BY created DESC;

Friday, July 13, 2018

Monthly Count of Database Startup And Shutdown


Steps
 
      - Set your environment file or .bash_profile
      - Export your ORACLE_SID
      - Save the script below in ".sh" format

Monthly Count of Database Startup :

#!/bin/sh
date
location=`find / -name alert_$ORACLE_SID.log 2>/dev/null`
read -p "Please enter which year you want  : " year
echo -e "\e[96m The month format depends on your Oracle version \e[39m"
echo -e "\e[31m Up to 12.1, enter the month in words, e.g. Jul, Aug, Jan; in 12.2, enter the month in numbers, e.g. 02, 03, 07"
echo -e "\e[39m"
read -p "Select any Month Please : " month
read -p "Do you need DB startup status in detail? (y/n): " a
if test "$a" = "y"
then
cat $location | grep -B2 "Starting ORACLE instance" | grep -i $year | grep -i $month
else
cat $location | grep -B2 "Starting ORACLE instance" | grep -i $year | grep -i $month | wc -l
fi

Monthly Count of Database Downtime :

#!/bin/sh
date
location=`find / -name alert_$ORACLE_SID.log 2>/dev/null`
read -p "Please enter which year you want  : " year
echo -e "\e[96m The month format depends on your Oracle version \e[39m"
echo -e "\e[31m Up to 12.1, enter the month in words, e.g. Jul, Aug, Jan; in 12.2, enter the month in numbers, e.g. 02, 03, 07"
echo -e "\e[39m"
read -p "Select any Month Please : " month
read -p "Do you need DB shutdown status in detail? (y/n): " a
if test "$a" = "y"
then
cat $location | grep -B2 "Instance shutdown complete" | grep -i $year | grep -i $month
else
cat $location | grep -B2 "Instance shutdown complete" | grep -i $year | grep -i $month | wc -l
fi