DOYENSYS Knowledge Portal








Saturday, June 30, 2018


Adding new diskgroup in ASM

GOAL
Before you add the disk(s) to your production diskgroup, use the following steps to validate that the candidate disk(s) are in good shape and can be accessed from each node (without harming the existing production diskgroups):

SOLUTION

1) As Oracle or Grid OS user, run the next commands on each node (if ASMLIB API is used):
# /etc/init.d/oracleasm  scandisks  #(only this command needs to be executed as root user)
$> id
$> /etc/init.d/oracleasm  listdisks
$> /etc/init.d/oracleasm  querydisk <each disk from previous output>
Note: All the disks must be visible from each node.

2) Confirm the disks are visible (I/O) at OS level (as oracle or grid OS user) from each node:
$> id
$> ls -l  <disk name>
$> dd if=<raw/character or block partition device in  question> of=/dev/null count=100 bs=8192
Example:
$> dd if=/dev/raw/raw1 of=/dev/null count=100 bs=8192  # raw device
$> dd if=/dev/oracleasm/disks/PRODA07 of=/dev/null count=100 bs=8192  # ASMLIB device
$> dd if=/dev/rdsk/c1t3d0s6 of=/dev/null count=100 bs=8192 # raw device

3) Verify that kfed can read on the candidate disk(s) at OS level (as oracle or grid OS user) from each node
$> kfed read  <raw/character or block partition device in question>

Example:
$> kfed read /dev/raw/raw1
$> kfed read /dev/oracleasm/disks/PRODA07
$> kfed read /dev/rdsk/c1t3d0s6

Note: If the disk is not a FORMER disk, then kfed on new disks will return:    kfbh.type: 0 ; 0x002: KFBTYP_INVALID and that is expected.
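The per-node read checks in steps 1-3 can be scripted so the same test runs identically on every node. A minimal sketch assuming a POSIX shell; the `check_disk` helper name is ours, and the device arguments are whatever candidates your `listdisks` output shows:

```shell
#!/bin/sh
# Sketch: run the dd read test from step 2 against each candidate disk.
# The devices passed on the command line are illustrative -- substitute
# your own candidates (e.g. the output of /etc/init.d/oracleasm listdisks).
check_disk() {
    # Read the first 100 x 8192-byte blocks; any I/O error means the
    # disk is not cleanly readable from this node.
    if dd if="$1" of=/dev/null count=100 bs=8192 2>/dev/null; then
        echo "OK: $1"
    else
        echo "FAIL: $1"
    fi
}

for disk in "$@"; do
    check_disk "$disk"
done
```

Run it on each node, e.g. `sh check_disks.sh /dev/oracleasm/disks/PRODA07 ...`, and proceed only if every disk reports OK everywhere.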

 4) Create a new temporary diskgroup:
SQL> CREATE DISKGROUP TEST EXTERNAL REDUNDANCY DISK '<YOUR CANDIDATE DISK>';

5) Check if the diskgroup is created and mounted:
SQL> SELECT STATE, NAME FROM V$ASM_DISKGROUP;

6) If so, then manually mount it on the other ASM instance(s):
SQL> ALTER DISKGROUP TEST MOUNT;
SQL> SELECT STATE, NAME FROM V$ASM_DISKGROUP;

7) If the previous steps succeeded and the TEST diskgroup can be mounted on each ASM instance, then drop the TEST diskgroup as follows:
SQL> alter diskgroup test dismount; --(from all the ASM instances, except from one).
SQL> DROP DISKGROUP TEST; --(from the ASM instance, which the diskgroup is still mounted).

8) Then the candidate disk is ready to be added to the desired diskgroup:
SQL> ALTER DISKGROUP <my production diskgroup> ADD DISK '<your candidate disk>';

Note: The same disks validation steps are also required for ASM Standalone configurations.

ORA-12154: TNS: could not resolve the connect identifier specified


ORA-12154 occurs when the transparent network substrate (TNS) cannot resolve the service name. The service name is specified in the TNSNames.ora file, which is located in your %ORACLE_HOME%\network\admin\ folder. Most often, this error occurs when information in the TNSNames.ora file is incorrect. For example:
  • The .world extension is not included on the database name.
  • The SERVICE_NAME or HOST variable is incorrect.
To resolve this issue, try one of the three following troubleshooting options, in the order listed below.
Option 1: Edit TNSNames.ora
 Provide the full database name, including the .world extension in both of the following locations:
  • The TNSNames.ora file.
And
  • The Server text box of the Connect page.
Option 2: Ensure that Tableau Server Run As User account has permissions to TNSNames.ora (Tableau Server only)
If you have Tableau Server installed, complete the procedure below to ensure that the Tableau Server Run As user account has permissions to the location of the TNSNames.ora file. If the Run As user account does not have permissions, Tableau Server is unable to access the Oracle Data source details.
Step 1 
Verify the location of the TNSNames.ora file, or the equivalent SQLNET.ora and LDAP.ora files on the machine.
Note: By default, the TNSNames.ora file is located in <oracle-directory>\network\admin directory. For example, C:\Oracle_Client\network\admin.
Step 2 
Confirm that the TNS_ADMIN variable points to the location of the file or files described in step 1.
Note: To check the TNS_ADMIN variable, click the Start button and select Control Panel > System. Click Advanced system settings, click the Advanced tab, and then click the Environment Variables button.
Step 3 
On the Windows Start button, select Tableau Server <version> > Configure Tableau Server.
Step 4 
Under Server Run As User, copy the information in the User text box.
Step 5 
Go to the folder where the TNSNames.ora file is located.
Step 6 
Right-click the folder and select Properties. Click the Security tab and click the Edit button.
Step 7 
Under Group or user names, click the Add button.
Step 8 
In the Enter the object names to select text box, paste the details of the Run As User account you copied in step 4.
Step 9 
When finished, click OK.
Step 10 
In the Permissions area, ensure that the Full control and Modify check boxes are selected.
Step 11 
Click OK to close the dialog boxes.

Option 3: Verify that all information in TNSNames.ora is correct
If the above troubleshooting steps do not resolve the issue, complete the following procedure to verify that the remaining information in the TNSNames.ora file is correct.
An example of a TNSNames.ora file is shown here:
QAORCL10.world =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = MY_HOST_NAME)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = MY_SERVICE_NAME)
)
)
The three variables of interest in the file are HOST, PORT, and SERVICE_NAME. Copy these variables from the TNSNames.ora file and keep them available. These variables are case sensitive. The following steps describe how to provide these variables for your connection.
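To avoid transcription mistakes, the three variables can be pulled out of the file with a quick script. A rough sketch assuming a POSIX shell and the one-keyword-per-parenthesis layout shown above; the `extract` helper is our own illustration, not part of any Oracle tooling, and tnsnames.ora syntax can vary, so treat it as a quick check rather than a full parser:

```shell
#!/bin/sh
# Sketch: pull HOST, PORT, and SERVICE_NAME out of a tnsnames.ora entry.
TNS_FILE="${1:-tnsnames.ora}"

extract() {
    # Print the value of the first "(KEY = value)" occurrence in the file.
    # $1 = key name, $2 = file path
    sed -n "s/.*($1[[:space:]]*=[[:space:]]*\\([^)]*\\)).*/\\1/p" "$2" | head -1
}

if [ -f "$TNS_FILE" ]; then
    HOST=$(extract HOST "$TNS_FILE")
    PORT=$(extract PORT "$TNS_FILE")
    SERVICE_NAME=$(extract SERVICE_NAME "$TNS_FILE")
    echo "HOST=$HOST PORT=$PORT SERVICE_NAME=$SERVICE_NAME"
fi
```

The printed values are exactly what the Tableau Connect page expects in the Server, Port, and Service text boxes.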
Step 1 
From the Tableau Desktop start page, select Connect to Data.
Step 2 
On the Connect page, click Oracle.
Step 3 
Provide the following information from the TNSNames.ora file:
  • In the Server name text box, type the HOST name.
  • In the Service text box, type the SERVICE_NAME.
  • In the Port text box, type the PORT number.
  • Specify whether to use Windows Authentication or a specific user name and password, and then click Connect.
Note: Variables are case sensitive.
Step 4 
Select a schema from the Schema drop-down list, drag a table to the join area, and then click Go to Worksheet.
Step 5 
Important: 
  • Make sure that you save the TNSNames.ora file you use in ASCII encoding. Any other encoding besides ASCII, for example UTF-8 or Unicode, causes the ORA-12154 error message.
  • These steps are usually required even if the Oracle software is already installed on the machine.
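The ASCII requirement in the first bullet can be checked from a shell before troubleshooting further. A minimal sketch; the `is_ascii` helper is our own, and it flags any byte outside the 7-bit printable/whitespace range, which includes a UTF-8 BOM:

```shell
#!/bin/sh
# Sketch: verify a tnsnames.ora file contains only ASCII bytes before use.
# A UTF-8 BOM or any non-ASCII character in the file can trigger ORA-12154.
is_ascii() {
    # In the C locale, [[:print:]] and [[:space:]] cover only 7-bit ASCII,
    # so the negated class matches any non-ASCII (or stray control) byte.
    if LC_ALL=C grep -q '[^[:space:][:print:]]' "$1"; then
        echo "NOT ASCII: $1"
        return 1
    else
        echo "ASCII OK: $1"
        return 0
    fi
}

# Example usage:
# is_ascii "$TNS_ADMIN/tnsnames.ora"
```

If the check fails, re-save the file with ASCII encoding and retry the connection.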
Step 6
Download and install the appropriate Oracle drivers from the Tableau Drivers page. Even if an Oracle driver is already installed on your computer, it may be incompatible with Tableau; install the version specified on the Drivers page.

Friday, June 29, 2018

Submenu And Function Listing

+==============================+
| Submenu And Function Listing |
+==============================+

Purpose/Description:
By using this query you can check the functions and submenus attached to a specific menu.

SELECT
a.user_menu_name,
c.prompt
, c.description
FROM
apps.fnd_menus_tl a
, fnd_menu_entries_tl c
WHERE
a.menu_id = c.menu_id
AND a.user_menu_name like '%Navigator Menu%System Administrator GUI%';

Thursday, June 28, 2018

ORA-02266: unique/primary keys in table referenced by enabled foreign keys

Cause:

When issuing a truncate on a table, we get the error below.

SQL> truncate table OWNER.TABLE;
truncate table OWNER.TABLE
                         *
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys

This is because OWNER.TABLE is referenced by enabled foreign keys from other tables.

Use the query below to find the relationships.

set lines 200
col owner for a20
col status for a10
col deferrable for a15
col validated for a18
col generated for a15
col deferred for a15
select owner,constraint_name,constraint_type,table_name,status,DEFERRABLE,DEFERRED,VALIDATED,GENERATED from dba_constraints where table_name='TAB1' and constraint_type in ('R','P');


Solution:

Disable the constraints identified by the query above:

alter table OWNER_TABLE.TAB1 disable constraint PK_TAB1 cascade;
alter table OWNER_TABLE.TAB1 disable constraint FK_TAB1_30 cascade;
alter table OWNER_TABLE.TAB1 disable constraint FK_ERROR_3A cascade;
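When a table has many constraints, the DISABLE statements can be generated rather than typed one by one. A small sketch assuming a POSIX shell; the `gen_disable` helper and the names it is fed are the illustrative ones from this example:

```shell
#!/bin/sh
# Sketch: generate DISABLE CONSTRAINT statements for a list of constraints
# (e.g. the CONSTRAINT_NAME values returned by the query above).
gen_disable() {
    # $1 = owner.table, remaining args = constraint names
    tab="$1"; shift
    for c in "$@"; do
        echo "alter table $tab disable constraint $c cascade;"
    done
}

# Illustrative names from the example above -- substitute your own.
gen_disable OWNER_TABLE.TAB1 PK_TAB1 FK_TAB1_30 FK_ERROR_3A
```

Redirect the output to a .sql file and run it in SQL*Plus before the truncate.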

After that, run the truncate.

SQL> truncate table OWNER.TABLE;

Once the truncate completes, re-enable the constraints (ALTER TABLE ... ENABLE CONSTRAINT ...).




Import fails with ORA-39126, ORA-06512, ORA-01403

Cause:

The issue happens when migrating with export from an 11gR2 Windows server and importing into a 12cR2 Linux server.

Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [TABLE:"NOMU"."TIPOS_ACCON"]
ORA-01403: no data found

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6266

----- PL/SQL Call Stack -----
object line object
handle number name
c0000000da9e5770 14916 package body SYS.KUPW$WORKER
c0000000da9e5770 6293 package body SYS.KUPW$WORKER
c0000000da9e5770 12280 package body SYS.KUPW$WORKER
c0000000da9e5770 3278 package body SYS.KUPW$WORKER
c0000000da9e5770 6882 package body SYS.KUPW$WORKER
c0000000da9e5770 1259 package body SYS.KUPW$WORKER
c0000000da9cdf78 2 anonymous block


Solution:

The issue is caused by trying to import all schemas, including SYSMAN and other system schemas. The solution is to exclude the system schemas and import only the application schemas.

How to reinstate the old Primary as a Standby after Failover

Cause:

The primary database crashed and a failover to the standby was performed.


Solution:

We can reinstate the old primary as a standby by flashing back the old primary database.

Step 1:

Execute the following on the new primary:

SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;

Step 2:

Flash back the old primary to the SCN from step 1.

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO SCN standby_became_primary_scn;

Step 3:

Convert to physical standby.

ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

Step 4:

Set up Data Guard, verify log sync, and start managed recovery:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

ERROR: ORA-00257: archiver error. Connect internal only, until freed

Cause:

The archiver process received an error while trying to archive a redo log due to archive location or FRA getting full.

Solution:

This can be solved in two ways.

1. Increase the FRA size.
2. Clear old archive logs.


Increase FRA:

SQL> show parameter db_recovery_file_dest;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      C:\oracle\product\12.2.0.1\flash_recovery_area
db_recovery_file_dest_size           big integer 2G

SQL> ALTER SYSTEM SET db_recovery_file_dest_size='10G' SCOPE=BOTH;

Clear old archives:

Find the space used in flash recovery area by using following SQL:

col round(space_limit/1048576) heading "Space Allocated (MB)" format 999999
col round(space_used/1048576) heading "Space Used (MB)" format 99999
col name format a40
select name, round(space_limit/1048576) As space_limit,round(space_used/1048576) As space_used
from v$RECOVERY_FILE_DEST;

If flashback logs are using most of the space, reduce the flashback retention target or increase the FRA size.

If archive logs are using most of the space, move them to a different location or remove them with the commands below.

 RMAN>delete archivelog until time 'SYSDATE-1';
 or,
 RMAN>delete archivelog all;

ORA-38824: A CREATE OR REPLACE command may not change the EDITIONABLE property during Apex install in 12.2

Issue:

Fresh apex install failed with the below error.

ORA-38824: A CREATE OR REPLACE command may not change the EDITIONABLE property during Apex install in 12.2

This is due to a previous version of FLOW_FILES being installed in the database.

Solution:

Check whether an old version is installed, drop the schema, and retry the installation.

select username from dba_users where username like '%FLOW%';

Wednesday, June 27, 2018

adop pre-req admsi.pl not getting invoked

ISSUE:-

When applying an application patch in an R12 environment, the patch readme lists a script called "admsi.pl" that must be run as a prerequisite.

But when running it, the following error appears:

Invoking Oracle Patch Application Assistant…. Please set the DISPLAY variable and re-run this script

SOLUTION:-

Run the script in CLI mode instead:
perl $AD_TOP/bin/admsi.pl -cli

Weblogic Smart Update (BSU) Hangs when Starting or Errors with Exception in thread "Main Thread" java.lang.OutOfMemoryError

ISSUE:-


The WebLogic Smart Update utility hangs and throws the error below:


2018-05-23 09:29:30,344 DEBUG [main] com.bea.plateng.patch.gui.PatchMediator - java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded



java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: GC overhead limit exceeded



REASON:-


The bsu utility uses Java to initialize and it is running out of heap space.


SOLUTION:-

Windows:

1. Edit the file <WL_HOME>\utils\bsu\bsu.cmd

2. Change the setting:

set MEM_ARGS=-Xms256m -Xmx512m

to

set MEM_ARGS=-Xms1024m -Xmx1024m

3. Start bsu.cmd; it should now launch Oracle Smart Update.

Unix / Linux:

1. Edit the file $WL_HOME/utils/bsu/bsu.sh

2. Change the setting:

MEM_ARGS="-Xms256m -Xmx512m"

to

MEM_ARGS="-Xms1024m -Xmx1024m"

3. Start bsu.sh; it should now launch Oracle Smart Update.
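The edit in step 2 can also be scripted. A sketch assuming the default MEM_ARGS line shown above appears verbatim in bsu.sh; the `bump_heap` helper is our own, and it keeps a .bak copy since the substitution is intentionally narrow:

```shell
#!/bin/sh
# Sketch: bump the bsu heap settings in place instead of editing by hand.
# Only the exact default line MEM_ARGS="-Xms256m -Xmx512m" is rewritten,
# so an already-customized file is left untouched.
bump_heap() {
    cp "$1" "$1.bak"
    sed 's/^MEM_ARGS="-Xms256m -Xmx512m"/MEM_ARGS="-Xms1024m -Xmx1024m"/' "$1.bak" > "$1"
}

# Example usage:
# bump_heap "$WL_HOME/utils/bsu/bsu.sh"
```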


Reference:

E-WL: Weblogic Smart Update (BSU) Hangs when Starting or Errors with Exception in thread "Main Thread" java.lang.OutOfMemoryError (Doc ID 2002949.1)



How to remove conflict patches in Weblogic

Conflicting patches can be removed using bsu.sh, as shown below.

ISSUE:

When trying to apply a new patch to WebLogic, the following error appears:

[appl@ebsdev bsu]$ ./bsu.sh -prod_dir=/u01/oracle/EBSDEV/fs1/FMW_Home/wlserver_10.3/ -patch_download_dir=/u01/oracle/EBSDEV/fs1/FMW_Home/utils/bsu/cache_dir/ -patchlist=GFWX -verbose -install
Checking for conflicts...

Conflict(s) detected - resolve conflict condition and execute patch installation again
Conflict condition details follow:
Patch EJUW is mutually exclusive and cannot coexist with patch(es): 7Z9S,DEM4,Z62H,HSZB,5C71,U81V


Solution:

[appl@ebsdev bsu]$ ./bsu.sh -prod_dir=/u01/oracle/EBSDEV/fs1/FMW_Home/wlserver_10.3/ -patchlist=7Z9S,DEM4,Z62H,HSZB,5C71,U81V -verbose -remove

Then retry the patch installation.

Tuesday, June 26, 2018

Find all the profile options recently changed in EBS

+===============================================+
| Find all the profile options recently changed |
+===============================================+

select p.profile_option_name SHORT_NAME, n.user_profile_option_name "PROFILE NAME",
decode(v.level_id, 10001, 'Site', 10002, 'Application',
10003, 'Responsibility', 10004, 'User', 10005, 'Server',
10007, 'SERVRESP', 'UnDef') LEVEL_SET,
decode(to_char(v.level_id), '10001', '',
'10002', app.application_short_name, '10003', rsp.responsibility_key,
'10005', svr.node_name, '10006', org.name, '10004', usr.user_name,
'10007', 'Serv/resp', 'UnDef') "CONTEXT", v.profile_option_value VALUE, v.LAST_UPDATE_DATE 
from fnd_profile_options p,
fnd_profile_option_values v,
fnd_profile_options_tl n,
fnd_user usr,
fnd_application app,
fnd_responsibility rsp,
fnd_nodes svr,
hr_operating_units org
where p.profile_option_id = v.profile_option_id (+)
and p.profile_option_name = n.profile_option_name
--and upper(n.user_profile_option_name) like upper('BNE%')
--and trunc(v.LAST_UPDATE_DATE) > trunc(sysdate-170)
and usr.user_id (+) = v.level_value
and rsp.application_id (+) = v.level_value_application_id
and rsp.responsibility_id (+) = v.level_value
and app.application_id (+) = v.level_value
and svr.node_id (+) = v.level_value
and org.organization_id (+) = v.level_value
and v.LAST_UPDATE_DATE is not null
order by last_update_date desc, short_name, level_set;

Profile option with modification date and user in EBS

+================================================+
| Profile option with modification date and user |
+================================================+

SELECT
t.user_profile_option_name
, profile_option_value
, v.CREATION_DATE
, v.LAST_UPDATE_DATE
, v.LAST_UPDATE_DATE - v.CREATION_DATE "Change Date"
, (SELECT UNIQUE user_name
FROM fnd_user
WHERE user_id = v.created_by) "Created By"
, (SELECT user_name
FROM fnd_user
WHERE user_id = v.last_updated_by) "Last Update By"
FROM
fnd_profile_options o
, fnd_profile_option_values v
, fnd_profile_options_tl t
WHERE
o.profile_option_id = v.profile_option_id
AND o.application_id = v.application_id
AND o.profile_option_name = t.profile_option_name
AND level_id = 10001
AND t.LANGUAGE IN (SELECT language_code
FROM fnd_languages
WHERE installed_flag = 'B'
UNION
SELECT nls_language
FROM fnd_languages
WHERE installed_flag = 'B')
ORDER BY user_profile_option_name;

Monday, June 25, 2018

Hot Cloning of a Remote PDB in 12c

 The remote CDB is in local undo mode and archivelog mode, so we don't need to put the remote database into read-only mode.
CDB1 - the remote container database containing the PDB to be cloned, named PSDBT.
CDB2 - the container database into which we will clone the PDB.
Step 1. Connect to the CDB.
    cdb1> select name, log_mode, cdb from v$database;
    NAME      LOG_MODE     CDB
    --------- ------------ ---
    CDB1      ARCHIVELOG   YES

Step 2. check Local Undo Mode of Remote CDB.
    cdb1> select property_name, property_value
      2  from database_properties
      3  where property_name='LOCAL_UNDO_ENABLED';
    PROPERTY_NAME        PROPERTY_VALUE
    -------------------- --------------
    LOCAL_UNDO_ENABLED   TRUE

Step 3. Connect to the other CDB.
    cdb2> select name, log_mode, cdb from v$database;
    NAME      LOG_MODE     CDB
    --------- ------------ ---
    CDB2      NOARCHIVELOG YES

Step 4. Check the local undo mode of the destination CDB.
    cdb2> select property_name, property_value
      2  from database_properties
      3  where property_name='LOCAL_UNDO_ENABLED';
    PROPERTY_NAME        PROPERTY_VALUE
    -------------------- --------------
    LOCAL_UNDO_ENABLED   TRUE

Step 5. Check connectivity with tnsping.

    Check the network configuration between the source and destination servers.
     tnsping cdb1 2
     tnsping cdb2 2
     [oracle@ramkumar ~]$ tnsping psdbt 2
    TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 25-JUN-2018 09:56:14
    Copyright (c) 1997, 2016, Oracle.  All rights reserved.
    Used parameter files:
    /u02/app/oracle/product/12.2.0.1/db_1/network/admin/sqlnet.ora

    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST =   
acs.localdomain)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME =   
psdbt.localdomain)))
    OK (0 msec)
    OK (0 msec)

Step 6. Create a database link in the destination CDB.
        Create a database link named cdb_ln using the
    username and password of an existing common user in the remote CDB.
    
    cdb2>create database link cdb_ln
      2  connect to c##ram
      3  identified by oracle
      4  using 'psdbt';
    Database link created.
Step 7. Check the link
    cdb2> column DB_LINK format a10
    cdb2> column username format a10
    cdb2> column host format a10
    cdb2> select db_link,username,host from dba_db_links;
    DB_LINK    USERNAME   HOST
    ---------- ---------- ----------
    SYS_HUB               CDB2
    CDB_LN     C##RAM    psdbt

Step 8. Check the PDB datafile paths over the link.
    cdb2>select name from v$datafile@cdb_ln;
    NAME
    --------------------------------------------------------------------------------
    /u02/app/oracle/oradata/ram/system01.dbf
    /u02/app/oracle/oradata/ram/sysaux01.dbf
    /u02/app/oracle/oradata/ram/undotbs01.dbf
    /u02/app/oracle/oradata/ram/user01.dbf
    /u02/app/oracle/oradata/ram/user02.dbf
    
Step 9. Create the target directory in the OS.
    cdb2>!
    [oracle@ramkumar ~]$ mkdir /u02/app/oracle/oradata/pdbnew
    [oracle@ramkumar ~]$ exit
    exit
Step 10. Clone the Remote PDB using database link.
    cdb2>create pluggable database pdbnew
      2  from psdbt@cdb_ln
      3  file_name_convert=('/u02/app/oracle/oradata/ram/','/u02/app/oracle/oradata/pdbnew/');
    Pluggable database created.
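file_name_convert simply substitutes the source path prefix with the target prefix for each datafile, so the mapping can be previewed from a shell before cloning. A sketch using the paths from step 8; the `convert_path` helper is our own:

```shell
#!/bin/sh
# Sketch: preview where file_name_convert will place the cloned datafiles.
# Mirrors the pair ('/u02/app/oracle/oradata/ram/','/u02/app/oracle/oradata/pdbnew/')
# by substituting the source prefix with the target prefix in each path.
convert_path() {
    # $1 = datafile path, $2 = source prefix, $3 = target prefix
    echo "$1" | sed "s|^$2|$3|"
}

# Example: feed the paths returned by "select name from v$datafile@cdb_ln".
for f in /u02/app/oracle/oradata/ram/system01.dbf \
         /u02/app/oracle/oradata/ram/sysaux01.dbf; do
    convert_path "$f" /u02/app/oracle/oradata/ram/ /u02/app/oracle/oradata/pdbnew/
done
```

Every converted path must land inside the directory created in step 9, or the clone will fail on file creation.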
  
Step 11. Check the Pluggable database mode
    cdb2>show pdbs
  
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 LONEPDB                        MOUNTED
             4 LTWOPDB                        MOUNTED
             5 PDBNEW                         MOUNTED
Step 12. Open the Cloned PDB in read only mode
    cdb2>alter pluggable database pdbnew open read only;
  
    Pluggable database altered.
  
    cdb2>show pdbs
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 LONEPDB                        MOUNTED
             4 LTWOPDB                        MOUNTED
             5 PDBNEW                         READ ONLY  YES
Step 13. Change the session to Newly cloned PDB.

    cdb2>alter session set container=pdbnew;
    Session altered.

Step 14. Check the Consistency of cloned records.
    cdb2>show con_name;
    CON_NAME
    ------------------------------
    PDBNEW

    cdb2>select count(*) from dba_tables;
      COUNT(*)
    ----------
          1751
    
Step 15. Connect session to CDB. 
  
    cdb2>alter session set container=cdb$root;
  
    Session altered.
  
  
Step 16. Check the consistency of records by dblinks.
    cdb2>select count(*) from dba_tables@cdb_ln;
    
      COUNT(*)
    ----------
          1751


Step 17. Close the read-only PDB and reopen it normally.

    cdb2>alter pluggable database pdbnew close immediate;

    Pluggable database altered.

    cdb2>alter pluggable database pdbnew open;

    Pluggable database altered.

Cloning a PDB in Same CDB in 12c

Hot cloning is the ability to clone an active database and create a copy of it in a multitenant environment.
      In Oracle 12c, cloning provides the capability and flexibility of rapid database provisioning.
     Here we hot clone an existing PDB within the same CDB using the CREATE PLUGGABLE DATABASE statement.
     Existing PDB      - PSDBT
     New clone PDB     - PCLONE

step 1. Connect to the CDB
    CON_NAME
    ------------
    CDB$ROOT
step 2. Check the available PDB
    SQL> show pdbs
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 PSDBT                          READ WRITE NO
             4 APPS                           READ WRITE NO
step 3. Check the datafile paths
        A PDB named PSDBT is going to be cloned, so before cloning it we must check the physical paths of its datafiles.
  
    SQL> select name from v$datafile where con_id=3;
  
    NAME
    --------------------------------------------------------------------------------
    /u02/app/oracle/oradata/acs/psdbt/system01.dbf
    /u02/app/oracle/oradata/acs/psdbt/sysaux01.dbf
    /u02/app/oracle/oradata/acs/psdbt/undotbs01.dbf
    /u02/app/oracle/oradata/acs/psdbt/user01.dbf
    /u02/app/oracle/oradata/acs/psdbt/user02.dbf
step 4. Create a directory
        Create a physical directory in the operating system for
    the cloned PDB's datafiles.
    SQL> !
    [oracle@ramkumar ~]$ mkdir /u02/app/oracle/oradata/acs/pclone
    [oracle@ramkumar pclone]$ exit
    exit
step 5. Clone the PDB
        The PSDBT pluggable database is cloned to the PCLONE pluggable database using the statement below.
  
    SQL> create pluggable database pclone from psdbt
      2 file_name_convert=('psdbt','pclone');
    Pluggable database created.
step 6. Check the PDB status
        The cloned pluggable database is initially in the MOUNTED state.
    SQL> show pdbs
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 PSDBT                          READ WRITE NO
             4 APPS                           READ WRITE NO
             5 PCLONE                         MOUNTED
step 7. Open the PDB and Check the Status
    SQL> alter pluggable database pclone open;
    Pluggable database altered.

    SQL> show pdbs
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 PSDBT                          READ WRITE NO
             4 APPS                           READ WRITE NO
             5 PCLONE                         READ WRITE NO

DG Broker Configuration for Physical Standby & Switch OVER


It is assumed that the primary and standby are already configured and in sync, with archive transport and apply working fine.

1- On both Primary and standby databases, set the DG_BROKER_START=TRUE to start the DG Broker process:

SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE SCOPE=BOTH;

2- Edit the listener.ora file on both nodes and add a DGMGRL static listener entry to avoid ORA-12154, which can be observed on startup of the standby database after performing a switchover.

Ensure that GLOBAL_DBNAME is set to <db_unique_name>_DGMGRL:

SID_LIST_LISTENER =
  (SID_LIST =
   (SID_DESC =
        (GLOBAL_DBNAME = HPFMDM_dgmgrl)
        (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
        (SID_NAME = HPFMDM)
    )
   )
  
3- In tnsnames.ora on both sides, use the already-added to_primary and to_standby TNS entries.

Create Configuration:

4- Start DGMGRL on the primary and connect as SYS:

[oracle@HPFMDMdb agent12c]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL>
DGMGRL> connect sys/********
Connected.

DGMGRL> create configuration 'HPFMDM' as primary database is 'HPFMDM' connect identifier is 'to_primary';
Configuration "HPFMDM" created with primary database "HPFMDM"
DGMGRL>

5- ADD the STANDBY database to the configuration:

DGMGRL> ADD DATABASE 'DRHPFDMDDB' as CONNECT IDENTIFIER IS 'to_standby';
Database "DRHPFDMDDB" added
DGMGRL>
DGMGRL>

6- Check the configuration

DGMGRL> show configuration

Configuration - HPFMDM

  Protection Mode: MaxPerformance
  Databases:
    HPFMDM   - Primary database
    DRHPFDMDDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED

DGMGRL>

7- Enable the configuration

DGMGRL> enable configuration
Enabled.

DGMGRL> show configuration

Configuration - HPFMDM

  Protection Mode: MaxPerformance
  Databases:
    HPFMDM   - Primary database
    DRHPFDMDDB - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL>

8- View the Primary and standby database properties

For Primary:

DGMGRL> show database 'HPFMDM'

Database - HPFMDM

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    HPFMDM

Database Status:
SUCCESS

DGMGRL>

For Standby:

DGMGRL> show database 'DRHPFDMDDB'

Database - DRHPFDMDDB

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       0 seconds (computed 1 second ago)
  Apply Rate:      342.00 KByte/s
  Real Time Query: OFF
  Instance(s):
    DRHPFDMDDB

Database Status:
SUCCESS

DGMGRL>
DGMGRL> show database verbose  'HPFMDM'

Database - HPFMDM

  Role:            PRIMARY
  Intended State:  TRANSPORT-ON
  Instance(s):
    HPFMDM

  Properties:
    DGConnectIdentifier             = 'to_primary'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'auto'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = '+DATA/DRHPFDMDDB, +DATA/HPFMDM'
    LogFileNameConvert              = '+DATA/DRHPFDMDDB, +DATA/HPFMDM'
    FastStartFailoverTarget         = ''
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    ApplyLagThreshold               = '0'
    TransportLagThreshold           = '0'
    TransportDisconnectedThreshold  = '30'
    SidName                         = 'HPFMDM'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.160)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=HPFMDM_dgmgrl.HBMSU.AC.AE)(INSTANCE_NAME=HPFMDM)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = '+REC/HPFMDM/archivelog/'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = '%t_%s_%r.dbf'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS

DGMGRL>

***** LogXptMode='ASYNC' means Data Guard is in MaxPerformance mode. If it needs to be changed from MaxPerformance to MaxAvailability mode, then LogXptMode must be changed to 'SYNC' using:

DGMGRL> edit database 'DRHPFDMDDB' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL>


9- Monitoring the DG Broker configuration

DGMGRL> show database 'HPFMDM' statusreport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> show database 'DRHPFDMDDB' statusreport
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT

DGMGRL> show database 'HPFMDM' logxptstatus
LOG TRANSPORT STATUS
PRIMARY_INSTANCE_NAME STANDBY_DATABASE_NAME               STATUS
                HPFMDM               DRHPFDMDDB                     

DGMGRL>
DGMGRL> show database 'HPFMDM' InconsistentProperties
INCONSISTENT PROPERTIES
   INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE


10- Performing a switchover

Note: In this case, currently the Primary Database is HPFMDM and the Standby database is DRHPFDMDDB.

DGMGRL> switchover to 'DRHPFDMDDB'
Performing switchover NOW, please wait...
New primary database "DRHPFDMDDB" is opening...
Operation requires shutdown of instance "HPFMDM" on database "HPFMDM"
Shutting down instance "HPFMDM"...
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "HPFMDM" on database "HPFMDM"
Starting instance "HPFMDM"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "DRHPFDMDDB"

DGMGRL> SHOW CONFIGURATION;

Configuration
  Name:                HPFMDM
  Enabled:             YES
  Protection Mode:     MaxAvailability
  Databases:
    DRHPFDMDDB - Primary database
    HPFMDM - Physical standby database

Fast-Start Failover: DISABLED

Current status for "HPFMDM":
SUCCESS