DOYENSYS Knowledge Portal








Monday, July 31, 2017

ORA-01555 / ORA-600 in standby database after switchover

After performing a switchover from production to the standby (opened read-only), we were facing frequent ORA-01555 errors.

This is an Oracle bug in 11.2, per the Oracle document below.

Bug 22241601 ORA-600 [kdsgrp1] / ORA-1555 / ORA-600 [ktbdchk1: bad dscn] / ORA-600 [2663] due to Invalid Commit SCN in INDEX ( Doc ID 1608167.1 ).

As a solution, we need to set the parameter below on both production and standby.

alter system set "_ktb_debug_flags"=8 scope=both;

After this, queries running on the standby database no longer hit the error.
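To confirm the underscore parameter took effect, a quick sketch of the usual hidden-parameter lookup (run as SYS; hidden parameters are not exposed in v$parameter):

SQL> SELECT a.ksppinm name, b.ksppstvl value
     FROM   x$ksppi a, x$ksppsv b
     WHERE  a.indx = b.indx
     AND    a.ksppinm = '_ktb_debug_flags';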

Tuesday, July 18, 2017

Deinstall of Grid Infrastructure Cluster

Step 1:

This note explains how to clean up a failed Grid Infrastructure installation.
It focuses specifically on what to do if the "root.sh" script fails during the process and you want to rewind and start again.


On all cluster nodes except the last, run the following command as the "root" user.

perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force


Step 2:

On the last cluster node, run the following command as the "root" user.

perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

This final command will blank the OCR configuration and voting disk.

You should be in a position to rerun the "root.sh" file now, but if you are using ASM, you will need to prepare your ASM disks before doing so.

Step 3: As grid user, run $GRID_HOME/deinstall/deinstall

Finally, as the "root" user on all RAC nodes:

rm -rf /etc/oraInst.loc
rm -rf /opt/ORCLfmap

Step 4:

ASM Disks

Once you attempt an installation, your ASM disks are marked as in use, so they can no longer be used as candidate disks. To revert them to candidate disks, do the following.

Overwrite the header for the relevant partitions using the "dd" command.

dd if=/dev/zero of=/dev/sdb1 bs=1024 count=100

Remove and create the ASM disk for each partition.

/etc/init.d/oracleasm deletedisk DATA /dev/sdb1
/etc/init.d/oracleasm createdisk DATA /dev/sdb1

The disks will now be available as candidate disks.
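To verify, you can rescan and list the disks with the standard oracleasm subcommands:

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks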



Script to find the top sessions and locks

col machine for a40

SELECT *
FROM   (SELECT TO_CHAR(s.logon_time, 'mm/dd hh:mi:ssAM') loggedon,
               s.sid,
               s.status,
               FLOOR(s.last_call_et / 60) "Last Call Elapsed Time (m)",
               s.username,
               s.osuser,
               p.spid,
               s.module || ' - ' || s.program uprogram,
               s.machine,
               s.sql_hash_value
        FROM   v$session s, v$process p
        WHERE  p.addr = s.paddr
        AND    s.type = 'USER'
        AND    s.username IS NOT NULL
        AND    s.status = 'ACTIVE'
        AND    s.last_call_et > 0
        ORDER  BY 4 DESC)
WHERE  ROWNUM < 11;


SELECT /*+ RULE */
       s.sid,
       s.serial#,
       p.spid,
       s.process,
       SUBSTR(s.module, 1, 20) module,
       TO_CHAR(SYSDATE - s.last_call_et/60/60/24, 'DD-MON-YYYY HH24:MI') last_call,
       DECODE(l.block, 0, 'NO', 'YES') blocker,
       DECODE(l.request, 0, 'NO', 'YES') waiter
FROM   v$lock l, v$session s, v$process p
WHERE  (l.request > 0 OR l.block > 0)
AND    s.sid = l.sid
AND    p.addr = s.paddr
ORDER  BY s.last_call_et DESC;

Set up and control automatic failover using Fast-Start Failover (FSFO)



Step 1:

Fast-Start Failover: Installation. The FSFO application software is automatically loaded as part of the
standard installation when an Oracle 11g database home is created. Either DGMGRL or Oracle EM Grid Control
can be used to control the FSFO when a complete database home installation is present. Alternatively,
FSFO may be installed by downloading the Oracle 11g Client installation software from otn.oracle.com and
then installing just the Oracle Client Administrator on the desired server; however, it’s important to
note that when it has been installed on a separate server, the FSFO can only be managed via the DGMGRL utility.

Fast-Start Failover: Basic Configuration. Since it’s certainly possible that more than one physical standby database
could exist in a Data Guard configuration, the first thing that I’ll need to establish is which physical standby
database should be paired with the primary database in case a fast-start failover is initiated. I’ll do that by
setting a value for the FastStartFailoverTarget parameter via the DGMGRL utility.
Note that I’ve chosen the primary database as the fast-start failover target for the selected physical standby database as well:

DGMGRL> EDIT DATABASE orcl_primary SET PROPERTY FastStartFailoverTarget = 'orcl_stdby1';

DGMGRL> EDIT DATABASE orcl_stdby1 SET PROPERTY FastStartFailoverTarget = 'orcl_primary';


Step 2:

Next, I’ll establish how long the Fast-Start Failover Observer should wait until it decides that the primary database
is unreachable by setting a value of 180 seconds for the FastStartFailoverThreshold parameter:

EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold = '180';
Now that the basic fast-start failover configuration is completed, I can confirm its status with the SHOW FAST_START FAILOVER command:

DGMGRL> show fast_start failover

Fast-Start Failover: DISABLED
  Threshold:           90 seconds
  Target:              (none)
  Observer:            orcl_stdby1
  Lag Limit:           30 seconds
  Shutdown Primary:    TRUE
  Auto-reinstate:      TRUE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Offline               YES

  Oracle Error Conditions:
    (none)

DGMGRL> show database verbose orcl_primary;

Database
  Name:            orcl_primary
  Role:            PRIMARY
  Enabled:         YES
  Intended State:  TRANSPORT-ON
  Instance(s):
    orcl_primary

  Properties:
    DGConnectIdentifier             = 'orcl_primary'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = 'orcl_stdby1'
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = '11gPrimary'
    SidName                         = 'orcl_primary'
    StandbyArchiveLocation          = '/u01/app/oracle/flash_recovery_area/ORCL/'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = 'log_%s_%t_%r.arc'
    LatestLog                       = '(monitor)'
    TopWaitEvents                   = '(monitor)'

Current status for "orcl_primary":
SUCCESS

DGMGRL> show database verbose orcl_stdby1

Database
  Name:            orcl_stdby1
  Role:            PHYSICAL STANDBY
  Enabled:         YES
  Intended State:  APPLY-ON
  Instance(s):
    orcl_stdby1

  Properties:
    DGConnectIdentifier             = 'orcl_stdby1'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = '/u01/app/oracle/oradata/orcl/, /u01/app/oracle/oradata/stdby/'
    FastStartFailoverTarget         = 'orcl_primary'
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = '11gStdby'
    SidName                         = 'orcl_stdby1'
    StandbyArchiveLocation          = '/u01/app/oracle/flash_recovery_area/STDBY/'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = 'log_%s_%t_%r.arc'
    LatestLog                       = '(monitor)'
    TopWaitEvents                   = '(monitor)'

Current status for "orcl_stdby1":
SUCCESS

Step 3:

Enable fast start failover:

enable fast_start failover

Step 4:  Activating the Fast-Start Failover Observer

Now that the configuration of FSFO is complete, all I need to do is enable the configuration via DGMGRL as shown below. Note that I’m also enabling logging of Data Guard Broker activity for the command-line utility so that I can track any unexpected issues related to the FSFO’s performance or configuration:

[oracle@11gStdby ~]$ dgmgrl -logfile 11gStdby1_observer.log
DGMGRL for Linux: Version 11.1.0.6.0 - Production

Copyright (c) 2000, 2005, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/oracle
Connected.
DGMGRL> ENABLE FAST_START FAILOVER;
Enabled.
Finally, it’s time to start up FSFO. Once again, I’ll use DGMGRL to start the Fast-Start Failover Observer process:

DGMGRL> START OBSERVER;
Once the FSFO is started, I can confirm that it’s been activated properly with the SHOW CONFIGURATION and SHOW DATABASE commands:

DGMGRL> show configuration verbose

Configuration
  Name:                MAA_orcl
  Enabled:             YES
  Protection Mode:     MaxPerformance
  Databases:
    orcl_primary - Primary database
    orcl_stdby1  - Physical standby database
                 - Fast-Start Failover target

Fast-Start Failover: ENABLED
  Threshold:           180 seconds
  Target:              orcl_stdby1
  Observer:            11gStdby
  Lag Limit:           30 seconds
  Shutdown Primary:    TRUE
  Auto-reinstate:      TRUE

Current status for "MAA_orcl":
Warning: ORA-16608: one or more databases have warnings

DGMGRL> show database orcl_primary

Database
  Name:            orcl_primary
  Role:            PRIMARY
  Enabled:         YES
  Intended State:  TRANSPORT-ON
  Instance(s):
    orcl_primary

Current status for "orcl_primary":
SUCCESS

DGMGRL> show database orcl_stdby1

Database
  Name:            orcl_stdby1
  Role:            PHYSICAL STANDBY
  Enabled:         YES
  Intended State:  APPLY-ON
  Instance(s):
    orcl_stdby1

Current status for "orcl_stdby1":
SUCCESS

DGMGRL> show fast_start failover

Fast-Start Failover: ENABLED
  Threshold:           180 seconds
  Target:              orcl_stdby1
  Observer:            11gStdby
  Lag Limit:           30 seconds
  Shutdown Primary:    TRUE
  Auto-reinstate:      TRUE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Offline               YES

  Oracle Error Conditions:
    (none)

Step 5:

Automatic Detection of Failover Conditions: An Example

Now that FSFO is fully configured and is ready to detect a failover situation, I’ll use the same technique I used in the prior article about Data Guard failover to simulate a failure of the primary database: I’ll simply issue the kill -9 <pid> command against its Server Monitor (SMON) background process. Once again, the death of the primary database is almost immediately recorded in its alert log:

. . .
Tue Aug 25 18:54:10 2009
Errors in file /u01/app/oracle/diag/rdbms/orcl_primary/orcl_primary/trace/orcl_primary_pmon_6166.trc:
ORA-00474: SMON process terminated with error
PMON (ospid: 6166): terminating the instance due to error 474
Instance terminated by PMON, pid = 6166
. . .
Just as before, the loss of connectivity to the primary database is reflected within the alert log of the corresponding physical standby databases by its Remote File Server (RFS) background process:

. . .
Tue Aug 25 18:54:49 2009
RFS[2]: Possible network disconnect with primary database
Tue Aug 25 18:54:49 2009
RFS[1]: Possible network disconnect with primary database
Tue Aug 25 18:55:49 2009
. . .
This time, however, there’s a dramatic difference! After approximately three minutes have elapsed, there’s a sudden flurry of activity at the physical standby site as the FSFO automatically detects the failure of the primary database. In Listing 7.1, I’ve captured the alert logs of both databases as well as the Data Guard Broker log entries to show all of the actions that Oracle 11g initiates during a Fast-Start Failover. After the automatic failover is complete, the Data Guard configuration fully reflects the successful actions of the FSFO:

DGMGRL> show configuration verbose

Configuration
  Name:                MAA_orcl
  Enabled:             YES
  Protection Mode:     MaxPerformance
  Databases:
    orcl_stdby1  - Primary database
    orcl_primary - Physical standby database (disabled)
                 - Fast-Start Failover target

Fast-Start Failover: ENABLED
  Threshold:           180 seconds
  Target:              orcl_primary
  Observer:            11gStdby
  Lag Limit:           30 seconds
  Shutdown Primary:    TRUE
  Auto-reinstate:      TRUE

Current status for "MAA_orcl":
Warning: ORA-16608: one or more databases have warnings

DGMGRL> show database verbose orcl_stdby1

Database
  Name:            orcl_stdby1
  OEM Name:        orcl_11gStdby1
  Role:            PRIMARY
  Enabled:         YES
  Intended State:  TRANSPORT-ON
  Instance(s):
    orcl_stdby1

  Properties:
    DGConnectIdentifier             = 'orcl_stdby1'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = '/u01/app/oracle/oradata/orcl/, /u01/app/oracle/oradata/stdby/'
    FastStartFailoverTarget         = 'orcl_primary'
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = '11gStdby'
    SidName                         = 'orcl_stdby1'
    StandbyArchiveLocation          = '/u01/app/oracle/flash_recovery_area/STDBY/'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = 'log_%s_%t_%r.arc'
    LatestLog                       = '(monitor)'
    TopWaitEvents                   = '(monitor)'

Current status for "orcl_stdby1":
Warning: ORA-16829: fast-start failover configuration is lagging


DGMGRL> show database verbose orcl_primary

Database
  Name:            orcl_primary
  OEM Name:        orcl_11gPrimary
  Role:            PHYSICAL STANDBY
  Enabled:         NO
  Intended State:  APPLY-ON
  Instance(s):
    orcl_primary

  Properties:
    DGConnectIdentifier             = 'orcl_primary'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'ASYNC'
    DelayMins                       = '0'
    Binding                         = 'OPTIONAL'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '30'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = ''
    FastStartFailoverTarget         = 'orcl_stdby1'
    StatusReport                    = '(monitor)'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    HostName                        = '11gPrimary'
    SidName                         = 'orcl_primary'
    StandbyArchiveLocation          = '/u01/app/oracle/flash_recovery_area/ORCL/'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = 'log_%s_%t_%r.arc'
    LatestLog                       = '(monitor)'
    TopWaitEvents                   = '(monitor)'

Current status for "orcl_primary":
Error: ORA-16661: the standby database needs to be reinstated

Failover to a Physical Standby Database When data protection mode is in MAXIMUM PERFORMANCE

Step 1:

If the primary database was running in MAXIMUM PERFORMANCE mode prior to the failure,
the failover procedure is slightly different. In this case, the following issues should be evaluated:

Since the protection mode is MAXIMUM PERFORMANCE, the physical standby database may not have standby redo logs configured.
After the loss of the primary database, there may be an archive gap on the
standby database. These gaps should be resolved before transitioning the standby into the primary role.

If possible, manually copy the online redo logs from the original primary database to the target physical standby database and register the
online redo log files using the ALTER DATABASE statement. After registering
the online redo log files, check the alert log file to verify the recovery process has applied the redo from these log files.


Step 2 :

The failover to a physical standby database is similar to a switchover operation. Oracle provides the functionality of
converting a physical standby database into the primary database without considering any of the above-mentioned data recovery options.
This is a kind of intentional data-loss failover and can be achieved using the following statement:

ALTER DATABASE ACTIVATE STANDBY DATABASE;

If standby redo log files have been created on the physical standby database,
issuing the above statement will result in an Oracle error. The activation of
the physical standby database can be forced by skipping the standby log files:

ALTER DATABASE ACTIVATE STANDBY DATABASE SKIP STANDBY LOGFILE;
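Before deciding whether the SKIP clause is needed, you can check whether standby redo logs exist on the standby, for example:

SQL> SELECT group#, thread#, sequence#, status
     FROM   v$standby_log;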

Failover to a Physical Standby Database When data protection mode is in MAXIMUM PROTECTION or MAXIMUM AVAILABILITY

Step 1:

Use the following procedure when the data protection mode is in MAXIMUM PROTECTION or MAXIMUM AVAILABILITY:

Before starting the failover operation, update the initialization parameter file on the physical standby database
that will be used for the failover. Include the other standby databases in the Data Guard environment as archival destinations
so that, after the failover, redo data can be archived from the new primary database to the other standby databases.


Step 2:

In order to initiate the failover operation, the target physical standby database should be placed
in MAXIMUM PERFORMANCE data protection mode using the following statement:

ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;


Step 3 :

Allow the MRP to finish applying the redo data from the archived redo log file and the standby redo log files.

Step 4 :

Issuing the following statement will cause the MRP to terminate once it has applied all the available redo data:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

If the standby redo log files on the target physical standby database are corrupt,
or it is not desirable to apply the changes that are in the standby
redo log files, the MRP can be completed by skipping the standby redo logs using the following statement:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH SKIP STANDBY LOGFILE;

Step 5:

Once the MRP has finished applying all the redo data, issue the following statement to transition the physical
standby database to the primary role. The status of the managed recovery process
can be obtained from the v$managed_standby view or from the alert log file.

ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
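The v$managed_standby check mentioned above can be sketched as:

SQL> SELECT process, status, thread#, sequence#
     FROM   v$managed_standby
     WHERE  process LIKE 'MRP%';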

Step 6:

On all other standby databases, both physical and logical, register the standby
redo log files from the new primary database. The following statement registers
a log file with a database; if the recovery process is running on that database,
the redo data from the log file will be applied:

ALTER DATABASE REGISTER LOGFILE 'filespec';

Step 7:

Finally, restart the new primary database to enable read/write operations.
Update the Oracle Net configuration so that the new primary database starts serving requests from applications.

Monday, July 17, 2017

FS-CANT OPEN TEMPFILE

Error while opening the output file and log file.



Symptoms
-------------

After completion of the request, we cannot open the output file and log file; we face the error below.



Solution:
-----------

Step 1: Check the space in the temporary file location and clear old files; the log files can then be opened.
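A minimal check, assuming the temporary location is /usr/tmp (the directory named in the ORA-20100 case below):

$ df -h /usr/tmp
$ ls -lrt /usr/tmp | head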


ORA-20100: ORA-20100: File o2127370.tmp creation failed.

Concurrent request failed with the following error in log file.

Error
--------
ORA-20100: ORA-20100: File o2127370.tmp creation failed.
File could not be opened or operated on as requested.

Action: Make sure the directory - /usr/tmp - is a valid directory with write permissions and is accessible from the database server node for write operations. This d
+---------------------------------------------------------------------------+
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
p_event_class_rec.application_id201
p_event_class_rec.tax_event_type_codeCREATE
+---------------------------------------------------------------------------+
End of log messages from FND_FILE
+---------------------------------------------------------------------------+


+---------------------------------------------------------------------------+
Executing request completion options...


Cause:
----------

This issue usually occurs when the same UTL directory is used by more than one instance. If the other instance is a clone of production, a temporary file name generated by one instance can collide with a file of the same name left behind by the other instance.


Solution:
----------


Step 1: Go to the UTL directory location and look for the .tmp file named in the request log file.

Step 2: If the file exists, move or remove it, then rerun the request.
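A sketch of the check, using the directory (/usr/tmp) and file name (o2127370.tmp) from the error above:

SQL> SELECT value FROM v$parameter WHERE name = 'utl_file_dir';

$ ls -l /usr/tmp/o2127370.tmp
$ mv /usr/tmp/o2127370.tmp /usr/tmp/o2127370.tmp.old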


Warning: ORA-16792: configurable property value is inconsistent with database setting

Issue:

Warning: ORA-16792: configurable property value is inconsistent with database setting

Reason:

Mismatch of parameter values between spfile, Data Guard Broker configuration and the database.

Solution:

DGMGRL> show configuration;

Configuration - oracle_dg

  Protection Mode: MaxPerformance
  Databases:
    oracle_dgp  - Primary database
    oracle_dgs  - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with database setting

Fast-Start Failover: DISABLED

Configuration Status:
WARNING


DGMGRL> show database 'oracle_dgs' 'InconsistentProperties'
INCONSISTENT PROPERTIES
    INSTANCE_NAME      PROPERTY_NAME   MEMORY_VALUE   SPFILE_VALUE   BROKER_VALUE
            voice   StandbyArchiveLocation             /arch        /archivelog/arch              0

Change the StandbyArchiveLocation property in the Data Guard Broker configuration so the values are consistent:

DGMGRL>edit database oracle_dgs set property StandbyArchiveLocation='/archlog/arch/';

After that, we should get output with no inconsistent properties listed:


DGMGRL> show database 'oracle_dgs' 'InconsistentProperties'
INCONSISTENT PROPERTIES
    INSTANCE_NAME      PROPERTY_NAME   MEMORY_VALUE   SPFILE_VALUE   BROKER_VALUE


DGMGRL> show configuration;

Configuration - oracle_dg

Protection Mode: MaxPerformance
  Databases:
    oracle_dgp  - Primary database
    oracle_dgs - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Database goes down automatically after startup: ORA-07445: exception encountered: core dump [kmmlrl()+6723] [SIGSEGV] [ADDR:0x8] [PC:0x8BB9B2D] [Address not mapped to object] []

ORA-07445: exception encountered: core dump [kmmlrl()+6723] [SIGSEGV] [ADDR:0x8] [PC:0x8BB9B2D] [Address not mapped to object] []


Symptoms

After the database is started, it goes down automatically with the following errors in the alert log file.

Completed: ALTER DATABASE OPEN 
Fri Jul 14 14:08:33 2017 
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x8] [PC:0x8BB9B2D, kmmlrl()+6723] 
Errors in file /Oracle/cig/app/cig/diag/rdbms/orcl/orcl/trace/orcl_pmon_14242.trc (incident=123636): 
ORA-07445: exception encountered: core dump [kmmlrl()+6723] [SIGSEGV] [ADDR:0x8] [PC:0x8BB9B2D] [Address not mapped to object] [] 
Incident details in: /Oracle/cig/app/cig/diag/rdbms/orcl/orcl/incident/incdir_123636/orcl_pmon_14242_i123636.trc 
Fri Jul 14 14:08:35 2017 
Trace dumping is performing id=[cdmp_20170714140835] 
Fri Jul 14 14:09:13 2017 
DBRM (ospid: 14250): terminating the instance due to error 472

Solution
-----------

Step 1:

If you are using APEX in your environment, comment out the dispatchers parameter value in the parameter file and try to start the database.

Step 2:

Now the database will start normally; the dispatchers value can then be set again and it will be taken.

Step 3:

Now, if you try to set the port for APEX, you will face the error below (it throws the same error for every port).

ORA-07445: exception encountered: core dump [kmmlrl()+6723] [SIGSEGV] [ADDR:0x8] [PC:0x8BB9B2D] [Address not mapped to object] [] 
Incident details in: /Oracle/cig/app/cig/diag/rdbms/orcl/orcl/incident/incdir_140436/orcl_pmon_15661_i140436.trc 
  

Step 4:
     
Bounce the server and then set the port; it will work fine and APEX will open. The issue is tied to the port number.
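For illustration, the dispatchers entry to comment out and later restore might look like this (the service name orclXDB is hypothetical; yours follows <SID>XDB):

# pfile entry commented out before the first startup:
# *.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'

SQL> -- after the bounce, set it back:
SQL> alter system set dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)' scope=both;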

Warning: ORA-16826: apply service state is inconsistent with the DelayMins property

Issue:

Warning: ORA-16826: apply service state is inconsistent with the DelayMins property

Reason:

This is caused by a wrong configuration of the time-delay property (DelayMins) in Data Guard.

Work around:

Please refer to Doc ID 249703.1.

DGMGRL> edit database <standby database> set state=apply-off;
DGMGRL> edit database <standby database> set state=apply-on;

or

SQL>alter database recover managed standby database cancel;

SQL> alter database recover managed standby database using current logfile disconnect;
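To inspect and reset the delay itself, a sketch using standard DGMGRL property commands:

DGMGRL> show database '<standby database>' 'DelayMins';
DGMGRL> edit database '<standby database>' set property DelayMins='0';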

ORA-19815: WARNING: db_recovery_file_dest_size of 32212254720 bytes is 99.98% used, and has 5718016 remaining bytes available. ORA-19815: WARNING: db_recovery_file_dest_size of 1073741824 bytes is 100.00% used, and has 0 remaining bytes available.

Issue

ORA-19815: WARNING: db_recovery_file_dest_size of 32212254720 bytes is 99.98% used, and has 5718016 remaining bytes available.
ORA-19815: WARNING: db_recovery_file_dest_size of 1073741824 bytes is 100.00% used, and has 0 remaining bytes available.

Solution:


It's because the flash recovery area, sized by db_recovery_file_dest_size, is full:

SQL> show parameter db_recovery_file_dest_size

NAME                         TYPE        VALUE
---------------------------- ----------- -----------
db_recovery_file_dest_size   big integer 45G

SQL> set lines 100
SQL> col name format a60
SQL> select name,
  2         floor(space_limit / 1024 / 1024) "Size MB",
  3         ceil(space_used / 1024 / 1024) "Used MB"
  4  from   v$recovery_file_dest
  5  order  by name;

NAME                                 Size MB    Used MB
----------------------------------- ---------- ----------
/data01/oracle/fast_recovery_area        46080      46080

SQL>

To Resolve this:

Connect to RMAN prompt.

-bash-3.2$ rman target /

Recovery Manager: Release 11.2.0.3.0 – Production on Tue Aug 11 06:54:21 2012

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: DB1 (DBID=12345)

RMAN>crosscheck archivelog all;

RMAN>delete expired archivelog all;
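Alternatively (or in addition), if disk space allows, the recovery area can simply be enlarged; the 60G value here is illustrative:

SQL> alter system set db_recovery_file_dest_size=60G scope=both;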

Sunday, July 16, 2017

Where is EBS Heading Beyond R12.2.6

EBS Product Development has announced the roadmap for EBS.
According to Cliff Godwin, Oracle SVP, release 12.3 will be "a consumption and uptake of underlying latest database and middleware Java technologies, so we can leverage those while keeping the stack under E-Business Suite modern." E-Business Suite will continue to evolve along with new technology.

A brief overview of E-Business Suite’s roadmap.



Anyway, the updated roadmap also includes support information, according to which:
EBS 12.1 will be supported through Dec 2021.
EBS 12.2 will be supported through Sep 2023.
The new future release will be supported through 2030.

Hopefully this new major release will be announced in 2019-2020. 

Thanks for reading.

Post 12c Upgrade: ORA-00068 Error


Recently, after a 12c database upgrade, one of our customers complained that they were facing the error ORA-00068: invalid value for parameter _query_execution_time_limit.


When we checked the alert log, we noticed the following informational message:


WARNING: Oracle executable binary mismatch detected.
Binary of new process does not match binary which started instance
issue alter system set "_disable_image_check" = true to disable these messages
SOLUTION:

1. If the database is working and we want to suppress the messages, we can alter the system as below:

alter system set "_disable_image_check" = TRUE;

2. Bounce the database.

ORA-01994: GRANT failed: password file missing or disabled



Granting the SYSDBA privilege to a user requires a password file; otherwise Oracle throws ORA-01994.


Problem:

I hit this error while trying to grant the SYSDBA privilege to a database user account that was needed to configure snapshot backups at the OS level.


 SQL> grant sysdba to snapshot;
ORA-01994: GRANT failed: password file missing or disabled

Solution:

1. Create a password file from os:

orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=test2sys entries=5 force=y

Now grant the privilege to the user:

SQL> grant sysdba to snapshot;


To verify:


SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP
------------------------------ ----- -----
SYS                            TRUE  TRUE
SNAPSHOT                       TRUE  FALSE

If you had queried v$pwfile_users before the grant, the SNAPSHOT user would not have been listed.
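It is also worth confirming that the instance is set to use the password file; remote_login_passwordfile should be EXCLUSIVE (the default) for per-user SYSDBA grants:

SQL> show parameter remote_login_passwordfile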







How to Find Out the ASM Diskgroup Label Without Connecting to the Database


Recently we had an issue with a disk group: in order to increase the size of the underlying device,
we needed to identify the label assigned to the disk group and pass it to the OS team.

Solution:
------------

Set the grid environment and run the command below:

[root@ordb1 ~]# cd /home/oracle
[root@ordb1 oracle]# . ./grid.env

oracleasm querydisk -d DISKGROUP_NAME

[root@ordb1 oracle]# oracleasm querydisk -d DEVARCHIVE
Disk "DEVARCHIVE" is a valid ASM disk on device [202,65]
[root@ordb1 oracle]#

Once the device numbers are listed, we can use the ls command to find the matching device file.

[root@ordb1 oracle]# ls -l /dev/* |grep 202, |grep 65
brw-r----- 1 root   disk     202,   65 Jun 18 08:50 /dev/xvde1

To cross-check:

Also confirm at the OS level by issuing fdisk -l:


[root@ordb1 ~]# fdisk -l | grep /dev/xvde
Disk /dev/xvde: 21.4 GB, 21474836480 bytes


To cross-check the disk group size from ASM (for this purpose we connected to the ASM instance):

ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

MOUNTED  EXTERN  N         512   4096  1048576     20473    19539                0           19539              0             N  DEV_ARCHIVE/


Here the DEV_ARCHIVE disk group shows 20 GB from ASM, which matches the device size seen at the OS level above.
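An alternative sketch, when an ASM instance connection is available, is to map disk groups to device paths directly:

SQL> SELECT g.name diskgroup, d.path, d.total_mb
     FROM   v$asm_disk d, v$asm_diskgroup g
     WHERE  d.group_number = g.group_number;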



Saturday, July 15, 2017

Fixing undo segment corruption

Recently I faced a typical situation where a user performed a delete on a huge table and then abruptly terminated the delete statement, which led to corruption of one undo segment. Because of this, other transactions hitting that segment landed up with an error, so I used the procedure below to recover the undo segment.

Note: The procedure I followed may differ from other solutions.

Initially I tried to drop the undo tablespace (after creating a new one), but Oracle would not allow the drop, so I followed the steps below.

1) create a pfile from the spfile (if your database already uses a pfile, take a backup of it; this step is then not necessary)

2) shutdown database

3) edit pfile and include the following line

*._corrupt_rollback_segments='segment_name which is giving problem'

We can find the corrupt segment name using dba_rollback_segs (see the query after this procedure).

4) startup pfile='pfile name'

5) drop the old undo tablespace (remember, I had already created a new one and made it the default)

6) shutdown the database

7) startup using the spfile (if no spfile is used, edit your pfile and remove the parameter added in step 3)

Now it should start normally and your work will be smoother
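The lookup referenced in step 3, as a sketch; a corrupt undo segment typically shows a status such as NEEDS RECOVERY:

SQL> SELECT segment_name, tablespace_name, status
     FROM   dba_rollback_segs
     WHERE  status = 'NEEDS RECOVERY';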

Materialized View Refresh is Slow

Many times I have seen my DBA friends ask this question. There can be many reasons whenever something is slow, but in the case of a materialized view we can use the checks below to confirm where the problem lies.

Troubleshoot the problem in following steps

1) Check the network connectivity using the ping command; there should be no noticeable gap between packet transfers.

2) Check whether the tnsping command is taking time.

3) Check the size of the MV log; it should be truncated after every refresh.

Note: It may surprise many people if I say the MV log gets truncated after every refresh, but that is what actually happens. How Oracle manages the refresh in that case, let's discuss in another post.

4) Check the size of the original table. If the MV log is larger than the original table, it is clear the problem is with the MV log.

The following are reasons why an MV log grows larger than its table:

1) Additional MVs were created on the same tables but have since stopped refreshing.

2) A snapshot became invalid or was lost and was not dropped formally.

Solution:

The following would be good solutions to apply:

1) Drop the MVs that have not been used for a long time.

2) Drop the materialized view log and re-create it (but this requires a complete refresh, and the table will not be accessible to users during that refresh).

Again, can we drop the MV log alone without dropping the MV? The answer is YES, and below is a sample command:

DROP MATERIALIZED VIEW LOG ON VLRAPP.CL_ACC_HOLD_TRANS;

You can create MV log again using below command 

CREATE MATERIALIZED VIEW LOG ON "VLRAPP"."CL_ACC_HOLD_TRANS"
 PCTFREE 60 PCTUSED 30 INITRANS 1 MAXTRANS 255 NOLOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "DATA"
  WITH PRIMARY KEY EXCLUDING NEW VALUES;


To generate the exact DDL for your own objects, use the DBMS_METADATA.GET_DDL package.
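A minimal sketch of the GET_DDL call, assuming this object type is keyed by the master table name:

SQL> SET LONG 20000
SQL> SELECT DBMS_METADATA.GET_DDL('MATERIALIZED_VIEW_LOG',
            'CL_ACC_HOLD_TRANS', 'VLRAPP')
     FROM   dual;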

Resolve ORA-00490 error

Sometimes, all of a sudden, the instance crashes, and when we look at the alert log file we find the error message below:

ORA-00490: PSP process terminated with error

When the PSP process itself terminates due to an error, the whole instance crashes with the ORA-00490 error message.
On further checking, you may find additional errors in the alert log file, such as:

ORA-27300: OS system dependent operation:fork failed with status: 12
ORA-27301: OS failure message: Not enough space
ORA-27302: failure occurred at: skgpspawn3


When the operating system encounters an error such as insufficient space in the temp or swap area, or insufficient system resources, Oracle throws the errors above.

At the same time, PMON terminates the instance with the following message (including the PMON process ID), because the Oracle processes have become unmanageable:

PMON: terminating instance due to error 490
Instance terminated by PMON, pid = 20094

Root Cause:

This error occurs when there is no free space available in the system's swap area for spawning a new Oracle process. As a result, the process spawner PSP0 terminates (raising ORA-00490) because it is unable to create or manage Oracle processes, and the instance is then crashed by PMON with errorstack 490 (which points to ORA-00490). The same situation can occur when other system resources run short.

Solution :

There are 2 solutions for this problem which are mentioned below

1. Check your swap space and increase the swap area on the system. When swap space runs out, Oracle is unable to create new processes and the PSP0 process spawner cannot manage the existing ones.

2. Check the "ulimit" settings for the oracle user. "ulimit" controls user shell limits; if the maximum limit is reached, the PSP0 process likewise becomes unable to manage other Oracle processes. Increase the "ulimit" settings for the oracle user.
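A sketch of the check and a typical increase (the limits.conf values are the common Oracle preinstall recommendations, shown for illustration):

$ ulimit -u     # max user processes for the current shell
$ ulimit -a     # all limits

# /etc/security/limits.conf
oracle soft nproc  2047
oracle hard nproc 16384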

User not able to login through OEM

Have you ever tried to log in to OEM as a normal user like SCOTT or HR and landed on the message below?

The application requires more database privileges than you have currently been granted. Click on Help to get more version specific information.

Unfortunately, when you click HELP, it may not give proper details on which privilege is missing.
When checked at the DB level, the user may have all the privileges required to connect to the database, yet login to OEM is still impacted.

The solution for this is to grant SELECT_CATALOG_ROLE to that user.

sqlplus / as sysdba

SQL> grant select_catalog_role to schemaname;

For example, if you are facing the issue with the SCOTT schema, issue the command below:

SQL> grant select_catalog_role to scott;

After the above grant, the user should be able to log in to OEM.

If you still face the issue even after granting the role, you may need to grant select on the dictionary views to that user. Do remember that with this grant the user can retrieve dictionary information from the database, so grant it only if it is really required.

SQL> grant select any dictionary to scott;

Oracle 12c Database TDE (Transparent Data Encryption) Setup

To setup TDE for my new database I have used and adapted Oracle Database 12c: Transparent Data Encryption (TDE)

First I have modified sqlnet.ora so that each database has its own TDE directories:

$ grep SID $ORACLE_HOME/network/admin/sqlnet.ora
ENCRYPTION_WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/$ORACLE_SID/tde_wallet)))
WALLET_LOCATION = (SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/$ORACLE_SID/db_wallet)))
$

I have created the TDE wallet directory for the NCDB database:

$ mkdir /u01/app/oracle/admin/NCDB/tde_wallet

I have connected to NCDB and run following SQL statements:

SQL> administer key management create keystore '/u01/app/oracle/admin/NCDB/tde_wallet' identified by xxx;

keystore altered.

SQL> administer key management set keystore open identified by xxx;

keystore altered.

SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
    CON_ID
----------
FILE
/u01/app/oracle/admin/NCDB/tde_wallet/
OPEN_NO_MASTER_KEY             PASSWORD             SINGLE    UNDEFINED
         0


SQL>  administer key management create auto_login keystore from keystore '/u01/app/oracle/admin/NCDB/tde_wallet' identified by xxx;

keystore altered.

SQL>
SQL> administer key management set key identified by mks with backup;

keystore altered.

SQL>

SQL> select con_id, key_id, keystore_type from v$encryption_keys;

    CON_ID
----------
KEY_ID
------------------------------------------------------------------------------
KEYSTORE_TYPE
-----------------
         0
AdOGu2sWO085v33seosS01IAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
SOFTWARE KEYSTORE


SQL> select * from v$encryption_wallet;

WRL_TYPE
--------------------
WRL_PARAMETER
--------------------------------------------------------------------------------
STATUS                         WALLET_TYPE          WALLET_OR FULLY_BAC
------------------------------ -------------------- --------- ---------
    CON_ID
----------
FILE
/u01/app/oracle/admin/NCDB/tde_wallet/
OPEN                           PASSWORD             SINGLE    NO
         0

Step 3: create application tablespace
I have run:

SQL> show parameter new

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces              string      DDL
SQL> alter system set encrypt_new_tablespaces=cloud_only;

System altered.

SQL> create tablespace data;

Tablespace created.

SQL> select tablespace_name, encrypted from dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
SYS_UNDOTS                NO
DATA                           YES
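Encryption can also be requested explicitly at tablespace creation time, regardless of the encrypt_new_tablespaces setting. A minimal sketch, assuming OMF is in use (as in the bare CREATE TABLESPACE above); the name and size are illustrative:

SQL> CREATE TABLESPACE data2
     DATAFILE SIZE 100M
     ENCRYPTION USING 'AES256'
     DEFAULT STORAGE (ENCRYPT);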

Oracle Database 12.2 Powers Up the DBMS_STATS Package

Oracle Database 12c Release 2 makes it a lot easier to manage this vast collection of scripts by including a new DBMS_STATS preference called PREFERENCE_OVERRIDES_PARAMETER. When this preference is set to TRUE, preference settings override the parameter values explicitly set in a DBMS_STATS command.

For example, if the global preference ESTIMATE_PERCENT is set to the default DBMS_STATS.AUTO_SAMPLE_SIZE, but a 10% sample size was specified in a DBMS_STATS.GATHER_TABLE_STATS command, setting PREFERENCE_OVERRIDES_PARAMETER to TRUE would automatically replace the 10% sample size with the default preference DBMS_STATS.AUTO_SAMPLE_SIZE.

Let’s take a look at this in action. we have a SALES table with nearly 1 million rows in it.

15:06:27 SQL>  SELECT TO_CHAR(SYSDATE,'DD-MON-YYYY HH24:MI:SS') WALL_CLOCK_TIME
15:17:49   2  FROM dual;

WALL_CLOCK_TIME
--------------------
15-JUL-2017 15:17:49

Elapsed: 00:00:00.00
15:17:49 SQL>

15:02:18 SQL> SELECT COUNT(*)
  2  FROM sales;

  COUNT(*)
----------
    918843

15:02:20 SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS(ownname=>'SH', tabname=>'SALES', estimate_percent=>10);
  3  END;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:09.08

If we check the sample size used for gathering statistics, we see that only 10% of the rows were used as requested and that a FREQUENCY histogram has been created on the CUST_ID column of the SALES table.
15:04:02 SQL> SELECT TABLE_NAME, sample_size
15:04:19   2  FROM   user_tables
15:04:19   3  WHERE  TABLE_NAME= 'SALES';

TABLE_NAME                     SAMPLE_SIZE
------------------------------ -----------
SALES                                91619

Elapsed: 00:00:00.09
15:04:20 SQL> SELECT column_name, num_distinct, num_nulls, histogram
15:04:51   2  FROM   user_tab_col_statistics
15:04:51   3  WHERE  TABLE_NAME='SALES';

COLUMN_NAME     NUM_DISTINCT   NUM_NULLS    HISTOGRAM
-------------- --------------- ------------ ----------
PROD_ID                  651       0         NONE
CUST_ID                  630       0         FREQUENCY
TIME_ID                  620       0         NONE
CHANNEL_ID               5         0         NONE
PROMO_ID                 116       0         NONE
QUANTITY_SOLD            44        0         NONE
AMOUNT_SOLD              583       0         NONE

7 rows selected.

Elapsed: 00:00:00.12


Unfortunately, these statistics are not that accurate as there are over 600 distinct PROD_IDs and CUST_IDs in the SALES table but the statistics gather missed this information because it looked at such a small sample.

15:04:52 SQL> SELECT COUNT(DISTINCT prod_id), COUNT(DISTINCT cust_id)
15:05:31   2  FROM   sales;

COUNT(DISTINCTPROD_ID)  COUNT(DISTINCTCUST_ID)
----------------------  ----------------------
651                      630
In order to improve the accuracy of the statistics, we should use the recommended best-practice value for the ESTIMATE_PERCENT parameter, AUTO_SAMPLE_SIZE. Let's check the preference value for the ESTIMATE_PERCENT parameter on the SALES table.

15:05:34 SQL> SELECT
DBMS_STATS.GET_PREFS(pname=>'ESTIMATE_PERCENT', ownname=>'SH', tabname=>'SALES')
FROM dual;

DBMS_STATS.GET_PREFS(PNAME=>'ESTIMATE_PERCENT',OWNNAME=>'SH',TABNAME=>'SALES')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_SAMPLE_SIZE

We see it’s actually set to DBMS_STATS.AUTO_SAMPLE_SIZE, which is good news for us.

Rather than having to find and edit all of the existing statistics-gathering scripts that refer to the SALES table, we can set the DBMS_STATS preference PREFERENCE_OVERRIDES_PARAMETER to TRUE on the SALES table, which will automatically override all of the non-default parameter values used in DBMS_STATS commands with the corresponding preference values.

15:05:39 SQL> BEGIN
DBMS_STATS.SET_TABLE_PREFS(ownname=>'SH', tabname=>'SALES',
                           pname=>'PREFERENCE_OVERRIDES_PARAMETER', pvalue=>'TRUE');
END;
/

PL/SQL PROCEDURE successfully completed.

So, in our case the ESTIMATE_PERCENT of 10% will actually be overridden and DBMS_STATS.AUTO_SAMPLE_SIZE will be used. Let's see what impact that has on both the statistics-gathering performance and the quality of the statistics gathered.

15:05:40 SQL> BEGIN
DBMS_STATS.GATHER_TABLE_STATS(ownname=>'SH', tabname=>'SALES', estimate_percent=>10);
END;
/

PL/SQL PROCEDURE successfully completed.
Elapsed: 00:00:06.59
15:05:48 SQL> SELECT TABLE_NAME, SAMPLE_SIZE
FROM   user_tables
WHERE  TABLE_NAME= 'SALES';

TABLE_NAME                      SAMPLE_SIZE
------------------------------ -----------
SALES                               918843

So, you can see that the time it took to gather statistics was actually less than for the 10% sample, yet the sample size was the total number of rows in the table. But what about the quality of the statistics?

15:06:04 SQL> SELECT column_name, num_distinct, num_nulls, histogram
FROM  user_tab_col_statistics
WHERE TABLE_NAME='SALES';

COLUMN_NAME     NUM_DISTINCT   NUM_NULLS    HISTOGRAM
-------------- --------------- ------------ ----------
PROD_ID                  651       0         NONE
CUST_ID                  630       0         TOP-FREQUENCY
TIME_ID                  620       0         NONE
CHANNEL_ID               5         0         NONE
PROMO_ID                 116       0         NONE
QUANTITY_SOLD            44        0         NONE
AMOUNT_SOLD              583       0         NONE

As you can see, we have a much more accurate set of statistics this time, with each of the number-of-distinct-values counts being correct. You will also notice we got a new type of histogram, a TOP-FREQUENCY histogram, this time. That's because this new type of histogram is only gathered when ESTIMATE_PERCENT is set to the default.


Remember, setting PREFERENCE_OVERRIDES_PARAMETER to TRUE will override all non-default parameters set in DBMS_STATS.GATHER_*_STATS commands, so you may need to adjust the default preferences for some parameters, for example METHOD_OPT, if you don't want to revert to the default.


You can set a table preference using the DBMS_STATS.SET_TABLE_PREFS procedure.


15:06:10 SQL> BEGIN
DBMS_STATS.SET_TABLE_PREFS(ownname=>'SH', tabname=>'SALES', pname=>'METHOD_OPT',
                           pvalue=>'FOR ALL COLUMNS SIZE 1 FOR COLUMNS CUST_ID SIZE 254');
END;
/

Automated Gather Stats on Index Creation.

Whenever an index is created, Oracle automatically gathers optimizer statistics for that index. The database piggybacks the statistics gathering on the full data scan and sort operation necessary for the index creation. This approach has worked well since it was introduced in an earlier version; some people already know about it, but I am sharing it for new DBAs.

Demo:

SELECT TO_CHAR(SYSDATE,'DD-MON-YYYY HH24:MI:SS') WALL_CLOCK_TIME
FROM dual;

WALL_CLOCK_TIME
-----------------------------
15-JUL-2017 14:29:44

SELECT index_name,
       TO_CHAR(last_analyzed,'DD-MON-YYYY HH24:MI:SS')
       last_analyzed,
       leaf_blocks,
       distinct_keys,
       clustering_factor
FROM   user_indexes;

no rows selected

CREATE INDEX t_s_idx ON items (s);

Index created.

SELECT index_name,
       TO_CHAR(last_analyzed,'DD-MON-YYYY HH24:MI:SS')
       last_analyzed,
       leaf_blocks,
       distinct_keys,
       clustering_factor
FROM   user_indexes;

INDEX_NAME LAST_ANALYZED        LEAF_BLOCKS DISTINCT_KEYS  CLUS_FACT
---------- -------------------- ----------- ------------- ----------
T_S_IDX    15-JUL-2017 14:29:45        6135       1000000     999839