DOYENSYS Knowledge Portal








Friday, February 22, 2019

Internal Concurrent Manager logfile Error : Routine &ROUTINE has attempted to start the internal concurrent manager.


Issue :

Users encounter the following error in the Internal Concurrent Manager logfile: 

Routine &ROUTINE has attempted to start the internal concurrent manager. 
The ICM is already running. Contact your system administrator for further assistance.

afpdlrq received an unsuccessful result from PL/SQL procedure or function FND_DCP.Request_Session_Lock. 
Routine FND_DCP.REQUEST_SESSION_LOCK received a result code of 1 from the call 
to DBMS_LOCK.Request.
Possible DBMS_LOCK.Request result
Call to establish_icm failed.
The Internal Concurrent Manager has encountered an error.


Cause :

The concurrent manager startup script is being executed on both nodes.
This causes two instances of the ICM (Internal Concurrent Manager) to run in one application instance, which produces the error message in the manager logfile.
Moreover, FNDSM is not able to complete its job of starting the respective processes on the defined nodes.


Solution :


To resolve the issue, test the following steps in a development instance and then migrate accordingly (a quick verification sketch follows the list):

1. Ensure that APPLDCP is set to ON in your $APPL_TOP/.env file.
2. Echo the APPLDCP environment variable on the command line prior to starting the concurrent managers.
3. Execute adcmctl.sh only on the primary node of the Internal Concurrent Manager.
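
A minimal verification sketch (the paths, $ADMIN_SCRIPTS_HOME and the adcmctl.sh arguments follow the usual R12 conventions; adjust for your environment):

# Run on each application tier node before starting the managers
grep -i "APPLDCP" $APPL_TOP/*.env      # expect APPLDCP=ON
echo $APPLDCP                          # confirm the variable in the current shell

# Start the concurrent managers only on the primary (ICM) node
cd $ADMIN_SCRIPTS_HOME
./adcmctl.sh start apps/<apps_password>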


Concurrent Manager : Standard Manager Going Down and Actual and Target Processes are Different

Issue :

The concurrent managers (including the Standard Manager) are not stable; the actual and target process counts are different.

Navigation :- Log in as Sysadmin > Concurrent > Manager > Administer screen > Verify the Actual & Target columns


Solution :

EBS 12.0.6 : Apply Patch 10113913.
EBS 12.1.X : Apply Patch 16602978.

Confirm Patch version :

afpgmg.o     120.3.12010000.10
   
You can use a command like the following:

   strings -a $FND_TOP/bin/FNDLIBR | grep Header | grep afpgmg

ICM Log Filled With: Could not contact Service Manager FNDSM_ The TNS alias could not be located on RAC with Multi Apps Tiers

Error :

Could not contact Service Manager FNDSM_SRVRAP2_PROD. The TNS alias could not be located, the listener process on SRVERPAP2 could not be contacted, or the listener failed to spawn the Service Manager process.


Cause :

The issue is caused by the existence of the service name SYS$APPLSYS.WF_CONTROL.<SID>, which should not exist.

See the following select:

SQL> select value from v$parameter where name='service_names';

VALUE
-------------------------------------------------------------------------------
<SID>, SYS$APPLSYS.WF_CONTROL.<SID>

The WF_CONTROL service name should not exist; only the <SID> service name should exist.


Solution :

1. Ensure backup has been taken before making the changes.

2. Shutdown the Applications Tier including Concurrent managers.

3. Run the following script:

$ sqlplus / @$FND_TOP/patch/115/sql/wfctqrec.sql


4. Bounce the two RAC nodes.

5. Edit the database context (XML) file on each RAC node and validate that s_dbService includes only the <SID> and not the SYS$APPLSYS.WF_CONTROL.<SID> service name; correct it if needed (a quick check is shown after these steps).

6. Run AutoConfig on the RAC nodes (only if the XML file included the wrong s_dbService value).

7. Run AutoConfig on the application nodes.

8. Start the Applications Services.
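
A quick way to verify the corrected value on each database node (a sketch; the database context file normally lives under $ORACLE_HOME/appsutil):

grep -i "s_dbService" $ORACLE_HOME/appsutil/<CONTEXT_NAME>.xml
    # should list only the <SID> service name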

Error: Your Oracle E-Business Suite account has not been linked with the Single Sign-On account that you just entered.



Issue : 

Getting the error below while logging into ERP:

Error:
---------
Your Oracle E-Business Suite account has not been linked with the Single Sign-On account that you just entered. Please enter your Oracle E-Business Suite information


Solution :

The Applications SSO Auto Link feature is not enabled.

Make sure the profile option "Applications SSO Auto Link User" is set to "Y" at the "Site" level.
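
A quick way to confirm the current value from SQL (a sketch; the internal profile option name APPS_SSO_AUTO_LINK_USER is an assumption based on the displayed name):

SQL> select fnd_profile.value('APPS_SSO_AUTO_LINK_USER') from dual;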

Error while activating 'Oracle ERP Cloud' Endpoint Integration error 'Integration "LLU_TEST_JOURNALIMP | 1.0" cannot be activated.

Error : 

Cannot activate Integration : "LLU_TEST_JOURNALIMP | 1.0"
Fails with the error:

ERROR
------

error 'Integration "LLU_TEST_JOURNALIMP | 1.0" cannot be activated. Incident has been created with ID 7. [Cause: ICS-20566]'



Cause :

The input file did not conform to the associated schema (the file was not in UTF-8 format).

The following justifies how the issue relates to this specific customer:
The error in the log file occurred while loading the input file.
An input file schema or format error was the cause.


Solution :


Resolve the schema issue within the input file.
Retest with the corrected input file.
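
Since the root cause was a non-UTF-8 input file, a quick way to check and, if needed, convert the encoding (a sketch using standard Linux tools; the file name and source encoding are assumptions):

$ file -i journal_input.csv
    # reports the character set, e.g. "text/plain; charset=iso-8859-1"
$ iconv -f ISO-8859-1 -t UTF-8 journal_input.csv -o journal_input_utf8.csv
    # rewrites the file as UTF-8 before re-running the integration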

Error : ORA-10459 "cannot start media recovery on standby database; conflicting state detected"




Cause :

An attempt to start up the MRP process on a physical standby fails with the error ORA-10459 "cannot start media recovery on standby database; conflicting state detected".

Solution :

Managed recovery can run on only one instance of a RAC standby.

To check whether managed recovery is already running on a specific instance, you can run the query below:

Select * from gv$managed_standby;

Or

Check the alert log.

If managed recovery is already running on one instance and you want it to run from a specific instance:

Stop managed recovery on the instance where it is running and start it on the first/specific instance. A sketch of the commands follows.
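
A minimal sketch of the stop/start sequence (standard commands; add a USING/DISCONNECT clause as appropriate for your environment):

-- On the standby instance currently running MRP
SQL> alter database recover managed standby database cancel;

-- On the instance that should run MRP
SQL> alter database recover managed standby database disconnect from session;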

Error : ORA-00600:[KFDARELOCSET10] DURING ASM REBALANCE


Cause :

A rebalance operation can be initiated because a disk is added or a manual rebalance is issued.

The cause of the crash is observed to be UNPROTECTED files created in the diskgroup.

The issue is verified in unpublished Bug 12319359 - SOL-X64-SC:HIT ASM ORA-00600:[KFDARELOCSET10], [3], [256], [3], [1], [1], [65535]

 

Solution :



The use of UNPROTECTED files is not recommended in production environments.


If the file that is causing the issue is a tempfile, we can drop it first and then start the rebalance to avoid the error.

SQL> select group_number,file_number,blocks,bytes,space,redundancy,striped,REDUNDANCY_LOWERED,MODIFICATION_DATE 
from v$asm_file where group_number=1 and file_number=290;

SQL> select * from v$asm_alias where group_number=1 and file_number=290;
-- change the group_number and file_number based on the arguments of the ORA-600 error.

SQL> alter tablespace TEMP drop tempfile '<file_name>';
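
Once the offending tempfile is dropped, the rebalance can be restarted manually from the ASM instance (a sketch; the diskgroup name and power level are assumptions):

SQL> alter diskgroup DATA rebalance power 4;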

Error : FRM-40010 Cannot Read From /path/form.fmx Error in 12c


Cause :
New Forms 12c installation.
This is expected behavior as of the Forms 12c version.

Solution :

Starting in Forms 12c there is a new environment variable called FORMS_MODULE_PATH that restricts the directories from which Forms applications may be launched.

If the directory from which the form is being launched is not listed in FORMS_MODULE_PATH, the FRM-40010 error is expected.

The same approach to running forms works fine in the 11gR2 version, which has no such restriction. A sketch of the fix follows.
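
To allow forms to run from that directory, add it to FORMS_MODULE_PATH in the environment file used by your Forms configuration (a sketch; default.env and the paths shown are assumptions for a typical installation):

# default.env (or the env file referenced by formsweb.cfg)
FORMS_MODULE_PATH=/u01/app/forms/modules:/u01/app/custom/forms

After the change, restart the Forms services so the new value is picked up.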

Error Running RMAN duplicate of Offline (Cold) Database Backup

Issue : Error Running RMAN duplicate of Offline (Cold) Database Backup 

Cause :

When the target database is running in ARCHIVELOG mode, RMAN will want to perform some recovery even when the backup restored is an offline backup. This also applies to DUPLICATE. When the target is running in ARCHIVELOG mode, RMAN will want to apply redo, even if an offline backup was taken after the latest archived redo log. That is, RMAN uses the SCN of the archived redo, looks for a backup earlier than this SCN, restores those files, and then recovers the database.

This is expected RMAN functionality.


Solution :

 1. Once the RMAN duplicate fails:
 2. Log into the AUX instance
 3. select open_mode from v$database;
 4. If not mounted, mount the aux instance.
 5. select file#, status, checkpoint_change#, fuzzy from v$datafile_header;
 -- check that checkpoint_change# is identical
 -- check that fuzzy=NO

 6. Assuming all is OK in checks of #5
 7. recover database using backup controlfile until cancel;
 -- type cancel;
 /* You should get a message -- media recovery cancelled */

 If you receive ORA-1547 when recovery is cancelled, something is inconsistent about your files.

 8. alter database open resetlogs;
 9. shutdown immediate
 10. startup mount
 11. at operating system run dbnewid to change dbid:
 nid TARGET=SYS/oracle

 12. shutdown immediate
 13. startup mount;
 14. alter database open resetlogs;
 15. select dbid from v$database;
 /* confirm this number is different from the target */

Error : RMAN Active Database Duplicate For Standby Failing With ORA-15001 Diskgroup FRA Does Not Exist

Issue : On the primary, the SNAPSHOT CONTROLFILE NAME was wrongly configured to a standby diskgroup location.
Solution : 

Set the snapshot controlfile name to the correct location on the primary.

CONFIGURE SNAPSHOT CONTROLFILE NAME TO '<DIR>/DB_UNIQUE_NAME/snapcf_<DBNAME>.f';
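
You can confirm the current setting and the change from RMAN on the primary (standard RMAN commands; the diskgroup and database names below are only illustrative):

RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/PRIMDB/snapcf_PRIMDB.f';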

Form did not come up after cloning. FRM-40010: Cannot read form FND_TOP/forms/US/FNDSCSGN

Issue : 

Form did not come up after cloning.
FRM-40010: Cannot read form FND_TOP/forms/US/FNDSCSGN

Solution :

1. Stop the application tier services

2. Take a backup of the context file and make the below change (a quick check is shown after these steps):


Before:
<formsfndtop oa_var="s_formsfndtop">FND_TOP >

After:
<formsfndtop oa_var="s_formsfndtop">/FIND01/apps_st/appl/fnd/12.0.0</formsfndtop>

3. Run Autoconfig.

4. Start the application tier services
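
A quick way to verify the corrected value after AutoConfig (a sketch, assuming the standard $CONTEXT_FILE variable on the application tier):

grep -i "s_formsfndtop" $CONTEXT_FILE
    # should show the full path to FND_TOP, not the literal string FND_TOP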

Login In Oracle Database Service Cloud as root user

Cause :


Oracle doesn't allow direct root access on cloud machines; sudo is the only option users have to obtain root privileges.



Solution for the above issue:


1. Change the below file:


/etc/ssh/sshd_config


change 

PermitRootLogin no
to
PermitRootLogin yes


change 

AllowUsers opc oracle 
to
AllowUsers opc oracle root


2. Copy the authorized keys from the /home/opc/.ssh folder to /root/.ssh/:

cp /home/opc/.ssh/authorized_keys /root/.ssh/

OR, if an authorized_keys file is already present in the /root/.ssh/ directory, append the keys from /home/opc/.ssh/authorized_keys to it (see the example after these steps).



3. service sshd restart


Try connecting to the machine from PuTTY as the root user, using the same private key.
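
If appending is needed (step 2 above), a simple way to do it, assuming the standard file locations:

cat /home/opc/.ssh/authorized_keys >> /root/.ssh/authorized_keys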

Query Clause In Expdp(DATAPUMP)



Query Clause In Expdp(DATAPUMP):
================================

The QUERY clause can be used in expdp or impdp to export/import a subset of the data, or data matching specific conditions.

Example: export a dump of the table DBAADMIN.EMP with the filter WHERE created > sysdate -40. The filter can be added on any column depending upon the requirement.

SQL> select count(*) from "DBAADMIN"."EMP" WHERE created > sysdate -40;

  COUNT(*)
----------
      1600

Create a parfile with query clause:


cat expdp_query.par

dumpfile=test.dmp
logfile=test1.log
directory=TEST
tables=dbaadmin.EMP
QUERY=dbaadmin.EMP:"WHERE created > sysdate -40"



Now run the expdp command with the parfile. We can see that the 1600 rows are exported.



expdp parfile=expdp_query.par

Export: Release 12.1.0.2.0 - Production on Mon Jan 23 14:52:07 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=expdp_query.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "dbaadmin"."EMP"                        199.4 KB    1600 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/test.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Jan 27 14:53:02 2018 elapsed 0 00:00:23

REUSE_DUMPFILES Parameter In EXPDP


REUSE_DUMPFILES Parameter In EXPDP:
===================================

If we try to export to a dump file whose name is already present in that directory,
then we will get an error like ORA-27038: created file already exists:

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/export/home/oracle/dbaadmin_estim.dmp"
ORA-27038: created file already exists
Additional information: 1

So if the requirement is to overwrite the existing dump file, then the REUSE_DUMPFILES parameter can be used with EXPDP.


PARFILE WITH REUSE_DUMPFILES=Y

cat exp_redump.par

dumpfile=dbaadmin_estim.dmp
logfile=dbaadmin.log
directory=EXPDIR
tables=dbaadmin.test_list
REUSE_DUMPFILES=Y



 At this point, we already have the dump file dbaadmin_estim.dmp, so the EXPDP job should overwrite this dump file.


 expdp parfile=exp_redump.par

Export: Release 12.1.0.2.0 - Production on Mon Nov 19 12:53:54 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA parfile=exp_redump.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 29 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "dbaadmin"."test_list"                    24.69 MB  219456 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /export/home/oracle/dbaadmin_estim.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Mon Nov 13 12:54:01 2017 elapsed 0 00:00:03

Find user commits per minute in oracle database




Find user commits per minute in an Oracle database:
====================================================
The below script is useful for getting user commit statistics in an Oracle database.
'user commits' is the number of commits happening in the database. It is helpful in tracking the number of transactions in the database.

col STAT_NAME for a20
col VALUE_DIFF for 9999,999,999
col STAT_PER_MIN for 9999,999,999
set lines 200 pages 1500 long 99999999
col BEGIN_INTERVAL_TIME for a30
col END_INTERVAL_TIME for a30
set pagesize 40
set pause on


select hsys.SNAP_ID,
       hsnap.BEGIN_INTERVAL_TIME,
       hsnap.END_INTERVAL_TIME,
           hsys.STAT_NAME,
           hsys.VALUE,
           hsys.VALUE - LAG(hsys.VALUE,1,0) OVER (ORDER BY hsys.SNAP_ID) AS "VALUE_DIFF",
           round((hsys.VALUE - LAG(hsys.VALUE,1,0) OVER (ORDER BY hsys.SNAP_ID)) /
           round(abs(extract(hour from (hsnap.END_INTERVAL_TIME - hsnap.BEGIN_INTERVAL_TIME))*60 +
           extract(minute from (hsnap.END_INTERVAL_TIME - hsnap.BEGIN_INTERVAL_TIME)) +
           extract(second from (hsnap.END_INTERVAL_TIME - hsnap.BEGIN_INTERVAL_TIME))/60),1)) "STAT_PER_MIN"
from dba_hist_sysstat hsys, dba_hist_snapshot hsnap
 where hsys.snap_id = hsnap.snap_id
 and hsnap.instance_number in (select instance_number from v$instance)
 and hsnap.instance_number = hsys.instance_number
 and hsys.STAT_NAME='user commits'
 order by 1;



   SNAP_ID BEGIN_INTERVAL_TIME            END_INTERVAL_TIME              STAT_NAME                 VALUE    VALUE_DIFF  STAT_PER_MIN
---------- ------------------------------ ------------------------------ -------------------- ---------- ------------- -------------
      6626 11-NOV-18 05.00.13.272 PM      11-NOV-18 06.00.29.527 PM      user commits          350001525     1,147,017        19,022
      6627 11-NOV-18 06.00.29.527 PM      11-NOV-18 07.00.14.759 PM      user commits          351130223     1,128,698        18,875
      6628 11-NOV-18 07.00.14.759 PM      11-NOV-18 08.00.02.845 PM      user commits          351987886       857,663        14,342
      6629 11-NOV-18 08.00.02.845 PM      11-NOV-18 09.00.22.109 PM      user commits          352829839       841,953        13,963
      6630 11-NOV-18 09.00.22.109 PM      11-NOV-18 10.00.07.076 PM      user commits          353478483       648,644        10,865
      6631 11-NOV-18 10.00.07.076 PM      11-NOV-18 11.00.24.303 PM      user commits          353939928       461,445         7,652
      6632 11-NOV-18 11.00.24.303 PM      12-NOV-18 12.00.11.904 AM      user commits          354335275       395,347         6,611
      6633 12-NOV-18 12.00.11.904 AM      12-NOV-18 01.00.29.406 AM      user commits          354604745       269,470         4,469
      6634 12-NOV-18 01.00.29.406 AM      12-NOV-18 02.00.17.332 AM      user commits          354955934       351,189         5,873
      6635 12-NOV-18 02.00.17.332 AM      12-NOV-18 03.00.03.228 AM      user commits          356918293     1,962,359        32,815
      6636 12-NOV-18 03.00.03.228 AM      12-NOV-18 04.00.20.577 AM      user commits          357821672       903,379        14,981
      6637 12-NOV-18 04.00.20.577 AM      12-NOV-18 05.00.09.204 AM      user commits          358154880       333,208         5,572
      6638 12-NOV-18 05.00.09.204 AM      12-NOV-18 06.00.25.507 AM      user commits          358296694       141,814         2,352
      6639 12-NOV-18 06.00.25.507 AM      12-NOV-18 07.00.09.734 AM      user commits          358692156       395,462         6,624
      6640 12-NOV-18 07.00.09.734 AM      12-NOV-18 08.00.01.047 AM      user commits          359373748       681,592        11,379
      6641 12-NOV-18 08.00.01.047 AM      12-NOV-18 09.00.17.981 AM      user commits          360418586     1,044,838        17,327
      6642 12-NOV-18 09.00.17.981 AM      12-NOV-18 10.00.04.542 AM      user commits          362476024     2,057,438        34,405
      6643 12-NOV-18 10.00.04.542 AM      12-NOV-18 11.00.22.732 AM      user commits          364469092     1,993,068        33,053
      6644 12-NOV-18 11.00.22.732 AM      12-NOV-18 12.00.09.693 PM      user commits          365611444     1,142,352        19,103
      6645 12-NOV-18 12.00.09.693 PM      12-NOV-18 01.00.27.672 PM      user commits          366866479     1,255,035        20,813
      6646 12-NOV-18 01.00.27.672 PM      12-NOV-18 02.00.14.537 PM      user commits          368466462     1,599,983        26,756


ENABLE_PARALLEL_DML Hint In Oracle 12c


ENABLE_PARALLEL_DML Hint In Oracle 12c:
=======================================

Until Oracle 12c, to run DML transactions in parallel we had to enable PDML (parallel DML) at the session level,
i.e. before any DML statement we had to issue the statement below. From 12c onwards, the ENABLE_PARALLEL_DML hint can be used instead.

ALTER SESSION ENABLE PARALLEL DML;

-- Then parallel DML statement

insert /*+ parallel(8) */ into test123 select * from test123;




WITHOUT ENABLE_PARALLEL_DML:
============================

SQL> explain plan for insert /*+ parallel(8) */ into test123 select * from test123;

Explained.


SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
Plan hash value: 2876518734

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |          |   122K|    13M|    82   (2)| 00:00:01 |        |      |            |
|   1 |  LOAD TABLE CONVENTIONAL | test123    |       |       |            |          |        |      |            |  ----- > NOT UNDER PX COORDINATOR
|   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)   | :TQ10000 |   122K|    13M|    82   (2)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR    |          |   122K|    13M|    82   (2)| 00:00:01 |  Q1,00 | PCWC |            |
|   5 |      TABLE ACCESS FULL   | test123    |   122K|    13M|    82   (2)| 00:00:01 |  Q1,00 | PCWP |            |

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 8 because of hint
   - PDML is disabled in current session     --- >>>  THIS INDICATES PDML IS DISABLED

17 rows selected.



WITH ENABLE_PARALLEL_DML hint:
==============================

SQL> explain plan for insert /*+ parallel(8) enable_parallel_dml */ into test123 select * from test123;

Explained.

SQL> set lines 299
SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
Plan hash value: 4043334015

-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Dist
-----------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |          | 61649 |  6863K|    40   (3)| 00:00:01 |        |      |       
|   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |       
|   2 |   PX SEND QC (RANDOM)              | :TQ10000 | 61649 |  6863K|    40   (3)| 00:00:01 |  Q1,00 | P->S | QC (RAN --- > LOAD IS UNDER PX COORDINATOR
|   3 |    LOAD AS SELECT (HYBRID TSM/HWMB)| test123    |       |       |            |          |  Q1,00 | PCWP |       
|   4 |     PX BLOCK ITERATOR              |          | 61649 |  6863K|    40   (3)| 00:00:01 |  Q1,00 | PCWC |       
|   5 |      TABLE ACCESS FULL             | test123    | 61649 |  6863K|    40   (3)| 00:00:01 |  Q1,00 | PCWP |       

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 8 because of hint          --- >>>  PDML IS ENABLED

16 rows selected.


We can see that with the ENABLE_PARALLEL_DML hint, PDML is enabled even without the ALTER SESSION command.

IN-MEMORY In Oracle 12c



IN-MEMORY In Oracle 12c
=======================
How to check whether inmemory is enabled or not:

SQL> show parameter inmemory

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------
inmemory_clause_default              string
inmemory_force                       string      DEFAULT
inmemory_max_populate_servers        integer     0
inmemory_query                       string      ENABLE     
inmemory_size                        big integer 0  ---- > 0 Means inmemory not enabled
inmemory_trickle_repopulate_servers_ integer     1
percent
optimizer_inmemory_aware             boolean     TRUE



SQL>  select name,value from v$sga where NAME='In-Memory Area';

No rows selected.

How to enable the in-memory feature in DB:

SQL> alter system set inmemory_size=3G scope=spfile;

System altered.


shutdown immediate;
startup


SQL> show parameter inmemory_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------------
inmemory_size                        big integer 3G


SQL>  select name,value from v$sga where NAME='In-Memory Area';
NAME                      VALUE
-------------------- ----------
In-Memory Area       3221225472

Enable in-memory for a table:

SELECT table_name,inmemory,inmemory_priority,
inmemory_distribute,inmemory_compression,
inmemory_duplicate
FROM   dba_tables
WHERE table_name='test123';

no rows selected

SQL>select owner, segment_name, populate_status from v$im_segments

no rows selected




SQL> alter table dbaadmin.test123 inmemory;

Table altered


col owner for a12
col segment_name for a12
select owner, segment_name, populate_status from v$im_segments

no rows selected


-- With the default priority (NONE), the segment is populated into the IM column store only after the table is first scanned, so run a full scan:
select count(*) from dbaadmin.test123;


col owner for a12
col segment_name for a12
select owner, segment_name, populate_status from v$im_segments

OWNER        SEGMENT_NAME POPULATE_STATUS
------------ ------------ ---------------
dbaadmin     test123        COMPLETED

set lines 299
col table_name for a12
SELECT table_name,inmemory,inmemory_priority,
inmemory_distribute,inmemory_compression,
inmemory_duplicate
FROM   dba_tables
WHERE table_name='test123';


TABLE_NAME   INMEMORY INMEMORY INMEMORY_DISTRI INMEMORY_COMPRESS INMEMORY_DUPL
------------ -------- -------- --------------- ----------------- -------------
test123        ENABLED  NONE     AUTO            FOR QUERY LOW     NO DUPLICATE



Now check the explain plan:


SQL> explain plan for select * from dbaadmin.test123;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
Plan hash value: 3778028574

------------------------------------------------------------------------------------
| Id  | Operation                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |       | 77294 |  8001K|    29  (25)| 00:00:01 |
|   1 |  TABLE ACCESS INMEMORY FULL| test123 | 77294 |  8001K|    29  (25)| 00:00:01 | ---- >>> THIS ONE SHOWS THAT INMEMORY IS USED
------------------------------------------------------------------------------------

8 rows selected.


Enable in-memory with PRIORITY CRITICAL

ALTER TABLE dbaadmin.test124 INMEMORY PRIORITY CRITICAL;


SQL> select OWNER,SEGMENT_NAME,populate_status,INMEMORY_PRIORITY from v$im_segments;

OWNER    SEGMENT_N POPULATE_ INMEMORY
-------- --------- --------- --------
dbaadmin test124     COMPLETED CRITICAL
dbaadmin test123     COMPLETED NONE


Enable in-memory for a tablespace:

If enabled at the tablespace level, all tables in the tablespace will by default be enabled for the IM column store.

SQL> select tablespace_name,DEF_INMEMORY,DEF_INMEMORY_PRIORITY,DEF_INMEMORY_COMPRESSION,
DEF_INMEMORY_DISTRIBUTE,DEF_INMEMORY_DUPLICATE from dba_tablespaces where tablespace_name='USERS';

TABLESPACE_NAME                DEF_INME DEF_INME DEF_INMEMORY_COMP DEF_INMEMORY_DI DEF_INMEMORY_
------------------------------ -------- -------- ----------------- --------------- -------------
USERS                          DISABLED

SQL> ALTER TABLESPACE USERS DEFAULT INMEMORY;

Tablespace altered.

SQL> select tablespace_name,DEF_INMEMORY,DEF_INMEMORY_PRIORITY,DEF_INMEMORY_COMPRESSION,
DEF_INMEMORY_DISTRIBUTE,DEF_INMEMORY_DUPLICATE from dba_tablespaces where tablespace_name='USERS';


TABLESPACE_NAME                DEF_INME DEF_INME DEF_INMEMORY_COMP DEF_INMEMORY_DI DEF_INMEMORY_
------------------------------ -------- -------- ----------------- --------------- -------------
USERS                          ENABLED  NONE     FOR QUERY LOW     AUTO            NO DUPLICATE


Disable in-memory for the table:

ALTER TABLE dbaadmin.test123 NO INMEMORY;

USAGE:

V$INMEMORY_AREA shows the usage of the in-memory area.


set pagesize 200
set lines 200
select * from V$INMEMORY_AREA;

POOL                       ALLOC_BYTES USED_BYTES POPULATE_STATUS                CON_ID
-------------------------- ----------- ---------- -------------------------- ----------
1MB POOL                    2549088256    9437184 DONE                                0
64KB POOL                    654311424    1638400 DONE                                0



MONITORING LONG RUNNING QUERY



MONITORING LONG RUNNING QUERY
==============================

Using the v$session_longops view, you can see any operation that runs for more than 6 seconds.

select  s.inst_id,
        SQL.SQL_TEXT as "OPERATION",
        to_char(op.START_TIME, 'dd/mm/yyyy hh24:mi:ss') Start_Time,
        to_char(op.LAST_UPDATE_TIME, 'dd/mm/yyyy hh24:mi:ss') Last_Update_Time,
        round(op.TIME_REMAINING/60,1) as "MINUTES_REMAINING",
        round((op.SOFAR/op.TOTALWORK) * 100,2) as PCT_DONE
from    gv$session s,
        gv$sqlarea sql,
        gv$session_longops op
where   s.sid = op.sid
and     s.inst_id = op.inst_id      -- join on instance as well as SID so RAC rows match correctly
and     s.sql_id = sql.sql_id
and     s.inst_id = sql.inst_id
and     s.status  = 'ACTIVE'
and     op.totalwork > op.sofar
and     s.sid not in (select distinct sid from gv$mystat where rownum < 2)
order by 4 desc;


The SET_SESSION_LONGOPS procedure of the DBMS_APPLICATION_INFO package can be used to publish information about the progress of your application's long operations.

DECLARE
        rindex    BINARY_INTEGER;
        slno      BINARY_INTEGER;
        totalwork number;
        sofar     number;
        obj       BINARY_INTEGER;
 
      BEGIN
        rindex := dbms_application_info.set_session_longops_nohint;
        sofar := 0;
        totalwork := 10;
 
        WHILE sofar < 10 LOOP
          -- update obj based on sofar
          -- perform task on object target
 
          sofar := sofar + 1;
          dbms_application_info.set_session_longops(rindex, slno,
            'Operation X', obj, 0, sofar, totalwork, 'table', 'tables');
        END LOOP;
      END;
      /
 
 
 
 
After instrumentation, the v$session_longops view is populated with this information:
SELECT
   opname,
   target_desc,
   sofar,
   totalwork,
   time_remaining,
    units
FROM 
   v$session_longops;