DOYENSYS Knowledge Portal








Wednesday, January 31, 2018

ORA-27102: out of memory (Startup)

This error can occur when you have started up with a pfile.

Remove the SGA_* parameters from your pfile.

In the pfile, set:
MEMORY_MAX_TARGET=500M
MEMORY_TARGET=500M

Save the file.

Perform the following:

SQL> startup nomount pfile='/u01/app/oracle/pfile/pfile.ora';
SQL> alter database mount;
SQL> alter database open;
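The pfile edit described above can be scripted. A minimal sketch (the pfile content and path here are demo stand-ins, not a real pfile):

```shell
#!/bin/sh
# Sketch of the pfile edit above: strip SGA_* parameters, then set the
# MEMORY targets. The pfile created here is a demo stand-in.
PFILE=$(mktemp)
cat > "$PFILE" <<'EOF'
db_name=PROD
sga_target=2G
sga_max_size=2G
processes=300
EOF

# Drop any sga_* lines (case-insensitive) and append the memory parameters.
grep -vi '^sga_' "$PFILE" > "$PFILE.new"
printf 'MEMORY_MAX_TARGET=500M\nMEMORY_TARGET=500M\n' >> "$PFILE.new"
mv "$PFILE.new" "$PFILE"

cat "$PFILE"
```

After saving the edited pfile, start the instance with startup nomount pfile='...' as above.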

Friday, January 19, 2018

Standby issue.

ORA-01511: error in renaming log/datafiles
ORA-01270: RENAME operation is not allowed if STANDBY_FILE_MANAGEMENT is auto

Renaming a datafile is not allowed while STANDBY_FILE_MANAGEMENT is AUTO. Checking the parameter confirmed it was set to AUTO:

SQL> show parameter standby_file_management

step 1:

We changed STANDBY_FILE_MANAGEMENT to MANUAL:

alter system set STANDBY_FILE_MANAGEMENT=manual scope=both;

step 2:

Recreate the UNNAMED datafile under its proper name:

alter database create datafile '/sorahome/app/oracle/product/11204/ATLPRPD/dbs/UNNAMED.dbf' as '/s07/oradata/ATLPROD/800P_ord_data44.dbf';

step 3:

change the STANDBY_FILE_MANAGEMENT=auto

alter system set STANDBY_FILE_MANAGEMENT=auto scope=both;

step 4:

start the MRP process

alter database recover managed standby database using current logfile disconnect from session;

step 5:

Check whether the MRP process has started:

ps -ef | grep mrp
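The process check in step 5 can be made self-filtering so the grep does not match its own command line. A small sketch, demonstrated on a throwaway sleep process (no real MRP runs outside a standby host):

```shell
#!/bin/sh
# Sketch: grep for a process without the grep command matching itself.
# Bracketing the first letter of the pattern ("[m]rp") keeps the grep
# process out of its own results, so the exit status is reliable.
proc_running() {
    pattern=$(printf '%s' "$1" | sed 's/^\(.\)/[\1]/')
    ps -ef | grep -i "$pattern" > /dev/null
}

# Demo on a process we start ourselves (stand-in for mrp0 on a standby).
sleep 5 &
if proc_running sleep; then
    echo "process found"
fi
```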

Monitoring workers while using Data Pump

Monitoring parallelism while doing export and import using Data Pump.


select x.job_name,b.state,b.job_mode,b.degree
, x.owner_name,z.sql_text, p.message
, p.totalwork, p.sofar
, round((p.sofar/p.totalwork)*100,2) done
, p.time_remaining
from dba_datapump_jobs b
left join dba_datapump_sessions x on (x.job_name = b.job_name)
left join v$session y on (y.saddr = x.saddr)
left join v$sql z on (y.sql_id = z.sql_id)
left join v$session_longops p ON (p.sql_id = y.sql_id)
WHERE y.module='Data Pump Worker'
AND p.time_remaining > 0;

Wednesday, January 17, 2018

AD Administration error: Connected to db edition other than RUN and PATCH

Issue:

When we try to compile the apps schema through adadmin, it fails with:

AD Administration error:
Connected to db edition other than RUN and PATCH:

AD Administration error:
Edition_Name

When we tried to change a user password through FNDCPASS, it also failed:

The FNDCPASS tool cannot be run against the patch edition

Change :

A patch was recently applied and the cutover completed successfully.

Solution :

1.
Find the default edition of the database.

SELECT property_value FROM   database_properties WHERE  property_name = 'DEFAULT_EDITION';

SQL> SELECT property_value FROM   database_properties WHERE  property_name = 'DEFAULT_EDITION';

PROPERTY_VALUE
--------------------------------------------------------------------------------
V_20180115_0214

2.
Find service names of the DB

SQL> show parameter service

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      EBSDB, ebs_patch

3.

Find the edition name associated with the service_name; it should be the same as the default edition of the database.

select NAME, EDITION from sys.service$ where name = 'EBSDB';


NAME                           EDITION
------------------------------ ------------------------------
EBSDB                          V_20180115_0203

4.
Update the sys.service$ table manually so the service's edition matches the default edition.

update sys.service$
set EDITION = 'V_20180115_0214'
where NAME = 'EBSDB';

1 row updated.

Commit;

Restart the DB for the change to take effect.

Note: We have tried this in one of our development instances. If you are doing this on a critical production system, please raise an SR with Oracle before updating sys.service$.

Friday, January 12, 2018

Optimal logfile size

set lines 200
set pages 60

column buckets       format 9999999
column name          format a12

select n.name, t.value
from v$mystat   t join v$statname n
on t.statistic# = n.statistic#
where n.name = 'redo size';

select optimal_logfile_size
from v$instance_recovery;

Reverse a user (generate CREATE USER DDL)

set lines 240
set echo off
set feedb off
set heading off
SET VERIFY off
set trimspool on
set pages 0
col l1 format a120 newline
col l2 format a120 newline
col l3 format a120 newline


select 'CREATE USER '||username||' IDENTIFIED BY VALUES '||''''||password||''''||' ' l1,
       '  DEFAULT TABLESPACE ' ||DEFAULT_TABLESPACE ||' TEMPORARY TABLESPACE ' ||TEMPORARY_TABLESPACE l2,
       '  PROFILE ' ||profile ||';' l3
from dba_users
where username = upper('&1')
order by username
/

SET VERIFY on
set echo on
set pages 9999
set feedb on

Reverse all db_links (generate DDL)

set lines 220
set echo off
set feedb off
set termout off
set heading off
SET VERIFY off
set trimspool on
set long 32766
set longchunksize 200
col l1 format a90
col l2 format a90 newline
col l3 format a100 newline
col l6 format a200 newline word_wrapped
col l9 format a90 newline

define spoolfile=/u01/app/oracle/oraadm/tmp/reverse_all_dblinks_out.sql
spool &spoolfile

select 'select '||''''||'-- '||owner||'.'||db_link||''''||' from dual; ' l1,
       'select '||''''||'connect '||decode(owner, 'PUBLIC', 'oem', owner)||'/'||
       decode(owner, 'PUBLIC', 'n0th1ng ', sys_context('userenv', 'db_name')||owner)||''''||' from dual; ' l2,
       'select '||''''||'drop '||decode(owner, 'PUBLIC', 'PUBLIC ', ' ')||'database link '||db_link||';'||''''||' from dual; ' l3,
       'select dbms_metadata.get_ddl('||''''||'DB_LINK'||''''||','||
       ''''||db_link||''''||','||''''||owner||''''||') from dual; ' l6,
       'select '||''''||'/'||''''||' from dual; ' l9
from dba_db_links
order by owner, db_link
/
spool off
set termout on

@&spoolfile

set echo on
set feedb on
set heading on
SET VERIFY on

undefine spoolfile

Killing a session in backend

Sometimes the "alter system kill session" command returns "Marked as Killed" but the session is not actually gone.

In that case we need to kill the session from the OS backend. Below are the steps:

Find the OS process id using the SID:

select vs.sid,vs.serial#, vs.username, vs.osuser,vs.sql_id, vs.process fg_pid, vp.spid bg_pid from v$session vs, v$process vp where vs.paddr = vp.addr and vs.sid=<sid>;

The BG_PID column in the output is the server (shadow) process id. Use the OS command below to kill it:

kill -9 <background process id>
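The kill itself can be done defensively: check the PID before and after. A sketch, using a throwaway sleep process in place of the real server process (in real use the PID comes from the BG_PID/spid column of the query above):

```shell
#!/bin/sh
# Sketch: kill an OS process by PID and confirm it is gone.
# In real use, PID comes from v$process.spid for the stuck session.
sleep 60 &                        # stand-in for the session's server process
PID=$!

kill -9 "$PID"
wait "$PID" 2>/dev/null || true   # reap the child so it is not left as a zombie

if ! kill -0 "$PID" 2>/dev/null; then
    echo "process $PID terminated"
fi
```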

Thursday, January 11, 2018

Output Post-processor actions failed issues in EBS R12

Environment: Oracle EBS 12.1.3, Oracle Database 11gR2

Users unable to open the out files.
Concurrent requests failed with “Post-processing of request failed error message”
One or more post-processing actions failed. Consult the OPP service log for details.
No further attempts will be made to post-process this request.

Cause:
The concurrent manager process was able to successfully invoke the Output Post-Processor (OPP) but encountered a timeout as the OPP takes longer than the value assigned to complete the job.
Solution:
Increase the value of the profile option "Concurrent: OPP Response Timeout". Bounce Apache and retest.
If the issue still exists, increase the number of Output Post Processor processes.
Additionally, ensure there is a setting of oracle.apps.fnd.cp.opp.OPPServiceThread:2:0:max_threads=5 under Parameters.

Query to check archivelog generation on hourly basis

  SELECT TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY') DAY,
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '00', 1, NULL))
            "00-01",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '01', 1, NULL))
            "01-02",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '02', 1, NULL))
            "02-03",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '03', 1, NULL))
            "03-04",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '04', 1, NULL))
            "04-05",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '05', 1, NULL))
            "05-06",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '06', 1, NULL))
            "06-07",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '07', 1, NULL))
            "07-08",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '08', 1, NULL))
            "08-09",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '09', 1, NULL))
            "09-10",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '10', 1, NULL))
            "10-11",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '11', 1, NULL))
            "11-12",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '12', 1, NULL))
            "12-13",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '13', 1, NULL))
            "13-14",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '14', 1, NULL))
            "14-15",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '15', 1, NULL))
            "15-16",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '16', 1, NULL))
            "16-17",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '17', 1, NULL))
            "17-18",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '18', 1, NULL))
            "18-19",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '19', 1, NULL))
            "19-20",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '20', 1, NULL))
            "20-21",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '21', 1, NULL))
            "21-22",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '22', 1, NULL))
            "22-23",
         SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '23', 1, NULL))
            "23-00",
         COUNT (*) TOTAL
    FROM V$ARCHIVED_LOG
WHERE ARCHIVED='YES'
GROUP BY TO_CHAR (COMPLETION_TIME, 'DD/MM/YYYY')
ORDER BY TO_DATE (DAY, 'DD/MM/YYYY') desc;
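The 24 repetitive SUM(DECODE(...)) columns above never need to be typed by hand; a small sketch that generates them:

```shell
#!/bin/sh
# Sketch: generate the 24 hourly SUM(DECODE(...)) columns of the query above.
h=0
while [ "$h" -lt 24 ]; do
    from=$(printf '%02d' "$h")
    to=$(printf '%02d' $(( (h + 1) % 24 )))
    printf "SUM (DECODE (TO_CHAR (COMPLETION_TIME, 'HH24'), '%s', 1, NULL)) \"%s-%s\",\n" \
           "$from" "$from" "$to"
    h=$(( h + 1 ))
done
```

Paste the output between the SELECT of the DAY column and the COUNT (*) TOTAL line.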


Requested URL/OA_HTML/AppsLogin was not found on this server



Applies to:

Oracle Enterprise Linux 5.5
Oracle Application Install - Version: 12.1.1
Database 11gR1

Description:

I recently came across a problem where the login page was throwing an error: 404 /OA_HTML/AppsLogin was not found on this server. The database and listener services were working fine and connections succeeded. All application services started successfully without any error or warning (exit status 0). We restarted the application and database services multiple times and even rebooted the whole machine, but the problem persisted.

Solution:


After some research I found the following solution:

The problem occurs when wsrp_service.wsdl is owned by root while the services are started as applmgr.

Check the ownership of the file
$ORA_CONFIG_HOME/10.1.3/j2ee/oacore/application-deployments/oacore/html/server-wsdl/wsrp_service.wsdl

In our case it was owned by the root user: somebody had started the Apps services as root.
We changed the ownership of the file back to the applications owner.
After restarting the services, everything worked fine.
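The ownership check can be scripted. A sketch, demonstrated on a temp file (in real use, point it at the wsrp_service.wsdl path above; the applmgr owner name in the note is illustrative):

```shell
#!/bin/sh
# Sketch: report a file's owner and flag it when root owns it.
# ls is used instead of stat because stat's flags differ across Linux/AIX.
owner_of() {
    ls -ld "$1" | awk '{print $3}'
}

F=$(mktemp)                      # stand-in for wsrp_service.wsdl
echo "owner: $(owner_of "$F")"
if [ "$(owner_of "$F")" = "root" ]; then
    echo "WARNING: owned by root - chown it back to the applications owner"
fi
```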

Error 404 When Accessing /OA_HTML/AppsLogin Not Found After Upgrade To Database 12c


Applies to:

Oracle E-Business Suite Technology Stack - Version 12.1.3 and later
Information in this document applies to any platform.

Symptoms:

 Error 404, /OA_HTML/AppsLogin not found Error when accessing AppslocalLogin.jsp

Changes Made:

      Database was upgraded recently to 12c.

Cause:

   File: $OA_HTML/../META-INF/application.xml was manually modified and contains a duplicate entry for Oracle E-Business Suite Release 12.


Solution:


To resolve the issue, execute the following steps:
1. Copy $OA_HTML/../META-INF/application.xml from backup
       OR
  cp $FND_TOP/admin/template/application_xml_1013.tmp $OA_HTML/../META-INF/application.xml

2. Restart the oacore.
3. Retest the issue.


Oacore Service Hanging at Startup in Oracle EBS R12.1.3




Before
=====
$ adopmnctl.sh status

You are running adopmnctl.sh version 120.6.12010000.5

Checking status of OPMN managed processes...
Processes in Instance: ebstest_apptest.apptest.user.com
---------------------------------+--------------------+---------+---------
ias-component | process-type | pid | status
---------------------------------+--------------------+---------+---------
OC4JGroup:default_group | OC4J:oafm | 8015 | Init
OC4JGroup:default_group | OC4J:forms | 7171 | Alive
OC4JGroup:default_group | OC4J:oacore | 3276 | Init
HTTP_Server | HTTP_Server | 2942 | Alive

Cause: This issue may occur after an OS crash, reboot, or services restart.
If the $COMMON_TOP/cacheLock file exists before the oacore service is started, the startup thread will be locked and the oacore service will hang until it times out.

Solution:
Remove $COMMON_TOP/cacheLock file which is a 0 bytes file created at startup time.

1) Stop the application services and make sure no processes are running.
2) Go to COMMON_TOP:

cd $COMMON_TOP
$ mv cacheLock cacheLock_bkp

3) Start the application services. When we checked the status it had started successfully, with no more issues. Below are the details.

$ adopmnctl.sh status

You are running adopmnctl.sh version 120.6.12010000.5

Checking status of OPMN managed processes...

Processes in Instance: ebstest_apptest.apptest.user.com
---------------------------------+--------------------+---------+---------
ias-component | process-type | pid | status
---------------------------------+--------------------+---------+---------
OC4JGroup:default_group | OC4J:oafm | 27257 | Alive
OC4JGroup:default_group | OC4J:forms | 26249 | Alive
OC4JGroup:default_group | OC4J:oacore | 24769 | Alive
HTTP_Server | HTTP_Server | 24447 | Alive
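The cleanup in the steps above can be sketched defensively: only move cacheLock aside when it exists and really is the 0-byte lock file. The temp directory here is a stand-in for the real $COMMON_TOP:

```shell
#!/bin/sh
# Sketch of the cacheLock cleanup: move the file aside only when it
# exists and is the expected zero-byte lock. Demo uses a temp COMMON_TOP.
COMMON_TOP=$(mktemp -d)
: > "$COMMON_TOP/cacheLock"      # simulate the stale 0-byte lock file

LOCK="$COMMON_TOP/cacheLock"
if [ -f "$LOCK" ] && [ ! -s "$LOCK" ]; then
    mv "$LOCK" "${LOCK}_bkp"
    echo "stale cacheLock moved aside"
fi
```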

Steps to create ORACLE_HOME after copying oracle binaries from source to destination.

   


Step 1:

Tar the ORACLE_HOME in the source instance.

tar -cvf /DUMP_TABLE/backup/11203_PROD.tar /prddb01/oracle/PROD/db/tech_st/11.2.0.3/

Step 2:

 Scp the tar file to the destination



Step 3:

  Untar the tar file in the destination

tar -xvf 11203_PROD.tar

Step 4:

  Go to $ORACLE_HOME/clone/bin and run clone.pl with ORACLE_BASE, ORACLE_HOME and ORACLE_HOME_NAME

cd $ORACLE_HOME/clone/bin

perl clone.pl ORACLE_BASE=/prod/oracle/PROD ORACLE_HOME=/prod/oracle/PROD/db/tech_st/11.2.0.3 ORACLE_HOME_NAME=11G_PRODDR

Note: If you face an error like "ORACLE_HOME already exists", perform the below steps:

*Open /etc/oraInst.loc and comment out the home you specified in the clone.pl command


*Go to $ORACLE_HOME/oui/bin and detach the home with runInstaller:
./runInstaller -silent -ignoreSysPrereqs -detachHome ORACLE_HOME=/prod/oracle/PROD/db/tech_st/11.2.0.3 ORACLE_HOME_NAME=11G_PRODDR

After runInstaller completes, run root.sh.

Now set the ORACLE_HOME environment

export ORACLE_HOME=/prod/oracle/PROD/db/tech_st/11.2.0.3
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=PRODDR


Try to connect with sqlplus. If you get a "permission denied" error, use the steps below.

cat $ORACLE_HOME/rdbms/lib/config.s

.csect H.13.NO_SYMBOL{RO}, 3
.string ""
# End csect H.13.NO_SYMBOL{RO}
.csect H.14.NO_SYMBOL{RO}, 3
.string ""



Now change the file as below.

.csect H.13.NO_SYMBOL{RO}, 3
.string "dba"
# End csect H.13.NO_SYMBOL{RO}
.csect H.14.NO_SYMBOL{RO}, 3
.string "dba"


After changing the file we should relink the binaries, using the command below.

/prod/oracle/PROD/db/tech_st/11.2.0.3/bin/relink

Now sqlplus will connect.
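Steps 1-3 above (tar on the source, copy, untar on the destination) can be rehearsed locally on throwaway directories before touching a real ORACLE_HOME; every path below is a demo stand-in:

```shell
#!/bin/sh
# Sketch: rehearse the tar / copy / untar round trip on temp directories.
SRC=$(mktemp -d)                  # stands in for the source ORACLE_HOME
DST=$(mktemp -d)                  # stands in for the destination host
TARFILE=$(mktemp)

mkdir -p "$SRC/bin"
echo demo > "$SRC/bin/sqlplus"

# Step 1: tar the home with relative paths so it extracts cleanly anywhere.
tar -cf "$TARFILE" -C "$SRC" .

# Step 2 would be scp; step 3: untar on the "destination".
tar -xf "$TARFILE" -C "$DST"

# Verify the round trip preserved the file.
cmp "$SRC/bin/sqlplus" "$DST/bin/sqlplus" && echo "round trip OK"
```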

Wednesday, January 10, 2018

Killing Oracle Scheduling Sessions




On occasion, it may be necessary to kill an Oracle session that is associated with a running job.  The first step in the process is to identify the session to be killed.

Running jobs that were scheduled using the dbms_job package can be identified using the dba_jobs_running view.
The script listed below uses this view along with the v$session and v$process views to gather all information needed about the running jobs.


set feedback off
alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';
set feedback on

select
   jr.job,
   s.username,
   s.sid,
   s.serial#,
   p.spid,
   s.lockwait,
   s.logon_time
from
   dba_jobs_running jr,
   v$session s,
   v$process p
where
   jr.sid = s.sid
and
   s.paddr = p.addr
order by
   jr.job
;


  JOB USERNAME     SID    SERIAL# SPID LOCKWAIT LOGON_TIME
----- --------- ------ ---------- ---- -------- --------------------
   42 JOB_USER     235          3 3231          23-JUN-2004 08:21:25
   99 JOB_USER      22         77 3199          23-JUN-2004 08:55:35

2 rows selected.

Running jobs that were scheduled using the dbms_scheduler package can be identified using the dba_scheduler_running_jobs view.
The jobs_running_10g.sql script listed below uses this view along with the v$session and v$process views to gather all information needed about the running jobs.




select
   rj.job_name,
   s.username,
   s.sid,
   s.serial#,
   p.spid,
   s.lockwait,
   s.logon_time
from
   dba_scheduler_running_jobs rj,
   v$session s,
   v$process p
where
   rj.session_id = s.sid
and
   s.paddr = p.addr
order by
   rj.job_name
;


JOB_NAME                   USERNAME SID SERIAL# SPID LOCK  LOGON_TIME
-------------------------- -------- --- ------- ---- ----- --------------------
TEST_FULL_JOB_DEFINITION   SYS      22     125 3199       23-JUN-2004 09:22:12

1 row selected.

Regardless of the job scheduling mechanism, the important thing to note is that there are sid, serial#, and spid values associated with the running jobs.  The sid and serial# values are necessary in order to kill the session, while the spid value is necessary if the associated operating system process or thread must be killed directly.

To kill the session from within Oracle, the sid and serial# values of the relevant session can then be substituted into the following statement:

alter system kill session 'sid,serial#';

With reference to the job listed above by the jobs_running_10g.sql script, the statement would look like this.

SQL> alter system kill session '22,125';


System altered.

RDBMS and listener logs (XML) from SQL*Plus with the V$DIAG_ALERT_EXT view


With V$DIAG_ALERT_EXT it is possible to read the logs of all databases and listeners under the ADR location, which is great for monitoring. Only one connection to a database is needed to see all the database alert files and listener logs registered inside the ADR structure, across multiple databases.


There is no GV$DIAG_ALERT_EXT.

ADR is per-node and per-database, not global. Still, with a shared ADR on a shared file system (for example with Oracle Clusterware/RAC), everything can be read from one place.

The parameter DIAGNOSTIC_DEST is the root of the Automatic Diagnostic Repository (ADR), which includes the alert files. If this parameter is not set and ORACLE_BASE is set, ORACLE_BASE is used for DIAGNOSTIC_DEST; if ORACLE_BASE is not set either, the ADR root defaults to $ORACLE_HOME/log. Oracle release 11.2 introduced new V$ diagnostic (diag) views.

select object_name from dba_objects where object_name like 'V$DIAG%' order by 1;

Describe V$DIAG_ALERT_EXT


Demo Output of a RAC instance

select distinct adr_home, component_id from v$diag_alert_ext;

ADR_HOME                                                COMPONENT_ID
------------------------------------------------------- -------------------------
diag/rdbms/joord/JOORD1                                 rdbms
diag/rdbms/TEST/TEST1                                   rdbms
diag/tnslsnr/rdbms11gr2/listener_scan2                  tnslsnr
diag/tnslsnr/rdbms11gr2/listener                        tnslsnr
diag/asm/+asm/+ASM1                                     rdbms

select * from v$diag_info;
 INST_ID NAME                                     VALUE
---------- ---------------------------------------- ------------------------------------------------------------------------------------------
         1 Diag Enabled                             TRUE
         1 ADR Base                                 /home/oracle/app/oracle
         1 ADR Home                                 /home/oracle/app/oracle/diag/rdbms/prod1/PROD1
         1 Diag Trace                               /home/oracle/app/oracle/diag/rdbms/prod1/PROD1/trace
         1 Diag Alert                               /home/oracle/app/oracle/diag/rdbms/prod1/PROD1/alert
         1 Diag Incident                            /home/oracle/app/oracle/diag/rdbms/prod1/PROD1/incident
         1 Diag Cdump                               /home/oracle/app/oracle/diag/rdbms/prod1/PROD1/cdump
         1 Health Monitor                           /home/oracle/app/oracle/diag/rdbms/prod1/PROD1/hm
         1 Default Trace File                       /home/oracle/app/oracle/diag/rdbms/prod1/PROD1/trace/PROD1_ora_3486.trc
         1 Active Problem Count                     0
         1 Active Incident Count                    0
 
SELECT * FROM V$DIAG_CRITICAL_ERROR;
FACILITY   ERROR
---------- -----------------------------------------------------------------
ORA        7445
ORA        4030
ORA        4031
ORA        29740
ORA        255
ORA        355
ORA        356
ORA        239
ORA        240
ORA        494
ORA        3137
ORA        227
ORA        353
ORA        1578
ORA        32701
ORA        32703
ORA        29770
ORA        29771
ORA        445
ORA        25319
ORA        56729
OCI        3106
OCI        3113
OCI        3135

XML markup V$DIAG_ALERT_EXT
The V$DIAG_ALERT_EXT view reads the log.xml files from the ADR location. Here are sample queries using XML markup with V$DIAG_ALERT_EXT:

set pagesize 120
set linesize 200
set long 99999
column ADR_HOME format a40
column "mylog.xml" format a180
select
      xmlelement(noentityescaping "msg",
              xmlattributes( alt.originating_timestamp as "time",
                             alt.organization_id       as "org_id",
                             alt.component_id          as "comp_id",
                             alt.message_id            as "msg_id",
                             alt.message_type          as "type",
                             alt.message_group         as "group",
                             alt.message_level         as "level",
                             alt.host_id               as "host_id",
                             alt.host_address          as "host_addr",
                             alt.process_id            as "pid_id",
                             alt.version               as "version"
                           ),
                   xmlelement("txt", message_text)
               ) as "mylog.xml"
  from
     v$diag_alert_ext alt
 order by alt.component_id, alt.inst_id, alt.originating_timestamp;

V$DIAG_ALERT_EXT (X$DBGALERTEXT) is implemented as an external table referencing the log*.xml files.
It is effectively indexed by alert date, so date-based queries are "efficient and performant" (MOS Doc ID 961682.1).
It has an inst_id column but is not RAC-aware. As of 11.2, V$DIAG_ALERT_EXT includes alerts from all ADR homes (ASM, listeners, clients, etc.). Example SQL*Plus statements:

select ORIGINATING_TIMESTAMP, MESSAGE_TEXT
from V$DIAG_ALERT_EXT
WHERE ORIGINATING_TIMESTAMP > systimestamp - INTERVAL '15' minute
and trim(COMPONENT_ID)='rdbms'
and inst_id=1
order by originating_timestamp;



set pagesize 120
set linesize 300
column adr_home format a40
column message_text format a80
select call_monitor , adr_home, inst_id, ORIGINATING_TIMESTAMP, message_text
   from ( select adr_home, inst_id, ORIGINATING_TIMESTAMP, message_text
           ,dense_rank() over (PARTITION BY adr_home order by ORIGINATING_TIMESTAMP DESC NULLS LAST) as call_monitor
              from v$diag_alert_ext )
   where
   --   call_monitor < 31
   --and
       ORIGINATING_TIMESTAMP > systimestamp - INTERVAL '24' hour
   order by ORIGINATING_TIMESTAMP;

ORACLE EXADATA FEATURES

1. Smart Flash Cache
The Exadata Storage Server layer includes flash storage as a hardware component, implemented as a set of PCI flash cards. The main benefit is of course faster access than standard disk-based access. In some cases you can instruct Exadata to KEEP a whole table in the Smart Flash Cache, which speeds up full table scans at the database layer. In Write-Back mode you can also boost DBWR or LGWR performance by writing data to the flash cache first and syncing it to the regular hard drives afterwards, which is transparent to the database engine.
2. Storage Indexes
Storage Indexes are unique to Exadata and are an intelligent storage-side implementation. A classic database index is, by definition, created to efficiently locate a given data key; Storage Indexes in Exadata, in contrast, are focused on eliminating storage regions as possible places where the data might exist. These data maps, completely transparent to the database layer, are maintained on the Exadata Storage Servers. In short, when an Exadata Storage Server scans a Storage Index and identifies the regions where the predicate value falls within the region's MIN/MAX, physical I/O occurs only for those regions. As a consequence, even when the CBO on the database server has planned a full table scan, only limited I/O is actually performed.
3. Smart Scans and Cell Offloading
Offloading is often called the secret sauce of Oracle Exadata. The main concept of offloading is to move processing from the DB nodes (database servers) to the intelligent storage layer. Even more important, offloading means a reduction in the volume of data returned to the database server, which is one of the major bottlenecks in multi-terabyte and larger databases. Eliminating the time spent transferring unnecessary data between storage and the database tier is the main problem Oracle Exadata was built to solve. Keep in mind that the terms Offloading and Smart Scan are used somewhat interchangeably.
4. Hybrid Columnar Compression
Also known as HCC, Hybrid Columnar Compression is one of the key features of Oracle Exadata, and it is only available on this engineered system. The HCC compression format is used only when data arrives via direct-path loads. There are four levels of compression: QUERY LOW (LZO, ~4x), QUERY HIGH (ZLIB, ~6x), ARCHIVE LOW (ZLIB, ~7x) and ARCHIVE HIGH (Bzip2, ~12x). Keep in mind that HCC is not a good option for OLTP systems. Mechanically, HCC stores data in a non-traditional format: although the data still resides in Oracle blocks, each with a block header, the storage is organised into logical structures called compression units (CUs), each consisting of multiple Oracle blocks.
5. IORM
I/O Resource Manager (IORM) is an Oracle Exadata feature that extends Oracle Resource Manager to the storage layer. IORM only actively manages I/O requests when needed: while a Storage Server is not fully utilised, it serves data immediately, but when a disk is under heavy utilisation the Storage Server software redirects I/O requests to the appropriate IORM queue and schedules the I/O from there according to the policies defined in your IORM plans. In general, IORM policies make it possible to prioritise databases at the intelligent storage layer, enabling workload optimisation.


Very important parameters for the major features of Exadata.

The first is CELL_OFFLOAD_PROCESSING, which relates to Smart Scans (cell scans). The default value is TRUE, which means that Smart Scans are turned on for Exadata (if you are using it). You can set this value to FALSE to turn Smart Scans off, in order to test other features or to compare speeds with and without them.
The second parameter, _KCFIS_STORAGEIDX_DISABLED, is undocumented and should only be used with the consent of Oracle Support, and only for testing purposes. It disables storage indexes: setting it to TRUE tells Oracle not to use storage index optimization on the storage cells. The default is FALSE.
Another pair of undocumented parameters (again, check with Oracle Support) relates to how Smart Scan uses Bloom filters. _BLOOM_FILTER_ENABLED is set to TRUE by default; with Exadata, Bloom filters are used for join filtering with Smart Scans. _BLOOM_PRUNING_ENABLED also defaults to TRUE. Set these parameters to FALSE to disable them.