Error in invoking target 'agent nmb nmo nmhs tclexec' in oracle 11g installation

When installing Oracle 11g you may get the following error:
 Error in invoking target 'agent nmb nmo nmhs tclexec' of makefile '/u01/app/oracle/product/11.1.0/db_1/sysman/lib/ins_emagent.mk'

Solution

Edit the makefile named in the error, find the line that reads '$(MK_EMAGENT_NMECTL)' and change it to '$(MK_EMAGENT_NMECTL) -lnnz11'.

Note: edit this file while the installer is still open; once you have made the change, just click Retry.
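
If you prefer to apply the change from the command line, a small sketch like the one below does the same edit with sed (the path assumes the ORACLE_HOME from the error above; back up the makefile first and keep the installer open so you can click Retry afterwards):

#!/bin/bash
# Minimal sketch: append -lnnz11 to the $(MK_EMAGENT_NMECTL) line of ins_emagent.mk.
# Adjust MK_FILE to your own ORACLE_HOME before running.
MK_FILE=/u01/app/oracle/product/11.1.0/db_1/sysman/lib/ins_emagent.mk
cp "$MK_FILE" "$MK_FILE.bak"                 # keep a copy of the original
sed -i 's/\$(MK_EMAGENT_NMECTL)$/$(MK_EMAGENT_NMECTL) -lnnz11/' "$MK_FILE"
grep MK_EMAGENT_NMECTL "$MK_FILE"            # verify the change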

expdp throwing ORA-04031 (stream pool) error


Summary:-
Today, while taking an expdp backup of a schema, the job failed with the following error:
UDE-04031: operation generated ORACLE error 4031
ORA-04031: unable to allocate 32 bytes of shared memory ("shared pool","select obj#,type#,ctime,mtim...","SQLA","tmp")
ORA-06512: at "SYS.KUPV$FT_INT", line 2904
ORA-06512: at "SYS.KUPC$QUE_INT", line 572
ORA-25254: time-out in LISTEN while waiting for a message
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3263
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4488
ORA-06512: at line 1

Solution:-
Data Pump jobs (both expdp and impdp) use buffer queues to transfer data from the master table
to the dump file. Instead of creating new buffer queues, Data Pump tries to use the existing
queues in the streams pool memory area, so an undersized streams pool can make the job fail
with ORA-04031.

So I increased the overall memory target, which resolved the error.
Previously the instance had only sga_max_size set (no memory_target) and hence was not using the
11g Automatic Memory Management feature.
So I created a pfile from the spfile and changed the values in the pfile like this:
SQL> create pfile from spfile;
$ vi initprim.ora
#*.sga_max_size=629145600
*.memory_target=629145600


# Here we enabled the memory_target parameter and commented out sga_max_size in the pfile.
# This lets Oracle automatically tune the SGA and PGA sizes.

SQL> startup mount pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initprim.ora';
SQL> alter database open;
SQL> create spfile from pfile;
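
After the restart it is worth confirming that the instance really is under AMM and that the streams pool received memory. A minimal check, run as SYSDBA, could look like this (the output will obviously differ per instance):

#!/bin/bash
# Minimal sketch: verify memory_target is active and see how the auto-tuned components,
# including the streams pool, are currently sized.
sqlplus -S / as sysdba <<'EOF'
show parameter memory_target
show parameter sga_max_size
select component, round(current_size/1024/1024) as size_mb
from   v$memory_dynamic_components
where  current_size > 0
order  by component;
EOF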

MEMORY_TARGET AND SGA_MAX_SIZE PARAMETERS in oracle 11g


SGA_* parameters come into play when AMM (Automatic Memory Management) is disabled.

AMM enabled = SGA and PGA sizes are automatically tuned by Oracle. We just set the MEMORY_MAX_TARGET
and MEMORY_TARGET parameters and Oracle gives the SGA and PGA whatever memory they need.
For example, if at some point the SGA has no free memory but the PGA has plenty, Oracle takes space
from the PGA and gives it to the SGA.

While AMM is enabled, SGA_TARGET and SGA_MAX_SIZE are not really used for tuning, except as a lower
bound: if you set SGA_TARGET to, say, 10G, Oracle will not take so much memory away that the SGA
drops below 10G.

If you disable AMM (that is, MEMORY_MAX_TARGET and MEMORY_TARGET are 0), Oracle will no longer
transfer memory between the SGA and PGA. From then on you need the SGA_TARGET and SGA_MAX_SIZE
parameters to be set, so Oracle knows how much space the SGA can use, and the components of the
SGA (buffer cache, large pool, etc.) can still be auto-tuned.

SGA_MAX_SIZE:-
SGA_MAX_SIZE sets the maximum value for SGA_TARGET. If SGA_MAX_SIZE is less than the sum of db_cache_size +
log_buffer + shared_pool_size + large_pool_size at initialization time, then the value of SGA_MAX_SIZE is ignored.

SGA_TARGET:-
It specifies the total amount of SGA memory available to an instance. Setting this parameter makes Oracle distribute the available memory among the various components - such as the shared pool (for SQL and PL/SQL), Java pool, large pool and buffer cache - as required.
This feature is called Automatic Shared Memory Management (ASMM). With ASMM, the parameters java_pool_size, shared_pool_size, large_pool_size and db_cache_size no longer need to be specified explicitly.
SGA_TARGET cannot be higher than SGA_MAX_SIZE.


SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA memory sizing.
+++++++++++++++++++++++++++++++++++++++++++++++++

SGA_MAX_SIZE & SGA_TARGET / MEMORY_TARGET & MEMORY_MAX_TARGET

SGA_MAX_SIZE sets the overall amount of memory the SGA can consume but is not dynamic.

The SGA_MAX_SIZE parameter is the maximum size to which the SGA memory parameters can be resized. If SGA_TARGET is set to some value, Automatic Shared Memory Management (ASMM) is enabled and the SGA_TARGET value can be adjusted upwards, but never beyond the SGA_MAX_SIZE value.

MEMORY_TARGET & MEMORY_MAX_TARGET 

With these two parameters you can manage the SGA and PGA together rather than managing them separately.

If you set SGA_TARGET, SGA_MAX_SIZE and PGA_AGGREGATE_TARGET to 0 and set MEMORY_TARGET (and optionally MEMORY_MAX_TARGET) to non zero value, Oracle will manage both SGA components and PGA together within the limit specified by you.

If MEMORY_TARGET is set to 1024MB, for example, Oracle will manage the SGA and PGA components within that 1024MB.

If MEMORY_TARGET is set to a non-zero value:

* If SGA_TARGET, SGA_MAX_SIZE and PGA_AGGREGATE_TARGET are set to 0, 60% of the memory specified by
  MEMORY_TARGET is allocated to the SGA and the remaining 40% is kept for the PGA.
* If SGA_TARGET and PGA_AGGREGATE_TARGET are set to non-zero values, these values are treated as
  minimum values.
* If SGA_TARGET is set to a non-zero value and PGA_AGGREGATE_TARGET is not set, both are still
  auto-tuned and PGA_AGGREGATE_TARGET is initialized to (MEMORY_TARGET - SGA_TARGET).
* If PGA_AGGREGATE_TARGET is set and SGA_TARGET is not set, both are still auto-tuned and
  SGA_TARGET is initialized to (MEMORY_TARGET - PGA_AGGREGATE_TARGET).
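
To move an instance that is still managed with SGA_TARGET/PGA_AGGREGATE_TARGET over to AMM, a minimal sketch of the change is shown below; the sizes are examples only, and a restart is required because MEMORY_MAX_TARGET is a static parameter:

#!/bin/bash
# Minimal sketch: switch an instance to Automatic Memory Management.
# Sizes are examples only -- on Linux also make sure /dev/shm is at least memory_max_target.
sqlplus -S / as sysdba <<'EOF'
alter system set memory_max_target=800M scope=spfile;
alter system set memory_target=600M scope=spfile;
alter system set sga_target=0 scope=spfile;
alter system set pga_aggregate_target=0 scope=spfile;
-- if sga_max_size is explicitly set in the spfile, remove it as well:
-- alter system reset sga_max_size scope=spfile sid='*';
shutdown immediate
startup
show parameter memory_target
EOF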

How to find out when a user last changed his password in oracle 11g

Summary :-
The table SYS.USER$ contains the column PTIME, which holds the time the password was last
changed. Do not confuse it with CTIME, which is the creation time, nor with LTIME, which is the
time the account was locked (if any).

SQL>SELECT NAME, ptime AS "LAST TIME CHANGED", ctime "CREATION TIME", ltime "LOCKED"
FROM USER$
WHERE ptime IS NOT NULL
ORDER BY ptime DESC;


NAME                           LAST TIME CREATION  LOCKED
------------------------------ --------- --------- ---------
APPUSER                        27-FEB-15 16-FEB-15
HELPDESK                       27-FEB-15 27-FEB-15
SOUMYA                         27-FEB-15 14-FEB-15
SYS                            25-FEB-15 15-AUG-09 15-AUG-09
SYSTEM                         25-FEB-15 15-AUG-09 15-AUG-09
BRIAN                          16-FEB-15 16-FEB-15

How to find out users with deadly roles assigned to them

SQL>select grantee, granted_role, admin_option
from   sys.dba_role_privs
where  granted_role in ('DBA', 'AQ_ADMINISTRATOR_ROLE',
                       'EXP_FULL_DATABASE', 'IMP_FULL_DATABASE',
                       'OEM_MONITOR')
  and  grantee not in ('SYS', 'SYSTEM', 'OUTLN', 'AQ_ADMINISTRATOR_ROLE',
                       'DBA', 'EXP_FULL_DATABASE', 'IMP_FULL_DATABASE',
                       'OEM_MONITOR', 'CTXSYS', 'DBSNMP', 'IFSSYS',
                       'IFSSYS$CM', 'MDSYS', 'ORDPLUGINS', 'ORDSYS',
                       'TIMESERIES_DBA');
                      
GRANTEE                        GRANTED_ROLE                   ADM
------------------------------ ------------------------------ ---
IX                             AQ_ADMINISTRATOR_ROLE          NO
GHHCWS                         DBA                            NO
BRIAN                          DBA                            NO
DATAPUMP_EXP_FULL_DATABASE     EXP_FULL_DATABASE              NO
DATAPUMP_IMP_FULL_DATABASE     EXP_FULL_DATABASE              NO
DATAPUMP_IMP_FULL_DATABASE     IMP_FULL_DATABASE              NO
OWBSYS                         AQ_ADMINISTRATOR_ROLE          NO

How to change user's password in oracle 11g

Summary :-
As a DBA you may need to log in as another user, for example to test an application after
applying a workaround for a problem. But you don't know the password and you don't want to wait
for it. You can change the password as DBA, but how do you change it back to the previous one?

The right way to proceed is:
    * Note down the current (old) password, as found in data dictionary.
    * modify the password with the command ALTER USER IDENTIFIED BY
    * connect using the new password
    * do what you wanted to do
    * reset the password with the clause IDENTIFIED BY VALUES

To get the current password :-
SQL> SET LONG 100000
SQL> SELECT dbms_metadata.get_ddl('USER','SOUMYA') FROM dual;

   CREATE USER "SOUMYA" IDENTIFIED BY VALUES 'S:25204F11525CA54FAC51C7A06E8BF5D2
47BAB12D3C68B3FEAAA688FFF92F;6673D7515E467AA4'
      DEFAULT TABLESPACE "USERS"
      TEMPORARY TABLESPACE "TEMP"

Note that we can only get the password in its hashed (IDENTIFIED BY VALUES) form.

Now change the password temporarily:-
SQL > alter user soumya identified by das;

Once done, log in to the user account:-
SQL> conn soumya/das
Connected.

Once the job is done, we can change the password back to the previous one:-
SQL> alter user soumya identified by values 'S:25204F11525CA54FAC51C7A06E8BF5D2
47BAB12D3C68B3FEAAA688FFF92F;6673D7515E467AA4' ;


Another method:-

SQL> select spare4 from user$ where name='SOUMYA';

SPARE4
--------------------------------------------------------------------------------
S:C3C54AC4597D90A7CE643D6874F8348167D20E624FC49D981A2DE2BB116F

SQL>  alter user "SOUMYA" identified by values 'S:25204F11525CA54FAC51C7A06E8BF5D2
47BAB12D3C68B3FEAAA688FFF92F';

User altered.
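
The whole round trip can also be scripted. The sketch below (username and temporary password are placeholders) saves the current CREATE USER DDL, which contains the IDENTIFIED BY VALUES clause needed for the restore, before setting the temporary password:

#!/bin/bash
# Minimal sketch: capture the current password hash, then set a temporary password.
# USERNAME and TEMP_PWD are placeholders -- substitute your own values.
USERNAME=SOUMYA
TEMP_PWD=das
sqlplus -S / as sysdba <<EOF
set long 100000 pagesize 0 feedback off
spool /tmp/${USERNAME}_ddl.sql
select dbms_metadata.get_ddl('USER','${USERNAME}') from dual;
spool off
alter user ${USERNAME} identified by ${TEMP_PWD};
EOF
echo "Original DDL (with the IDENTIFIED BY VALUES clause for the restore) saved to /tmp/${USERNAME}_ddl.sql"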



That's it.

How to find out Rman backup time in oracle 11g


SQL> SELECT * FROM (SELECT END.dt, 'Incremental' BACKUP, STAR.TIME started, END.TIME END,
DECODE(TRUNC(END.TIME - star.TIME), 0, NULL, TRUNC(END.TIME - star.TIME) || ' Days' || ' + ') ||
TO_CHAR(TO_DATE(TRUNC(MOD(END.TIME - star.TIME,1) * 86400), 'SSSSS'), 'HH24:MI:SS') TIME
FROM
(SELECT TRUNC(start_time) dt , MAX(COMPLETION_TIME) TIME FROM V$BACKUP_SET WHERE BACKUP_TYPE='I'
GROUP BY TRUNC(start_time)) END,
(SELECT TRUNC(start_time) dt, MIN(START_TIME) TIME FROM V$BACKUP_SET WHERE BACKUP_TYPE='I'
GROUP BY TRUNC(start_time)) STAR
WHERE END.dt = STAR.dt
UNION ALL
SELECT END.dt, 'Full Database' BACKUP, STAR.TIME started, END.TIME END,
DECODE(TRUNC(END.TIME - star.TIME), 0, NULL, TRUNC(END.TIME - star.TIME) || ' Days' || ' + ') ||
TO_CHAR(TO_DATE(TRUNC(MOD(END.TIME - star.TIME,1) * 86400), 'SSSSS'), 'HH24:MI:SS') TIME
FROM
(SELECT TRUNC(start_time) dt , MAX(COMPLETION_TIME) TIME FROM V$BACKUP_SET WHERE BACKUP_TYPE='D'
GROUP BY TRUNC(start_time)) END,
(SELECT TRUNC(start_time) dt, MIN(START_TIME) TIME FROM V$BACKUP_SET WHERE BACKUP_TYPE='D'
GROUP BY TRUNC(start_time)) STAR
WHERE END.dt = STAR.dt)
ORDER BY dt ASC;

DT        BACKUP        STARTED   END       TIME
--------- ------------- --------- --------- --------------------------------------------------------
18-FEB-15 Full Database 18-FEB-15 18-FEB-15 00:01:07
18-FEB-15 Incremental   18-FEB-15 18-FEB-15 00:00:16
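
A simpler alternative on 10g/11g is V$RMAN_BACKUP_JOB_DETAILS, which already records the start time, end time and an elapsed-time string for every RMAN job; a minimal sketch:

#!/bin/bash
# Minimal sketch: per-job RMAN timings straight from V$RMAN_BACKUP_JOB_DETAILS.
sqlplus -S / as sysdba <<'EOF'
col input_type format a15
col time_taken_display format a12
select input_type, status,
       to_char(start_time,'DD-MON-YY HH24:MI:SS') started,
       to_char(end_time,'DD-MON-YY HH24:MI:SS') ended,
       time_taken_display
from   v$rman_backup_job_details
order  by start_time;
EOF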

How to maintain Auditing housekeeping in oracle 11g

If you enable auditing, the audit information (stored in the table SYS.AUD$) will keep growing,
and because it lives in the SYSTEM tablespace you might have performance problems in the future.
Auditing housekeeping must therefore be set up.

1. Create a SYSTEM.AUD$_BU table stored in a different tablespace (AUDIT_DATA) to which you will
move all the audit records produced.

[oracle@server1]$ sqlplus /  as sysdba

SQL> CREATE TABLESPACE AUDIT_DATA DATAFILE '/u01/app/oracle/oradata/prim/audit_data_001.dbf' SIZE 100M AUTOEXTEND ON
LOGGING
PERMANENT
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT Auto;
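
The AUD$_BU table itself also has to exist before the procedure in step 2 can insert into it; a minimal way to create it empty, with the same structure as SYS.AUD$, is sketched below:

#!/bin/bash
# Minimal sketch: create the empty archive table in the new tablespace
# with the same structure as SYS.AUD$ (no rows copied yet).
sqlplus -S / as sysdba <<'EOF'
create table system.aud$_bu tablespace audit_data
as select * from sys.aud$ where 1=0;
EOF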

2.Create a procedure(Keep_Size_Aud_Log) that moves the rows from SYS.AUD$ to SYSTEM.AUD$_BU
SQL> CREATE OR REPLACE PROCEDURE Keep_Size_Aud_Log
IS
  rowCount NUMBER;
BEGIN
  SELECT COUNT(*) INTO rowCount FROM sys.aud$ ;
  IF rowCount > 0
  THEN
    COMMIT;
    INSERT /*+ APPEND */ INTO SYSTEM.aud$_bu (SELECT * FROM sys.aud$);
 COMMIT;
     EXECUTE IMMEDIATE 'truncate table sys.aud$';
    sys.Dbms_System.ksdwrt (3,'ORA-AUDIT TRAIL: rows moved from SYS.AUD$ to SYSTEM.AUD$_BU');
  END IF;
END Keep_Size_Aud_Log;
/

3. Execute the procedure every day at midnight with a job

SQL> DECLARE
  X NUMBER;
BEGIN
  SYS.DBMS_JOB.SUBMIT
    (
      job        => X
     ,what       => 'SYS.KEEP_SIZE_AUD_LOG;'
     ,next_date  => TO_DATE('23/02/2015 01:00:00','dd/mm/yyyy hh24:mi:ss')
     ,INTERVAL   => 'TRUNC(SYSDATE+1)'
     ,no_parse   => FALSE
    );
  COMMIT;
END;
/

Tip: To speed up searching on SYSTEM.AUD$_BU you can create 2 indexes (one on timestamp# and
the other on userid).
SQL> CREATE INDEX SYSTEM.AUD$_BU_TIME_IDX ON SYSTEM.AUD$_BU (TIMESTAMP#) NOLOGGING TABLESPACE
AUDIT_DATA;
Index created.

SQL> CREATE INDEX SYSTEM.AUD$_BU_USERID_IDX ON SYSTEM.AUD$_BU (USERID) NOLOGGING TABLESPACE
AUDIT_DATA;
Index created.
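
Once everything is in place, a quick check like the sketch below confirms that the job is registered and shows how many rows are still live in SYS.AUD$ versus already archived in SYSTEM.AUD$_BU:

#!/bin/bash
# Minimal sketch: confirm the housekeeping job exists and compare row counts in both audit tables.
sqlplus -S / as sysdba <<'EOF'
col what format a40
select job, what, to_char(next_date,'DD-MON-YY HH24:MI') next_run
from   dba_jobs
where  upper(what) like '%KEEP_SIZE_AUD_LOG%';

select (select count(*) from sys.aud$)       as live_rows,
       (select count(*) from system.aud$_bu) as archived_rows
from   dual;
EOF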

How to find out table fragmentation and fix it in oracle 11g


When you constantly run DML operations such as deletes and updates against a table, you may end
up with a fragmented table in your database. As we know, the high water mark (HWM) of a table
defines the border line between used and unused space: during a full table scan Oracle always
reads the data up to the high water mark, so the HWM is an indicator of the used blocks in the
segment.
The following tests were done on Oracle 11g on RHEL 6.

The example below shows how a table ends up full of holes, wasting space and, of course,
performance when doing full table scans on it.

1. Create the table TEST as a copy of ALL_OBJECTS.
[oracle@server1 ~]$ sqlplus appuser/appuser
SQL> create table test as select * from all_objects;
Table created
SQL> create index IDXT1 on test(owner);
Index created

SQL> select count(*) from test;

  COUNT(*)
----------
     57995

Gather table stats :-
SQL>Begin DBMS_STATS.GATHER_TABLE_STATS (
     ownname => 'APPUSER',
     tabname => 'TEST',
     degree => 2,
     cascade => TRUE,
     METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
     END;
     /

Check high water mark on db block:-
SQL>select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"
From user_tables where table_name='TEST';
 Ever Used Never Used Total rows
---------- ---------- ----------
       854          0      57986

2. Let's check the size of the table:-
SQL> select table_name,round((blocks*8),2)||'kb' "size"
 From user_tables where table_name = 'TEST';

TABLE_NAME                     size
------------------------------ ------------------------------------------
TEST                           6832kb

3. Make a huge hole by deleting almost all the rows, leaving only a handful!
SQL>DELETE FROM TEST WHERE ROWNUM < 57985;
SQL> commit;

Now again check the table size after deletion:-
SQL> select table_name,round((blocks*8),2)||'kb' "size"
 From user_tables where table_name = 'TEST';

TABLE_NAME                     size
------------------------------ ------------------------------------------
TEST                           6832kb

We see that after deleting the rows the table size is still the same.

4. But the actual data size is:-
SQL>select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
from user_tables where table_name = 'TEST';

TABLE_NAME                     size
------------------------------ ------------------------------------------
TEST                           .17kb


So the difference in size is like (6832-.17)KB = 6829.83KB


5. The solution to win back the wasted table space is to reset the high water mark.
For that we need to reorganize the fragmented table. We have 4 options to reorganize
fragmented tables:
1. alter table ... move + rebuild indexes
2. create table as select (CTAS)
3. export / truncate / import
4. shrinking the table


Option 1:-
Before going ahead with this option, first gather fresh statistics for the table.
SQL>Begin DBMS_STATS.GATHER_TABLE_STATS (
     ownname => 'APPUSER',
     tabname => 'TEST',
     degree => 2,
     cascade => TRUE,
     METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
     END;
     /


[oracle@server1 ~]$ sqlplus /  as sysdba
SQL> alter table appuser.test move;
Table altered.

SQL> select status,index_name from dba_indexes
where table_name like 'TEST%';

STATUS   INDEX_NAME
-------- ------------------------------
UNUSABLE IDXT1

SQL> ALTER INDEX APPUSER.IDXT1 REBUILD online;
SQL> select status,index_name from dba_indexes
where table_name like 'TEST%';

STATUS   INDEX_NAME
-------- ------------------------------
VALID    IDXT1

SQL> Begin DBMS_STATS.GATHER_TABLE_STATS (
     ownname => 'APPUSER',
     tabname => 'TEST',
     degree => 2,
     cascade => TRUE,
     METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
     END;
     /


Now check the table size again after moving the table:-
SQL> select table_name,owner,round((blocks*8),2)||'kb' "size"
from dba_tables where table_name like 'TEST%';
TABLE_NAME                     OWNER                          size
------------------------------ -----------------              ----------------
TEST                           APPUSER                        32kb

We see the table size has reduced and we have recovered the wasted space.

Option 2:- create table as select (CTAS)

P.S. In this procedure, keep track of the indexes created on the table, because after dropping
the table we need to recreate them.

[oracle@server1 ~]$ sqlplus /  as sysdba
SQL> select index_name,table_owner ,table_name from dba_ind_columns where table_name like 'TEST%';

INDEX_NAME                     TABLE_OWNER                    TABLE_NAME
------------------------------ ------------------------------ ------------------------------
IDXT1                          APPUSER                        TEST
SQL> create table test1 as select * from test;

Table created.

SQL> drop table test purge;

Table dropped.

SQL> rename test1 to test;(Use this query if you are the table owner)
OR
SQL> alter table appuser.test1 rename to test;(Use this query if you are doing the activity as sys user)

Table renamed.

SQL> Begin DBMS_STATS.GATHER_TABLE_STATS (
     ownname => 'APPUSER',
     tabname => 'TEST',
     degree => 2,
     cascade => TRUE,
     METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
     END;
     /

PL/SQL procedure successfully completed.

SQL> select table_name,round((blocks*8),2)||'kb' "size"
 from dba_tables
 where table_name = 'TEST';

TABLE_NAME size
------------------------------ ------------------------------------------
TEST 32kb

SQL> select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
 from user_tables
 where table_name = 'TEST';

TABLE_NAME size
------------------------------ ------------------------------------------
TEST .17kb

SQL> select status from DBA_INDEXES
 where table_name = 'TEST';

no rows selected

SQL>create index IDXT1 on APPUSER.TEST ( OWNER ) ;




Option 3:- export / truncate / import
SQL> select table_name,round((blocks*8),2)||'kb' "size"
  2   from user_tables
  3   where table_name = 'TEST';

TABLE_NAME size
---------- ------------------------------------------
TEST       6832kb


SQL> select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
  2   from user_tables
  3   where table_name = 'TEST';

TABLE_NAME size
---------- ------------------------------------------
TEST       .17kb

SQL>  select status from user_indexes where table_name = 'TEST';

STATUS
--------
VALID

Now take an export backup of the Test table:-
[oracle@server1 ~]$ expdp system/system dumpfile=test.dmp directory=DATA_PUMP_DIR tables=appuser.test
Now truncate the table
[oracle@server1 ~]$ sqlplus /  as sysdba
SQL> truncate table appuser.test;

Now let's import the table from the expdp dump.
[oracle@server1 ~]$ impdp system/system dumpfile=test.dmp tables=appuser.test directory=DATA_PUMP_DIR table_exists_action=append

SQL>EXEC dbms_redefinition.can_redef_table('APPUSER','TEST', dbms_redefinition.cons_use_pk);
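
The CAN_REDEF_TABLE call above is the eligibility check for yet another approach, online redefinition with DBMS_REDEFINITION. A rough sketch of that flow follows; the interim table name is a placeholder, and since TEST has no primary key the ROWID option is used (indexes such as IDXT1 would still have to be recreated or copied with COPY_TABLE_DEPENDENTS):

#!/bin/bash
# Minimal sketch: reorganize APPUSER.TEST online with DBMS_REDEFINITION.
# TEST_INTERIM is a placeholder interim table with the same structure as TEST.
sqlplus -S / as sysdba <<'EOF'
create table appuser.test_interim as select * from appuser.test where 1=0;

begin
  -- ROWID-based redefinition because TEST has no primary key
  dbms_redefinition.can_redef_table('APPUSER','TEST', dbms_redefinition.cons_use_rowid);
  dbms_redefinition.start_redef_table('APPUSER','TEST','TEST_INTERIM',
                                      options_flag => dbms_redefinition.cons_use_rowid);
  dbms_redefinition.sync_interim_table('APPUSER','TEST','TEST_INTERIM');
  dbms_redefinition.finish_redef_table('APPUSER','TEST','TEST_INTERIM');
end;
/

drop table appuser.test_interim purge;
EOF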

Option 4.By shrinking the  Table

SQL>Begin DBMS_STATS.GATHER_TABLE_STATS (
     ownname => 'APPUSER',
     tabname => 'TEST',
     degree => 2,
     cascade => TRUE,
     METHOD_OPT => 'FOR COLUMNS SIZE AUTO',
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
     END;
     /

SQL> alter table TEST enable row movement;
Table altered.
# The first command re-arranges the rows and the second resets the HWM.
SQL> alter table TEST shrink space compact;
Table altered.
SQL> alter table TEST shrink space;
Table altered.

For this method the tablespace has to use ASSM (Automatic Segment Space Management).

Benefits of the shrink command method:-
Unlike "alter table move", the indexes are not left in an UNUSABLE state; after the shrink the
indexes are updated as well.
It is an online operation, so you do not need downtime for the reorganization.
It does not require any extra space for the process to complete.
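
To find other candidates for reorganization before picking one of these methods, the allocated size can be compared with the size implied by the optimizer statistics; a rough sketch (an 8K block size is assumed, and statistics must be fresh for the figures to mean anything):

#!/bin/bash
# Minimal sketch: list tables whose allocated space is well above num_rows * avg_row_len.
# Assumes an 8K block size and recently gathered statistics.
sqlplus -S / as sysdba <<'EOF'
col owner format a15
col table_name format a30
select owner, table_name,
       round(blocks*8/1024,2) alloc_mb,
       round(num_rows*avg_row_len/1024/1024,2) data_mb,
       round(blocks*8/1024 - num_rows*avg_row_len/1024/1024,2) wasted_mb
from   dba_tables
where  owner = 'APPUSER'
and    blocks*8/1024 - num_rows*avg_row_len/1024/1024 > 10
order  by wasted_mb desc;
EOF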

How to check last checkpoints in the datafile headers

SQL> column file# format a10
SQL> column TO_CHAR format a60
SQL> column FILE_NAME format a30
SQL> set linesize 150
SQL> SELECT TO_CHAR(SYSDATE,'DD/MM/YYYY HH24:MI:SS') FROM dual;
TO_CHAR(SYSDATE,'DD
-------------------
05/02/2016 09:22:48

SQL> SELECT FILE# file_nr, TO_CHAR(CHECKPOINT_TIME,'DD/MM/YYYY HH24:MI:SS') checkpoint_time, NAME file_name
 FROM v$datafile_header;

   FILE_NR CHECKPOINT_TIME     FILE_NAME
---------- ------------------- ------------------------------------------------------------
         1 05/02/2016 09:22:19 /u01/app/oracle/oradata/prim/system01.dbf
         2 05/02/2016 09:22:19 /u01/app/oracle/oradata/prim/sysaux01.dbf
         3 05/02/2016 09:22:19 /u01/app/oracle/oradata/prim/undotbs01.dbf
         4 05/02/2016 09:22:19 /u01/app/oracle/oradata/prim/users01.dbf
         5 05/02/2016 09:22:19 /u01/app/oracle/oradata/prim/example01.dbf


How to enable autotrace on

Description:-
SQL> set autotrace on
SP2-0618: Cannot find the Session Identifier.  Check PLUSTRACE role is enabled
SP2-0611: Error enabling STATISTICS report

Solution:-
To use AUTOTRACE the user needs the PLUSTRACE role, which does not exist by default. The
PLUSTRACE role can be created as the SYS user by executing
/u01/app/oracle/product/11.2.0/db_1/sqlplus/admin/plustrce.sql
[oracle@server1]$ sqlplus /  as sysdba
SQL> @/u01/app/oracle/product/11.2.0/db_1/sqlplus/admin/plustrce.sql
SQL>
SQL> drop role plustrace;
drop role plustrace
          *
ERROR at line 1:
ORA-01919: role 'PLUSTRACE' does not exist


SQL> create role plustrace;

Role created.

SQL>
SQL> grant select on v_$sesstat to plustrace;

Grant succeeded.

SQL> grant select on v_$statname to plustrace;

Grant succeeded.

SQL> grant select on v_$mystat to plustrace;

Grant succeeded.

SQL> grant plustrace to dba with admin option;

Grant succeeded.

SQL>
SQL> set echo off
SQL> grant plustrace to appuser;

Grant succeeded.
SQL> conn appuser/appuser
SQL> set autotrace on

SQL> create table abc (id number);

Table created.

SQL> insert into abc values (1);

1 row created.


Execution Plan
----------------------------------------------------------

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |     1 |   100 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | ABC  |       |       |            |          |
---------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        136  recursive calls
         58  db block gets
         32  consistent gets
          0  physical reads
       6400  redo size
        843  bytes sent via SQL*Net to client
        782  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          1  rows processed



How to set username and instance name at SQL prompt

[oracle@server1 ~]$ sqlplus /  as sysdba
SQL> SET SQLPROMPT "_USER @ _CONNECT_IDENTIFIER> "
SYS @ prim> select instance_name from v$instance;
INSTANCE_NAME
----------------
prim
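
To make this prompt permanent for every SQL*Plus session, the same line can be appended to the site profile glogin.sql (a sketch; it assumes ORACLE_HOME is set in the environment):

#!/bin/bash
# Minimal sketch: persist the custom prompt in the site-wide SQL*Plus profile.
GLOGIN=$ORACLE_HOME/sqlplus/admin/glogin.sql
echo 'SET SQLPROMPT "_USER @ _CONNECT_IDENTIFIER> "' >> "$GLOGIN"
tail -1 "$GLOGIN"    # confirm the line was added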

MongoDB backup using shell script for all databases

vi /backups/mongodb_backup.sh

#!/bin/bash
# Daily mongodump of all databases, tarred up; backups older than 30 days are purged.
date1=`date +%d%m%Y_%H%M%S`
export path1=/backups/mongodb_backup
OUTPUTDIR="$path1/$date1"
# Remove backup directories older than 30 days
/usr/bin/find /backups/mongodb_backup/* -type d -mtime +30 -exec rm -r {} \; 2> /dev/null
# Dump all databases into a dated directory
mongodump -o $path1/$date1
# Compress the dump and remove the uncompressed directory
/bin/tar -zcvf  /backups/mongodb_backup/backup_$date1.tar.gz $path1/$date1
rm -rf $path1/$date1
exit

#Schedule in crontab:-
 crontab -e

0 0 * * * /backups/mongodb_backup.sh > /dev/null
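
To restore from one of these backups, unpack the archive and feed the dump directory to mongorestore; a minimal sketch (the date stamp is just an example):

#!/bin/bash
# Minimal sketch: restore all databases from one of the tarred mongodump backups.
# The date stamp below is an example -- substitute the backup you actually want.
BACKUP=/backups/mongodb_backup/backup_01012015_000001.tar.gz
tar -zxvf "$BACKUP" -C /tmp
mongorestore /tmp/backups/mongodb_backup/01012015_000001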

How to start/stop oracle database from linux command prompt

How to start/stop oracle database from linux command prompt:-
[oracle@configsrv1 ~]$vi /etc/oratab
#

# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by the Database Configuration Assistant when creating
# a database.

# A colon, ':', is used as the field terminator.  A new line terminates
# the entry.  Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third filed indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
prim:/u01/app/oracle/product/11.2.0/db_1:N

change the entry to this

prim:/u01/app/oracle/product/11.2.0/db_1:Y

and save the file.

For starting db
[oracle@configsrv1 ~] dbstart
ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener
Usage: /u01/app/oracle/product/11.2.0/db_1/bin/dbstart ORACLE_HOME
Processing Database instance "prim": log file /u01/app/oracle/product/11.2.0/db_1/startup.log

To fix the above error:

[oracle@configsrv1 ~]$ export ORACLE_HOME_LISTNER=$ORACLE_HOME

Then start the db again
[oracle@configsrv1 ~]$ dbstart $ORACLE_HOME
Processing Database instance "prim": log file /u01/app/oracle/product/11.2.0/db_1/startup.log

For stopping the db
[oracle@configsrv1 ~]$ dbshut $ORACLE_HOME
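
To avoid exporting ORACLE_HOME_LISTNER by hand in every session, the variable can be added to the oracle user's profile; a small sketch (it assumes ORACLE_HOME is already set there):

#!/bin/bash
# Minimal sketch: make the listener home available to dbstart in every login shell.
cat >> ~/.bash_profile <<'EOF'
export ORACLE_HOME_LISTNER=$ORACLE_HOME
EOF
. ~/.bash_profile    # re-read the profile in the current session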


How to enable/disable case-sensitive password in oracle 11g


From Oracle 11g onwards, passwords can be case sensitive; this is controlled by setting
SEC_CASE_SENSITIVE_LOGON = TRUE in the initialization parameter file.

SQL> show parameter SEC_CASE_SENSITIVE_LOGON

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon             boolean     TRUE

Let's check the feature.
[oracle@configsrv1 ~]$ sqlplus /  as sysdba
SQL> create user soumya identified by das;

User created.

SQL> grant connect to soumya;

Grant succeeded.

[oracle@configsrv1 ~]$ sqlplus /  as sysdba
SQL> conn soumya/das
Connected.

Now we will check by providing the password in capital letters.
SQL> conn soumya/DAS
ERROR:
ORA-01017: invalid username/password; logon denied


Warning: You are no longer connected to ORACLE.

We can disable this feature by setting the parameter value to false.
SQL> alter system set SEC_CASE_SENSITIVE_LOGON= false scope=both;

System altered.

SQL> conn soumya/DAS
Connected.

Postgresql backup shell script for all databases in linux

vi /backups/postgre_backup.sh

#!/bin/bash
# Daily pg_dumpall of all databases, tarred up; backups older than 32 days are purged.
date1=`date +%d%m%Y_%H%M%S`
export path1=/backups/postgre_backup
OUTPUTDIR="$path1/$date1"
# Remove backup directories older than 32 days
/usr/bin/find /backups/postgre_backup/* -type d -mtime +32 -exec rm -r {} \; 2> /dev/null
# Dump all databases as the postgres user
su postgres -c 'pg_dumpall' > "$OUTPUTDIR.sql"
# Compress the dump and remove the plain SQL file
/bin/tar -zcvf /backups/postgre_backup/backup_$date1.tar.gz $OUTPUTDIR.sql
rm $OUTPUTDIR.sql
exit

:wq

Schedule in crontab:-
# crontab -e

0 0 * * * /backups/postgre_backup.sh > /dev/null
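
Restoring one of these dumps is just a matter of unpacking the archive and replaying it with psql as the postgres user; a minimal sketch (the date stamp is an example):

#!/bin/bash
# Minimal sketch: restore a pg_dumpall backup taken by the script above.
# The date stamp is an example -- substitute the backup you actually want.
BACKUP=/backups/postgre_backup/backup_01012015_000001.tar.gz
tar -zxvf "$BACKUP" -C /tmp
su postgres -c "psql -f /tmp/backups/postgre_backup/01012015_000001.sql postgres"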

How to setup mongodb sharded cluster in a standalone server

Linux version :Rhel 6.3
Mongodb version: 2.6

Components of a sharded cluster:-
Every sharded cluster has three main components:
Shards: These are the actual places where the data is stored. Each shard can be a
mongod instance or a replica set.

Config Servers: The config server holds the metadata about the cluster. It is in charge of
keeping track of which shard has each piece of data.

Query Routers: The query routers are the point of interaction between the clients and the
shards. The query routers use information from the config servers to retrieve the data from
the shards.
For development purposes I am going to use three mongod instances as shards, exactly one
mongod instance as config server and one mongos instance to be a query router.

It is important to remember that, due to MongoDB restrictions, the number of config servers
needs to be either one or three. In a production environment you need three to guarantee
redundancy, but for a development environment one is enough.

*Install mongodb
Step 1. Log in to the system as the root user. First we check the OS type and whether the system is 32- or 64-bit.

# uname -a

# cat /etc/issue

Step 2. Create a yum repo file, e.g. /etc/yum.repos.d/mongodb.repo

# vi /etc/yum.repos.d/mongodb.repo
[mongodb]
name=mongodb Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1
:wq

Step 3. Install the mongodb client and server using yum

# yum install mongo-*

If you face any package error do the following
# yum erase mongo*
yum shell
> install mongodb-org
> remove mongo-10gen
> remove mongo-10gen-server
> run

Step 4. Configure the basic settings of the MongoDB database server

# vi /etc/mongod.conf
logappend=true
logpath=/var/log/mongodb/mongod.log
dbpath=/var/lib/mongo
smallfiles = true
:wq

Step 5. Start the MongoDB server

# /etc/init.d/mongod start
# chkconfig mongod on

Open another terminal and type
#mongo

Step 6. Create the directory structure like the tree below.

/u01/mongocluster/
    mongod1/
        logs/
        data/
    mongod2/
        logs/
        data/
    mongod3/
        logs/
        data/
    mongoc/
        logs/
        data/
    mongos/
        logs/
        data/

Here, the mongod1, mongod2 and mongod3 folders will be used for the shards, mongoc for the
config server and mongos for the query router.
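
The whole tree can be created in one go with brace expansion, for example:

#!/bin/bash
# Minimal sketch: create the whole directory tree for the cluster in one command.
mkdir -p /u01/mongocluster/{mongod1,mongod2,mongod3,mongoc,mongos}/{logs,data}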
               
Once the above directory structure has been created, give it the proper permissions:
[root@server1 ]# chown -Rf mongod:mongod /u01/mongocluster/
[root@server1 ]# chmod -Rf 775 /u01/mongocluster/
               
               
Step 7.
Shards configuration:-
We are going to create a mongodN.conf file inside each of the mongodN folders, replacing N with
the corresponding shard number. It is also important to assign a different port to each of the
shards; of course these ports have to be available on the host.

[root@server1 ]# cd /u01/mongocluster/mongod1
[root@server1 mongod1]vi mongod1.conf

systemLog:
  destination: file
  path: "/u01/mongocluster/mongod1/logs/mongod1.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod1/mongod1.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47018
storage:
  dbPath: "/u01/mongocluster/mongod1/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all

:wq
 
[root@server1 ]# cd /u01/mongocluster/mongod2 
[root@server1 mongod2]#vi mongod2.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongod2/logs/mongod2.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod2/mongod2.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 48018
storage:
  dbPath: "/u01/mongocluster/mongod2/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all
 
:wq

[root@server1 ]# cd /u01/mongocluster/mongod3/
[root@server1 mongod3]#vi mongod3.conf 
systemLog:
  destination: file
  path: "/u01/mongocluster/mongod3/logs/mongod3.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongod3/mongod3.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 49018
storage:
  dbPath: "/u01/mongocluster/mongod3/data"
  directoryPerDB: true
sharding:
  clusterRole: shardsvr
operationProfiling:
  mode: all
 
:wq

The important things to notice here are:
* The dbPath under the storage section must point to the correct place; you will have issues with
the files mongod creates for normal operation if two of the shards point to the same data
directory.
* sharding.clusterRole is the essential part of this configuration: it indicates that the mongod
instance is part of a sharded cluster and that its role is to be a data shard.

Step 8. 
Config server configuration
[root@server1 ]#vi /u01/mongocluster/mongoc/mongoc.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongoc/logs/mongoc.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongoc/mongoc.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47019
storage:
  dbPath: "/u01/mongocluster/mongoc/data"
  directoryPerDB: true
sharding:
  clusterRole: configsvr
operationProfiling:
  mode: "all"
 
:wq

Step 9.
Query router (Mongos)
The configuration of the query router is pretty simple. The important part is the
sharding.configDB value. The value needs to be a string containing the configuration server's
location in the form <host>:<port>.

If you have a cluster with 3 config servers you need to put the locations of the three
configuration servers, separated by commas, in the string.
Important: if you have more than one query router, make sure you use exactly the same string
for sharding.configDB in every query router.

[root@server1 ]#vi /u01/mongocluster/mongos/mongos.conf
systemLog:
  destination: file
  path: "/u01/mongocluster/mongos/logs/mongos.log"
  logAppend: true
processManagement:
  pidFilePath: "/u01/mongocluster/mongos/mongos.pid"
  fork: true
net:
  bindIp: 127.0.0.1
  port: 47017
sharding:
  configDB: "localhost:47019"
 
:wq 

Step 10. Running the sharded cluster
Starting the components
The order in which the components should be started is the following:
*shards
*config servers
*query routers

#Start the mongod shard instances 
[root@server1 ]mongod --config /u01/mongocluster/mongod1/mongod1.conf
[root@server1 ]mongod --config /u01/mongocluster/mongod2/mongod2.conf
[root@server1 ]mongod --config /u01/mongocluster/mongod3/mongod3.conf

#Start the mongod config server instance
[root@server1 ]mongod --config /u01/mongocluster/mongoc/mongoc.conf

#Start the mongos
[root@server1 ]mongos -f /u01/mongocluster/mongos/mongos.conf


Stopping the components:-
To stop the components we just need to stop the started instances.

For that we are going to use the kill command. In order to use it, we need the PID of each
of the processes. For that reason we added processManagement.pidFilePath to the configuration
files of the components: the instances store their PIDs in those files, making it easy to get
the PID of the process to kill when we want to shut down the cluster.

The following script shuts down each of the processes in case the PID file exists:
[root@server1 ] vi processkill.sh

#!/bin/bash
#Stop mongos
PID_MONGOS_FILE=/u01/mongocluster/mongos/mongos.pid
if [ -e $PID_MONGOS_FILE ]; then
    PID_MONGOS=$(cat $PID_MONGOS_FILE)
    kill $PID_MONGOS
    rm $PID_MONGOS_FILE
fi

#Stop mongo config
PID_MONGOC_FILE=/u01/mongocluster/mongoc/mongoc.pid
if [ -e $PID_MONGOC_FILE ]; then
    PID_MONGOC=$(cat $PID_MONGOC_FILE)
    kill $PID_MONGOC
    rm $PID_MONGOC_FILE
fi

#Stop mongod shard instances
PID_MONGOD1_FILE=/u01/mongocluster/mongod1/mongod1.pid
if [ -e $PID_MONGOD1_FILE ]; then
    PID_MONGOD1=$(cat $PID_MONGOD1_FILE)
    kill $PID_MONGOD1
    rm $PID_MONGOD1_FILE
fi

PID_MONGOD2_FILE=/u01/mongocluster/mongod2/mongod2.pid
if [ -e $PID_MONGOD2_FILE ]; then
    PID_MONGOD2=$(cat $PID_MONGOD2_FILE)
    kill $PID_MONGOD2
    rm $PID_MONGOD2_FILE
fi

PID_MONGOD3_FILE=/u01/mongocluster/mongod3/mongod3.pid
if [ -e $PID_MONGOD3_FILE ]; then
    PID_MONGOD3=$(cat $PID_MONGOD3_FILE)
    kill $PID_MONGOD3
    rm $PID_MONGOD3_FILE
fi

:wq

Step 11. Before using the sharded cluster
We need to register the shards we created with the configuration server. In order to do that
we connect to the cluster using the mongo client against the query router, like this:

[root@server1 ] mongo localhost:47017

Once we are connected we need to issue the following commands to add the shards to the cluster:

mongos> sh.addShard("localhost:47018")
mongos> sh.addShard("localhost:48018")
mongos> sh.addShard("localhost:49018")

To check the sharding information:-
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "version" : 4,
        "minCompatibleVersion" : 4,
        "currentVersion" : 5,
        "clusterId" : ObjectId("54d8dde8ea5c30beb58658eb")
}
  shards:
        {  "_id" : "shard0000",  "host" : "localhost:47018" }
        {  "_id" : "shard0001",  "host" : "localhost:48018" }
        {  "_id" : "shard0002",  "host" : "localhost:49018" }
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0000" }


To List Databases with Sharding Enabled:-
mongos> use config
switched to db config
mongos> db.databases.find( { "partitioned": true } )
{ "_id" : "students", "partitioned" : true, "primary" : "shard0002" }

To enable sharding on a particular database:-
sh.enableSharding("students")
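
Enabling sharding on the database is not enough on its own: each collection you want distributed still has to be sharded on a chosen shard key. A minimal sketch run through the query router (database, collection and key below are examples only):

#!/bin/bash
# Minimal sketch: shard the students.grades collection on student_id via the mongos router.
# Database, collection and shard key are examples only.
mongo localhost:47017 <<'EOF'
sh.enableSharding("students")
sh.shardCollection("students.grades", { "student_id": 1 })
sh.status()
EOF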

Define a database name as soumya:-
>use soumya

then to check your current db name:-
>db

For the new db to appear in the db list we need to add a collection to it. For instance, create a document in a customers collection like this:
db.customers.save({"firstName":"Alvin", "lastName":"Alexander"})

Next, verify that your document was created with this command:
db.customers.find()

Now check your db name:-
>show dbs

Now to add a new user in a db:-
>use soumya
>db.addUser( { user: "soumya",
              pwd: "redhat2",
              roles: [ "readWrite", "dbAdmin" ]
            } )
           
To check all the users in your current db:-
>show users
or
db.system.users.find()

To drop the database pizzas:-
use pizzas;
>db.dropDatabase()

To check current version:-
db.version()           

Done..


Alert mail script for checking oracle database up/down status

Step 1. create the shell script for checking dbstatus

vi /home/oracle/dbcheck.sh

#!/bin/bash
# Check whether the pmon background process of the instance is running and mail an alert if not.
hostname=server1.soumya.com
oracle_sid=prim
export instance_name=ora_pmon_prim
status=`ps -ef | grep -v grep | grep pmon | head -1 | awk '{print $8}'`
if [ "$status" == "$instance_name" ]; then
echo "DB IS UP"
else
echo "DB down"
echo "$oracle_sid database is down at $hostname" | mail -s "Attention!! $oracle_sid Database is Down at $hostname!! " yourmailid@gmail.com
fi
exit 0
:wq

Step 2. Schedule it for checkup in every 5 minutes
crontab -e
*/5 * * * * /home/oracle/dbcheck.sh > /dev/null

P.S. Make sure the mailx rpm is installed and sendmail or another mail service is running on the server.

Shell script for auto start of oracle and listener at boot time

Step 1. Edit /etc/oratab file.
# vi /etc/oratab
prim:/u01/app/oracle/product/11.2.0/db_1:Y

:wq

Step 2. Create a file called "/etc/init.d/dbora" as the root user

vi /etc/init.d/dbora

#!/bin/bash
# chkconfig: 345 99 10
# description: Oracle auto start-stop script.
ORA_HOME=/u01/app/oracle/product/11.2.0/db_1
ORA_OWNER=oracle
if [ ! -f $ORA_HOME/bin/dbstart ]
then
    echo "Oracle startup: cannot start"
    exit
fi
case "$1" in
    'start')
        # Start the databases flagged Y in /etc/oratab, then the listener
        su $ORA_OWNER -c "$ORA_HOME/bin/dbstart $ORA_HOME" &
        su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start" &
        touch /var/lock/subsys/dbora
        ;;
    'stop')
        su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
        su $ORA_OWNER -c "$ORA_HOME/bin/dbshut $ORA_HOME"
        rm -f /var/lock/subsys/dbora
        ;;
esac

:wq

Step 3. Give dbora file proper permission .
chmod 750 /etc/init.d/dbora
chkconfig --add dbora
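
With the script registered, the database and listener can be driven through the normal service interface as well as at boot time; a quick usage sketch:

#!/bin/bash
# Minimal sketch: drive the dbora init script by hand on RHEL 6.
service dbora start      # starts the databases flagged Y in /etc/oratab plus the listener
service dbora stop       # stops the listener and shuts the databases down
chkconfig --list dbora   # confirm the runlevel configuration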

Monitoring Flash Recovery Area space using shell script


This script checks the flash recovery area and sends a mail if it is more than 80% full.

As the oracle user:
[oracle@server1 ~]$ vi /home/oracle/flashback.sql

col ROUND(SPACE_LIMIT/1048576) heading "Space Allocated (MB)" format 999999
col round(space_used/1048576) heading "Space Used (MB)" format 99999
col round((space_used/space_limit)*100) heading " % used "  format 99999
col name format a30
set head off
select name, round(space_limit/1048576),round(space_used/1048576),round ((space_used/space_limit)*100) as "% used"
from  v$RECOVERY_FILE_DEST;
exit;

:wq

Now lets create the shell-script which will monitor the space usage of flash recovery area:-
From root user
[root@server1 ~]# vi /root/spacecheck.sh

#!/bin/bash
# Run the FRA usage report as the oracle user and alert if usage is above 80%.
su - oracle -c "sqlplus -S / as sysdba @/home/oracle/flashback.sql" > /home/oracle/test.txt
space=`grep -v '^$' /home/oracle/test.txt | head -1 | awk '{print $4}'`
if [ "$space" -gt 80 ]; then
mail -s "Attention!! Low space in Flash recovery area! " yourmailid@gmail.com < /home/oracle/test.txt
fi
exit 0

:wq

We schedule the script to run a check every 5 minutes.
[root@server1 ~]# crontab -e
*/5 * * * * /root/spacecheck.sh > /dev/null
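
For a quick manual look at what is actually filling the flash recovery area, V$RECOVERY_AREA_USAGE breaks the usage down by file type; a minimal sketch:

#!/bin/bash
# Minimal sketch: show FRA usage by file type, including space RMAN could reclaim.
sqlplus -S / as sysdba <<'EOF'
col file_type format a20
select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from   v$recovery_area_usage
order  by percent_space_used desc;
EOF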

Recover database from a backup taken before a RESETLOGS

Prior to Oracle 10g, the redo log files generated after opening the database with RESETLOGS
could not be used with the backups taken before the RESETLOGS was performed.

Therefore, whenever a resetlogs was done, it was important to take an immediate full database
backup, since all previous backups became invalid.

A RESETLOGS needs to be performed when we need to do the following activities
1)    Do a point in time recovery
2)    Recover a database using a backup of the control file

What does a RESETLOGS do?

•    Archives the current online redo logs (if they are accessible) and then erases the contents
    of the online redo logs and resets the log sequence number to 1.
•    Creates the online redo log files if they do not currently exist.
•    Updates all current datafiles and online redo logs and all subsequent archived redo logs
    with a new RESETLOGS SCN and time stamp.
   
Scenario: Here I'm trying to recover my database using a backup that was taken before a
RESETLOGS was performed.


[oracle@server1 prim]$ sqlplus /  as sysdba
SQL> select group#, member from v$logfile;

    GROUP#            MEMBER
--------------------------------------------------------------------------------
        3            /u01/app/oracle/oradata/prim/redo03.log

        2            /u01/app/oracle/oradata/prim/redo02.log

        1            /u01/app/oracle/oradata/prim/redo01.log
       
SQL> !
[oracle@server1 prim]$ cd /u01/app/oracle/oradata/prim
[oracle@server1 prim]$ ls -ltrh
total 1.5G
-rw-r-----. 1 oracle oinstall  21M Jan  8 22:22 temp01.dbf
-rw-r-----. 1 oracle oinstall  51M Jan  8 23:09 redo02.log
-rw-r-----. 1 oracle oinstall  51M Jan  8 23:09 redo03.log
-rw-r-----. 1 oracle oinstall 101M Jan  8 23:09 example01.dbf
-rw-r-----. 1 oracle oinstall 5.1M Jan  8 23:14 users01.dbf
-rw-r-----. 1 oracle oinstall  56M Jan  8 23:25 undotbs01.dbf
-rw-r-----. 1 oracle oinstall 671M Jan  8 23:25 system01.dbf
-rw-r-----. 1 oracle oinstall 501M Jan  8 23:25 sysaux01.dbf
-rw-r-----. 1 oracle oinstall  51M Jan  8 23:26 redo01.log
-rw-r-----. 1 oracle oinstall 9.3M Jan  8 23:27 control01.ctl

Before removing the redo logs, let's take a fresh backup of the database.
RMAN> backup database plus archivelog;
RMAN> exit

Now we will manually remove the redologs.
[oracle@server1 prim]$ cd /u01/app/oracle/oradata/prim
[oracle@server1 prim]$ rm -rf *.log

Here's the output in alertlog file:-
[oracle@server1 u01]$ tail -100f /u01/app/oracle/diag/rdbms/prim/prim/trace/alert_prim.log
Thu Jan 08 23:44:12 2015
Thread 1 advanced to log sequence 6 (LGWR switch)
  Current log# 3 seq# 6 mem# 0: /u01/app/oracle/oradata/prim/redo03.log
Thu Jan 08 23:44:12 2015
Errors in file /u01/app/oracle/diag/rdbms/prim/prim/trace/prim_arc1_8374.trc:
ORA-00313: open failed for members of log group 2 of thread 1
ORA-00312: online log 2 thread 1: '/u01/app/oracle/oradata/prim/redo02.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Thu Jan 08 23:44:12 2015       

SQL> shut abort;
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area  413372416 bytes
Fixed Size                  2213896 bytes
Variable Size             322963448 bytes
Database Buffers           83886080 bytes
Redo Buffers                4308992 bytes
Database mounted.

Now we will recover the database up to the last available log sequence.

[oracle@server1 prim]$ rman target /
RMAN> run {
 set until logseq=6 thread=1; 
 restore database;
 recover database;
 }
executing command: SET until clause

Starting restore at 08-JAN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/prim/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/prim/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/prim/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/prim/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/prim/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/PRIM/backupset/2015_01_08/o1_mf_nnndf_TAG20150108T233907_bbxky3k1_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/PRIM/backupset/2015_01_08/o1_mf_nnndf_TAG20150108T233907_bbxky3k1_.bkp tag=TAG20150108T233907
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:36
Finished restore at 08-JAN-15

Starting recover at 08-JAN-15
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_08/o1_mf_1_4_bbxkztt9_.arc
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_08/o1_mf_1_4_bbxkztt9_.arc thread=1 sequence=4
unable to find archived log
archived log thread=1 sequence=5
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 01/08/2015 23:51:10
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 5 and starting SCN of 1034128

We can ignore the above error. It is generated because Oracle does not stop recovery at the last
archive log available in the backup sets as recorded in the restored control file; it keeps
requesting the "next" archive log sequence number (5). In fact, there is no log sequence 5
recorded in the restored control file.

Now open the database using resetlogs option.
RMAN> alter database open resetlogs;

Now lets insert some data into a schema.
[oracle@server1 prim]$ sqlplus soumya/soumya

SQL> create table t3 (id number);
Table created.
SQL> insert into t3 values(1);
1 row created.
SQL> /
1 row created.
SQL> /
1 row created.
SQL> commit;
Commit complete.
SQL> exit

Now lets do a few log switches.

[oracle@server1 prim]$ sqlplus /  as sysdba
SQL> alter system switch logfile;
System altered.
SQL> alter system switch logfile;
System altered.
SQL> /
System altered.
SQL> /   
SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     4
Next log sequence to archive   6
Current log sequence           6
SQL>

Now lets remove the redolog files once again.
NOTE: No Backup has been taken after opening the database with RESETLOGS option.
[oracle@server1 prim]$ cd /u01/app/oracle/oradata/prim
[oracle@server1 prim]$ rm -rf *.log

SQL> shut abort;
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  413372416 bytes
Fixed Size                  2213896 bytes
Variable Size             327157752 bytes
Database Buffers           79691776 bytes
Redo Buffers                4308992 bytes
Database mounted.
ORA-03113: end-of-file on communication channel
Process ID: 9221
Session ID: 1 Serial number: 5
SQL> exit

[oracle@server1 prim]$ rman target /
RMAN> startup mount

Oracle instance started
database mounted

Total System Global Area     413372416 bytes

Fixed Size                     2213896 bytes
Variable Size                327157752 bytes
Database Buffers              79691776 bytes
Redo Buffers                   4308992 bytes

RMAN> list incarnation of database;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       PRIM     4130009889       PARENT  1          15-AUG-09
2       2       PRIM     4130009889       PARENT  945184     24-JAN-14
3       3       PRIM     4130009889       CURRENT 1034129    08-JAN-15

Now we have to recover the database once again, up to log sequence 6.

RMAN> run {
2> set until sequence=6 thread =1;
3> restore database;
4> recover database;
5>  }

executing command: SET until clause

Starting restore at 09-JAN-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/prim/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/prim/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/prim/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/prim/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/prim/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/PRIM/backupset/2015_01_08/o1_mf_nnndf_TAG20150108T233907_bbxky3k1_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/PRIM/backupset/2015_01_08/o1_mf_nnndf_TAG20150108T233907_bbxky3k1_.bkp tag=TAG20150108T233907
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 09-JAN-15

Starting recover at 09-JAN-15
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_08/o1_mf_1_4_bbxkztt9_.arc
archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_1_bbxmmqq9_.arc
archived log for thread 1 with sequence 2 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_2_bbxn97nj_.arc
archived log for thread 1 with sequence 3 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_3_bbxn98kg_.arc
archived log for thread 1 with sequence 4 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_4_bbxn99oh_.arc
archived log for thread 1 with sequence 5 is already on disk as file /u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_5_bbxn9dmp_.arc
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_08/o1_mf_1_4_bbxkztt9_.arc thread=1 sequence=4
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_1_bbxmmqq9_.arc thread=1 sequence=1
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_2_bbxn97nj_.arc thread=1 sequence=2
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_3_bbxn98kg_.arc thread=1 sequence=3
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_4_bbxn99oh_.arc thread=1 sequence=4
archived log file name=/u01/app/oracle/flash_recovery_area/PRIM/archivelog/2015_01_09/o1_mf_1_5_bbxn9dmp_.arc thread=1 sequence=5
media recovery complete, elapsed time: 00:00:02
Finished recover at 09-JAN-15

RMAN> alter database open resetlogs;
database opened

RMAN> quit
[oracle@server1 prim]$ sqlplus soumya/soumya
SQL> select * from t3;

        ID
----------
         1
         1
         1
       
Now we can see that all data has been recovered up to the point of failure.