Tuesday, August 4, 2015

Deleting a Node from an 11g R2 RAC Cluster on Linux
---------------------------------------------------

This document illustrates how to remove a node from an existing 11g R2 RAC cluster on Linux.

1. Log in to the node that is to be removed from the cluster (racnode2).

2. Check the active nodes in the cluster:

[grid@racnode2 ~]$ olsnodes -t -s -n
racnode1        1       Active  Unpinned
racnode2        2       Active  Unpinned
[grid@racnode2 ~]$

3. If Enterprise Manager is running on the node, you must stop Database Control (the EM agent), as follows:
$ emctl stop dbconsole
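
For example, a minimal sketch run from the database home on the node being removed, assuming Database Control is owned by the oracle user and the database unique name is "racdb" (both hypothetical here; on 11.2, emctl for Database Control expects ORACLE_UNQNAME to be set):

[oracle@racnode2 ~]$ export ORACLE_UNQNAME=racdb        # database unique name (hypothetical)
[oracle@racnode2 ~]$ emctl status dbconsole             # check whether DB Control is running
[oracle@racnode2 ~]$ emctl stop dbconsole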

4. As the root user, issue the following commands on the node being removed to deconfigure the Oracle Clusterware stack:

[root@racnode2 ~]# cd $GRID_HOME/crs/install/
[root@racnode2 install]# ./rootcrs.pl -deconfig -force

Using configuration parameter file: ./crsconfig_params
Network exists: 1/10.10.1.0/255.255.255.0/eth1, type static
VIP exists: /racnode1-vip/10.10.1.11/10.10.1.0/255.255.255.0/eth1, hosting node racnode1
VIP exists: /racnode2-vip/10.10.1.21/10.10.1.0/255.255.255.0/eth1, hosting node racnode2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'racnode2'
CRS-2677: Stop of 'ora.registry.acfs' on 'racnode2' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode2'
CRS-2673: Attempting to stop 'ora.crsd' on 'racnode2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racnode2'
CRS-2673: Attempting to stop 'ora.CRS_DG.dg' on 'racnode2'
CRS-2677: Stop of 'ora.CRS_DG.dg' on 'racnode2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racnode2'
CRS-2677: Stop of 'ora.asm' on 'racnode2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racnode2' has completed
CRS-2677: Stop of 'ora.crsd' on 'racnode2' succeeded
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racnode2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racnode2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racnode2'
CRS-2673: Attempting to stop 'ora.evmd' on 'racnode2'
CRS-2673: Attempting to stop 'ora.asm' on 'racnode2'
CRS-2677: Stop of 'ora.evmd' on 'racnode2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racnode2' succeeded
CRS-2677: Stop of 'ora.asm' on 'racnode2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racnode2'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racnode2' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'racnode2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'racnode2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racnode2'
CRS-2677: Stop of 'ora.cssd' on 'racnode2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'racnode2'
CRS-2677: Stop of 'ora.crf' on 'racnode2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racnode2'
CRS-2677: Stop of 'ora.gipcd' on 'racnode2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racnode2'
CRS-2677: Stop of 'ora.gpnpd' on 'racnode2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node   <----------------


If you are deleting multiple nodes, then run the rootcrs.pl script on each node that you are deleting.
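
For illustration, if racnode2 and a second node racnode3 (hypothetical here) were both being removed, the same deconfig would be run as root on each of them:

[root@racnode2 install]# ./rootcrs.pl -deconfig -force
[root@racnode3 install]# ./rootcrs.pl -deconfig -force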

If you are deleting all nodes from a cluster, then append the -lastnode option to the preceding command to clear the OCR and the voting disks. Run it as root:

# cd $GRID_HOME/crs/install/

# ./rootcrs.pl -deconfig -force -lastnode


If you did not use the -force option in the above command, or the node you are deleting is not accessible for you to execute the command, then the VIP resource remains running on that node. In that case, stop and remove the VIP resource manually by running the following commands as root from any node that you are not deleting:

# srvctl stop vip -i vip_name -f

e.g., # srvctl stop vip -i racnode2-vip -f

# srvctl remove vip -i vip_name -f

e.g., # srvctl remove vip -i racnode2-vip -f


Where vip_name is the VIP for the node to be deleted. If you specify multiple VIP names, then separate the names with commas and surround the list
in double quotation marks ("").
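
For example, if racnode2 and a second node racnode3 (the latter hypothetical here) were both being removed, the VIPs could be handled in one call each, run as root from a surviving node:

# srvctl stop vip -i "racnode2-vip,racnode3-vip" -f
# srvctl remove vip -i "racnode2-vip,racnode3-vip" -f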

5. From any node that you are not deleting, run the following command as root from the $GRID_HOME/bin directory to delete the node from the cluster:

crsctl delete node -n racnode2
CRS-4661: Node racnode2 successfully deleted. <----------------
[root@racnode1 ~]#

Now check:
[root@racnode1 ~]# olsnodes -s -t -n
racnode1        1       Active  Unpinned
[root@racnode1 ~]#
[root@racnode1 ~]#


6. On the node you want to delete, run the following command as the grid user from the $GRID_HOME/oui/bin directory:

[root@racnode2 ~]# su - grid
[grid@racnode2 ~]$
[grid@racnode2 ~]$
[grid@racnode2 ~]$
[grid@racnode2 bin]$ cd $GRID_HOME/oui/bin
[grid@racnode2 bin]$
[grid@racnode2 bin]$
[grid@racnode2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={racnode2}" CRS=TRUE -silent -local

[grid@racnode2 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES={racnode2}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3526 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.     < ---------------------------
[grid@racnode2 bin]$
[grid@racnode2 bin]$

7. On the node that you are deleting, run the runInstaller command as the user that installed Oracle Clusterware.
Depending on whether you have a shared or nonshared Oracle home, complete one of the following procedures:

If you have a shared home, then run the following command from the Grid_home/oui/bin directory on the node you want to delete:

$ ./runInstaller -detachHome  ORACLE_HOME=Grid_home
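
For example, with the Grid home path used in this walkthrough substituted for Grid_home, that call would look like:

$ ./runInstaller -detachHome ORACLE_HOME=/u01/app/11.2.0/grid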

For a non-shared home, deinstall the Oracle Clusterware home from the node that you want to delete by running the following command from the $GRID_HOME/deinstall directory, where Grid_home is the path defined for the Oracle Clusterware home:

$ ./deinstall -local    (we use this)

Caution:
If you do not specify the -local flag, then the command removes the grid infrastructure homes from every node in the cluster.


[grid@racnode2 deinstall]$ cd $GRID_HOME/deinstall
[grid@racnode2 deinstall]$
[grid@racnode2 deinstall]$ ./deinstall -local


Location of logs /tmp/deinstall2015-06-27_09-10-54PM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: racnode2
Checking for sufficient temp space availability on node(s) : 'racnode2'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2015-06-27_09-10-54PM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "racnode2"[racnode2-vip]
 >
racnode2-vip
The following information can be collected by running "/sbin/ifconfig -a" on node "racnode2"
Enter the IP netmask of Virtual IP "10.10.1.21" on node "racnode2"[255.255.255.0]
 >
255.255.255.0
Enter the network interface name on which the virtual IP address "10.10.1.21" is active
 >
eth1
Enter an address or the name of the virtual IP[]
 >
racnode2-vip
The following information can be collected by running "/sbin/ifconfig -a" on node "racnode2"
Enter the IP netmask of the virtual IP "racnode2-vip"[]
 >
255.255.255.0
Enter the network interface name on which the virtual IP address "racnode2-vip" is active
 >
eth1

Enter an address or the name of the virtual IP[]
 >


Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2015-06-27_09-10-54PM/logs/netdc_check2015-06-27_09-16-53-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER

At least one listener from the discovered listener list [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]:

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:LISTENER

At least one listener from the discovered listener list [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1] is missing in the specified listener list [LISTENER]. The Oracle home will be cleaned up, so all the listeners will not be available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2015-06-27_09-10-54PM/logs/asmcadc_check2015-06-27_09-19-15-PM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:racnode2
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'racnode2', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2015-06-27_09-10-54PM/logs/deinstall_deconfig2015-06-27_09-11-14-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2015-06-27_09-10-54PM/logs/deinstall_deconfig2015-06-27_09-11-14-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2015-06-27_09-10-54PM/logs/asmcadc_clean2015-06-27_09-19-52-PM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2015-06-27_09-10-54PM/logs/netdc_clean2015-06-27_09-19-52-PM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener on node "racnode2": LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "racnode2".

/tmp/deinstall2015-06-27_09-10-54PM/perl/bin/perl -I/tmp/deinstall2015-06-27_09-10-54PM/perl/lib -I/tmp/deinstall2015-06-27_09-10-54PM/crs/install /tmp/deinstall2015-06-27_09-10-54PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2015-06-27_09-10-54PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

Remove the directory: /tmp/deinstall2015-06-27_09-10-54PM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Delete directory '/u01/app/grid' on the local node : Done

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2015-06-27_09-10-54PM' on node 'racnode2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "racnode2"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Successfully deleted directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup was successful.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'racnode2' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'racnode2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

8. Delete the following files from the node you are deleting, as the root user:


[root@racnode2 ~]# rm -rf /etc/oraInst.loc

[root@racnode2 ~]# rm -rf /opt/ORCLfmap



9. On any node other than the node you are deleting, run the following command from the Grid_home/oui/bin directory, where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:
./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE



[grid@racnode1 ~]$ cd $GRID_HOME/oui/bin
[grid@racnode1 bin]$
[grid@racnode1 bin]$
[grid@racnode1 bin]$
[grid@racnode1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={racnode1}" CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3526 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@racnode1 bin]$
[grid@racnode1 bin]$

10. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:

$ cluvfy stage -post nodedel -n node_list [-verbose]

[grid@racnode1 ~] $ cd $GRID_HOME/bin
[grid@racnode1 bin]
[grid@racnode1 bin]
[grid@racnode1 bin]
[grid@racnode1 bin]$ cluvfy stage -post nodedel -n racnode2 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "racnode1"

CRS integrity check passed
Result:
Node removal check passed

Post-check for node removal was successful.
[grid@racnode1 bin]$


Voila!
We successfully deleted a node from an 11g R2 RAC cluster on Linux.
