Adding a Node to an Oracle RAC Database
Summary:
==============================================================
This post provides detailed steps for an Oracle DBA and a Linux
engineer to add a new node to an existing 11gR1 (11.1.0.6) RAC database.
The most critical steps to follow are:
- Verify the existing cluster configuration
- Installation prerequisites
- Configure SSH on the new cluster member node
- Pre-install checks
- Back up the files (OCR, Voting Disk, root.sh, oraInventory)
- Add an Oracle Clusterware home to the new node using OUI in interactive mode
- Configure ONS for the new node
- Add an Oracle ASM home to the new node using OUI in interactive mode (required only if ASM has a separate home directory)
- Add an Oracle RDBMS home to the new node using OUI in interactive mode
- Reconfigure the listener on the new node using NETCA
- Add an ASM instance to the new node using DBCA
- Add a DB instance to the new node using DBCA
1. a) Verify the existing cluster configuration
[oracle@krac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    krac1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    krac2
ora.ORCL.db    application    0/0    0/1    ONLINE    ONLINE    krac1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    krac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    krac1
ora.krac1.gsd  application    0/5    0/0    ONLINE    ONLINE    krac1
ora.krac1.ons  application    0/3    0/0    ONLINE    ONLINE    krac1
ora.krac1.vip  application    0/0    0/0    ONLINE    ONLINE    krac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    krac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    krac2
ora.krac2.gsd  application    0/5    0/0    ONLINE    ONLINE    krac2
ora.krac2.ons  application    0/3    0/0    ONLINE    ONLINE    krac2
ora.krac2.vip  application    0/0    0/0    ONLINE    ONLINE    krac2
RAC Database Configuration details
[oracle@krac1 ~]$ srvctl config database -d orcl
krac1 ORCL1 /u02/app/oracle/product/11.1.0/db_1
krac2 ORCL2 /u02/app/oracle/product/11.1.0/db_1
[oracle@krac1 ~]$ srvctl status database -d orcl
Instance ORCL1 is running on node krac1
Instance ORCL2 is running on node krac2
Automatic Storage Management (ASM) Configuration details
[oracle@krac1 ~]$ srvctl config asm -n krac1
+ASM1 /u02/app/oracle/product/11.1.0/asm_1
[oracle@krac1 ~]$ srvctl config asm -n krac2
+ASM2 /u02/app/oracle/product/11.1.0/asm_1
[oracle@krac1 ~]$ srvctl status asm -n krac1
ASM instance +ASM1 is running on node krac1.
[oracle@krac1 ~]$ srvctl status asm -n krac2
ASM instance +ASM2 is running on node krac2.
Nodeapps Configuration Details
[oracle@krac1 ~]$ srvctl config nodeapps -n krac1
VIP exists.: /krac1-vip/152.168.1.60/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.
[oracle@krac1 ~]$ srvctl config nodeapps -n krac2
VIP exists.: /krac2-vip/152.168.1.61/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.
Listener Configuration Details
[oracle@krac1 ~]$ srvctl config listener -n krac1
krac1 LISTENER_KRAC1
[oracle@krac1 ~]$ srvctl status nodeapps -n krac1
VIP is running on node: krac1
GSD is running on node: krac1
Listener is running on node: krac1
ONS daemon is running on node: krac1
[oracle@krac1 ~]$ srvctl status nodeapps -n krac2
VIP is running on node: krac2
GSD is running on node: krac2
Listener is running on node: krac2
ONS daemon is running on node: krac2
Nodes and Environment Details
[oracle@krac1 ~]$ olsnodes -n -p -i
krac1   1   krac1-priv   krac1-vip
krac2   2   krac2-priv   krac2-vip
=====================================================================
Existing nodes : krac1.dbprod.com, krac2.dbprod.com
New node       : krac3.dbprod.com
CRS_HOME       : /u01/crs11g                           -- owned by the crs OS user
ASM_HOME       : /u02/app/oracle/product/11.1.0/asm_1  -- owned by the oracle OS user
RAC_HOME       : /u02/app/oracle/product/11.1.0/db_1   -- owned by the oracle OS user
Note :
In this setup, ASM is configured in a separate home directory for high availability.
Verify the network interfaces using the Oracle interface configuration tool
[oracle@krac1 ~]$ oifcfg getif -global
eth0  152.168.1.0  global  public
eth1  192.168.1.0  global  cluster_interconnect
2. Installation Prerequisites
Refer to the Oracle 11g Release 1 RAC Installation Steps On Linux blog for more details.
3. a. Configuring SSH on the New Cluster Member Node
[root@krac3 ~]# su - crs
[crs@krac3 ~]$ mkdir ~/.ssh
[crs@krac3 ~]$ chmod 700 ~/.ssh
[crs@krac3 ~]$ cd ~/.ssh
[crs@krac3 .ssh]$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/crs/.ssh/id_rsa): <ENTER>
Enter passphrase (empty for no passphrase): <ENTER>
Enter same passphrase again: <ENTER>
Your identification has been saved in /home/crs/.ssh/id_rsa.
Your public key has been saved in /home/crs/.ssh/id_rsa.pub.
The key fingerprint is:
2e:85:e0:06:c8:14:66:65:28:83:47:db:78:71:17:01 crs@krac3.dbprod.com
[crs@krac3 .ssh]$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/crs/.ssh/id_dsa): <ENTER>
Enter passphrase (empty for no passphrase): <ENTER>
Enter same passphrase again: <ENTER>
Your identification has been saved in /home/crs/.ssh/id_dsa.
Your public key has been saved in /home/crs/.ssh/id_dsa.pub.
The key fingerprint is:
60:52:61:65:b7:df:5a:5c:b2:83:79:b4:d2:9b:9d:21 crs@krac3.dbprod.com
[crs@krac3 .ssh]$ ls -ltr
total 16
-rw-r--r-- 1 crs oinstall  402 Oct 22 21:50 id_rsa.pub
-rw------- 1 crs oinstall 1675 Oct 22 21:50 id_rsa
-rw-r--r-- 1 crs oinstall  610 Oct 22 21:50 id_dsa.pub
-rw------- 1 crs oinstall  668 Oct 22 21:50 id_dsa
b. Copy the authorized_keys file from an existing node with scp
(on krac1)
su - crs
[crs@krac1 .ssh]$ scp ~/.ssh/authorized_keys krac3:$HOME/.ssh/.
c. Log in to the new node (krac3.dbprod.com)
[crs@krac3 .ssh]$ ls -ltr
total 20
-rw-r--r-- 1 crs oinstall  402 Oct 22 21:50 id_rsa.pub
-rw------- 1 crs oinstall 1675 Oct 22 21:50 id_rsa
-rw-r--r-- 1 crs oinstall  610 Oct 22 21:50 id_dsa.pub
-rw------- 1 crs oinstall  668 Oct 22 21:50 id_dsa
-rw-r--r-- 1 crs oinstall 2024 Oct 22 21:50 authorized_keys
[crs@krac3 .ssh]$ cat id_rsa.pub >> authorized_keys
[crs@krac3 .ssh]$ cat id_dsa.pub >> authorized_keys
d. Copy the latest authorized_keys file to all remaining nodes in the cluster.
su - crs
[crs@krac3 .ssh]$ scp authorized_keys krac1:$HOME/.ssh/.
[crs@krac3 .ssh]$ scp authorized_keys krac2:$HOME/.ssh/.
e. Enable SSH user equivalency on the cluster member nodes (this must be done for both the crs and oracle users):
ssh krac1 date
ssh krac2 date
ssh krac3 date
ssh krac1.dbprod.com date
ssh krac2.dbprod.com date
ssh krac3.dbprod.com date
ssh krac1-priv date
ssh krac2-priv date
ssh krac3-priv date
ssh krac1-priv.dbprod.com date
ssh krac2-priv.dbprod.com date
ssh krac3-priv.dbprod.com date
f. Repeat the same steps as the oracle user on all nodes; a small loop to automate the equivalency checks is sketched below.
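As a convenience, here is a minimal sketch (the node list is assumed from this setup) that runs the equivalency test against every address in one pass; run it as both the crs and oracle users on each node:

# Minimal sketch, assuming the krac1/krac2/krac3 naming used in this setup.
# BatchMode makes ssh fail instead of prompting, so a broken equivalency
# is reported immediately instead of hanging the loop.
for host in krac1 krac2 krac3 \
            krac1.dbprod.com krac2.dbprod.com krac3.dbprod.com \
            krac1-priv krac2-priv krac3-priv \
            krac1-priv.dbprod.com krac2-priv.dbprod.com krac3-priv.dbprod.com; do
  ssh -o BatchMode=yes "$host" date || echo "user equivalency NOT working for $host"
done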
4. Pre-install checks
a) Verify cluster health on the existing nodes (ocrcheck, cluvfy).
[oracle@krac1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :    1043916
         Used space (kbytes)      :       5612
         Available space (kbytes) :    1038304
         ID                       : 1347985972
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
                                    Device/File not configured
         Cluster registry integrity check succeeded
b) This step assumes that the installation prerequisites have already been completed on the new node.
$cluvfy stage -post hwos -n krac3
c) Check the OS version, kernel parameters, and /etc/hosts file, and ensure they are identical on all nodes.
[oracle@krac1 ~]$ uname -nrmo
krac1.dbprod.com 2.6.18-194.el5 i686 GNU/Linux
[oracle@krac1 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.5 (Tikanga)
d) Verify the lines below in the /etc/sysctl.conf file on the new node (krac3); a quick comparison sketch follows the list.
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
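To confirm the running kernel already matches, this small sketch (parameter list taken from above) prints the live values so they can be compared against the file:

# Print the live value of each required parameter on krac3. sysctl -n
# prints only the value; multi-value parameters such as kernel.sem come
# back space-separated.
for p in kernel.shmall kernel.shmmax kernel.shmmni kernel.sem \
         fs.file-max net.ipv4.ip_local_port_range \
         net.core.rmem_default net.core.rmem_max \
         net.core.wmem_default net.core.wmem_max; do
  printf '%-35s %s\n' "$p" "$(/sbin/sysctl -n $p)"
done
# After editing /etc/sysctl.conf, load the new values without a reboot:
# /sbin/sysctl -p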
e) Confirm the new node's entries (the krac3 lines below) in the /etc/hosts file on all nodes; a resolution check is sketched after the listing.
##################################################
### LocalHost
##################################################
127.0.0.1      localhost.localdomain localhost
##################################################
### Public IP Address
##################################################
152.168.1.50   krac1.dbprod.com krac1
152.168.1.51   krac2.dbprod.com krac2
152.168.1.52   krac3.dbprod.com krac3
##################################################
### Private IP Address
##################################################
192.168.1.50   krac1-priv.dbprod.com krac1-priv
192.168.1.51   krac2-priv.dbprod.com krac2-priv
192.168.1.52   krac3-priv.dbprod.com krac3-priv
##################################################
### VIP Address
##################################################
152.168.1.60   krac1-vip.dbprod.com krac1-vip
152.168.1.61   krac2-vip.dbprod.com krac2-vip
152.168.1.62   krac3-vip.dbprod.com krac3-vip
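A quick way to confirm that every name resolves consistently is a loop like this (a sketch, assuming the hostnames used in this setup); run it on each node:

# Resolve every public, private, and VIP name via the local resolver.
# getent consults /etc/hosts (per nsswitch.conf), so a missing or
# mistyped entry is reported instead of silently ignored.
for name in krac1 krac2 krac3 \
            krac1-priv krac2-priv krac3-priv \
            krac1-vip krac2-vip krac3-vip; do
  getent hosts "$name" || echo "no /etc/hosts entry for $name"
done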
[crs@krac1 ~]$ cluvfy stage -pre crsinst -n krac1,krac2,krac3
Performing pre-checks for cluster services setup
Checking node reachability...
Node reachability check passed from node "krac1".
Checking user equivalence...
User equivalence check passed for user "crs".
Checking administrative privileges...
User existence check passed for "crs".
Group existence check passed for "oinstall".
Membership check for user "crs" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Node connectivity check passed for subnet "152.168.1.0" with node(s) krac3,krac2,krac1.
Node connectivity check passed for subnet "192.168.1.0" with node(s) krac3,krac2,krac1.
Interfaces found on subnet "152.168.1.0" that are likely candidates for VIP:
krac3 eth0:152.168.1.52
krac2 eth0:152.168.1.51 eth0:152.168.1.61
krac1 eth0:152.168.1.50 eth0:152.168.1.60
Interfaces found on subnet "192.168.1.0" that are likely candidates for VIP:
krac3 eth1:192.168.1.52
krac2 eth1:192.168.1.51
krac1 eth1:192.168.1.50
WARNING:
Could not find a suitable set of interfaces for the private interconnect.
Node connectivity check passed.
Checking system requirements for 'crs'...
Total memory check passed.
Free disk space check passed.
Swap space check passed.
System architecture check passed.
Kernel version check passed.
Package existence check passed for "make-3.81".
Package existence check passed for "binutils-2.17.50.0.6".
Package existence check passed for "gcc-4.1.1".
Package existence check passed for "libaio-0.3.106".
Package existence check passed for "libaio-devel-0.3.106".
Package existence check passed for "libstdc++-4.1.1".
Package existence check passed for "elfutils-libelf-devel-0.125".
Package existence check passed for "sysstat-7.0.0".
Package existence check passed for "compat-libstdc++-33-3.2.3".
Package existence check passed for "libgcc-4.1.1".
Package existence check passed for "libstdc++-devel-4.1.1".
Package existence check passed for "unixODBC-2.2.11".
Package existence check passed for "unixODBC-devel-2.2.11".
Package existence check passed for "glibc-2.5-12".
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "nobody".
System requirement passed for 'crs'
Pre-check for cluster services setup was successful.
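Beyond the stage check, 11gR1 cluvfy can also compare the new node directly against a known-good one; this is a sketch using krac1 as the reference node:

# "comp peer" with -refnode compares krac3's OS, kernel, and package
# configuration against krac1 and reports any properties that differ.
cluvfy comp peer -refnode krac1 -n krac3 -verbose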
5. Back up the files (OCR, Voting Disk).
[root@krac1 ~]# cd /u01/crs11g/bin
[root@krac1 bin]# ./ocrconfig -manualbackup
krac1 2012/10/22 22:58:57 /u01/crs11g/cdata/krac_cluster/backup_20121022_225857.ocr
[root@krac1 bin]# ./ocrconfig -export /u01/crs11g/cdata/krac_cluster/ocr_backup_before_adding_node.dmp
[root@krac1 bin]# ls -l /u01/crs11g/cdata/krac_cluster/ocr_backup_before_adding_node.dmp
-rw-r--r-- 1 root root 104169 Oct 22 23:00 /u01/crs11g/cdata/krac_cluster/ocr_backup_before_adding_node.dmp
[root@krac1 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
Located 1 voting disk(s).
[root@krac1 bin]# dd if=/dev/raw/raw2 of=/u01/crs11g/cdata/krac_cluster/votedisk_bkp_before_adding_node.dmp bs=1024
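A quick follow-up (an extra check, not in the original steps) is to confirm the image was captured and note the restore path for reference:

# Confirm the voting disk image exists and has a plausible size.
ls -lh /u01/crs11g/cdata/krac_cluster/votedisk_bkp_before_adding_node.dmp
# To restore (disaster recovery only, with the cluster stack down),
# reverse the copy:
# dd if=/u01/crs11g/cdata/krac_cluster/votedisk_bkp_before_adding_node.dmp of=/dev/raw/raw2 bs=1024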
Inventory and root.sh Backup
[crs@krac1 oracle]$ cat /etc/oraInst.loc
inventory_loc=/u02/app/oracle/oraInventory
inst_group=oinstall
[crs@krac1 oracle]$ cd /u02/app/oracle
[crs@krac1 oracle]$ tar -cvf oraInventory.tar oraInventory
su - crs
cd $ORA_CRS_HOME/bin
cp root.sh root.sh.bkp
su - oracle
cd $ORACLE_HOME/bin
cp root.sh root.sh.bkp
Note: Repeat the above steps on all nodes; a one-pass sketch follows.
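As a convenience, here is a hedged sketch (paths taken from this setup) that captures the inventory backup on both existing nodes from krac1 in one loop:

# Back up oraInventory on every existing node over ssh (user equivalency
# is already in place from the earlier step). -C keeps the archive paths
# relative to /u02/app/oracle.
for host in krac1 krac2; do
  ssh "$host" 'tar -cvf /u02/app/oracle/oraInventory.tar -C /u02/app/oracle oraInventory'
done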
Adding an Oracle Clusterware home to the new node using OUI in interactive mode
su - crs
cd $ORA_CRS_HOME/oui/bin
./addNode.sh
Action : Click "Next"
Action : Enter the new node's names (public, private, and virtual hostnames) and click "Next"
Action : Verify cluster add node summary and click "Install"
[root@krac3 ~]# /u02/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u02/app/oracle/oraInventory to 770.
Changing groupname of /u02/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@krac3 ~]# ssh krac1 /u01/crs11g/install/rootaddnode.sh
root@krac1's password:
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 3: krac3 krac3-priv krac3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/u01/crs11g/bin/srvctl add nodeapps -n krac3 -A krac3-vip/255.255.255.0/eth0
[root@krac3 ~]# /u01/crs11g/root.sh
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS = /dev/raw/raw1
OCR backup directory '/u01/crs11g/cdata/krac_cluster' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: krac1 krac1-priv krac1
node 2: krac2 krac2-priv krac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
krac1
krac2
krac3
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Action : Click "Exit".
Action : Click "Yes"
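Before moving on, it is worth confirming the stack on the new node (a small extra check, not part of the original transcript):

# crsctl check crs reports the health of CSS, CRS, and EVM on the local node.
[root@krac3 ~]# /u01/crs11g/bin/crsctl check crs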
[oracle@krac1 ~]$ crs_stat -t -v -c krac3
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora.krac3.gsd  application    0/5    0/0    ONLINE    ONLINE    krac3
ora.krac3.ons  application    0/3    0/0    ONLINE    ONLINE    krac3
ora.krac3.vip  application    0/0    0/0    ONLINE    ONLINE    krac3
Configure ONS for the new node.
From the first node, look at the ons.config file in the <CRS_HOME>/opmn/conf directory to determine the ONS remote port. Use the same port in racgons add_config, as shown below, so that the existing nodes can communicate with the ONS on the new node.
[crs@krac1 ~]$ cd $ORA_CRS_HOME/opmn/conf
[crs@krac1 conf]$ cat ons.config
localport=6150
useocr=on
allowgroup=true
usesharedinstall=true
On the new node (krac3), execute the commands below.
[crs@krac3 ~]$ crs_stat -t -c krac3
Name           Type           Target    State     Host
------------------------------------------------------------
ora.krac3.gsd  application    ONLINE    ONLINE    krac3
ora.krac3.ons  application    ONLINE    ONLINE    krac3
ora.krac3.vip  application    ONLINE    ONLINE    krac3
[oracle@krac3 ~]$ su - crs
[crs@krac3 ~]$ cd $ORA_CRS_HOME/bin
[crs@krac3 bin]$ ./racgons add_config krac3:6150
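To confirm ONS is actually answering on the new node, onsctl can be used (an extra check; I am assuming onsctl is available under the CRS home's opmn/bin in this release):

# Ping the local ONS daemon on krac3; a running daemon responds that ons is up.
[crs@krac3 bin]$ $ORA_CRS_HOME/opmn/bin/onsctl ping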
Adding an ASM home to the new node using OUI in interactive mode (needed only when ASM is installed in a separate home directory for high availability).
[crs@krac1 ~]$ su - oracle
[oracle@krac1 ~]$ cd $ORA_ASM_HOME/oui/bin
[oracle@krac1 bin]$ ./addNode.sh
Action : Click "Next"
Action : Review Cluster Node Addition Summary and click "Next"
Action : Nothing
After the installation completes, OUI prompts you to run root.sh as the root user. Once the root.sh script finishes, continue with the steps below.
[root@krac3 ~]# /u02/app/oracle/product/11.1.0/asm_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u02/app/oracle/product/11.1.0/asm_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
Action: Click "Next"
Adding an RDBMS home to the new node using OUI in interactive mode.
[crs@krac1 ~]$ su - oracle
[oracle@krac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@krac1 bin]$ ./addNode.sh
Action : Click "Next"
Action : Nothing
After the installation completes, OUI prompts you to run root.sh as the root user. Once the root.sh script finishes, continue with the steps below.
[root@krac3 ~]# /u02/app/oracle/product/11.1.0/db_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u02/app/oracle/product/11.1.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]: y
   Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
Action : Click "Next"
Reconfigure the listener on the new node using NETCA
[crs@krac1 ~]$ su - oracle
[oracle@krac1 ~]$ cd $ORA_ASM_HOME/bin
[oracle@krac1 bin]$ ./netca
Action : Select Cluster Configuration and click "Next"
Action : Select new node name and click "Next"
Action : Select Listener configuration and click "Next"
Action : Select add and click "Next"
Action : Enter listener name and click "Next"
Action : Click "Next"
Action : Click "Next"
Action : Select no and click "Next"
Action : Click "Next"
Action : Click "Finish"
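Once NETCA finishes, the new listener registration can be confirmed with the same srvctl checks used earlier:

# Confirm the listener resource now exists for krac3 and that nodeapps are up.
[oracle@krac1 ~]$ srvctl config listener -n krac3
[oracle@krac1 ~]$ srvctl status nodeapps -n krac3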
[oracle@krac1 bin]$ crs_stat -t -v -c krac3
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....C3.lsnr application    0/5    0/0    ONLINE    ONLINE    krac3
ora.krac3.gsd  application    0/5    0/0    ONLINE    ONLINE    krac3
ora.krac3.ons  application    0/3    0/0    ONLINE    ONLINE    krac3
ora.krac3.vip  application    0/0    0/0    ONLINE    ONLINE    krac3
Adding an ASM instance to the new node using DBCA
[crs@krac1 ~]$ su - oracle
[oracle@krac1 ~]$ cd $ORA_ASM_HOME/bin
[oracle@krac1 bin]$ ./dbca
Action : Select Oracle Real Application Cluster Database and click "Next"
Action : Select Configure Automatic Storage Management
Action : Click "Yes"
[oracle@krac1 bin]$ crs_stat -t -v -c krac3
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....SM3.asm application    0/5    0/0    ONLINE    ONLINE    krac3
ora....C3.lsnr application    0/5    0/0    ONLINE    ONLINE    krac3
ora.krac3.gsd  application    0/5    0/0    ONLINE    ONLINE    krac3
ora.krac3.ons  application    0/3    0/0    ONLINE    ONLINE    krac3
ora.krac3.vip  application    0/0    0/0    ONLINE    ONLINE    krac3
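At this point the ASM instance on krac3 can also be checked directly with srvctl, as was done for the existing nodes:

# Verify the new ASM instance's registration and state.
[oracle@krac1 ~]$ srvctl config asm -n krac3
[oracle@krac1 ~]$ srvctl status asm -n krac3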
Adding a DB instance to the new node using DBCA
su - oracle
[oracle@krac1 ~]$ cd $ORACLE_HOME/bin
[oracle@krac1 bin]$ ./dbca
Action : Select Real Application Clusters Database and click "Next"
Action : Select Instance Management and click "Next"
Action : Select Add node and click "Next"
Action : Enter the Username and password then click "Next"
Action : Click "Next"
Action : Verify Instance and new node name then click "Next"
Action : Click "Finish"
Action : Click "OK"
Action : Nothing
Action : Click "Yes"
[oracle@krac1 bin]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....L1.inst application    0/5    0/0    ONLINE    ONLINE    krac1
ora....L2.inst application    0/5    0/0    ONLINE    ONLINE    krac2
ora....L3.inst application    0/5    0/0    ONLINE    ONLINE    krac3
ora.ORCL.db    application    0/0    0/1    ONLINE    ONLINE    krac1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    krac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    krac1
ora.krac1.gsd  application    0/5    0/0    ONLINE    ONLINE    krac1
ora.krac1.ons  application    0/3    0/0    ONLINE    ONLINE    krac1
ora.krac1.vip  application    0/0    0/0    ONLINE    ONLINE    krac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    krac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    krac2
ora.krac2.gsd  application    0/5    0/0    ONLINE    ONLINE    krac2
ora.krac2.ons  application    0/3    0/0    ONLINE    ONLINE    krac2
ora.krac2.vip  application    0/0    0/0    ONLINE    ONLINE    krac2
ora....SM3.asm application    0/5    0/0    ONLINE    ONLINE    krac3
ora....C3.lsnr application    0/5    0/0    ONLINE    ONLINE    krac3
ora.krac3.gsd  application    0/5    0/0    ONLINE    ONLINE    krac3
ora.krac3.ons  application    0/3    0/0    ONLINE    ONLINE    krac3
ora.krac3.vip  application    0/0    0/0    ONLINE    ONLINE    krac3
Post-Installation Checks
cluvfy stage -post crsinst -n all -verbose
[oracle@krac1 ~]$ sqlplus system/oracle@orcl3
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Oct 29 19:55:42 2012
Copyright (c) 1982, 2007, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> col host_name for a25
SQL> col instance_name for a20
SQL> select instance_name, host_name from v$instance;

INSTANCE_NAME        HOST_NAME
-------------------- -------------------------
ORCL3                krac3.dbprod.com

SQL> select instance_name, host_name from gv$instance;

INSTANCE_NAME        HOST_NAME
-------------------- -------------------------
ORCL3                krac3.dbprod.com
ORCL2                krac2.dbprod.com
ORCL1                krac1.dbprod.com
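Finally, the srvctl views used at the start of this post should now report all three nodes (same commands as the initial verification):

# End-to-end status: database instances, ASM, and nodeapps on the new node.
[oracle@krac1 ~]$ srvctl status database -d orcl
[oracle@krac1 ~]$ srvctl status asm -n krac3
[oracle@krac1 ~]$ srvctl status nodeapps -n krac3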
Hope this helps!
Regards,
Kavin.
BE THE BEST!!! BE WITH THE BEST !!!.