Oracle Apps DBA

More than 5 years of IT experience administering enterprise-wide multi-vendor UNIX servers, Oracle databases (8i to 11gR2), middle tiers, applications, and clusters. I am a Sun Certified System Administrator (SCSA) for Solaris 10, an Oracle Database 10g Certified Associate (OCA), an Oracle Database 10g Certified Professional (OCP), and an Oracle E-Business Suite R12 Certified Professional (OCP).

Oracle Database and Applications

Sunday, 18 May 2014

Adding new node to 12c RAC Cluster

This is a step-by-step article on adding a new node to an existing two-node 12c Grid Infrastructure cluster. I demonstrated this procedure on VirtualBox. The post will be helpful both for those who want to practice adding nodes for testing/demonstration purposes and for those who want to add a node in a real production environment.

Please read my previous article to have a clear understanding of this deployment:

"Oracle 12c RAC Installation on Linux using VirtualBox"


High-level steps for adding a node to a 12c Grid Infrastructure cluster:

1) Install the OS (same as the existing cluster nodes)
2) Configure shared storage
3) Configure the network (public, private, and virtual)
4) Configure all OS prerequisites (users, groups, directories, kernel parameters, etc.)
5) Run cluvfy to verify the node addition
6) Add the node to the existing 12c cluster
7) Verify services on all cluster nodes
8) Add a database instance on the third node
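As a quick reference, the commands behind steps 5 to 7 can be sketched as below. This is only a dry-run outline that prints the commands rather than executing them, since they require a live cluster; the Grid home /u01/grid_12c is the one used in this deployment, so adjust it for your environment. The actual execution is walked through in the sections that follow.

```shell
# Dry-run outline of the node-addition commands (printed, not executed).
GRID_HOME=/u01/grid_12c
NEW_NODE=racnode3
NEW_VIP=racnode3-vip

# Step 5: pre-check the new node from an existing node as the GI owner
echo "cluvfy stage -pre nodeadd -n ${NEW_NODE} -verbose"

# Step 6: extend the Grid home to the new node (12c addnode.sh silent syntax;
# run from an existing node as the GI owner)
echo "${GRID_HOME}/addnode/addnode.sh -silent \"CLUSTER_NEW_NODES={${NEW_NODE}}\" \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={${NEW_VIP}}\""

# Step 7: verify cluster resources on all nodes afterwards
echo "crsctl stat res -t"
```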

Existing network configuration of 2 Node RAC:



Network configuration of new node:



1) Install the OS (same as the existing cluster nodes)

In my case, I am cloning a pre-configured OS image (prepared before the RAC installation) to the new node. Alternatively, you can install a fresh OS and configure everything from the beginning.


- Now add the new machine to VirtualBox:







- Select the cloned disk "racnode3"























- Configure two network interfaces (public and private)




2) Configure shared storage

- Attach the disks that are configured as shared disks on the existing cluster nodes:






- Similarly, add all the shared disks to the virtual machine.



- The disks should be attached on the same ports as on the existing RAC nodes.

- If you are using SAN/NAS storage, the pseudo device names should be configured identically to those on the existing RAC nodes.
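For SAN/NAS storage managed by device-mapper multipath, identical pseudo names are typically enforced with alias entries keyed on the device WWID. A minimal sketch of such a stanza follows; the WWIDs and aliases here are placeholders for illustration, not values from this deployment:

```
# /etc/multipath.conf (sketch; WWIDs and aliases are placeholders and
# must be identical on every cluster node)
multipaths {
    multipath {
        wwid   360a98000686f6959684a453333356564
        alias  crsdisk1
    }
    multipath {
        wwid   360a98000686f6959684a453333356565
        alias  datadisk1
    }
}
```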

3) Configure the network (public, private, and virtual)

Configure the public, private, and virtual network. The IP addresses should be in the same subnets as the existing RAC nodes.

- Execute the command "system-config-network" and configure the IP addresses as below:





- Now add these entries to the DNS server. This is required only if you are using a DNS server to resolve the SCAN name; if you are not, you can skip this step.





[root@racnode1 ~]# ifconfig -a
eth2      Link encap:Ethernet  HWaddr 08:00:27:BF:9D:56  
          inet addr:192.168.1.49  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe32:18b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:360 (360.0 b)  TX bytes:7586 (7.4 KiB)

eth3      Link encap:Ethernet  HWaddr 08:00:27:6E:E6:AA  
          inet addr:10.10.15.23  Bcast:10.10.15.255  Mask:255.0.0.0
          inet6 addr: fe80::a00:27ff:feda:a7f1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:6902 (6.7 KiB)  TX bytes:4481 (4.3 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:960 (960.0 b)  TX bytes:960 (960.0 b)




- When we clone an existing machine in VirtualBox, udev assigns new interface names (eth2/eth3 above) to the network cards. In RAC, the interface names must be identical across all nodes in the cluster.

We need to edit two configuration files to change these interface names:

1 -  /etc/sysconfig/network-scripts/ifcfg-
2 -  /etc/udev/rules.d/70-persistent-net.rules


[root@racnode1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT="no"
#HWADDR=08:00:27:BF:9D:56
HWADDR=08:00:27:32:01:8B
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.1.41
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DOMAIN=oralabs.com
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03
[root@racnode1 ~]# 



----


[root@racnode1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
DEVICE="eth1"
NM_CONTROLLED="yes"
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.10.15.21
PREFIX=8
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth1"
UUID=9c92fad9-6ecb-3e6c-eb4d-8a47c6f50c04
#HWADDR=08:00:27:6E:E6:AA
HWADDR=08:00:27:DA:A7:F1
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DOMAIN=oralabs.com
[root@racnode1 ~]# 


- Edit "/etc/udev/rules.d/70-persistent-net.rules"
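The edit itself is not shown above, so here is a sketch of what the resulting file should look like on racnode3, using the new MAC addresses from the ifcfg files: delete the stale entries inherited from the source machine and set NAME back to eth0/eth1.

```
# /etc/udev/rules.d/70-persistent-net.rules (sketch for racnode3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:32:01:8b", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:da:a7:f1", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
```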



- Restart the network services and verify the network interface cards:

[root@racnode1 ~]# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 08:00:27:32:01:8B  
          inet addr:192.168.1.49  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe32:18b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:360 (360.0 b)  TX bytes:7586 (7.4 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:DA:A7:F1  
          inet addr:10.10.15.23  Bcast:10.10.15.255  Mask:255.0.0.0
          inet6 addr: fe80::a00:27ff:feda:a7f1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:6902 (6.7 KiB)  TX bytes:4481 (4.3 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:960 (960.0 b)  TX bytes:960 (960.0 b)

- Change the hostname

Edit "/etc/sysconfig/network" and change the HOSTNAME entry from racnode1 to racnode3:

[root@racnode1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=racnode3
[root@racnode1 ~]# 

- Restart Virtual Machine

- Scan ASM Disks:

[root@racnode3 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@racnode3 ~]# oracleasm listdisks
CRSDISK1
CRSDISK2
DATADISK1
DATADISK2
[root@racnode3 ~]# 


- Configure the "/etc/hosts" file, and add the new node's entries on the existing cluster nodes as well:

[root@racnode3 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain 
#RAC hosts IP addresses
#################PUBLIC################
192.168.1.49 racnode3.oralabs.com racnode3 localhost
192.168.1.41 racnode1.oralabs.com racnode1 
192.168.1.42 racnode2.oralabs.com racnode2 
################PRIVATE###############
10.10.15.21 racnode1-priv.oralabs.com racnode1-priv
10.10.15.22 racnode2-priv.oralabs.com racnode2-priv
10.10.15.23 racnode3-priv.oralabs.com racnode3-priv
################VIP###################
192.168.1.47 racnode1-vip.oralabs.com racnode1-vip
192.168.1.48 racnode2-vip.oralabs.com racnode2-vip
192.168.1.50 racnode3-vip.oralabs.com racnode3-vip
[root@racnode3 ~]# 


- Verify connectivity between the nodes

[root@racnode3 ~]# ping racnode3
PING racnode3.oralabs.com (192.168.1.49) 56(84) bytes of data.
64 bytes from racnode3.oralabs.com (192.168.1.49): icmp_seq=1 ttl=64 time=0.026 ms
^C
--- racnode3.oralabs.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 576ms
rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms
[root@racnode3 ~]# ping racnode2
PING racnode2.oralabs.com (192.168.1.42) 56(84) bytes of data.
64 bytes from racnode2.oralabs.com (192.168.1.42): icmp_seq=1 ttl=64 time=0.453 ms
64 bytes from racnode2.oralabs.com (192.168.1.42): icmp_seq=2 ttl=64 time=0.497 ms
^C
--- racnode2.oralabs.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1071ms
rtt min/avg/max/mdev = 0.453/0.475/0.497/0.022 ms
[root@racnode3 ~]# ping racnode1
PING racnode1.oralabs.com (192.168.1.41) 56(84) bytes of data.
64 bytes from racnode1.oralabs.com (192.168.1.41): icmp_seq=1 ttl=64 time=2.67 ms
64 bytes from racnode1.oralabs.com (192.168.1.41): icmp_seq=2 ttl=64 time=0.462 ms
^C
--- racnode1.oralabs.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1305ms
rtt min/avg/max/mdev = 0.462/1.569/2.677/1.108 ms
[root@racnode3 ~]# 

---------------------------

[root@racnode3 ~]# ping racnode3-priv
PING racnode3-priv.oralabs.com (10.10.15.23) 56(84) bytes of data.
64 bytes from racnode3-priv.oralabs.com (10.10.15.23): icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from racnode3-priv.oralabs.com (10.10.15.23): icmp_seq=2 ttl=64 time=0.048 ms
^C
--- racnode3-priv.oralabs.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1423ms
rtt min/avg/max/mdev = 0.048/0.051/0.055/0.008 ms
[root@racnode3 ~]# ping racnode2-priv
PING racnode2-priv.oralabs.com (10.10.15.22) 56(84) bytes of data.
64 bytes from racnode2-priv.oralabs.com (10.10.15.22): icmp_seq=1 ttl=64 time=1.05 ms
64 bytes from racnode2-priv.oralabs.com (10.10.15.22): icmp_seq=2 ttl=64 time=0.548 ms
^C
--- racnode2-priv.oralabs.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1217ms
rtt min/avg/max/mdev = 0.548/0.802/1.056/0.254 ms
[root@racnode3 ~]# ping racnode1-priv
PING racnode1-priv.oralabs.com (10.10.15.21) 56(84) bytes of data.
64 bytes from racnode1-priv.oralabs.com (10.10.15.21): icmp_seq=1 ttl=64 time=1.39 ms
64 bytes from racnode1-priv.oralabs.com (10.10.15.21): icmp_seq=2 ttl=64 time=0.805 ms
^C
--- racnode1-priv.oralabs.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1443ms
rtt min/avg/max/mdev = 0.805/1.098/1.391/0.293 ms
[root@racnode3 ~]# 



- Verify that the SCAN name resolves from the new node (racnode3):

[root@racnode3 ~]# nslookup scan-rac12c
Server:  192.168.1.1
Address: 192.168.1.1#53

Name: scan-rac12c.oralabs.com
Address: 192.168.1.45
Name: scan-rac12c.oralabs.com
Address: 192.168.1.44
Name: scan-rac12c.oralabs.com
Address: 192.168.1.46

[root@racnode3 ~]# 



4) Configure all OS prerequisites (users, groups, directories, kernel parameters, SSH setup, etc.)

- Since we cloned the virtual machine from a pre-configured OS image, there is no need to create the users, groups, and directories, or to set the kernel parameters.

- Install the "cvuqdisk" RPM

[root@racnode2 sw_home]# scp cvuqdisk-1.0.9-1.rpm  root@racnode3:/tmp
The authenticity of host 'racnode3 (192.168.1.49)' can't be established.
RSA key fingerprint is 89:ce:83:34:f6:99:5f:33:bf:9b:7f:a4:ea:92:60:1e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode3,192.168.1.49' (RSA) to the list of known hosts.
root@racnode3's password: 
cvuqdisk-1.0.9-1.rpm                                                                                                                            100% 8827     8.6KB/s   00:00    
[root@racnode2 sw_home]# 


[root@racnode3 tmp]# export CVUQDISK_GRP=dbarac
[root@racnode3 tmp]# rpm -ivh cvuqdisk-1.0.9-1.rpm 
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
[root@racnode3 tmp]# 


- Configure the SSH setup for racnode3

Use the "sshUserSetup.sh" script from the sshsetup directory of the 12c Grid Infrastructure software.

Execute the script as the GI software owner and provide the password configured on the other nodes:

[oracle@racnode3 sshsetup]$ ./sshUserSetup.sh -user oracle -hosts "racnode1 racnode2 racnode3" -advance -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2014-05-12-15-08-30.log
Hosts are racnode1 racnode2 racnode3
user is oracle
Platform:- Linux 
Checking if the remote hosts are reachable
PING racnode1.oralabs.com (192.168.1.41) 56(84) bytes of data.
64 bytes from racnode1.oralabs.com (192.168.1.41): icmp_seq=1 ttl=64 time=0.569 ms
64 bytes from racnode1.oralabs.com (192.168.1.41): icmp_seq=2 ttl=64 time=0.678 ms
64 bytes from racnode1.oralabs.com (192.168.1.41): icmp_seq=3 ttl=64 time=0.887 ms
64 bytes from racnode1.oralabs.com (192.168.1.41): icmp_seq=4 ttl=64 time=0.703 ms
64 bytes from racnode1.oralabs.com (192.168.1.41): icmp_seq=5 ttl=64 time=0.707 ms

--- racnode1.oralabs.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.569/0.708/0.887/0.107 ms
PING racnode2.oralabs.com (192.168.1.42) 56(84) bytes of data.
64 bytes from racnode2.oralabs.com (192.168.1.42): icmp_seq=1 ttl=64 time=1.03 ms
64 bytes from racnode2.oralabs.com (192.168.1.42): icmp_seq=2 ttl=64 time=0.749 ms
64 bytes from racnode2.oralabs.com (192.168.1.42): icmp_seq=3 ttl=64 time=0.705 ms
64 bytes from racnode2.oralabs.com (192.168.1.42): icmp_seq=4 ttl=64 time=0.577 ms
64 bytes from racnode2.oralabs.com (192.168.1.42): icmp_seq=5 ttl=64 time=0.463 ms

--- racnode2.oralabs.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4004ms
rtt min/avg/max/mdev = 0.463/0.706/1.036/0.193 ms
PING racnode3.oralabs.com (192.168.1.49) 56(84) bytes of data.
64 bytes from racnode3.oralabs.com (192.168.1.49): icmp_seq=1 ttl=64 time=0.097 ms
64 bytes from racnode3.oralabs.com (192.168.1.49): icmp_seq=2 ttl=64 time=0.056 ms
64 bytes from racnode3.oralabs.com (192.168.1.49): icmp_seq=3 ttl=64 time=0.045 ms
64 bytes from racnode3.oralabs.com (192.168.1.49): icmp_seq=4 ttl=64 time=0.050 ms
64 bytes from racnode3.oralabs.com (192.168.1.49): icmp_seq=5 ttl=64 time=0.049 ms

--- racnode3.oralabs.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.045/0.059/0.097/0.020 ms
Remote host reachability check succeeded.
The following hosts are reachable: racnode1 racnode2 racnode3.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost racnode1
numhosts 3
The script will setup SSH connectivity from the host racnode3 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host racnode3
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/oracle/.ssh/config, it would be backed up to /home/oracle/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host with empty passphrase
Generating public/private rsa key pair.
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
36:cd:30:b3:1c:a2:9e:a7:a2:9d:1b:35:76:71:2d:e8 oracle@racnode3
The key's randomart image is:
+--[ RSA 1024]----+
|                 |
|       . .       |
|      + B .      |
|     o = X       |
|    = E S o      |
|   + + . .       |
|  . o .          |
| ..o o           |
|..+o.            |
+-----------------+
Creating .ssh directory and setting permissions on remote host racnode1
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racnode1. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode1.
Warning: Permanently added 'racnode1,192.168.1.41' (RSA) to the list of known hosts.
oracle@racnode1's password: 
Done with creating .ssh directory and setting permissions on remote host racnode1.
Creating .ssh directory and setting permissions on remote host racnode2
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racnode2. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode2.
Warning: Permanently added 'racnode2,192.168.1.42' (RSA) to the list of known hosts.
oracle@racnode2's password: 
Done with creating .ssh directory and setting permissions on remote host racnode2.
Creating .ssh directory and setting permissions on remote host racnode3
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR oracle. THIS IS AN SSH REQUIREMENT.
The script would create ~oracle/.ssh/config file on remote host racnode3. If a config file exists already at ~oracle/.ssh/config, it would be backed up to ~oracle/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host racnode3.
Warning: Permanently added 'racnode3,192.168.1.49' (RSA) to the list of known hosts.
oracle@racnode3's password: 
Done with creating .ssh directory and setting permissions on remote host racnode3.
Copying local host public key to the remote host racnode1
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode1.
oracle@racnode1's password: 
Done copying local host public key to the remote host racnode1
Copying local host public key to the remote host racnode2
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode2.
oracle@racnode2's password: 
Done copying local host public key to the remote host racnode2
Copying local host public key to the remote host racnode3
The user may be prompted for a password or passphrase here since the script would be using SCP for host racnode3.
oracle@racnode3's password: 
Done copying local host public key to the remote host racnode3
cat: /home/oracle/.ssh/known_hosts.tmp: No such file or directory
cat: /home/oracle/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user oracle.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~oracle or ~oracle/.ssh on the remote host may not be owned by oracle.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the /sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--racnode1:--
Running /usr/bin/ssh -x -l oracle racnode1 date to verify SSH connectivity has been setup from local host to racnode1.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Mon May 12 15:09:04 AST 2014
------------------------------------------------------------------------
--racnode2:--
Running /usr/bin/ssh -x -l oracle racnode2 date to verify SSH connectivity has been setup from local host to racnode2.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Mon May 12 15:09:05 AST 2014
------------------------------------------------------------------------
--racnode3:--
Running /usr/bin/ssh -x -l oracle racnode3 date to verify SSH connectivity has been setup from local host to racnode3.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Mon May 12 15:09:05 AST 2014
------------------------------------------------------------------------
SSH verification complete.
[oracle@racnode3 sshsetup]$


- Verify the SSH setup from all cluster nodes:


racnode1:

[oracle@racnode1 12c_DB]$ ssh racnode2 date
Mon May 12 15:21:30 AST 2014
[oracle@racnode1 12c_DB]$ ssh racnode3 date
Mon May 12 15:21:33 AST 2014
[oracle@racnode1 12c_DB]$ 

====================

racnode2:

[oracle@racnode2 .ssh]$ cd
[oracle@racnode2 ~]$ ssh racnode1 date
Mon May 12 15:23:13 AST 2014
[oracle@racnode2 ~]$ ssh racnode3 date
Mon May 12 15:23:21 AST 2014
[oracle@racnode2 ~]$ 


==========================

racnode3:

[oracle@racnode3 12c_GI]$ ssh racnode1 date
Mon May 12 15:23:42 AST 2014
[oracle@racnode3 12c_GI]$ ssh racnode2 date
Mon May 12 15:23:47 AST 2014
[oracle@racnode3 12c_GI]$ 


5) Run cluvfy to verify the node addition

- Run cluvfy to verify that all prerequisites are in place and the node is ready to be added to the existing cluster.



[oracle@racnode1 ~]$ cluvfy stage -pre nodeadd -n racnode3 -verbose > cluvfy_nodeadd_racnode3_1.txt

Performing pre-checks for node addition 

Checking node reachability...

Check: Node reachability from node "racnode1"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  racnode3                              yes                     
Result: Node reachability check passed from node "racnode1"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Status                  
  ------------------------------------  ------------------------
  racnode3                              passed                  
Result: User equivalence check passed for user "oracle"

Check: Package existence for "cvuqdisk" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed    
  racnode2      cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed    
  racnode1      cvuqdisk-1.0.9-1          cvuqdisk-1.0.9-1          passed    
Result: Package existence check passed for "cvuqdisk"

Checking CRS integrity...
The Oracle Clusterware is healthy on node "racnode1"

CRS integrity check passed

Clusterware version consistency passed.

Checking shared resources...

Checking CRS home location...
"/u01/grid_12c" is not shared
Result: Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  racnode1                              passed                  
  racnode2                              passed                  
  racnode3                              passed                  

Verification of the hosts config file successful


Interface information for node "racnode1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.41    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:BF:9D:56 1500  
 eth0   192.168.1.47    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:BF:9D:56 1500  
 eth0   192.168.1.45    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:BF:9D:56 1500  
 eth1   10.10.15.21     10.0.0.0        0.0.0.0         192.168.1.1     08:00:27:6E:E6:AA 1500  
 eth1   169.254.213.68  169.254.0.0     0.0.0.0         192.168.1.1     08:00:27:6E:E6:AA 1500  
 virbr0 192.168.122.1   192.168.122.0   0.0.0.0         192.168.1.1     52:54:00:E4:D7:42 1500  

Interface information for node "racnode2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.1.42    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:70:43:A5 1500  
 eth0   192.168.1.46    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:70:43:A5 1500  
 eth0   192.168.1.44    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:70:43:A5 1500  
 eth0   192.168.1.48    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:70:43:A5 1500  
 eth1   10.10.15.22     10.0.0.0        0.0.0.0         192.168.1.1     08:00:27:F6:68:BD 1500  
 eth1   169.254.245.239 169.254.0.0     0.0.0.0         192.168.1.1     08:00:27:F6:68:BD 1500  
 virbr0 192.168.122.1   192.168.122.0   0.0.0.0         192.168.1.1     52:54:00:E4:D7:42 1500  


Interface information for node "racnode3"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth1   10.10.15.23     10.0.0.0        0.0.0.0         192.168.1.1     08:00:27:DA:A7:F1 1500  
 eth0   192.168.1.49    192.168.1.0     0.0.0.0         192.168.1.1     08:00:27:32:01:8B 1500  
 virbr0 192.168.122.1   192.168.122.0   0.0.0.0         192.168.1.1     52:54:00:E4:D7:42 1500  


Check: Node connectivity using interfaces on subnet "10.0.0.0"

Check: Node connectivity of subnet "10.0.0.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  racnode3[10.10.15.23]           racnode2[10.10.15.22]           yes             
  racnode3[10.10.15.23]           racnode1[10.10.15.21]           yes             
  racnode2[10.10.15.22]           racnode1[10.10.15.21]           yes             
Result: Node connectivity passed for subnet "10.0.0.0" with node(s) racnode3,racnode2,racnode1


Check: TCP connectivity of subnet "10.0.0.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  racnode3:10.10.15.23            racnode2:10.10.15.22            passed          
  racnode3:10.10.15.23            racnode1:10.10.15.21            passed          
Result: TCP connectivity check passed for subnet "10.0.0.0"


Check: Node connectivity using interfaces on subnet "192.168.1.0"

Check: Node connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  racnode1[192.168.1.41]          racnode2[192.168.1.42]          yes             
  racnode1[192.168.1.41]          racnode3[192.168.1.49]          yes             
  racnode1[192.168.1.41]          racnode1[192.168.1.45]          yes             
  racnode1[192.168.1.41]          racnode2[192.168.1.44]          yes             
  racnode1[192.168.1.41]          racnode2[192.168.1.48]          yes             
  racnode1[192.168.1.41]          racnode1[192.168.1.47]          yes             
  racnode1[192.168.1.41]          racnode2[192.168.1.46]          yes             
  racnode2[192.168.1.42]          racnode3[192.168.1.49]          yes             
  racnode2[192.168.1.42]          racnode1[192.168.1.45]          yes             
  racnode2[192.168.1.42]          racnode2[192.168.1.44]          yes             
  racnode2[192.168.1.42]          racnode2[192.168.1.48]          yes             
  racnode2[192.168.1.42]          racnode1[192.168.1.47]          yes             
  racnode2[192.168.1.42]          racnode2[192.168.1.46]          yes             
  racnode3[192.168.1.49]          racnode1[192.168.1.45]          yes             
  racnode3[192.168.1.49]          racnode2[192.168.1.44]          yes             
  racnode3[192.168.1.49]          racnode2[192.168.1.48]          yes             
  racnode3[192.168.1.49]          racnode1[192.168.1.47]          yes             
  racnode3[192.168.1.49]          racnode2[192.168.1.46]          yes             
  racnode1[192.168.1.45]          racnode2[192.168.1.44]          yes             
  racnode1[192.168.1.45]          racnode2[192.168.1.48]          yes             
  racnode1[192.168.1.45]          racnode1[192.168.1.47]          yes             
  racnode1[192.168.1.45]          racnode2[192.168.1.46]          yes             
  racnode2[192.168.1.44]          racnode2[192.168.1.48]          yes             
  racnode2[192.168.1.44]          racnode1[192.168.1.47]          yes             
  racnode2[192.168.1.44]          racnode2[192.168.1.46]          yes             
  racnode2[192.168.1.48]          racnode1[192.168.1.47]          yes             
  racnode2[192.168.1.48]          racnode2[192.168.1.46]          yes             
  racnode1[192.168.1.47]          racnode2[192.168.1.46]          yes             
Result: Node connectivity passed for subnet "192.168.1.0" with node(s) racnode1,racnode2,racnode3


Check: TCP connectivity of subnet "192.168.1.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  racnode1:192.168.1.41           racnode2:192.168.1.42           passed          
  racnode1:192.168.1.41           racnode3:192.168.1.49           passed          
  racnode1:192.168.1.41           racnode1:192.168.1.45           passed          
  racnode1:192.168.1.41           racnode2:192.168.1.44           passed          
  racnode1:192.168.1.41           racnode2:192.168.1.48           passed          
  racnode1:192.168.1.41           racnode1:192.168.1.47           passed          
  racnode1:192.168.1.41           racnode2:192.168.1.46           passed          
Result: TCP connectivity check passed for subnet "192.168.1.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "10.0.0.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.

Check: Total memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      3.8637GB (4051340.0KB)    4GB (4194304.0KB)         failed    
  racnode1      3.8637GB (4051340.0KB)    4GB (4194304.0KB)         failed    
Result: Total memory check failed


Check: Available memory 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      3.6675GB (3845656.0KB)    50MB (51200.0KB)          passed    
  racnode1      2.5264GB (2649084.0KB)    50MB (51200.0KB)          passed    
Result: Available memory check passed

Check: Swap space 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      8GB (8388600.0KB)         3.8637GB (4051340.0KB)    passed    
  racnode1      8GB (8388600.0KB)         3.8637GB (4051340.0KB)    passed    
Result: Swap space check passed

Check: Free disk space for "racnode3:/usr,racnode3:/var,racnode3:/etc,racnode3:/sbin,racnode3:/tmp" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              racnode3      /             4.7129GB      1.0635GB      passed      
  /var              racnode3      /             4.7129GB      1.0635GB      passed      
  /etc              racnode3      /             4.7129GB      1.0635GB      passed      
  /sbin             racnode3      /             4.7129GB      1.0635GB      passed      
  /tmp              racnode3      /             4.7129GB      1.0635GB      passed      
Result: Free disk space check passed for "racnode3:/usr,racnode3:/var,racnode3:/etc,racnode3:/sbin,racnode3:/tmp"

Check: Free disk space for "racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              racnode1      /             4.6182GB      1.0635GB      passed      
  /var              racnode1      /             4.6182GB      1.0635GB      passed      
  /etc              racnode1      /             4.6182GB      1.0635GB      passed      
  /sbin             racnode1      /             4.6182GB      1.0635GB      passed      
  /tmp              racnode1      /             4.6182GB      1.0635GB      passed      
Result: Free disk space check passed for "racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp"

Check: Free disk space for "racnode3:/u01/grid_12c" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/grid_12c     racnode3      /u01          36.0518GB     6.9GB         passed      
Result: Free disk space check passed for "racnode3:/u01/grid_12c"

Check: Free disk space for "racnode1:/u01/grid_12c" 
  Path              Node Name     Mount point   Available     Required      Status      
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/grid_12c     racnode1      /u01          22.2656GB     6.9GB         passed      
Result: Free disk space check passed for "racnode1:/u01/grid_12c"

Check: User existence for "oracle" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  racnode3      passed                    exists(500)             
  racnode1      passed                    exists(500)             

Checking for multiple users with UID value 500
Result: Check for multiple users with UID value 500 passed 
Result: User existence check passed for "oracle"

Check: Run level 
  Node Name     run level                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      5                         3,5                       passed    
  racnode1      5                         3,5                       passed    
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  racnode3          hard          65536         65536         passed          
  racnode1          hard          65536         65536         passed   

Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  racnode3          soft          1024          1024          passed          
  racnode1          soft          1024          1024          passed          
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  racnode3          hard          16384         16384         passed          
  racnode1          hard          16384         16384         passed          
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes" 
  Node Name         Type          Available     Required      Status          
  ----------------  ------------  ------------  ------------  ----------------
  racnode3          soft          2047          2047          passed          
  racnode1          soft          2047          2047          passed          
Result: Soft limits check passed for "maximum user processes"

Check: System architecture 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      x86_64                    x86_64                    passed    
  racnode1      x86_64                    x86_64                    passed    
Result: System architecture check passed

Check: Kernel version 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      2.6.32-100.34.1.el6uek.x86_64  2.6.32                    passed    
  racnode1      2.6.32-100.34.1.el6uek.x86_64  2.6.32                    passed    
Result: Kernel version check passed

Check: Kernel parameter for "semmsl" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          250           250           250           passed          
  racnode3          250           250           250           passed          
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          32000         32000         32000         passed          
  racnode3          32000         32000         32000         passed          
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          100           100           100           passed          
  racnode3          100           100           100           passed          
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          128           128           128           passed          
  racnode3          128           128           128           passed          
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          4398046511104  4398046511104  2074286080    passed          
  racnode3          4398046511104  4398046511104  2074286080    passed          
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          4096          4096          4096          passed          
  racnode3          4096          4096          4096          passed          
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          1073741824    1073741824    405134        passed          
  racnode3          1073741824    1073741824    405134        passed          
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          6815744       6815744       6815744       passed          
  racnode3          6815744       6815744       6815744       passed          
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed          
  racnode3          between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed          
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          262144        262144        262144        passed          
  racnode3          262144        262144        262144        passed          
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          4194304       4194304       4194304       passed          
  racnode3          4194304       4194304       4194304       passed          
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          262144        262144        262144        passed          
  racnode3          262144        262144        262144        passed          
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          1048576       1048576       1048576       passed          
  racnode3          1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr" 
  Node Name         Current       Configured    Required      Status        Comment     
  ----------------  ------------  ------------  ------------  ------------  ------------
  racnode1          1048576       1048576       1048576       passed          
  racnode3          1048576       1048576       1048576       passed          
Result: Kernel parameter check passed for "aio-max-nr"


Check: Package existence for "binutils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      binutils-2.20.51.0.2-5.20.el6  binutils-2.20.51.0.2      passed    
  racnode1      binutils-2.20.51.0.2-5.20.el6  binutils-2.20.51.0.2      passed    
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      compat-libcap1-1.10-1     compat-libcap1-1.10       passed    
  racnode1      compat-libcap1-1.10-1     compat-libcap1-1.10       passed    
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed    
  racnode1      compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed    
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      libgcc(x86_64)-4.4.5-6.el6  libgcc(x86_64)-4.4.4      passed    
  racnode1      libgcc(x86_64)-4.4.5-6.el6  libgcc(x86_64)-4.4.4      passed    
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      libstdc++(x86_64)-4.4.5-6.el6  libstdc++(x86_64)-4.4.4   passed    
  racnode1      libstdc++(x86_64)-4.4.5-6.el6  libstdc++(x86_64)-4.4.4   passed    
Result: Package existence check passed for "libstdc++(x86_64)"


Check: Package existence for "libstdc++-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      libstdc++-devel(x86_64)-4.4.5-6.el6  libstdc++-devel(x86_64)-4.4.4  passed    
  racnode1      libstdc++-devel(x86_64)-4.4.5-6.el6  libstdc++-devel(x86_64)-4.4.4  passed    
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      sysstat-9.0.4-18.el6      sysstat-9.0.4             passed    
  racnode1      sysstat-9.0.4-18.el6      sysstat-9.0.4             passed    
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      gcc-4.4.5-6.el6           gcc-4.4.4                 passed    
  racnode1      gcc-4.4.5-6.el6           gcc-4.4.4                 passed    
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      gcc-c++-4.4.5-6.el6       gcc-c++-4.4.4             passed    
  racnode1      gcc-c++-4.4.5-6.el6       gcc-c++-4.4.4             passed    
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      ksh-20100621-6.el6        ksh-...                   passed    
  racnode1      ksh-20100621-6.el6        ksh-...                   passed    
Result: Package existence check passed for "ksh"

Check: Package existence for "make" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      make-3.81-19.el6          make-3.81                 passed    
  racnode1      make-3.81-19.el6          make-3.81                 passed    
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      glibc(x86_64)-2.12-1.25.el6  glibc(x86_64)-2.12        passed    
  racnode1      glibc(x86_64)-2.12-1.25.el6  glibc(x86_64)-2.12        passed    
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      glibc-devel(x86_64)-2.12-1.25.el6  glibc-devel(x86_64)-2.12  passed    
  racnode1      glibc-devel(x86_64)-2.12-1.25.el6  glibc-devel(x86_64)-2.12  passed    
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed    
  racnode1      libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed    
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed    
  racnode1      libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed    
Result: Package existence check passed for "libaio-devel(x86_64)"


Check: Package existence for "nfs-utils" 
  Node Name     Available                 Required                  Status    
  ------------  ------------------------  ------------------------  ----------
  racnode3      nfs-utils-1.2.3-7.el6     nfs-utils-1.2.3-15        failed    
  racnode1      nfs-utils-1.2.3-7.el6     nfs-utils-1.2.3-15        failed    
Result: Package existence check failed for "nfs-utils"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed 

Check: Current group ID 
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status                  
  ------------------------------------  ------------------------
  racnode3                              passed                  
  racnode1                              passed                  

Check for consistency of root user's primary group passed

Check: Group existence for "dbarac" 
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  racnode3      passed                    exists                  
  racnode1      passed                    exists                  
Result: Group existence check passed for "dbarac"

Checking ASMLib configuration.
  Node Name                             Status                  
  ------------------------------------  ------------------------
  racnode1                              passed                  
  racnode3                              passed                  
Result: Check for ASMLib configuration passed.

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Check: Time zone consistency 
Result: Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running

Result: Clock synchronization check using Network Time Protocol(NTP) passed


Checking to make sure user "oracle" is not in "root" group
  Node Name     Status                    Comment                 
  ------------  ------------------------  ------------------------
  racnode3      passed                    does not exist          
  racnode1      passed                    does not exist          
Result: User "oracle" is not part of "root" group. Check passed
Checking integrity of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
"domain" and "search" entries do not coexist in any  "/etc/resolv.conf" file
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
"domain" entry does not exist in any "/etc/resolv.conf" file
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
More than one "search" entry does not exist in any "/etc/resolv.conf" file
All nodes have same "search" order defined in file "/etc/resolv.conf"
Checking DNS response time for an unreachable node
  Node Name                             Status                  
  ------------------------------------  ------------------------
  racnode1                              passed                  
  racnode3                              passed                  
The DNS response time for an unreachable node is within acceptable limit on all nodes

Check for integrity of file "/etc/resolv.conf" passed


Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed


Pre-check for node addition was unsuccessful on all the nodes. 


The pre-check failed on total memory and on the nfs-utils package (the package exists, just at a lower version than required), and both failures can safely be ignored here. Since I am configuring this on VirtualBox, I am limited in how much memory I can assign to the node; if you have enough resources available, increase the memory to meet the 4GB requirement.
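For reference, the pre-node-addition report above comes from cluvfy's `stage -pre nodeadd` verification, run from an existing node as the Grid owner. A minimal sketch of assembling that command line (the Grid home path and node name are the ones used in this walkthrough; on a real cluster you would run the resulting command directly):

```shell
# Values from this walkthrough; adjust for your own cluster
GI_HOME=/u01/grid_12c
NEW_NODE=racnode3

# -fixup generates fixup scripts for correctable failures; -verbose prints the full report
CLUVFY_CMD="$GI_HOME/bin/cluvfy stage -pre nodeadd -n $NEW_NODE -fixup -verbose"
echo "$CLUVFY_CMD"
```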


6) Add node to an existing 12c cluster:

addnode.sh is the script used to add new cluster nodes. It is located under the $GI_HOME/addnode directory and must be executed from one of the existing active cluster nodes.



[oracle@racnode1 addnode]$ ./addnode.sh  "CLUSTER_NEW_NODES={racnode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode3-vip}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 4508 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 8191 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
You can find the log of this install session at:
 /u01/oraInventory/logs/addNodeActions2014-05-13_10-30-42AM.log


- You can add nodes using the -silent option, which will not prompt for any input, but I preferred to add the node in GUI mode.
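A sketch of the silent-mode equivalent of the GUI run above, executed from $GI_HOME/addnode on an existing node (the node and VIP names are the ones used in this post; on a real cluster you would run the echoed command directly):

```shell
# Hypothetical names from this walkthrough
NEW_NODE=racnode3
NEW_VIP=racnode3-vip

# Silent node addition takes the same arguments as the interactive run,
# plus -silent so no GUI is launched and no prompts appear
ADDNODE_CMD="./addnode.sh -silent \"CLUSTER_NEW_NODES={$NEW_NODE}\" \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={$NEW_VIP}\""
echo "$ADDNODE_CMD"
```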

- Follow screens:
















Running the root.sh script:


[root@racnode3 ~]# /u01/oraInventory/orainstRoot.sh
Changing permissions of /u01/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/oraInventory to dbarac.
The execution of the script is complete.
[root@racnode3 ~]# 



------------------


[root@racnode3 ~]# sh /u01/grid_12c/root.sh
Performing root user operation for Oracle 12c 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/grid_12c

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/grid_12c/crs/install/crsconfig_params
2014/05/13 10:58:57 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2014/05/13 10:59:39 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racnode3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'racnode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'racnode3'
CRS-2672: Attempting to start 'ora.evmd' on 'racnode3'
CRS-2676: Start of 'ora.mdnsd' on 'racnode3' succeeded
CRS-2676: Start of 'ora.evmd' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'racnode3'
CRS-2676: Start of 'ora.gpnpd' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'racnode3'
CRS-2676: Start of 'ora.gipcd' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode3'
CRS-2676: Start of 'ora.cssdmonitor' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'racnode3'
CRS-2672: Attempting to start 'ora.diskmon' on 'racnode3'
CRS-2676: Start of 'ora.diskmon' on 'racnode3' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'racnode3'
CRS-2676: Start of 'ora.cssd' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'racnode3'
CRS-2672: Attempting to start 'ora.ctssd' on 'racnode3'
CRS-2676: Start of 'ora.ctssd' on 'racnode3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'racnode3'
CRS-2676: Start of 'ora.asm' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'racnode3'
CRS-2676: Start of 'ora.storage' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'racnode3'
CRS-2676: Start of 'ora.crsd' on 'racnode3' succeeded
CRS-6017: Processing resource auto-start for servers: racnode3
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'racnode2'
CRS-2672: Attempting to start 'ora.ons' on 'racnode3'
CRS-2672: Attempting to start 'ora.DATA.dg' on 'racnode3'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'racnode2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'racnode2'
CRS-2677: Stop of 'ora.scan2.vip' on 'racnode2' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'racnode3'
CRS-2676: Start of 'ora.scan2.vip' on 'racnode3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'racnode3'
CRS-2676: Start of 'ora.ons' on 'racnode3' succeeded
CRS-2676: Start of 'ora.DATA.dg' on 'racnode3' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'racnode3' succeeded
CRS-6016: Resource auto-start has completed for server racnode3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/05/13 11:06:04 CLSRSC-343: Successfully started Oracle clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/05/13 11:06:32 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded






- Verify that services are running and stable across all cluster nodes.



All cluster services are now active on every node. Next, we proceed with adding a database instance to the newly added cluster node.
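A few commands commonly used from any node (as the Grid owner) to confirm the new node has joined and its resources are online. This sketch just lists the fully qualified commands, assuming the Grid home path used in this setup:

```shell
# Grid home as configured in this walkthrough
GI_HOME=/u01/grid_12c

# crsctl check cluster -all : CRS/CSS/EVM health on every node
# olsnodes -n -s -t         : node numbers, status (Active/Inactive), pinned state
# crsctl stat res -t        : tabular view of all clusterware-managed resources
CHECKS="crsctl check cluster -all
olsnodes -n -s -t
crsctl stat res -t"
echo "$CHECKS" | while read c; do echo "$GI_HOME/bin/$c"; done
```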

8) Add Instance to a third node:

- Use the addnode.sh script under the $RDBMS_ORACLE_HOME/addnode directory.

- You can extend the RDBMS home using the -silent option as well, but I prefer to do it in GUI mode.
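A sketch of the silent-mode equivalent for extending the RDBMS home, run as oracle from $RDBMS_ORACLE_HOME/addnode on an existing node. Note there is no VIP argument this time: the VIP was already registered during the Grid Infrastructure node addition.

```shell
# Node name from this walkthrough
NEW_NODE=racnode3

# Only the node list is needed for the database home addnode.sh
DB_ADDNODE_CMD="./addnode.sh -silent \"CLUSTER_NEW_NODES={$NEW_NODE}\""
echo "$DB_ADDNODE_CMD"
```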

- Follow the screens for adding the instance on the newly configured RAC node.

- This will add the RDBMS software to racnode3.
















- Now we will use dbca from any active cluster node to add the third instance on racnode3.
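dbca can also perform this step silently. A hedged sketch of the equivalent command line, where the database name RACDB is an assumption inferred from the instance name RACDB3 used later in this post (dbca will prompt for the SYS password when run interactively in silent mode):

```shell
# Assumed names: RACDB (global database name) inferred from instance RACDB3
NEW_NODE=racnode3
DB_NAME=RACDB
INST_NAME=RACDB3

DBCA_CMD="dbca -silent -addInstance -nodeList $NEW_NODE -gdbName $DB_NAME -instanceName $INST_NAME -sysDBAUserName sys"
echo "$DBCA_CMD"
```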













- Make sure the database instance "RACDB3" is running on racnode3.
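The quickest check is srvctl from any node; this sketch assumes the database name RACDB (inferred from the instance name RACDB3). You can also query gv$instance from SQL*Plus to see all three instances.

```shell
# Assumed database name; instance names follow the <db>_<n> / <db><n> convention
DB_NAME=RACDB

# Reports each instance and the node it is running on
STATUS_CMD="srvctl status database -d $DB_NAME"
echo "$STATUS_CMD"
```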

I hope this post is helpful. In the next article I will demonstrate node deletion in Oracle 12c Grid Infrastructure.

Any comments/suggestions/recommendations are highly appreciated.

regards,
X A H E E R
