Chapter 2. Installation and Configuration of RHEL4
Each node that will become an Oracle RAC cluster node requires RHEL4 Update 3 or later; earlier RHEL4 releases are not recommended. The lock servers must be installed with the same version of RHEL4, but they can be either 32-bit or 64-bit. In our sample cluster, the RAC nodes and the external lock servers are all 64-bit Opteron servers running 64-bit versions of both RHEL and Oracle.
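The installed release can be confirmed on each node after the fact; for example:

lock1 $ cat /etc/redhat-release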
When installing RHEL4 on the RAC nodes, some basic minimum provisioning should be followed. Although most modern systems have very large system disks, all the software ever needed to install either a 10gRAC node or a CS4 lock server would fit on a 9GB disk. Here are some typical minimums:
/boot   128MB
swap   4096MB   # 10gR2 installer expects at least 3932MB
/      4096MB   # A RAC/CS4-ready RHEL4 install is about 2.5GB
/home  1024MB   # Most of the Oracle files are on GFS
Most customers will deploy with much larger drives, but this example helps explain what is being allocated. Oracle files are mostly installed on a GFS volume. The /home requirements are so minimal that it can safely be folded into the root volume without exceeding a 4GB partition. The RHEL install itself, even including the space needed to recompile the kernel, will rarely exceed 4GB. Once installed, the only space growth should come from system logs or crash dumps.
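For unattended installs, the same layout can be expressed in a kickstart file. The following is only a minimal sketch of the partitioning stanza (sizes in MB, ext3 assumed for the filesystems):

part /boot --fstype ext3 --size=128
part swap --size=4096
part / --fstype ext3 --size=4096
part /home --fstype ext3 --size=1024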
The following network interfaces can be configured during the installation session:
192.168.1.100 (SQL*Net App Tier)
192.168.2.100 (Oracle RAC GCS/CS4-GULM)
192.168.3.100 (Optional to isolate RAC from CS4-GULM)
The first two interfaces are required; the optional third interface can be deployed to further separate CS4 lock traffic from Oracle RAC traffic. In addition, NIC bonding (which is supported by Oracle RAC) is recommended on all interfaces if further hardening is required. For simplicity, this example cluster does not deploy bonded Ethernet interfaces.
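Each interface is defined in the usual /etc/sysconfig/network-scripts files. As a sketch, assuming the RAC interconnect sits on eth1 (the device name is hypothetical), the second interface above would look something like:

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.2.100
NETMASK=255.255.255.0
ONBOOT=yes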
DNS should be configured and enabled so that ntpd can locate the default time servers, which are specified by name. If ntpd is instead configured to use raw IP addresses, DNS is not required. This sample cluster configures DNS during the install and ntpd during post-install.
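A minimal /etc/resolv.conf is sufficient for this purpose; the domain and name server address below are hypothetical placeholders:

# /etc/resolv.conf
search example.com
nameserver 192.168.1.53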
The firewall software and security features are also optional for Oracle. The usual practice is to place the database and lock servers in an isolated VLAN that has been locked down using ACL (Access Control List) entries in the network switches. Because most datacenter networks have most or all ports disabled, an ACL will likely need to be allocated and opened for the Oracle SQL*Net listeners.
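If a host firewall is used in addition to (or instead of) switch ACLs, the SQL*Net listener port must be opened there as well. As a sketch, assuming the default listener port of 1521 and the application tier on 192.168.1.0/24:

lock1 $ sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 1521 -j ACCEPT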
During the installation of RHEL4, the installer will ask if you would like to do a standard or customized install. The minimum subset required to configure and install both CS4 and Oracle 10gRAC is:
X Windows
Server Config Tools
Development Tools
X Software Development
Compatibility Software Development (32-bit)*
Admin Tools
System Tools
Compatibility Arch Support (32-bit)*
* Sub-systems that are only available on x64 installs
The compatibility subsets appear as options only during 64-bit install sessions. More subsets can of course be selected, but it is recommended that an Oracle 10gRAC node not be provisioned to do anything other than run Oracle 10gRAC. The same recommendation also applies to GULM lock servers.
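Whether the compatibility subsets landed on a node can be spot-checked with rpm; the package names below are typical for RHEL4 x86_64 but may vary by update level:

lock1 $ rpm -q compat-gcc-32 compat-libstdc++-33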
The system installs with a default run level of 5, although 3 can also be used. Level 5 is required only if the GUI tools are run from the system console; if they are run over a remote X session, level 3 is sufficient. Installing CS4 and Oracle often requires the user not to be co-located with the console, so running remote X (even through a firewall) is very common, and that is how this cluster was configured. The run level can be changed in /etc/inittab:
# Default runlevel. The runlevels used by RHS are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)
id:3:initdefault:
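After editing /etc/inittab, the new run level takes effect at the next boot; it can also be switched immediately and verified without rebooting:

lock1 $ sudo telinit 3
lock1 $ runlevel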
The ntpd service must run on all RAC and GULM servers and is not enabled by default. It can be enabled for both run levels 3 and 5:
lock1 $ sudo chkconfig --level 35 ntpd on
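The result can be verified with:

lock1 $ chkconfig --list ntpd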
RAC clusters need their clocks to be within a few minutes of each other, but they do not have to be perfectly synchronous. Using ntpd should provide accuracy of a second or better, which is more than adequate. If the clocks are wildly off after install, however, ntpd will only slowly slew them back into synchronization, which will not happen quickly enough to be effective; use ntpdate instead to step the clocks once. In order to run ntpdate as a one-time operation, ntpd must not be running.
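A typical one-time sequence, assuming a reachable time server (0.pool.ntp.org below is only a placeholder; substitute your site's NTP server):

lock1 $ sudo service ntpd stop
lock1 $ sudo ntpdate 0.pool.ntp.org
lock1 $ sudo service ntpd start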
Once the system is installed, it needs to be configured. The iLO configuration varies a bit from machine to machine, but the username, password, and IP address of the iLO interface are all that is required to configure fencing. This is usually configured in the Advanced section of the server's BIOS. The version of iLO that appears on most DL1xx series boxes (i100) is not supported because it does not support sshd.
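Once the iLO interface has an address and credentials, fencing can be exercised by hand with the fence_ilo agent before it is wired into the cluster configuration. The address and credentials below are hypothetical, and the option letters follow the common fence-agent conventions; verify them against the agent's man page on your release:

lock1 $ fence_ilo -a 192.168.1.210 -l admin -p secret -o status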
The storage must be shared and visible on all nodes in the cluster, including the GULM lock servers. The FCP driver will present SCSI LUNs to the operating system.
Here are the results of running dmesg | grep scsi:
lock1 $ dmesg | grep scsi
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
Attached scsi disk sdb at scsi0, channel 0, id 0, lun 2
Attached scsi disk sdc at scsi0, channel 0, id 0, lun 7
Attached scsi disk sdd at scsi0, channel 0, id 0, lun 1
Attached scsi disk sde at scsi0, channel 0, id 0, lun 6
Attached scsi disk sdf at scsi0, channel 0, id 0, lun 3
Attached scsi disk sdg at scsi0, channel 0, id 0, lun 4
Attached scsi disk sdh at scsi0, channel 0, id 0, lun 5
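The same set of LUNs should be visible on every node, including the lock servers; a quick consistency check is to compare the partition list across nodes:

lock1 $ grep sd /proc/partitions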