Oracle 11g RAC Student Guide Volume 1
Necessity of Global Resources
Locking prevents two processes from changing the same resource, or row, at the same time. In single-instance environments, this synchronization is handled locally within the instance. In RAC environments, internode synchronization is also required: it guarantees that each instance sees the most recent version of a block in its buffer cache. The slide shows you what would happen in the absence of cache coordination.
Global Resources Coordination
Cluster operations require synchronization among all instances to control shared access to resources. Cache coherency is the technique of keeping multiple copies of a block consistent between different Oracle instances, and maintaining cache coherency is an important part of RAC activity. The primary resources that the GES controls are dictionary cache locks and library cache locks. The GES also performs deadlock detection on all deadlock-sensitive enqueues and resources.
The GES manages all non-Cache Fusion interinstance resource operations and tracks the status of all Oracle enqueuing mechanisms. For each resource, one instance tracks its current status; this instance is called the resource master. The GCS implements cache coherency by using what is called the Cache Fusion algorithm.

Global Cache Coordination: Example
The scenario described in the slide assumes that the data block has been changed, or dirtied, by the first instance. The second instance, attempting to modify the block, submits a request to the GCS. The GCS transmits the request to the holder: the first instance. The first instance receives the message and sends the block to the second instance. The first instance retains the dirty buffer for recovery purposes; this dirty image of the block is also called a past image of the block. A past image block cannot be modified further. On receipt of the block, the second instance informs the GCS that it now holds the block. The data block is not written to disk before the resource is granted to the second instance.
Write to Disk Coordination: Example
Because multiple versions of the same data block, with different changes, can exist in the caches of the instances in the cluster, a write protocol managed by the GCS ensures that only the most current version of the data is written to disk. The GCS must also ensure that all previous versions are purged from the other caches. A write request for a data block can originate in any instance that has the current or past image of the block; the scenario described in the slide illustrates how an instance can perform a checkpoint at any time, or replace buffers in the cache as a response to free buffer requests.

In this scenario, the first instance, which holds a past image, sends a write request to the GCS. Who has the current version of that block? The GCS forwards the request to the holder of the current version: the second instance, Instance 2. The second instance receives the write request and writes the block to disk, and then records the completion of the write operation with the GCS. After receipt of the notification, the GCS orders all past image holders to discard their past images; these past images are no longer needed for recovery.
Dynamic Reconfiguration
When one instance departs the cluster, the global resources that it mastered must be redistributed to the surviving instances. Instead of remastering all resources across all nodes, RAC uses an algorithm called lazy remastering to remaster only a minimal number of resources during a reconfiguration. This is illustrated on the slide: when the second instance fails, only the resources it mastered are redistributed, and for each surviving instance, the resources it already masters are left in place. The system automatically moves mastership of undo segment objects to the instance that owns the undo segments. The lower part of the graphic shows you the situation after dynamic remastering has occurred.
Object Affinity and Dynamic Remastering
In addition to dynamic resource reconfiguration, the GCS can remaster resources based on object affinity. Messages are sent to a remote node when a block mastered there is read into the local cache, so the basic idea is to master a buffer cache resource on the instance where it is mostly accessed. This means that if an instance accesses an object much more heavily than the other instances, mastership of that object's resources can be migrated to it; this is called dynamic remastering. The upper part of the graphic shows you the situation where the same object has master resources spread over different instances; in that case, most accesses require interinstance messages. In contrast, the lower part shows the situation after dynamic remastering, where the resources are mastered on the instance that accesses the object most.

Global Dynamic Performance Views
Global dynamic performance views retrieve information about all started instances accessing one RAC database, whereas standard dynamic performance views retrieve information only about the local instance.
Because blocks may be cached across instances, slightly more memory may be required for a RAC instance than for an equivalent single-instance database. If you use the recommended automatic memory management feature as a starting point, you can then refine the settings based on the observed workload; these values are heuristics.

Row-level locks are created when data manipulation language (DML) operations, such as UPDATE, are executed by an application. These locks are held until the application commits or rolls back the transaction, and any other application process is blocked if it requests a lock on the same row. Cache Fusion block transfers operate independently of these user-visible row-level locks: the transfer of data blocks by the GCS is a low-level process that can occur without waiting for row-level locks to be released. Blocks may be transferred from one instance to another even while row-level locks are held. The GCS provides access to data blocks, allowing multiple transactions to proceed in parallel.
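A hedged sketch of this distinction (the scott/tiger schema, the racdb service name, and the row chosen are illustrative assumptions, not part of the course environment):

```shell
# A session on instance 1 takes a row-level lock; Cache Fusion can
# still ship the underlying block to instance 2 while it is held.
sqlplus -s scott/tiger@racdb <<'EOF'
SELECT ename FROM emp WHERE empno = 7839 FOR UPDATE;
-- The row stays locked until COMMIT or ROLLBACK; another session
-- updating the same row blocks, but block transfers do not.
ROLLBACK;
EOF
```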
Parallel Execution with RAC
Oracle's parallel execution technology makes efficient use of the resources of a single system. Real Application Clusters further extends these efficiencies to clusters by enabling the redistribution of work across all the parallel execution slaves of a cluster. The parallel execution coordinator runs on the instance that the client connects to, and execution slaves have node affinity with the execution coordinator but will expand to other nodes if needed. The Oracle parallel execution technology dynamically detects idle processes and assigns work to these idle processes from the queue tables of the overloaded processes. In this manner, RAC provides efficient intranode parallelism while eliminating unnecessary query coordination overhead across multiple nodes, which matters in real-world decision support applications.
RAC Software Principles
You may see a few additional background processes associated with a RAC instance that you would not see with a single-instance database. These processes are primarily used to maintain database coherency among the instances. They manage what are called the global resources, and they include the Global Cache Service processes, the Global Enqueue Service processes, and the diagnosability process. At the cluster level, you find the main processes of the Oracle Clusterware software; they provide a standard cluster interface on all platforms and perform high-availability operations. You find these processes on each node of the cluster; one of them is a process monitor for the cluster. There are also several tools that are used to manage the various resources available on the cluster at a global level.
These tools include the listener, the cluster interface, and global management utilities. In addition, Oracle Clusterware maintains two key files on shared storage. The Oracle Cluster Registry (OCR) maintains information about the high-availability components in your cluster, as well as the virtual interconnect protocol addresses. The voting file's size is set to around 20 MB. OCR and voting files can be placed on redundant, reliable storage.

Installing the Oracle software locally on each node, rather than on shared storage, permits rolling patch upgrades and eliminates the software as a single point of failure, although it is possible to install the RAC software on your cluster shared storage when using certain cluster file systems. The installation is performed in two phases: in the first phase, Oracle Clusterware is installed, and in the second phase, the database software is installed.

You must also create at least two redo log groups for each instance. If you are using the recommended flash recovery area feature, it must reside on storage shared by all instances. Archive logs cannot be placed on raw devices because their names are automatically generated and are different for each archive log; that is why they must be stored on a file system.
RAC Database Storage Principles
The primary difference between RAC storage and storage for single-instance Oracle databases is that all data files in RAC must reside on shared devices, either raw devices or cluster file systems, in order to be shared by all the instances that access the same database. If you use a cluster file system (CFS), a shared directory can be a directory on that file system; a shared location can also be an ASM disk group. If you do not use a CFS, you must place the files on raw devices or in ASM.

Over the past few years, storage technology has evolved. Network Attached Storage (NAS) took the storage devices away from the server and connected them directly to the network. Storage Area Networks (SANs) take the principle a step further by allowing storage devices to exist on their own separate networks and communicate directly with each other over very fast media. Users can gain access to these storage devices through server systems that are connected to both the local area network (LAN) and the SAN. These storage options enable multiple servers to access the same set of disks.

The choice of file system is critical for RAC deployment. Traditional file systems do not support simultaneous mounting by more than one system, so the database files must be placed either on raw storage or on a cluster file system: one or more cluster file systems can be used to hold all RAC files. Cluster file systems require block mode storage such as Fibre Channel or iSCSI.
Raw partitions are directly attached devices that require storage that operates in block mode, such as Fibre Channel or iSCSI. The Oracle Cluster File System (OCFS) is a portable, shared file system designed for Oracle files. OCFS eliminates the requirement that Oracle database files be linked to logical drives, and it enables all nodes to share a single Oracle home (on Windows only). OCFS volumes can span one shared disk or multiple shared disks for redundancy and performance enhancements. On some Linux distributions, OCFS is already included; otherwise, it can be downloaded from the Oracle Technology Network Web site. Files that can be placed on Oracle Cluster File System version 1 include the Oracle database files: data files, control files, redo log files, and archive logs.

Automatic Storage Management
Automatic Storage Management (ASM) provides a vertical integration of the file system and the volume manager that is specifically built for Oracle database files.
ASM can maintain redundant copies of data to provide fault tolerance. It helps DBAs manage a dynamic database environment by allowing them to increase the database size without having to shut down the database to adjust the storage allocation. Data management is done by selecting the desired reliability and performance characteristics for classes of data, rather than with human interaction on a per-file basis. The ASM capabilities save DBAs time by automating manual storage tasks, thereby increasing their ability to manage larger databases, and more of them, with increased efficiency. ASM is the strategic and stated direction as to where Oracle database files should be stored; however, OCFS will continue to be developed and supported for those who are using it, as will other options such as logical storage managers.

CFS or Raw?
As already explained, all database files must reside on shared storage. If a cluster file system (CFS) is available on the target platform, it is usually the simplest choice: cluster file systems provide advantages such as simpler management and a single, uniform view of the files from all nodes. Using Oracle Clusterware, raw devices are also supported.
If a CFS is unavailable on the target platform, raw devices or ASM can be used instead. In any case, the storage must be shared and certified.

Typical Cluster Stack with RAC
Each node in a cluster requires a supported interconnect software protocol to support interinstance communication. Similar to the interconnect, the rest of the cluster stack must be certified as a combination. Use the certification matrix to answer any certification questions that are related to RAC.
RAC Certification Matrix
1. Connect and log in to the Oracle support certification Web site.
2. Click the Certify tab on the menu frame.
3. Select Real Application Clusters, and then click Submit.
4. Select the correct platform and click Submit.
RAC and Services
Services are a logical abstraction for managing workloads. Each service represents a workload with common attributes, and these attributes are handled by each instance in the cluster by using metrics. Services divide the universe of work executing in the Oracle database into mutually disjoint classes. A service can span one or more instances of an Oracle database in a cluster, and the number of instances offering the service is transparent to the application. Services are built into the Oracle database, providing a single-system image for workloads and service location transparency.

Services enable the automatic recovery of work. Following outages, the service changes state immediately, and the cluster resource manager modifies the service-to-instance mapping so that connections are redirected to surviving instances; when instances are later repaired, services that are not running are restored. With RAC, listeners are also aware of service availability, and run-time connection load balancing distributes service connections across the instances. This architecture forms an end-to-end continuous service for applications.

Available Demonstrations
To illustrate the major concepts that were briefly introduced in this lesson, several demonstrations are available.
RAC Administration

Objectives
After completing this lesson, you should be able to install Oracle Clusterware and the Oracle RAC database software on your cluster. Oracle Clusterware provides high-availability components and can also interact with vendor clusterware. The installation also enables you to configure services for your RAC environment.

Installation Utilities
The Oracle Database 11g installation process provides a single-system image: you run OUI from one node, and the software is deployed to all selected nodes. If you have a previous Oracle cluster database version, DBUA can upgrade it; DBUA creates a restore script so that the database can be restored if necessary.

Oracle RAC 11g Installation: Outline
1. Complete preinstallation tasks: perform the step-by-step tasks for hardware and software verification, and make sure that your cluster hardware is functioning normally before you begin. You must install the operating system patches required by the cluster database before the installation can begin in earnest; failure to do so results in an aborted or nonoperative installation.
2. Perform Oracle Clusterware installation. Oracle Clusterware must be installed using OUI.
3. Perform ASM installation. Installing ASM in its own home is considered a best practice.
4. Perform Oracle Database 11g software installation.
5. Perform cluster database creation.
6. Complete postinstallation tasks, including installing the EM agent on the cluster nodes if you are using Grid Control.
After Oracle Clusterware has been successfully installed and tested, the remaining steps can proceed. The remainder of this lesson provides the necessary knowledge to complete these tasks successfully.

On UNIX systems, OUI uses the same user account on all nodes. This means that the administrative privileges user account and password must be the same on all nodes; you do not need a separate account for each node. OUI creates and sets startup and shutdown services at installation time.

Preinstallation Tasks
Several tasks must be completed before the Oracle Clusterware and Oracle Database 11g software can be installed:
- Check system requirements.
- Check software requirements.
- Check kernel parameters.
- Create groups and users.
- Perform cluster setup.
Some of these tasks are common to all Oracle database installations and should be familiar to you; others are specific to Oracle RAC 11g. Attention to detail here simplifies the rest of the installation process.
Failure to complete these tasks can certainly affect your installation and possibly force you to restart the process from the beginning.

Hardware Requirements
The system must meet minimum hardware requirements for physical memory, swap space, and disk space. To determine the amount of physical memory and the size of the configured swap space, use the appropriate operating system commands. On systems with 2 GB or more of memory, the swap space can typically be sized between one and two times the amount of RAM. The df command can be used to check for the availability of the required disk space. Each node also requires a virtual IP address; the virtual IP address must be in the same subnet as the associated public interface.
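On Linux, the memory, swap, and disk checks described above can be sketched as follows (no thresholds are enforced here; compare the output against the requirements for your platform):

```shell
# Physical memory and configured swap, reported in kB
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo
# Free space in the temporary area used by the installer
df -k /tmp
```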
Network Requirements
Each node must have at least two network adapters: one for the public network interface and one for the private network interface, the interconnect. For the public network, each adapter must support TCP/IP. For the private network, Gigabit Ethernet or an equivalent is recommended, and Oracle recommends that you use private network IP addresses for these interfaces. Before starting the installation, you must register the public IP address and optional host name, a virtual IP address, and the private IP address and optional host name for each private interface.

Virtual IP addresses exist to speed up client failover. The slide shows you the connect case with and without VIP. Without using VIPs, clients must wait for lengthy TCP timeouts when a node fails: in the case of connect, new connection attempts to the failed address hang until the timeout expires; in the case of SQL, this means that when the client issues SQL to the node that is now down, it also waits on the timeout. When a node fails with VIPs in place, its VIP is relocated to a surviving node, which does not service the failed node's traffic; as a result, this results in the clients getting errors immediately, so they can fail over quickly. For directly connected clients, the benefit is similar. After installation, the VIPs are managed by Oracle Clusterware.

On the basis of extensive testing and experience with production customer deployments, UDP over Gigabit Ethernet is the usual interconnect choice on Linux; in addition to UDP, other protocols are supported on some platforms. For a more complete list of supported protocols, see the certification information; your interconnect must be certified by Oracle for your platform. Best practices for UDP include increasing the UDP send and receive buffer sizes to the operating system maximum.
You should also have a Web browser to view the online documentation.

Package Requirements
Depending on the products that you intend to install, certain packages at or above certain release levels are required. Oracle Universal Installer (OUI) performs checks on your system to verify that it meets the Linux package requirements of the cluster database and related services. To ensure that these checks succeed, verify the package requirements before starting OUI; to determine whether the required packages are installed, query the package manager (for example, with rpm -q on Linux).
Groups and Users
The Oracle software owner is the user that owns all the software installed during the installation; the usual name chosen for this user is oracle. You must create the oracle user the first time you install the Oracle database software on the system. This user must have the Oracle Inventory group as its primary group; the Oracle Inventory group owns the Oracle inventory. The dba group identifies the UNIX users that have database administrative privileges; you must create the dba group the first time you install the Oracle database software on the system. If you want to specify a group name other than the default dba group, OUI prompts you to specify the name of this group. You must also verify that the unprivileged user named nobody exists on the system; the nobody user must own the external jobs (extjob) executable after the installation. To configure the environment, use the df -k command to identify a suitable file system with sufficient free space for the installation.
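A minimal sketch of the group and user creation (run as root on every node; the numeric IDs are examples only and simply need to match across nodes):

```shell
# Oracle inventory group, OSDBA group, and software owner
groupadd -g 500 oinstall
groupadd -g 501 dba
useradd -u 500 -g oinstall -G dba oracle
# Set the same password on every node:
# passwd oracle
# Verify that the unprivileged 'nobody' user exists
id nobody
```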
Make sure that the oracle user and the oinstall group can write to the installation directory.

User Shell Limits
To improve the performance of the software, you must increase the shell limits for the oracle user: the maximum number of open file descriptors and the maximum number of processes available to a single user must be raised to at least the recommended values, with the hard values acting as the enforced ceilings. These limits are applied through PAM. PAM is a system of libraries that handle the authentication tasks of applications and services on the system; the principal feature of the PAM approach is that the nature of the authentication is dynamically configurable.

Configuring for Remote Installation
Because OUI installs from one node onto all the others, user equivalence is required. As the oracle user, create the public and private keys on all nodes; when prompted for the pass phrase, simply press Enter. Then distribute the public keys and test the configuration by connecting from each node to all the others.

Required Directories for the Oracle Database Software
You must identify five directories for the Oracle database software, including the Oracle inventory directory, the Oracle base directory, the Oracle Clusterware home directory, and the Oracle home directory. The Oracle inventory directory (oraInventory) stores the inventory of all software installed on the system; it is required by, and shared among, all Oracle software installations on a single node.
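The user equivalence setup described above can be sketched like this (OpenSSH default paths; run as the oracle user on each node, then cross-copy every node's public key to every other node):

```shell
# Create an empty-passphrase RSA key pair if one does not exist yet
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa
# Authorize the key (in practice, append every node's key on every node)
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Test the configuration: this should not prompt for a password
# ssh other_node date
```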
The first time you install Oracle software on a system, OUI prompts you to specify the path to this directory.

Required Directories for the Oracle Database Software (continued)
The Oracle Clusterware home directory is the directory where you choose to install the software for Oracle Clusterware. You must install Oracle Clusterware in a separate home directory: because the Clusterware parent directory should be owned by root, it should not be placed under the Oracle base directory. When you run OUI, it is recommended that you specify a dedicated path for the Oracle Clusterware home directory.

The Oracle home directory is the directory where you choose to install the software for a particular Oracle product; you must install different Oracle products, such as ASM and the database, in different Oracle home directories. If you are installing the software on a local file system, the directory that you specify must be a subdirectory of the Oracle base directory; specify a separate subdirectory for the ASM home.

Linux Operating System Parameters
Verify that the kernel parameters shown in the table above are set to values greater than or equal to the recommended values shown.
Use the sysctl command to view the current values of the various kernel parameters. For example, to view the semaphore parameters, query the kernel.sem parameter; the values shown represent SEMMSL, SEMMNS, SEMOPM, and SEMMNI, in that order.

Kernel parameters that can be manually set include:
- SEMMSL: Semaphores are grouped into semaphore sets, and SEMMSL controls the array size, that is, the number of semaphores contained per semaphore set. It should be about ten more than the maximum number of Oracle processes.
- SHMMAX: The maximum size of a shared memory segment.
- SHMMNI: The number of shared memory identifiers.
The kernel parameter values shown above are recommended values only. For production database systems, it is recommended that you tune these values to optimize the performance of the system.
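On Linux, the current values can be inspected as follows (a read-only sketch; the four fields of kernel.sem are SEMMSL, SEMMNS, SEMOPM, and SEMMNI):

```shell
# Semaphore parameters, in the order SEMMSL SEMMNS SEMOPM SEMMNI
cat /proc/sys/kernel/sem
# Shared memory segment size and identifier limits
cat /proc/sys/kernel/shmmax
cat /proc/sys/kernel/shmmni
# The same values through the sysctl interface, where available
sysctl kernel.sem 2>/dev/null || true
```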
Because there are a lot of parameters to check, you can use the Cluster Verification Utility to do the verification automatically. In addition, view the Certifications by Product section on the Oracle support Web site to confirm that your configuration is supported, verify that your high-speed cluster interconnects are functioning properly, and determine the shared storage disk option for your system.
Determine the storage option for your system, and configure the shared disk. Oracle Clusterware requires that the OCR files and voting disk files be shared. These files could map to shared block or raw devices or exist on an OCFS volume. Verifying Cluster Setup with cluvfy The Cluster Verification Utility cluvfy enables you to perform many preinstallation and postinstallation checks at various stages of your RAC database installation.
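As a sketch, the checks might be invoked as follows from the unzipped Clusterware staging area (node names vx1 and vx2 are placeholders for your own host names):

```shell
# Post-check of the hardware and operating system setup
./runcluvfy.sh stage -post hwos -n vx1,vx2 -verbose
# Pre-check of the cluster's readiness for a Clusterware installation
./runcluvfy.sh stage -pre crsinst -n vx1,vx2 -verbose
```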
The cluvfy utility is available with Oracle Database 11g Release 1. To check the readiness of your cluster for an Oracle Clusterware installation, run cluvfy as shown below:

Node reachability check passed from node "vx".
Checking user equivalence...
User equivalence check passed for user "oracle".
Checking node connectivity...
Node connectivity check passed.
Checking shared storage accessibility...
Shared storage check passed on nodes "vx,vx".
Post-check for hardware and operating system setup was successful on all the nodes.

Specify Home Details
Click the Oracle Clusterware button and then click Next. A default installation name is displayed; you may accept the name or enter a new name at this time.
Verify the suggested location for the Oracle Clusterware home; if it is not correct, enter the location in the target destination field, and click Next to continue. You should be aware of this path, and not just click through because there is a default value. The parent directory of the Clusterware home should be owned by root and writable by the install group, oinstall.
Product-Specific Prerequisite Checks The installer then checks your environment to ensure that it meets the minimum requirements for an Oracle Clusterware installation. The installer checks for the existence of critical packages and release levels, proper kernel parameter settings, network settings, and so on.
If discrepancies are found, they are flagged and you are given an opportunity to correct them. If you are sure that the flagged items will not cause a problem, it is possible to click the item and change the status to self-checked, and continue with the installation. Only do this if you are absolutely sure that no problems actually exist, otherwise correct the condition before proceeding. When all checks complete successfully, click the Next button to proceed.
Cluster Configuration
The Specify Cluster Configuration screen displays predefined node information if OUI detects that your system has vendor clusterware; otherwise, OUI displays the screen without the predefined node information, and you must supply the public node names yourself. If all your nodes do not appear in the cluster nodes window, check your cluster setup before continuing. In the Cluster Name field, ensure that the cluster name is unique in your network.

Private Interconnect Enforcement
The Specify Network Interface Usage screen enables you to select the network interfaces on your cluster nodes to use for internode communication.
Ensure that the network interfaces that you choose for the interconnect have enough bandwidth to support the cluster and RAC-related network traffic. A Gigabit Ethernet interface is highly recommended for the private interconnect. To configure an interface for private use, click the interface name, and then click Edit. A pop-up window appears and allows you to indicate the usage for the network interface. In the example shown in the slide, there are three interfaces: one is the public interface, one is not used by the cluster and should be marked Do Not Use, and the eth2 interface is configured for the private interconnect and should be marked Private. When you finish, click the Next button to continue.

Oracle Cluster Registry File
Enter a fully qualified file name for the shared block or raw device, or a shared file system file, for the OCR file.
If you are using an external disk mirroring scheme, click the External Redundancy option button. You will be prompted for a single OCR file location. If no mirroring scheme is employed, click the Normal Redundancy option button.
You will be prompted for two file locations. For highest availability, provide locations that exist on different disks or volumes. Click Next to continue.
Voting Disk File
The primary purpose of the voting disk is to help in situations where the private network communication fails. When the nodes can no longer communicate, the cluster cannot safely continue with all members active; therefore, some of the nodes must go offline.
The voting disk is used to communicate the node state information used to determine which nodes will go offline. Because the voting disk must be accessible to all nodes to accurately assess membership, the file must be stored on a shared disk location.
The voting disk can reside on a block or raw device or a cluster file system. In Oracle Database 11g Release 1, voting disk availability is improved by the configuration of multiple voting disks. If the voting disk is not mirrored, then there should be at least three voting disks configured. Note that OUI must install the components shown in the summary window. Click the Install button.
The Install screen is then displayed, informing you about the progress of the installation. During the installation, OUI first copies the software to the local node and then copies the software to the remote nodes. At the end of the installation, you are prompted to run the root.sh script on each node. You can then run the cluvfy utility to verify the post-CRS installation.
Make sure that you run the above-mentioned scripts serially on each node, in the proposed order.

End of Installation
When the configuration scripts have been run on both nodes, the Configuration Assistants page is displayed. The Cluster Verification Utility is then run to test the viability of the new installation. When the Next button is clicked, the End of Installation screen appears.
Click Exit to leave OUI. Verifying the Oracle Clusterware Installation Before continuing with the installation of the Oracle database software, you must verify your Oracle Clusterware installation and startup mechanism.
With the introduction of Oracle RAC 11g, cluster management is controlled by the evmd, ocssd, and crsd processes. Run the ps command on both nodes to make sure that the processes are running. Check the startup mechanism for Oracle Clusterware. The processes are started at run levels 3 and 5 and are started with the respawn flag.
Verifying the Oracle Clusterware Installation (continued)
This means that if the processes abnormally terminate, they are automatically restarted. If you kill the Oracle Clusterware processes, they automatically restart or, worse, cause the node to reboot. For this reason, stopping Oracle Clusterware by killing the processes is not recommended. If you want to stop Oracle Clusterware without resorting to shutting down the node, you should use the crsctl command instead. If you encounter difficulty with your Oracle Clusterware installation, it is recommended that you check the associated log files.
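The process check and the clean shutdown described above can be sketched as follows (the crsctl line requires root and an installed Oracle Clusterware, so it is shown commented out):

```shell
# Look for the Clusterware daemons; the bracketed first letter stops
# grep from matching its own command line
ps -ef | grep -E '[e]vmd|[o]cssd|[c]rsd' || echo "no clusterware daemons found"
# Stop Oracle Clusterware cleanly instead of killing the processes:
# crsctl stop crs
```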
To do this, check the directories under the Oracle Clusterware home: separate subdirectories contain the alert log, the CSSD log files, the EVMD log files, and the CRSD log files. When you have determined that your Oracle Clusterware installation is successful and fully functional, you may start the Oracle Database 11g software installation.
Summary
In this lesson, you should have learned how to install Oracle Clusterware and prepare for the database software installation.

Practice 1: Overview
This practice covers installing Oracle Clusterware on your cluster.

Installing Automatic Storage Management
After Oracle Clusterware is installed, you can install ASM. For this installation, start OUI, click the Oracle Database 11g button, and then click Next. When the Installation Type screen appears, make your selection and click the Next button to proceed. On the next screen, you specify the location of your ASM home directory and installation name; because this home is dedicated to ASM, be sure to specify a name for your installation that reflects this. Then click the Next button to continue.
Although it is possible for ASM and the database installation to reside in the same directory and use the same files, keeping them in separate homes allows them to be patched and upgraded independently.

If OUI does not display the nodes properly, exit OUI and verify that your clusterware is running; refer to your documentation if the detailed output indicates that your clusterware is not running properly. When this is done, return to OUI and continue.

Product-Specific Prerequisite Checks
The Product-Specific Prerequisite Checks screen verifies the operating system requirements that must be met for the installation to be successful. The example in the slide shows the results of a completely successful test suite. When all tests have succeeded, click Next to continue.
The test suite results are displayed at the bottom of the screen after each check; any tests that fail are also reported here. If you encounter any failures, correct the underlying condition. It is possible to bypass the errors that are flagged by selecting the check box next to the error, but do so only if you are sure they will not cause problems. When you have done this, continue. On the following screen, the default value is dba for each privileged operating system group; accept or change the values, and then click the Install button to proceed.
Summary
The Summary screen appears next. You may scan the installation tree to verify your choices if you like. You can then monitor the progress of the installation on the Install screen; after installing the files and linking the executables on the first node, OUI propagates the software to the remote nodes.

Execute Configuration Scripts
The next screen that appears prompts you to run the root.sh configuration script on all nodes. Open a terminal window for each node listed and run the root.sh script as directed.

End of Installation
When the root.sh scripts have completed, the End of Installation screen appears. Click the Exit button to quit.
ASM Configuration
When the installation is finished, DBCA is used to configure ASM quickly and accurately. On the next screen, you choose the nodes on which to manage ASM: click the Select All button, and then click Next to continue. If you require specific initialization parameter values to be set for your ASM instances, specify them before starting the instances.

At this stage, a dialog box informs you that listeners are not running on the cluster nodes and asks whether you want to start them now. When you click Yes to start the listeners, they are started on the selected nodes.

On the disk group creation page, click the Create New button to create a disk group. Click the OK button after providing the discovery path to your disks. In the example in the slide, a disk group is created from the discovered disks. When you click OK, the disk group is created and mounted, and a dialog box asks if you wish to perform other operations; other disk groups can also be created at this point. When you have finished creating all the necessary disk groups, exit DBCA.
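DBCA performs the disk group creation for you; as a sketch of the equivalent manual operation (the disk paths and group name are placeholders, and SYSASM access to a running ASM instance is assumed):

```shell
# Create a normal-redundancy disk group from two candidate disks
sqlplus -s / as sysasm <<'EOF'
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  DISK '/dev/raw/raw3', '/dev/raw/raw4';
EOF
```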
Installing the Database Software
You need to run OUI as the oracle user. Start OUI, click the Oracle Database 11g button, and click Next. Your installation options include the standard editions and a Custom choice; selecting the Custom installation type option enables you to install only those Oracle product components that you deem necessary.

Install Location
On the Install Location screen, the Name field in the Software Location section of the page is populated with a default or suggested installation name. Accept the suggested name or enter your own Oracle home name, and verify the software location. After entering the information, click Next.

Because OUI is node aware, additional nodes that are to be part of this installation can be selected by selecting the corresponding check boxes; note that the local node is always selected for the installation. Most installation scenarios require the Cluster Installation option. If you do not see all your nodes listed here, check your cluster setup and restart OUI. Click the Next button when you are ready to proceed with the installation.
Products Prerequisite Check The Product-Specific Prerequisite Checks screen verifies the operating system requirements that must be met for the installation to be successful. The test suite results are displayed at the bottom of the page.
On this screen, you choose whether to create a database as part of the installation. If you choose to install a database, the Custom choice provides you with more options than the standard preconfigured database models. You may also choose to defer the database creation by clicking the Install Software Only option button. This option enables you to create the database by manually invoking the DBCA at some point in time after OUI finishes installing the database software. Select the Install Software Only option. Click the Next button to continue. During installation, OUI copies the software first to the local node and then to the remote nodes.
Check Summary The Summary screen is displayed next. Node information and space requirements can be viewed here. Review the information on this page. If you are satisfied with the summary, click the Install button. On the Install screen, you can monitor the progress of the installation. When the installation completes, OUI displays a dialog box indicating that you must run the root.sh script on each node. Execute the root.sh scripts as directed. Required Tasks Prior to Database Creation You can now set the Oracle database-related environment variables for the oracle user so that they are recognized by the DBCA during database creation.
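For example, the environment might be set in the oracle user's profile as follows. The ORACLE_HOME path and SID shown here are hypothetical; substitute the values from your own installation:

```shell
# Hypothetical values -- adjust ORACLE_BASE, ORACLE_HOME, and ORACLE_SID
# to match your installation before adding these to ~/.bash_profile.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=orcl1          # instance name on this node
export PATH=$ORACLE_HOME/bin:$PATH
echo "$ORACLE_HOME"
```

In a RAC environment, ORACLE_SID differs per node (for example, orcl1 on node 1 and orcl2 on node 2) while ORACLE_HOME is typically identical on all nodes.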
Checks Before Database Creation Run the Cluster Verification Utility as the oracle user to thoroughly analyze your cluster before creating your RAC database. Use the -pre option with the dbcfg stage argument:

    cluvfy stage -pre dbcfg -n <node_list> -d <oracle_home>

Your output should be similar to the following example:

    Checking CRS health...
    CRS health check passed.
    Checking user equivalence...
    User equivalence check passed for user "oracle".
    Checking administrative privileges...
    User existence check passed for "oracle".
    Group existence check passed for "dba".
    Membership check for user "oracle" in group "dba" [as Primary] passed.
    Administrative privileges check passed.
    Checking node connectivity...
    Node connectivity check passed for subnet "..."
    Suitable interfaces for VIP on subnet "..."
    CRS integrity check passed.
    Pre-check for database configuration was successful.

Practice 2: RAC Administration covers these tasks. To use Grid Control to manage the cluster, you must first install the Management Agent. The amount of shared disk space required is determined by the size of your database. Network interface names must be consistent across nodes: if the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes. Similarly, if eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node.
Every node in the cluster should be able to connect to every private network interface in the cluster. User Accounts 1. Set the password for the oracle account by running the passwd command as the root user (for example, passwd oracle).
Replace password with your own password. Repeat Step 1 through Step 3 on each node in your cluster. Determine your cluster name; the cluster name must satisfy certain naming conditions. Determine the public host name for each node in the cluster. For the public host name, use the primary host name of each node; in other words, use the name displayed by the hostname command (for example, racnode1).
Determine the public virtual host name for each node in the cluster. The virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. The virtual host name must meet the following requirements:
-The virtual IP address and the network name must not be currently in use.
Determine the private host name for each node in the cluster. After modifying the nsswitch.conf file, restart the name service cache daemon (nscd) on each node. The short SCAN for the cluster is docrac-scan. Using the previous example, the clients would use docrac-scan to connect to the cluster.
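As an illustration only (the IP addresses below are hypothetical, and the node names follow the racnode1/docrac-scan examples above), the naming scheme for a two-node cluster might look like this, with the SCAN resolving through DNS rather than /etc/hosts:

```
# /etc/hosts entries for a two-node cluster (illustrative addresses)
192.168.0.11   racnode1            # public host name
192.168.0.21   racnode1-vip        # virtual host name
192.168.1.11   racnode1-priv       # private host name
192.168.0.12   racnode2
192.168.0.22   racnode2-vip
192.168.1.12   racnode2-priv

# The SCAN (for example, docrac-scan) should not be placed in /etc/hosts;
# it should resolve in DNS, typically round-robin to three addresses:
#   docrac-scan.example.com -> 192.168.0.31, 192.168.0.32, 192.168.0.33
```

The private addresses sit on a separate subnet from the public and virtual addresses, matching the public/private interface separation described earlier.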
Synchronizing the Time on All Nodes Ensure that the date and time settings on all nodes are set as closely as possible to the same date and time. Configuring Kernel Parameters 1. Set the required kernel parameter values. Repeat steps 1 and 2 on all cluster nodes. Set Shell Limits for the oracle User To improve the performance of the software on Linux systems, you must increase the shell limits for the oracle user.
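The values below are the minimums commonly cited in the 11gR2 Linux installation documentation; treat them as a sketch and verify each value (especially kernel.shmmax, which depends on physical RAM) against the install guide for your exact release and platform:

```
# /etc/sysctl.conf additions (apply with: sysctl -p)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 536870912        # bytes; typically set to half of RAM
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# /etc/security/limits.conf additions for the oracle user
oracle  soft  nproc   2047
oracle  hard  nproc   16384
oracle  soft  nofile  1024
oracle  hard  nofile  65536
```

Both files must be edited as root, and the limits take effect at the oracle user's next login.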
Repeat this procedure on all other nodes in the cluster. Stage the Oracle Software It is recommended that you stage the required software onto a local drive on node 1 of your cluster. Check OS Software Requirements The OUI checks for missing packages during the install, and you will have the opportunity to install them at that point during the prechecks. Nevertheless, you might want to validate that all required packages have been installed prior to launching the OUI. The requirements listed here are for Enterprise Linux 5; requirements for other supported platforms can be found in a My Oracle Support note. To ensure high availability of Oracle Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks.
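A pre-install package check might be scripted as follows. The package list is a typical one for 11gR2 on Oracle/Enterprise Linux 5 x86_64, reproduced here from memory of the install documentation; verify the exact names and minimum versions for your platform before relying on it:

```shell
# Typical 11gR2 required packages on Oracle/Enterprise Linux 5 (x86_64).
# Verify this list against the install guide for your platform.
required_pkgs="binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers ksh libaio libaio-devel \
libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel"

# On a cluster node, report anything missing (requires rpm):
for p in $required_pkgs; do
    if command -v rpm >/dev/null 2>&1; then
        rpm -q "$p" >/dev/null 2>&1 || echo "MISSING: $p"
    fi
done
```

Run the same check on every node; package drift between nodes is a common cause of prerequisite-check failures later in the OUI session.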
Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files. Use the following guidelines when identifying appropriate disk devices:
-All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.
Partition the Shared Disks 1. Tip: From the fdisk prompt, type "u" to switch the display unit from cylinder to sector.
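With sectors as the display unit, a 1 MB partition alignment corresponds to a fixed start sector. A quick sanity check of the arithmetic, assuming the common 512-byte logical sector size:

```shell
# Compute the start sector for a 1 MB-aligned partition,
# assuming 512-byte logical sectors (the fdisk default unit).
sector_size=512
offset_bytes=$((1024 * 1024))
start_sector=$((offset_bytes / sector_size))
echo "$start_sector"   # 2048
```

Starting the first partition at sector 2048 keeps it aligned for disks and storage arrays that perform best on 1 MB boundaries.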
Then create a single primary partition starting at a 1 MB offset (assuming 512-byte sectors). When fdisk writes the partition table, it reports: Calling ioctl() to re-read partition table. Syncing disks. Install the ASMLib RPMs by running the following as the root user: rpm -ivh oracleasm-support. Then configure ASMLib as the root user. NOTE: If using user and group separation for the installation (as shown in this guide), the ASMLib driver interface owner is grid and the group to own the driver interface is asmdba (oracle and grid are both members of this group).
These groups were created in section 2. If a simpler installation using only the oracle user is performed, the owner will be oracle and the group owner will be dba. This step configures the on-boot properties of the Oracle ASM library driver. The following questions determine whether the driver is loaded on boot and what permissions it will have. The current values are shown in brackets ('[]'). Ctrl-C will abort. Repeat steps 2 through 4 on ALL cluster nodes.
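Run interactively as root, the configuration dialog looks roughly like the following transcript. The grid/asmdba answers match the user and group separation described above; prompt wording may vary slightly between ASMLib versions:

```
# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmdba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
```

For the simpler oracle-only installation mentioned above, you would answer oracle and dba at the first two prompts instead.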