Archive for the ‘Oracle 11g R2 RAC’ Category

Oracle Technology Network South America Tour – Sao Paulo

At the invitation of the Oracle Technology Network (OTN), on July 15th, 2011, I joined the last leg of the OTN South America Tour, traveling to Sao Paulo, Brazil; Montevideo, Uruguay; and Santiago, Chile. I will give two presentations at each conference. On July 16th, I gave two presentations in Sao Paulo for the Brazilian Oracle user group (GUOB) at GUOB Tech Day 2011:

1) Virtualized Oracle 11g/R2 RAC Database on Oracle VM: Methods/Tips

2) Oracle 11g R2 Clusterware: Architecture and Best Practices of Configuration and Troubleshooting.

The following are some photos from the conference and my presentation sessions:

[Slideshow: photos from the conference and presentation sessions]

How to Store Voting Disk Files in ASM for 11g R2 Clusterware High Availability

In 11g R2, clusterware voting disk files can be stored in an ASM diskgroup. To ensure the high availability of the clusterware, storing multiple voting disk files is highly recommended. When the voting disk files are stored in ASM, the redundancy setting of the ASM diskgroup determines the number of voting disk files. The following settings are recommended when planning the configuration of the ASM diskgroup that stores the voting disk files:

  • External Redundancy: The ASM diskgroup needs only one failure group without mirroring, and it stores only one voting disk file. It is recommended that the disk be on an externally redundant RAID configuration to provide high availability for the storage.
  • Normal Redundancy: The ASM diskgroup requires three failure groups, and it provides three voting disk files.
  • High Redundancy: The ASM diskgroup requires five failure groups, and it has five voting disk files.

Following the recommended settings above, you can configure 11g R2 clusterware to store the voting disk files in ASM:

1. Based on the external storage redundancy and the number of voting disks that you plan to have, select the appropriate redundancy setting for your ASM diskgroup. If there is no external redundancy for the storage disks, or you would like to have additional voting disk files, select normal or high redundancy for the ASM diskgroup. As an example, we configure the high redundancy setting for the ASM diskgroup. The diskgroup has five ASM failure groups, each on one of the following ASM disks: ORCL:OCR1, ORCL:OCR2, ORCL:OCR3, ORCL:OCR4, ORCL:OCR5 (see the SQL sketch after these steps).

2. During the 11g R2 Oracle Grid Infrastructure installation, select "Automatic Storage Management (ASM)" in the storage selection step, as shown below:

3. Specify the ASM diskgroup name and select the proper redundancy level and the corresponding ASM disks. As shown in the example below, select high redundancy and the five ASM disks: ORCL:OCR1, ORCL:OCR2, ORCL:OCR3, ORCL:OCR4, ORCL:OCR5.

4. After the Grid Infrastructure installation completes, log in to the ASM instance to check the ASM diskgroup redundancy setting, and list the voting disk files using the command 'crsctl query css votedisk'.
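
As a companion to steps 1 and 4, here is a minimal sketch of the equivalent manual commands. The diskgroup name OCRVOTE and the $GRID_HOME path are assumptions for illustration; during a normal installation, OUI creates the diskgroup and places the voting files for you:

-- As the grid user, connect to the ASM instance: sqlplus / as sysasm
-- High redundancy: five failure groups, one per ASM disk
CREATE DISKGROUP OCRVOTE HIGH REDUNDANCY
  FAILGROUP fg1 DISK 'ORCL:OCR1'
  FAILGROUP fg2 DISK 'ORCL:OCR2'
  FAILGROUP fg3 DISK 'ORCL:OCR3'
  FAILGROUP fg4 DISK 'ORCL:OCR4'
  FAILGROUP fg5 DISK 'ORCL:OCR5'
  ATTRIBUTE 'compatible.asm' = '11.2';  -- voting files require ASM compatibility 11.2

-- Verify the redundancy setting of the diskgroup
SELECT name, type FROM v$asm_diskgroup;

# As root: place the voting files in the diskgroup (only if not already done by the installer)
$GRID_HOME/bin/crsctl replace votedisk +OCRVOTE

# List the voting disk files
$GRID_HOME/bin/crsctl query css votedisk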

My Upcoming Presentation on the IOUG RAC Webinar Series

I will speak at the IOUG RAC Webinar Series on May 11th, 2011. The presentation aims to provide DBAs with a practical understanding and best practices of the system configuration, such as network and storage, that is critical to the stability of Oracle Clusterware and RAC. It discusses hardware/OS configuration, public network, private interconnect, and shared storage configuration, and shares some experiences and tips for troubleshooting clusterware issues such as node eviction, along with how some useful diagnostic tools help with root cause analysis. The presentation also covers some clusterware administration methods as well as new clusterware features. The following are the details of this IOUG webinar:

Title: Oracle 11g R2 Clusterware: Architecture and Best Practices of Configuration and Troubleshooting
Date: May 11, 2011, 11:00am-12:00pm CDT
Register Today! https://www1.gotomeeting.com/register/833030457

Outline:

  • Oracle 11g R2 Clusterware Architecture
  • System Hardware for Oracle Clusterware/RAC
  • Network configuration for Oracle Clusterware/RAC
  • Storage Configuration for Oracle Clusterware/RAC
  • Managing Oracle Clusterware
  • Clusterware Troubleshooting
  • Q&A

My Collaborate 11 Presentations

The Collaborate 11 conference will be held in Orlando, Florida, on April 10-14, 2011. At this conference, I will be giving the following three presentations:

1. Title: Oracle E-Business Suite: Migration to Oracle VM & Template-Based Deployment

  • Conference: The Oracle Applications Users Group (OAUG) Forum
  • Time: Monday, April 11, 2:30pm-3:30pm
  • Abstract: Oracle VM provides server virtualization that not only enables high availability and scalability, but also simplifies and standardizes the deployment of Oracle E-Business Suite. To leverage Oracle VM, existing Oracle E-Business Suite systems on physical servers need to be migrated to VMs, and new development needs to start on VMs. Attend this session to learn some best practices for such a migration, and also how to create and use VM templates of customers' own project-specific Oracle E-Business Suite systems for ongoing projects. The session will also examine how to leverage the benefits of Oracle VM, such as high availability, scalability, and server partitioning, for the Oracle E-Business Suite R12.1 infrastructure.

2. Title: Automated Provisioning of Oracle 11g R2 RAC Using the Oracle Enterprise Manager 11g Provisioning Pack

  • Conference: The Independent Oracle Users Group (IOUG) Forum
  • Time: Tuesday, April 12, 11:45am-12:15pm
  • Abstract: Oracle Enterprise Manager 11g provides an end-to-end solution for automated provisioning and lifecycle management of the entire system stack. This session covers how the Oracle Enterprise Manager Provisioning Pack can save time and cost for IT organizations by automatically provisioning Oracle Real Application Clusters (RAC). Attend this session to learn how to configure the latest Oracle Enterprise Manager version 11gR1 and how to enable its Provisioning Pack, including the provisioning deployment procedures and the software library, and how to automate some time-consuming and error-prone tasks such as provisioning an Oracle 11g R2 RAC database, extending a RAC database by adding an additional node, and saving a gold image of the 11g R2 RAC based on an existing RAC environment.

3. Title: Ensure RAC High Availability: Storage and Network Side Story

  • Conference: The Independent Oracle Users Group (IOUG) Forum, High Availability Boot Camp
  • Time: Wednesday, April 13, 10:30am-11:30am
  • Abstract: While Oracle RAC technology provides a high level of availability and great scalability for the database, the stability of the central piece of RAC technology, Oracle Clusterware, largely depends on the underlying system infrastructure: the network and the shared storage. Come to this session to learn about the architecture of RAC and clusterware in 11g R1 and 11g R2 and the best practices of configuring the network and shared storage to ensure the stability and high availability of Oracle Clusterware and RAC. The session will also cover tips for troubleshooting some clusterware stability issues, such as node eviction, which is frequently related to the network and the shared storage.

I will also be attending the following panels as a moderator or panelist:

1. IOUG High Availability Panel: Wednesday, April 13, 11:45am-12:15pm

2. Oracle RAC SIG panels: 

  • RAC SIG BOG Panel: Monday, April 11, 9:15am-10:15am
  • RAC Customer Panel: Tuesday, April 12, 9:15am-10:15am
  • RAC Expert Panel: Wednesday, April 13, 8:00am-9:00am

11g R2 Clusterware and ASM: which one comes up first, and which one depends on the other?

In Oracle 10g RAC and 11g R1 RAC, Oracle Clusterware and ASM are installed in different Oracle homes, and the clusterware has to be up before the ASM instance can be started, because the ASM instance uses the clusterware to access the shared storage. Oracle 11g R2 introduced the Grid Infrastructure home, which combines Oracle Clusterware and ASM. The OCR and voting disk of 11g R2 clusterware can be stored in ASM. So it seems that ASM needs the clusterware up first to access the shared storage, while the clusterware needs ASM up first before it can access its key data structures: the OCR and voting disk. So which one needs to be up first, and which one has to wait for the other? This seemed to be a chicken-or-egg problem.

Oracle's solution to this problem is to combine the clusterware and ASM into a single Grid Infrastructure home, with a startup procedure that interleaves the different components of the clusterware and the ASM instance in a specific order. Oracle Metalink note "11gR2 Clusterware and Grid Home – What You Need to Know" [ID 1053147.1] describes this startup sequence.

Although the clusterware startup command $GI_HOME/bin/crsctl start crs follows this sequence to bring both the clusterware and ASM online, the command doesn't echo back each milestone of the startup process, so we can't really see how the startup proceeds. A workaround is to look at some of the output of the root.sh command during the initial Grid Infrastructure installation, as follows:

CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'owirac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'owirac1'
CRS-2676: Start of 'ora.mdnsd' on 'owirac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'owirac1'
CRS-2676: Start of 'ora.gpnpd' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'owirac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'owirac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'owirac1'
CRS-2676: Start of 'ora.diskmon' on 'owirac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'owirac1'
CRS-2676: Start of 'ora.ctssd' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'owirac1'
CRS-2676: Start of 'ora.asm' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'owirac1'
CRS-2676: Start of 'ora.crsd' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'owirac1'
CRS-2676: Start of 'ora.evmd' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'owirac1'
CRS-2676: Start of 'ora.asm' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.OCRVOTDSK.dg' on 'owirac1'
CRS-2676: Start of 'ora.OCRVOTDSK.dg' on 'owirac1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'owirac1'
CRS-2676: Start of 'ora.registry.acfs' on 'owirac1' succeeded

This sequence shows that the ASM instance startup is just one step in the middle of the entire sequence: some CRS components, such as CSSD and CTSSD, are started before ASM, while other components, such as CRSD, EVMD, and ACFS, come up after ASM starts. This sequence can also be confirmed by the timestamps and log messages in the clusterware log files (alert<hostname>.log, ocssd.log, and crsd.log) and in the ASM instance alert log (alert_+ASM1.log). Here are the log messages and their timestamps during the startup of the 11g R2 clusterware and ASM instance:

OLR service started: 2011-01-17 14:33:13.678
Starting CSS daemon: 2011-01-17 14:33:18.684
Fetching asmlib disk ORCL:OCR1: 2011-01-17 14:33:24.825
Read ASM header off dev ORCL:OCR3:224:256
Opened hdl:0x1d485110 for dev ORCL:OCR1: 2011-01-17 14:33:24.829
Successful discovery for disk ORCL:OCR1: 2011-01-17 14:33:24.837
Successful discovery of 5 disks: 2011-01-17 14:33:24.838
CSSD voting file is online: ORCL:OCR1: 2011-01-17 14:33:50.047
CSSD Reconfiguration complete: 2011-01-17 14:34:07.729
The Cluster Time Synchronization Service started: 2011-01-17 14:34:12.333

Note: ** CSSD and CTSSD came up before ASM. The voting disks were discovered by reading the headers of the ASM disks (e.g. ORCL:OCR1) in the voting disk diskgroup, without using the ASM instance **

Starting ASM: Jan 17 14:34:13 2011
CRS Daemon Starting: 2011-01-17 14:34:30.329
Checking the OCR device: 2011-01-17 14:34:30.331
Initializing OCR: 2011-01-17 14:34:30.337
Diskgroup OCRVOTDSK was mounted: Mon Jan 17 14:34:30 2011
The OCR service started: 2011-01-17 14:34:30.835
Verified ocr1-5: 2011-01-17 14:33:50.128
CRSD started: 2011-01-17 14:34:31.902

Note: The CRS daemon started after ASM was up and the diskgroup for the OCR and voting disks was mounted.

From this sequence of log messages and timestamps, we gain some understanding of the startup order of the clusterware and the ASM instance:

1) CSSD and CTSSD are up before ASM.

2) The voting disks used by CSSD are discovered by reading the disk headers, not through the ASM instance.

3) The startup of the CRS service has to wait until the ASM instance is up and the diskgroup for the OCR and voting disks is mounted.
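
One way to watch this dependency chain on a live cluster is to query the lower-stack resources that ohasd manages. Here is a minimal sketch using standard 11g R2 crsctl commands (the $GRID_HOME path is an assumption):

# Check the overall health of the clusterware stack
$GRID_HOME/bin/crsctl check crs

# List the ohasd-managed lower-stack resources (ora.cssd, ora.ctssd,
# ora.asm, ora.crsd, ...) along with their current states
$GRID_HOME/bin/crsctl stat res -t -init

# The startup milestones are also recorded in the clusterware alert log:
#   $GRID_HOME/log/<hostname>/alert<hostname>.log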

Upcoming Conference Presentations

The following conference presentations have been accepted by the corresponding conference committees and are currently in the planning phase:

1. Oracle 11g R2 Clusterware and RAC: Architecture, Configuration, Troubleshooting and Case Study; Oracle OpenWorld Beijing 2010, Dec 13-16, 2010, Beijing, China

2. Case Study: Implementing Oracle Grid Computing for Multiple ERP Applications; Oracle OpenWorld Beijing 2010, Dec 13-16, 2010, Beijing, China

3. Oracle E-Business Suite: Migration to Oracle VM & Template-Based Deployment; UKOUG Conference Series Technology & E-Business Suite 2010, Nov 29, 2010


My Oracle OpenWorld 2010 Presentations:

1. Session ID: S316318
Title: Oracle RAC/Oracle VM Automated Provisioning with Oracle Enterprise Manager 11g
Abstract: Oracle Enterprise Manager offers an end-to-end solution for automated provisioning and lifecycle management of the whole system stack, including physical and virtual infrastructure. In this Oracle/Dell Inc. joint session, learn how the Oracle Enterprise Manager 11g provisioning and virtualization management packs can save time and money by automating the provisioning and management of the infrastructure of internal cloud services of IT organizations. It focuses on how to automate some time-consuming and error-prone tasks such as provisioning the Oracle VM environment, provisioning Oracle Real Application Clusters (Oracle RAC) 11g Release 2, converting a single-node database to Oracle RAC, and extending an Oracle RAC database.
Event: Oracle OpenWorld
Stream(s): SERVER AND STORAGE SYSTEMS, DATABASE
Track(s): Virtualization, Oracle Enterprise Manager
Session Type: User Group Forum (Sunday Only)
Session Category: Best Practices
Duration: 60 min.
Schedule: Sunday, September 19, 12:30PM | Moscone West L2, Rm 2009

2. Session ID: S316263
Title: Monitoring and Diagnosing Oracle RAC Performance with Oracle Enterprise Manager
Abstract: DBAs may use some homegrown scripts based on v$views and gv$views for performance monitoring and diagnosis. Today Oracle Enterprise Manager provides a much more effective way to manage database performance. It not only identifies root causes of performance issues but also gives impact ratios and recommended solutions. This session discusses the performance monitoring and diagnosing features in Oracle Enterprise Manager 11g and presents some tuning examples to show step-by-step methods for monitoring real-time Oracle Real Application Clusters (Oracle RAC) 11g database performance and then diagnosing performance issues by using the Automatic Database Diagnostic Monitor feature and navigating performance pages in Oracle Enterprise Manager 11g.
Track(s): Database, Oracle Enterprise Manager

Session Type: Conference Session
Session Category: Features
Duration: 60 min.
Schedule: Thursday, September 23, 3:00PM | Moscone South, Rm 310

Just received two more presentation invites

Last week I received one more session accepted by the Oracle OpenWorld content committee: Session ID: S316318, Automated Provisioning of Oracle RAC and Oracle VM Using Enterprise Manager 11g.

With this new session accepted, I now have a total of three sessions on my Oracle OpenWorld 2010 presentation schedule:

1) Session ID: S316318, Title: Automated Provisioning of Oracle RAC and Oracle VM Using Enterprise Manager 11g

Oracle Enterprise Manager offers an end-to-end solution for automated provisioning and lifecycle management of the whole system stack, including physical and virtual infrastructure. In this Oracle/Dell Inc. joint session, learn how the Oracle Enterprise Manager 11g provisioning and virtualization management packs can save time and money by automating the provisioning and management of the infrastructure of internal cloud services of IT organizations. It focuses on  how to automate some time-consuming and error-prone tasks such as provisioning the Oracle VM environment, provisioning Oracle Real Application Clusters (Oracle RAC) 11g Release 2, converting a single-node database to Oracle RAC, and extending an Oracle RAC database.

2) Session ID: S316970, Title: Enabling Database-as-a-Service Through Agile, Self-Service Provisioning; co-presenters: Kai Yu (Dell), Rajat Nigam (Oracle), Akanksha Sheoran (Oracle)

Abstract: Database-as-a-service is becoming a reality. More and more data centers are moving to a self-service model where users can request databases and get them in minutes. Learn how Enterprise Manager's provisioning solution can enable such agile database-on-demand. Customers can now deploy tens of databases in a few minutes using self-service methods and without relying on pages of documentation.

3) Session ID: S316263, Title: Monitoring and Diagnosing Oracle RAC Performance with Oracle Enterprise Manager

Abstract: DBAs may use some homegrown scripts based on v$views and gv$views for performance monitoring and diagnosis. Today Oracle Enterprise Manager provides a much more effective way to manage database performance. It not only identifies root causes of performance issues but also gives impact ratios and recommended solutions. This session discusses the performance monitoring and diagnosing features in Oracle Enterprise Manager 11g and presents some tuning examples to show step-by-step methods for monitoring real-time Oracle Real Application Clusters (Oracle RAC) 11g database performance and then diagnosing performance issues by using the Automatic Database Diagnostic Monitor feature and navigating performance pages in Oracle Enterprise Manager 11g.

Besides these Oracle OpenWorld sessions, I will also be presenting a web seminar for the IOUG Fusion Soup to Nuts Program on July 12th at 12pm ET. The presentation topic is "Provisioning Oracle RAC in a Virtualized Environment Using Oracle Enterprise Manager".

Completed configuration of Oracle 11g R2 RAC on OVM 2.2

Last week, I completed a configuration of Oracle 11g R2 RAC on Oracle VM 2.2. It was a fairly complex configuration, consisting of the following components:

Hardware and storage infrastructure:
1) Physical hardware for the Oracle VM servers: two Dell PowerEdge R810 servers
2) Storage: a Dell EqualLogic PS6510 iSCSI array with 10GbE iSCSI, serving as the shared storage for Oracle VM as well as the shared storage for Oracle 11g R2 RAC

Software stack:
1) Oracle VM Server 2.2 installed on the R810 servers
2) Oracle VM Manager 2.2
3) Oracle Enterprise Linux 5U5 64-bit OS for the two guest VMs serving as the RAC nodes
4) Oracle 11g R2 RAC built on top of the two guest VMs:
   - Grid Infrastructure with 11g R2 clusterware and ASM instances
   - 11g R2 RAC software and RAC database

Basic configuration:
1) The Oracle VM repository resides in the shared storage of the EqualLogic PS6510 storage array
2) The shared storage for the 11g R2 Grid Infrastructure and the RAC database was configured on virtual disks in the guest VMs, which are mapped to physical devices built on EqualLogic storage volumes (see the vm.cfg sketch below):
   - ACFS for the shared 11g R2 RAC home
   - ASM diskgroup for the OCR and voting disks
   - ASM diskgroup for the Oracle RAC database, including data and FRA
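
For reference, in Oracle VM 2.2 the mapping of shared physical devices into a guest VM is defined in the guest's vm.cfg file. Here is a minimal sketch of the disk section; the file locations, device paths, and guest device names are hypothetical:

# /OVS/running_pool/racnode1/vm.cfg (hypothetical paths and device names)
disk = ['file:/OVS/running_pool/racnode1/System.img,xvda,w',
        'phy:/dev/mapper/ocrvote1,xvdb,w!',
        'phy:/dev/mapper/racdata1,xvdc,w!',
        'phy:/dev/mapper/racfra1,xvdd,w!',
       ]
# The 'w!' access mode marks a disk as shared-writable, so both RAC node
# VMs can attach the same physical device concurrently.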
