Wednesday, November 25, 2015

Multipath Setup for Oracle RAC ASM 12c Step by Step



Device Mapper provides I/O path management and load balancing for Oracle devices, including the ASM and OCR/Voting disks, and multipath is an essential piece of any shared-storage environment. Toward the end of this document you will see that my disks use a major device number of 252, which means the disks are properly configured with multipath. The major number varies by vendor and platform; for EMC, for example, it will be 120, and on other UNIX flavours it will be a different number. Device Mapper includes four components (a minimal multipath.conf sketch follows the list):

  1. Multipath Configuration Tool (multipath)
  2. multipathd Daemon
  3. DM-Multipath Kernel Module
  4. kpartx Utility
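
For reference, here is a minimal /etc/multipath.conf sketch; the WWID and the DISK01 alias below are placeholders for illustration, so substitute the WWIDs of your own LUNs (you can read them with /lib/udev/scsi_id --whitelisted --device=/dev/sdX):

defaults {
    user_friendly_names yes       # present devices as /dev/mapper/mpathN
    find_multipaths     yes
}

multipaths {
    multipath {
        wwid  360000000000000000e00000000010001    # hypothetical WWID, replace with your LUN's
        alias DISK01                               # friendly name that appears under /dev/mapper
    }
}

After editing the file, reload the maps with multipath -r (or service multipathd reload).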


Let's jump into the working scenario for multipath on ASM disks:

Start once you have used fdisk to create the necessary partitions and validated that all the oracleasm RPMs are installed on the RAC servers.
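
If you want a quick way to validate those prerequisites, the checks might look like the sketch below; the device name mpathe is taken from the listing further down, so adjust it to your own multipath devices, and run fdisk on one node only (inside fdisk use n, p, 1, accept the defaults, then w to write):

[root@SajidServer01 ~]# rpm -qa | grep -i oracleasm
[root@SajidServer01 ~]# fdisk /dev/mapper/mpathe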

[root@SajidServer01 ~]# ls -l /dev/mapper/DISK*
lrwxrwxrwx 1 root root       7 Nov 25 10:37 DISK01 -> ../dm-17
lrwxrwxrwx 1 root root       7 Nov 25 10:55 DISK01p1 -> ../dm-25
                       ||||||||
                       ||||||||
                       ||||||||=====>Output Truncated
                       ||||||||
                       ||||||||

[root@SajidServer01 ~]# egrep 'dm-17|dm-25' /proc/partitions
 252       17    7340032 dm-17
 252       25    7339008 dm-25
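
To map those dm-N names back to their multipath devices and underlying paths, you can also run multipath -ll, or dmsetup ls for a quick name-to-device listing:

[root@SajidServer01 ~]# multipath -ll
[root@SajidServer01 ~]# dmsetup ls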

Now run the partprobe OS command on all the existing nodes to make sure the new partitions are available and visible on each and every node.

[root@SajidServer01 ~]# partprobe /dev/mapper/mpath[efghi]*

[root@SajidServer01 ~]# ls -l /dev/mapper/mpath[efghi]*
lrwxrwxrwx 1 root root       8 Nov 25 10:55 mpathe -> ../dm-10
lrwxrwxrwx 1 root root       7 Nov 25 10:37 mpathep1 -> ../dm-4
                       ||||||||
                       ||||||||
                       ||||||||=====>Output Truncated
                       ||||||||
                       ||||||||
Run the same partprobe command on the other nodes and verify that the soft links are set as above.
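
A quick way to do that from the first node is over ssh; a sketch for the second node (SajidServer02, which appears later in this post) would be:

[root@SajidServer01 ~]# ssh SajidServer02 "partprobe /dev/mapper/mpath[efghi]*; ls -l /dev/mapper/mpath[efghi]*"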

Now run oracleasm configure on each node to change the ownership of the ASM disks from root:root to grid:asmadmin.


[root@SajidServer01 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration:   [  OK  ]
Creating /dev/oracleasm mount point:               [  OK  ]
Loading module "oracleasm":                        [  OK  ]
Mounting ASMlib driver filesystem:                 [  OK  ]
Scanning system for ASM disks:                     [  OK  ]


[root@SajidServer01 ~]# /etc/init.d/oracleasm start

Initializing the Oracle ASMLib driver:             [  OK  ]
Scanning the system for Oracle ASMLib disks:       [  OK  ]

Note: We need to run this command on all the other nodes.

Now you need to update the /etc/sysconfig/oracleasm file on all the nodes, changing only two parameters so that ASMLib uses the multipath devices:

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="/dev/mapper/*"

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
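
After saving the file you can confirm the two values with a quick grep; with this in place ASMLib scans only the /dev/mapper multipath devices and skips the underlying sdX paths:

[root@SajidServer01 ~]# grep '^ORACLEASM_SCAN' /etc/sysconfig/oracleasm
ORACLEASM_SCANORDER="/dev/mapper/*"
ORACLEASM_SCANEXCLUDE="sd"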

Now reboot all the nodes to bring the changes into effect and make sure the ASMLib module is loaded properly.

[root@SajidServer01 ~]# /etc/init.d/oracleasm status

Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes


[root@SajidServer01 ~]# lsmod | grep oracleasm
oracleasm           54341  1

Note: Check the oracleasm status and lsmod output above on all the nodes of your clustered environment.

Now connect as the superuser (root) and execute the commands below only on Node 1.

/usr/sbin/oracleasm createdisk DISK01 /dev/mapper/mpathep1
Marking disk "DISK01" as an ASM disk:
/usr/sbin/oracleasm createdisk DISK02 /dev/mapper/mpathfp1
Marking disk "DISK02" as an ASM disk:
/usr/sbin/oracleasm createdisk DISK03 /dev/mapper/mpathgp1
                       ||||||||
                       ||||||||
                       ||||||||=====>Output Truncated (Do it for all the necessary disks and only on Node 1)
                       ||||||||
                       ||||||||
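
If you prefer not to type each createdisk by hand, a one-line loop works as well; this sketch assumes the five multipath devices mpathe through mpathi with their first partitions, matching the pattern above, so adjust the letters to your own layout:

[root@SajidServer01 ~]# n=1; for d in e f g h i; do /usr/sbin/oracleasm createdisk DISK0$n /dev/mapper/mpath${d}p1; n=$((n+1)); done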


[root@SajidServer01 ~]# /etc/init.d/oracleasm listdisks
DISK01
DISK02
DISK03
DISK04
DISK05

[root@SajidServer01 ~]# /etc/init.d/oracleasm querydisk DISK01
Disk "DISK01" is a valid ASM disk

[root@SajidServer01 ~]# /etc/init.d/oracleasm querydisk DISK02
Disk "DISK02" is a valid ASM disk
                       ||||||||
                       ||||||||
                       ||||||||=====>Output Truncated (Do it for all the necessary disks and only on Node 1)
                       ||||||||
                       ||||||||

[root@SajidServer01 ~]# ls -l /dev/oracleasm/disks/
brw-rw---- 1 grid asmadmin 252,  3 Nov 25 10:55 DISK01
brw-rw---- 1 grid asmadmin 252, 10 Nov 25 10:55 DISK02
brw-rw---- 1 grid asmadmin 252, 14 Nov 25 10:55 DISK03
brw-rw---- 1 grid asmadmin 252, 16 Nov 25 10:55 DISK04
brw-rw---- 1 grid asmadmin 252, 20 Nov 25 10:55 DISK05

Now on the rest of the RAC nodes just run a scandisks to discover the disks; that's the added beauty carried over from 11g.

[root@SajidServer02 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:       [  OK  ]

You can now find the disks on this node as well:
[root@SajidServer02 ~]# ls -l /dev/oracleasm/disks/
brw-rw---- 1 grid asmadmin 252,  3 Nov 25 10:55 DISK01
brw-rw---- 1 grid asmadmin 252, 10 Nov 25 10:55 DISK02
brw-rw---- 1 grid asmadmin 252, 14 Nov 25 10:55 DISK03
brw-rw---- 1 grid asmadmin 252, 16 Nov 25 10:55 DISK04
brw-rw---- 1 grid asmadmin 252, 20 Nov 25 10:55 DISK05

Now you can say your disks are using MULTIPATH!
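
As a final sanity check you can confirm that major number 252 really belongs to the device-mapper on your system; the number is allocated dynamically, so it may well differ on other boxes:

[root@SajidServer01 ~]# grep device-mapper /proc/devices
252 device-mapper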