Using udev for ASM on Oracle Linux 6.7
1、Target device mapping
/dev/sdb <--> /dev/asm-diskb
/dev/sdc <--> /dev/asm-diskc
/dev/sdd <--> /dev/asm-diskd
/dev/sde <--> /dev/asm-diske
/dev/sdf <--> /dev/asm-diskf
2、Symptom
On Oracle Linux 6.7 running under VMware vSphere 5.1, the scsi_id commands return no UUID for the disks.
3、Fix
a. Append an entry to /etc/scsi_id.config:
[root@dfyl rules.d]# echo "options=--whitelisted --replace-whitespace" >> /etc/scsi_id.config
b. Connect through vCenter, go to "Home" -> "Inventory" -> "Datastores and Datastore Clusters", locate the storage in question, then right-click and choose "Browse Datastore".
With the virtual machine powered off, add disk.EnableUUID="TRUE" to its .vmx file, restart the machine, re-run the udev.sh script, and then run the start_udev command.
4、udev.sh (sets up the device mapping and generates the udev rules file)
#!/bin/bash
for i in b c d e f;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/80-asm.rules
done
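The script above is fragile because the rule text and the command substitution share one quoting context; factoring the rule text into a small helper makes it easier to audit. A minimal sketch under the same disk letters and rule fields (the helper name make_asm_rule is mine, not from the original):

```shell
#!/bin/bash
# Build one udev rule line for an ASM disk, given the disk letter and the
# identifier that scsi_id printed for it. printf keeps the literal $name
# intact so udev, not the shell, expands it.
make_asm_rule() {
  local letter="$1" uuid="$2"
  printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="%s", NAME="asm-disk%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
    "$uuid" "$letter"
}

# Intended use (needs root and real disks, so it is commented out here):
# for i in b c d e f; do
#   uuid=$(/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i)
#   make_asm_rule "$i" "$uuid"
# done >> /etc/udev/rules.d/80-asm.rules

# Demonstration with a made-up identifier:
make_asm_rule b 36000c29aDEADBEEF
```
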
5、Start udev
[root@dfyl rules.d]# start_udev
Starting udev: [ OK ]
6、Verify the mapping
[root@dfyl rules.d]# ll /dev/asm-disk*
brw-rw----. 1 grid asmadmin 8, 16 Apr 12 14:51 /dev/asm-diskb
brw-rw----. 1 grid asmadmin 8, 32 Apr 12 14:51 /dev/asm-diskc
brw-rw----. 1 grid asmadmin 8, 48 Apr 12 14:51 /dev/asm-diskd
brw-rw----. 1 grid asmadmin 8, 64 Apr 12 14:51 /dev/asm-diske
brw-rw----. 1 grid asmadmin 8, 80 Apr 12 14:51 /dev/asm-diskf
7、References
a、http://www.askmaclean.com/archives/%E5%9C%A8linux-6%E4%B8%8A%E4%BD%BF%E7%94%A8udev%E8%A7%A3%E5%86%B3rac-asm%E5%AD%98%E5%82%A8%E8%AE%BE%E5%A4%87%E5%90%8D%E9%97%AE%E9%A2%98.html
b、http://blog.csdn.net/staricqxyz/article/details/8332566
This post comes from the "冰冻vs西瓜" blog; please keep the source when reposting: http://molewan.blog.51cto.com/287340/1763269
Tags: linux6 udev
The editors have also collected the following related material, which may be helpful:
How to set up udev for RAC ASM
Disks in use on the machine:
[root@rac-db2 ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00081b38

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        5100    40960000   83  Linux
/dev/sda2            5100        5222      982016   82  Linux swap / Solaris

Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4af1bcf1

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x45528f9a

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdd: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf65dbdac

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sde: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe0e7f25c

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdf: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4b720782

   Device Boot      Start         End      Blocks   Id  System
Disks sdb, sdc, sdd, sde and sdf are the ones given to Oracle ASM.
The udev configuration procedure follows:
1、Get each disk's SCSI ID with the following commands:
[root@rac-db1 ~]# scsi_id -g -u /dev/sdb
36000c29a982d7d5fe0af684b22046b34
[root@rac-db1 ~]# scsi_id -g -u /dev/sdc
36000c294085e61228085d870db2173af
[root@rac-db1 ~]# scsi_id -g -u /dev/sdd
36000c29a1a053b53514c7202e9fd2658
[root@rac-db1 ~]# scsi_id -g -u /dev/sde
36000c29ffa47abfd932ddf3a2f598633
[root@rac-db1 ~]# scsi_id -g -u /dev/sdf
36000c292926acffd78b0a49f27d62c51
2、Edit the udev rules file
Edit /etc/udev/rules.d/99-oracle-asmdevices.rules with the following content:
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c29a982d7d5fe0af684b22046b34", NAME="asmgrid_disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c294085e61228085d870db2173af", NAME="asmgrid_disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c29a1a053b53514c7202e9fd2658", NAME="asmgrid_disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c29ffa47abfd932ddf3a2f598633", NAME="asmgrid_disk4", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="36000c292926acffd78b0a49f27d62c51", NAME="asmgrid_disk5", OWNER="grid", GROUP="asmadmin", MODE="0660"
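A side note: on RHEL/OL 6 the scsi_id binary no longer accepts the -g -u -s %p form used in these rules. A rule with equivalent intent there might look like the following (a hedged sketch using the RHEL 6 option syntax, shown with the first disk's identifier):

```
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29a982d7d5fe0af684b22046b34", NAME="asmgrid_disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
```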
3、Load the rules file
[root@rac-db1 /]# udevadm control reload-rules
4、Restart the udev service
[root@rac-db1 /]# start_udev
Starting udev: [ OK ]
5、Check the result
[root@rac-db1 /]# ls -l /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Aug 31 11:11 /dev/asmgrid_disk1
brw-rw---- 1 grid asmadmin 8, 32 Aug 31 11:11 /dev/asmgrid_disk2
brw-rw---- 1 grid asmadmin 8, 48 Aug 31 11:11 /dev/asmgrid_disk3
brw-rw---- 1 grid asmadmin 8, 64 Aug 31 11:11 /dev/asmgrid_disk4
brw-rw---- 1 grid asmadmin 8, 80 Aug 31 11:11 /dev/asmgrid_disk5
[root@rac-db1 proc]# udevadm info --query=all --path=/block/sdc
P: /devices/pci0000:00/0000:00:11.0/0000:02:05.0/host3/target3:0:1/3:0:1:0/block/sdc
N: asmgrid_disk2
W: 50
S: block/8:32
S: disk/by-id/scsi-36000c294085e61228085d870db2173af
S: disk/by-path/pci-0000:02:05.0-scsi-0:0:1:0
S: disk/by-id/wwn-0x6000c294085e61228085d870db2173af
E: UDEV_LOG=3
E: DEVPATH=/devices/pci0000:00/0000:00:11.0/0000:02:05.0/host3/target3:0:1/3:0:1:0/block/sdc
E: MAJOR=8
E: MINOR=32
E: DEVNAME=/dev/asmgrid_disk2
E: DEVTYPE=disk
E: SUBSYSTEM=block
E: ID_SCSI=1
E: ID_VENDOR=VMware
E: ID_VENDOR_ENC=VMware\x20\x20
E: ID_MODEL=Virtual_disk
E: ID_MODEL_ENC=Virtual\x20disk\x20\x20\x20\x20
E: ID_REVISION=1.0
E: ID_TYPE=disk
E: ID_SERIAL_RAW=36000c294085e61228085d870db2173af
E: ID_SERIAL=36000c294085e61228085d870db2173af
E: ID_SERIAL_SHORT=6000c294085e61228085d870db2173af
E: ID_WWN=0x6000c294085e6122
E: ID_WWN_VENDOR_EXTENSION=0x8085d870db2173af
E: ID_WWN_WITH_EXTENSION=0x6000c294085e61228085d870db2173af
E: ID_SCSI_SERIAL=6000c294085e61228085d870db2173af
E: ID_BUS=scsi
E: ID_PATH=pci-0000:02:05.0-scsi-0:0:1:0
E: ID_PART_TABLE_TYPE=dos
E: LVM_SBIN_PATH=/sbin
E: UDISKS_PRESENTATION_NOPOLICY=0
E: UDISKS_PARTITION_TABLE=1
E: UDISKS_PARTITION_TABLE_SCHEME=mbr
E: UDISKS_PARTITION_TABLE_COUNT=0
E: DEVLINKS=/dev/block/8:32 /dev/disk/by-id/scsi-36000c294085e61228085d870db2173af /dev/disk/by-path/pci-0000:02:05.0-scsi-0:0:1:0 /dev/disk/by-id/wwn-0x6000c294085e61228085d870db2173af
[root@rac-db1 proc]#
After udev is configured, the operating system's fdisk command no longer shows the disks used by ASM:
[root@rac-db1 proc]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000cc4c1

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        5100    40960000   83  Linux
/dev/sda2            5100        5222      982016   82  Linux swap / Solaris
For the other nodes in the RAC, copy 99-oracle-asmdevices.rules over and repeat steps 3 and 4.
To install Oracle Database 11g RAC, udev must be configured so that the ASM storage device names persist...
Using the udev service to fix RAC ASM storage device names
In <Why ASMLIB and why not?> we discussed the pros and cons of ASMLIB, a kernel support library designed specifically for the Oracle Automatic Storage Management feature, and recommended replacing it with the mature udev approach. Here are the concrete steps for configuring udev; they are fairly simple:
1. Confirm that the required udev package is installed on all RAC nodes
[root@rh2 ~]# rpm -qa|grep udev
udev-095-14.21.el5
2. Use scsi_id to obtain each block device's unique identifier; assume LUNs sdc through sdp already exist on the system
for i in c d e f g h i j k l m n o p ;
do
echo "sd$i" "`scsi_id -g -u -s /block/sd$i` ";
done
sdc 1IET_00010001
sdd 1IET_00010002
sde 1IET_00010003
sdf 1IET_00010004
sdg 1IET_00010005
sdh 1IET_00010006
sdi 1IET_00010007
sdj 1IET_00010008
sdk 1IET_00010009
sdl 1IET_0001000a
sdm 1IET_0001000b
sdn 1IET_0001000c
sdo 1IET_0001000d
sdp 1IET_0001000e
The listing above pairs each block device name with its unique identifier.
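Rather than typing fourteen rules by hand, a listing of "device identifier" pairs like the one above can be fed to a short script that prints one rule per pair. A minimal sketch, under the assumption that sequential names asm-disk1, asm-disk2, ... are acceptable (the actual file below names the first two devices ocr1 and ocr2, which would still need a manual edit):

```shell
#!/bin/bash
# Read "device identifier" pairs on stdin (as produced by the scsi_id loop
# above) and print one udev rule per pair, numbering the NAMEs sequentially.
gen_rules() {
  local n=0 dev id
  while read -r dev id; do
    n=$((n + 1))
    printf 'KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %%p", RESULT=="%s", NAME="asm-disk%d", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
      "$id" "$n"
  done
}

# Example with the first two identifiers from the listing:
printf 'sdc 1IET_00010001\nsdd 1IET_00010002\n' | gen_rules
```
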
3. Create the udev rules file
First switch to the rules directory:
[root@rh2 ~]# cd /etc/udev/rules.d
Create the rules file:
[root@rh2 rules.d]# touch 99-oracle-asmdevices.rules
[root@rh2 rules.d]# cat 99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010001", NAME="ocr1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010002", NAME="ocr2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010003", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010004", NAME="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010005", NAME="asm-disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010006", NAME="asm-disk4", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010007", NAME="asm-disk5", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010008", NAME="asm-disk6", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_00010009", NAME="asm-disk7", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_0001000a", NAME="asm-disk8", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_0001000b", NAME="asm-disk9", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_0001000c", NAME="asm-disk10", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_0001000d", NAME="asm-disk11", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -s %p", RESULT=="1IET_0001000e", NAME="asm-disk12", OWNER="grid", GROUP="asmadmin", MODE="0660"
RESULT is the output of /sbin/scsi_id -g -u -s %p ("Match the returned string of the last PROGRAM call. This key may be used in any following rule after a PROGRAM call.").
Fill in, in order, the unique identifiers obtained above.
OWNER is the user that installs Grid Infrastructure, normally grid in 11gR2; GROUP is asmadmin.
MODE 0660 is fine.
NAME is the device name after the udev mapping.
It is advisable to create a separate diskgroup for the OCR and voting disks; to make them easy to tell apart, name the devices dedicated to that diskgroup ocr1..ocrn.
The remaining disks can be named after their actual purpose or their diskgroup name.
4. Copy the rules file to the other nodes
[root@rh2 rules.d]# scp 99-oracle-asmdevices.rules Other_node:/etc/udev/rules.d
5. Restart the udev service on all nodes, or simply reboot the servers
[root@rh2 rules.d]# /sbin/udevcontrol reload_rules
[root@rh2 rules.d]# /sbin/start_udev
Starting udev: [ OK ]
6. Check that the devices are in place
[root@rh2 rules.d]# cd /dev
[root@rh2 dev]# ls -l ocr*
brw-rw---- 1 grid asmadmin 8, 32 Jul 10 17:31 ocr1
brw-rw---- 1 grid asmadmin 8, 48 Jul 10 17:31 ocr2
[root@rh2 dev]# ls -l asm-disk*
brw-rw---- 1 grid asmadmin 8, 64 Jul 10 17:31 asm-disk1
brw-rw---- 1 grid asmadmin 8, 208 Jul 10 17:31 asm-disk10
brw-rw---- 1 grid asmadmin 8, 224 Jul 10 17:31 asm-disk11
brw-rw---- 1 grid asmadmin 8, 240 Jul 10 17:31 asm-disk12
brw-rw---- 1 grid asmadmin 8, 80 Jul 10 17:31 asm-disk2
brw-rw---- 1 grid asmadmin 8, 96 Jul 10 17:31 asm-disk3
brw-rw---- 1 grid asmadmin 8, 112 Jul 10 17:31 asm-disk4
brw-rw---- 1 grid asmadmin 8, 128 Jul 10 17:31 asm-disk5
brw-rw---- 1 grid asmadmin 8, 144 Jul 10 17:31 asm-disk6
brw-rw---- 1 grid asmadmin 8, 160 Jul 10 17:31 asm-disk7
brw-rw---- 1 grid asmadmin 8, 176 Jul 10 17:31 asm-disk8
brw-rw---- 1 grid asmadmin 8, 192 Jul 10 17:31 asm-disk9
Ways of creating ASM disks for an Oracle database?
The question:
In Oracle, what methods are there for creating ASM disks?
The answer:
ASM disks can be created with ASMLIB, with udev, or by faking them. The faking approach needs no additional disks: space is carved out of an existing file system and used for ASM disks, as follows:
mkdir -p /oracle/asmdisk
dd if=/dev/zero of=/oracle/asmdisk/disk1 bs=1024k count=1000
dd if=/dev/zero of=/oracle/asmdisk/disk2 bs=1024k count=1000
/sbin/losetup /dev/loop1 /oracle/asmdisk/disk1
/sbin/losetup /dev/loop2 /oracle/asmdisk/disk2
raw /dev/raw/raw1 /dev/loop1
raw /dev/raw/raw2 /dev/loop2
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
chown oracle:dba /dev/raw/raw1
chown oracle:dba /dev/raw/raw2
Add the following to the /etc/rc.local file:
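The source breaks off here. What presumably belongs in /etc/rc.local is a repeat of the loopback and raw-device bindings above, so they survive a reboot (a hedged reconstruction, not text from the original):

```
/sbin/losetup /dev/loop1 /oracle/asmdisk/disk1
/sbin/losetup /dev/loop2 /oracle/asmdisk/disk2
raw /dev/raw/raw1 /dev/loop1
raw /dev/raw/raw2 /dev/loop2
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
chown oracle:dba /dev/raw/raw1
chown oracle:dba /dev/raw/raw2
```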
Can Oracle ASM be installed on Linux 7?
Yes. The installation and configuration steps are:
1 Install Oracle ASM
Network install with yum:
# yum install oracleasm oracleasm-support
Local install:
# cd /mnt/install_DVD/Packages
# rpm -qi oracleasm
# rpm -qi oracleasm-support
# rpm -qi oracleasm-support-2.1.8-1.el6.x86_64.rpm
2 Use fdisk to create a primary partition on each disk
# fdisk /dev/sdb
When done, check the result with fdisk -l.
Note: the partitions need no file system; ASM is itself a file system and manages the raw devices directly.
3 Create the ASM disks
# /usr/sbin/oracleasm configure -i
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
# /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
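After the createdisk calls, the volumes can be checked before handing them to Grid Infrastructure (a hedged sketch; run as root, and note that /dev/oracleasm/disks is where ASMLib normally exposes the volumes):

```
# /etc/init.d/oracleasm listdisks
# ls -l /dev/oracleasm/disks/
```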
I want to create an Oracle database on ASM under Linux, but I have no ASM disks. What should I do?
I happen to have just put together some material on exactly this, so here it is.
Preparing the ASM environment
ASM manages its disks through a database instance named "+ASM", so the ASM instance must be running before ASM disks can be configured. Note also that the ASM instance must start before the database instance, run alongside it, and shut down after it. The ASM instance can be created and deleted with the DBCA tool; choosing "Configure Automatic Storage Management" on DBCA's first screen opens the ASM configuration pages.
Run the script DBCA points you to in order to configure and start CSS (Cluster Synchronization Service); be sure to run it as root. The run looks like this:
# /u01/app/oracle/product/10.2.0/db_1/bin/localconfig add
/etc/oracle does not exist. Creating it now.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized
Adding to inittab
Startup will be queued to init within 90 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
s1
CSS is active on all nodes.
Oracle CSS service is installed and running under init(1M)
ASM can use raw devices or ASMLib; since raw devices are more tedious to maintain, this article covers only ASMLib. To prepare ASM disks with ASMLib on a Linux system, the related packages must be installed. The download link is:
http://www.oracle.com/technology/tech/linux/asmlib/index.html
When downloading, pick the builds matching your operating system and kernel version. I downloaded these three packages:
oracleasm-2.6.18-164.el5-2.0.5-1.el5.i686.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-support-2.1.4-1.el5.i386.rpm
Note in particular that the first package must match your Linux kernel version. (My kernel is actually 2.6.18-155.el5, but the official site has no exactly matching build, so I had to use the 2.6.18-164.el5 one; the workaround is described below.) Next install the packages, which takes just an rpm command:
# rpm -ivh oracleasm*
Loading the oracleasm module may now fail; I ran into this because I had no oracleasm package matching my kernel. After some digging I found that the package had installed the module file oracleasm.ko into /lib/modules/2.6.18-164.el5/kernel/drivers/addon/oracleasm, while my default module path is /lib/modules/2.6.18-155.el5, so I decided to create the directory and copy the file by hand:
# mkdir -p /lib/modules/2.6.18-155.el5/kernel/drivers/addon/oracleasm
# cp oracleasm.ko /lib/modules/2.6.18-155.el5/kernel/drivers/addon/oracleasm
After that, loading the oracleasm module goes through:
# depmod -a
# modprobe oracleasm
Finally, do the initial configuration of the oracleasm service:
# service oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
Good: the basic ASM environment is now in place, and the disks can be prepared.
Adding an ASM diskgroup
To get ASM's disk load balancing and redundancy, I will build the diskgroup from four disks, so four disks must be added to the Linux system. Since this environment runs under VMware, adding disks is easy: just choose to add hardware in the VMware settings. Current VMware versions support hot-plugging disks, so there is no need to shut the Linux system down; simply add the disks. After adding all four, make Linux recognize them immediately by running:
# echo 'scsi add-single-device 0 0 1 0' > /proc/scsi/scsi
# echo 'scsi add-single-device 0 0 2 0' > /proc/scsi/scsi
# echo 'scsi add-single-device 0 0 3 0' > /proc/scsi/scsi
# echo 'scsi add-single-device 0 0 4 0' > /proc/scsi/scsi
Running fdisk -l now shows four new disks: /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde. Use fdisk to create one partition on each of them (steps omitted; if unfamiliar, see the Linux fdisk documentation).
Then add the ASM disks with oracleasm createdisk:
# oracleasm createdisk VOL1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
Add the four partitions /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1 in turn; when done, check:
# oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
Now running DBCA's ASM configuration wizard can create the ASM diskgroup. In the wizard choose "Configure Automatic Storage Management", then click "Create New" to reach the diskgroup configuration screen. Enter a diskgroup name such as "dg1". For the redundancy level I chose Normal, putting VOL1 and VOL2 in one failure group and VOL3 and VOL4 in another.
The same can be done by connecting to the ASM instance with sqlplus and issuing commands.
To connect to the ASM instance, first set the ORACLE_SID environment variable, then connect with sqlplus:
# ORACLE_SID=+ASM
# sqlplus / as sysdba
The statement to create the disk group is:
SQL> create diskgroup dg1 normal redundancy
failgroup fg1 disk 'ORCL:VOL1','ORCL:VOL2'
failgroup fg2 disk 'ORCL:VOL3','ORCL:VOL4';
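After the statement completes, the result can be checked from the same session (a hedged example; the exact values depend on the disks):

```
SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;
```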
With that, the ASM diskgroup is ready, and at last a database can be created on ASM. Use DBCA again and choose "Create Database" to enter the database creation wizard. Follow the wizard as usual; the only difference is choosing the ASM type when selecting the storage mechanism.
Help: writing udev rules for ASM with multipath on CentOS 6.2
The steps are:
1. Edit /etc/scsi_id.config (create it if it does not exist) and add the line:
options=--whitelisted --replace-whitespace
Note: in my tests this step could be skipped.
2. Get the uuid of each disk to bind as an ASM disk. Say we want /dev/sdc and /dev/sdd as ASM disks:
# scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
1ATA_VBOX_HARDDISK_VB36a9e548-1838194a
# scsi_id --whitelisted --replace-whitespace --device=/dev/sdd
1ATA_VBOX_HARDDISK_VB9808fc7f-cdf35030
3. Write the udev rules file: put the scsi_id command in PROGRAM and the uuid returned by the commands above in RESULT. This is no different from OEL5 except that the scsi_id command syntax has changed.
vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB36a9e548-1838194a", NAME="asm-disk1", OWNER="grid", GROUP="dba", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB9808fc7f-cdf35030", NAME="asm-disk2", OWNER="grid", GROUP="dba", MODE="0660"
4. Test with udevadm. Note that udevadm does not accept device node names like /dev/sdc; you must use the raw sysfs device path, such as /sys/block/sdc.
udevadm test /sys/block/sdc
udevadm info --query=all --path=/sys/block/sdc
udevadm info --query=all --name=asm-disk1
Output similar to the following means the test passed and the /dev/sdc device will be bound to /dev/asm-disk1 once udev starts:
udevadm_test: UDEV_LOG=6
udevadm_test: DEVPATH=/devices/pci0000:00/0000:00:0d.0/host4/target4:0:0/4:0:0:0/block/sdc
udevadm_test: MAJOR=8
udevadm_test: MINOR=32
udevadm_test: DEVNAME=/dev/asm-disk1
udevadm_test: DEVTYPE=disk
udevadm_test: ACTION=add
udevadm_test: SUBSYSTEM=block
5. Start udev
# /sbin/start_udev
6. Check that the devices are bound correctly
# ls -l /dev/asm*
brw-rw---- 1 grid dba 8, 32 Oct 26 21:24 /dev/asm-disk1
brw-rw---- 1 grid dba 8, 48 Oct 26 21:17 /dev/asm-disk2