Installing Oracle 11gR2 RAC (ASM) in a VirtualBox Environment

OS: Oracle Linux 6.5; Oracle Database 11.2.0.1

The journey of installing ASM and RAC finally begins. A few things to think through before starting:

  • How to plan the network configuration (multiple NICs, connectivity, public vs. private networks, eth0, eth1, the VIPs, and the SCAN IP)


  • How to plan the shared disk configuration (and how to implement it in VirtualBox)
  • ASM configuration and permissions
  • SSH user equivalence between the two nodes (so rac1 can copy the installation media to rac2)
  • NTP time synchronization
  • DNS resolution (replaced here with /etc/hosts)
  • How to fix the CRS problems that come up

 

 

  • Preparation on Linux

 

vi /etc/selinux/config        # permanently disable SELinux

SELINUX=disabled              # then reboot the server

 

chkconfig iptables off        # permanently disables the firewall from the next reboot
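After the reboot, a quick check that both are really off (a minimal sketch):

getenforce                 # expect: Disabled
service iptables status    # expect: iptables: Firewall is not running.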

 

Install the required packages:

mount /dev/cdrom /mnt 
cd /mnt/Server/Packages 
rpm -Uvh binutils-2*x86_64*   
rpm -Uvh glibc-2*x86_64* nss-softokn-freebl-3*x86_64*  
rpm -Uvh glibc-2*i686* nss-softokn-freebl-3*i686*   
rpm -Uvh compat-libstdc++-33*x86_64*   
rpm -Uvh glibc-common-2*x86_64*   
rpm -Uvh glibc-devel-2*x86_64*   
rpm -Uvh glibc-devel-2*i686*   
rpm -Uvh glibc-headers-2*x86_64*  
rpm -Uvh elfutils-libelf-0*x86_64*   
rpm -Uvh elfutils-libelf-devel-0*x86_64* 
rpm -Uvh gcc-4*x86_64*   
rpm -Uvh gcc-c++-4*x86_64*   
rpm -Uvh ksh-*x86_64*   
rpm -Uvh libaio-0*x86_64*   
rpm -Uvh libaio-devel-0*x86_64*   
rpm -Uvh libaio-0*i686*   
rpm -Uvh libaio-devel-0*i686*   
rpm -Uvh libgcc-4*x86_64*   
rpm -Uvh libgcc-4*i686*   
rpm -Uvh libstdc++-4*x86_64*   
rpm -Uvh libstdc++-4*i686*   
rpm -Uvh libstdc++-devel-4*x86_64*   
rpm -Uvh make-3.81*x86_64*   
rpm -Uvh numactl-devel-2*x86_64*   
rpm -Uvh sysstat-9*x86_64*   
rpm -Uvh compat-libstdc++-33*i686*   
rpm -Uvh compat-libcap*
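If the nodes can reach a yum repository, the same set can be installed in one shot instead of individual rpm commands (a sketch using the package names above; the .i686 suffix pulls the 32-bit variants):

yum install -y binutils compat-libstdc++-33 compat-libstdc++-33.i686 \
    elfutils-libelf elfutils-libelf-devel gcc gcc-c++ \
    glibc glibc.i686 glibc-common glibc-devel glibc-devel.i686 glibc-headers \
    ksh libaio libaio.i686 libaio-devel libaio-devel.i686 \
    libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel \
    make numactl-devel sysstat compat-libcap1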

 

Check whether the following packages are installed:

rpm -q binutils compat-libstdc++-33 compat-gcc-34-c++ elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers kernel-headers ksh libaio libaio-devel libgcc libgomp libstdc++ libstdc++-devel make sysstat numactl-devel unixODBC unixODBC-devel openmotif22 openmotif compat-db libXp 

 

The output looks like this:

 

binutils-2.20.51.0.2-5.36.el6.x86_64
compat-libstdc++-33-3.2.3-69.el6.x86_64
compat-libstdc++-33-3.2.3-69.el6.i686

package compat-gcc-34-c++ is not installed

elfutils-libelf-0.152-1.el6.x86_64
elfutils-libelf-devel-0.152-1.el6.x86_64
gcc-4.4.7-4.el6.x86_64
gcc-c++-4.4.7-4.el6.x86_64
glibc-2.12-1.132.el6.x86_64
glibc-2.12-1.132.el6.i686
glibc-common-2.12-1.132.el6.x86_64
glibc-devel-2.12-1.132.el6.x86_64
glibc-devel-2.12-1.132.el6.i686
glibc-headers-2.12-1.132.el6.x86_64
kernel-headers-2.6.32-431.el6.x86_64
ksh-20120801-10.el6.x86_64
libaio-0.3.107-10.el6.x86_64
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6.x86_64
libaio-devel-0.3.107-10.el6.i686
libgcc-4.4.7-4.el6.x86_64
libgcc-4.4.7-4.el6.i686
libgomp-4.4.7-4.el6.x86_64
libstdc++-4.4.7-4.el6.x86_64
libstdc++-4.4.7-4.el6.i686
libstdc++-devel-4.4.7-4.el6.x86_64
make-3.81-20.el6.x86_64
sysstat-9.0.4-22.el6.x86_64
numactl-devel-2.0.7-8.el6.x86_64

package unixODBC is not installed
package unixODBC-devel is not installed
package openmotif22 is not installed

openmotif-2.3.3-6.1.el6_4.x86_64

package compat-db is not installed

libXp-1.0.0-15.1.el6.x86_64

 

Install the packages that were reported missing:

# rpm -ivh compat-db-4.6.21-15.el6.x86_64.rpm compat-db42-4.2.52-15.el6.x86_64.rpm compat-db43-4.3.29-15.el6.x86_64.rpm   
# rpm -ivh compat-gcc-34-c++-3.4.6-19.el6.x86_64.rpm compat-gcc-34-3.4.6-19.el6.x86_64.rpm   
# rpm -ivh unixODBC-2.2.14-12.el6_3.x86_64.rpm unixODBC-devel-2.2.14-12.el6_3.x86_64.rpm   
# rpm -ivh openmotif-2.3.3-6.1.el6_4.x86_64.rpm openmotif22-2.2.3-19.el6.x86_64.rpm libXp-1.0.0-15.1.el6.x86_64.rpm

 

On every node, create the users, groups, and home directories for the Oracle Grid installation, and set the permissions.

/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1020 asmadmin
/usr/sbin/groupadd -g 1021 asmdba
/usr/sbin/groupadd -g 1022 asmoper
/usr/sbin/groupadd -g 1031 dba
/usr/sbin/groupadd -g 1032 oper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01
mkdir /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
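Verify on each node that the accounts came out as intended (a quick check):

id grid      # expect uid=1100, gid=1000(oinstall), groups include asmadmin,asmdba,asmoper,dba,oper
id oracle    # expect uid=1101, gid=1000(oinstall), groups include dba,asmdba,oper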

 

 

Set the kernel parameters:

# vi /etc/sysctl.conf
fs.aio-max-nr = 1048576 
fs.file-max = 6815744   
kernel.shmall = 2097152   
kernel.shmmax = 2147483648   
kernel.shmmni = 4096   
kernel.sem = 250 32000 100 128   
net.ipv4.ip_local_port_range = 9000 65500   
net.core.rmem_default = 4194304   
net.core.rmem_max = 4194304   
net.core.wmem_default = 262144   
net.core.wmem_max = 1048576

 

sysctl -p
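A spot-check that the new values are live (a minimal sketch; sysctl accepts multiple variable names):

sysctl kernel.shmmax net.core.wmem_max    # should print the values set above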

 

Set the file descriptor and process limits:

cat >> /etc/security/limits.conf << EOF 
oracle  soft  nproc  2047   
oracle  hard  nproc  16384   
oracle  soft  nofile  1024   
oracle  hard  nofile  65536   
oracle  soft  stack  10240   
EOF

(The grid user needs the same limits; repeat the block with grid in place of oracle.)

 

Configure the login PAM module so the limits are applied at login:

cat >> /etc/pam.d/login << EOF 
session required /lib64/security/pam_limits.so   
EOF
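A quick way to confirm the limits take effect at login (a minimal check, run as root):

su - oracle -c 'ulimit -Sn'    # soft nofile, expect 1024
su - oracle -c 'ulimit -Hn'    # hard nofile, expect 65536
su - oracle -c 'ulimit -Hu'    # hard nproc, expect 16384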

 

Oracle Linux also ships a preinstall package that automates much of this preparation (kernel parameters, limits, the oracle user); install it as well:

 cd /mnt/install_DVD

 cd Packages

 ll | grep preinstall

-rw-r--r-- 1 root root 15524 Jan 16 2013 oracle-rdbms-server-11gR2-preinstall-1.0-7.el6.x86_64.rpm

 rpm -ivh oracle-rdbms-server-11gR2-preinstall-1.0-7.el6.x86_64.rpm

 

  • Network configuration

In VirtualBox, add a second virtual NIC (eth1) to each VM to meet the server requirements.

Each node needs at least two NICs: one public network interface and one private network interface for the interconnect (heartbeat).

The VIPs and the SCAN IP must not be configured in the operating system's network settings; Grid Infrastructure brings them up itself.

 

 vi /etc/hosts

 

#public
192.168.0.130 rac1
192.168.0.131 rac2

#private
192.168.0.135 rac1-priv
192.168.0.136 rac2-priv

#Virtual
192.168.0.132 rac1-vip
192.168.0.133 rac2-vip

#scan
192.168.0.201 rac-scan

Make sure the two nodes can reach each other, and configure the network on both nodes.
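For example, from rac1 (a minimal check; run the mirror-image commands from rac2):

ping -c 2 rac2         # public network
ping -c 2 rac2-priv    # private interconnect
# rac1-vip/rac2-vip and rac-scan will not respond yet: those addresses only come up after Grid Infrastructure is installed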

Renaming the NICs

After cloning the virtual machine, the clone's NICs may come up as eth2 and eth3; rename them back to eth0 and eth1 as follows:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

and set DEVICE=eth0

vi /etc/sysconfig/network-scripts/ifcfg-eth1

and set DEVICE=eth1

Edit /etc/udev/rules.d/70-persistent-net.rules:

Comment out the old eth0/eth1 entries and change the new eth2/eth3 entries to eth0 and eth1, e.g.:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:bb:41:2b", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

After the changes, restart the network:

service network restart

If the changes do not take effect, reboot the system.
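To confirm the renaming worked (a small check):

ip addr show eth0    # eth0 and eth1 should both exist with the expected MAC addresses
ip addr show eth1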

 

Disable ntpd (with NTP removed entirely, Oracle's CTSS takes over time synchronization; NTP is reconfigured later in this guide):

# chkconfig ntpd off

# rm /etc/ntp.conf    (or keep a backup: mv /etc/ntp.conf /etc/ntp.conf.old)

# rm /var/run/ntpd.pid

 

 

  • Configure SSH user equivalence

SSH equivalence must be configured for both the oracle and the grid user.

 

Generate keys on each node:

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ mkdir ~/.ssh

[oracle@rac1 ~]$ chmod 700 ~/.ssh

[oracle@rac1 ~]$ ssh-keygen -t rsa

[oracle@rac1 ~]$ ssh-keygen -t dsa

[root@rac2 ~]# su - oracle

[oracle@rac2 ~]$ mkdir ~/.ssh

[oracle@rac2 ~]$ chmod 700 ~/.ssh

[oracle@rac2 ~]$ ssh-keygen -t rsa

[oracle@rac2 ~]$ ssh-keygen -t dsa

Collect all public keys into authorized_keys on node 1:

[oracle@rac1 ~]$ touch ~/.ssh/authorized_keys

[oracle@rac1 ~]$ cd ~/.ssh

[oracle@rac1 .ssh]$ ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys

[oracle@rac1 .ssh]$ ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys

[oracle@rac1 .ssh]$ ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys

[oracle@rac1 .ssh]$ ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys

From rac1, copy the authorized_keys file containing the public keys to rac2:

[oracle@rac1 .ssh]$ pwd

/home/oracle/.ssh

[oracle@rac1 .ssh]$ scp authorized_keys rac2:'/home/oracle/.ssh'

oracle@rac2's password:

authorized_keys 100% 1644 1.6KB/s 00:00

Set permissions on the key file

Run on every node:

$ chmod 600 ~/.ssh/authorized_keys

Enable user equivalence for the current session

On the node where the OUI will run (rac1 here), as the oracle user:

[oracle@rac1 .ssh]$ exec /usr/bin/ssh-agent $SHELL

[oracle@rac1 .ssh]$ ssh-add

Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)

Verify that the SSH configuration works

As the oracle user, run on every node:

ssh rac1 date

ssh rac2 date

ssh rac1-priv date

ssh rac2-priv date

If each command prints the date without prompting for a password, SSH equivalence is configured correctly. All of the commands above must be run on both nodes, and each command requires typing yes the first time it runs.

If these commands are not run first, the clusterware installation will fail with the following error even though SSH equivalence is in place:

The specified nodes are not clusterable

This is because, even after SSH is configured, the first connection to each host must be confirmed with yes before access is truly password-free.
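A compact way to drive all the first-time yes prompts and verify every combination, run as both oracle and grid on each node (a sketch):

for h in rac1 rac2 rac1-priv rac2-priv; do
    ssh $h date    # should print the date with no password prompt
done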

 

  • ASM disk configuration

In VirtualBox, add disks to rac1, choosing the fixed-size option.


Choose fixed size, 10 GB, and add three disks: share1.vdi, share2.vdi, share3.vdi.


Open the Virtual Media Manager and mark the new disks as shareable.


 

On the rac2 node, attach the existing disks.
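The same disk setup can also be scripted from the host with VBoxManage instead of the GUI (a sketch: the VM names rac1/rac2 and the storage controller name "SATA" are assumptions that must match your VMs):

for i in 1 2 3; do
    VBoxManage createhd --filename share$i.vdi --size 10240 --variant Fixed    # 10 GB fixed-size disk
    VBoxManage modifyhd share$i.vdi --type shareable                           # allow attaching to both VMs
    VBoxManage storageattach rac1 --storagectl "SATA" --port $i --device 0 --type hdd --medium share$i.vdi
    VBoxManage storageattach rac2 --storagectl "SATA" --port $i --device 0 --type hdd --medium share$i.vdi
done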


 

 

On rac1, partition each shared disk with fdisk (the session below shows /dev/sdb; /dev/sdc and /dev/sdd are done the same way):

[root@rac1 Desktop]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xa812137c.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305):
Using default value 1305

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
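The same single-partition layout can be applied to the remaining disks without retyping the answers (a sketch; the piped answers mirror the interactive session above, so double-check before writing):

for d in /dev/sdc /dev/sdd; do
    # n = new partition, p = primary, 1 = partition number,
    # two empty lines accept the default first/last cylinders, w = write
    echo -e "n\np\n1\n\n\nw" | fdisk $d
done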

The ASMLib configuration step below must be performed on both rac1 and rac2:

[root@rac1 ~]# /usr/sbin/oracleasm configure -i

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

 

[root@rac1 Desktop]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@rac1 Desktop]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
Marking disk "VOL1" as an ASM disk: [ OK ]
[root@rac1 Desktop]# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
Marking disk "VOL2" as an ASM disk: [ OK ]
[root@rac1 Desktop]# /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
Marking disk "VOL3" as an ASM disk: [ OK ]
[root@rac1 Desktop]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

 

On the rac2 node, run:

/usr/sbin/oracleasm init

 /usr/sbin/oracleasm scandisks
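On either node, confirm that all three volumes are visible:

/usr/sbin/oracleasm listdisks    # expect VOL1, VOL2, VOL3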

 

  • Environment variable setup

Set the environment variables for both users.

Environment variables for the grid user:

vi .bash_profile

#Grid Settings
TMP=/tmp;export TMP
TMPDIR=$TMP;export TMPDIR

ORACLE_SID=+ASM1;export ORACLE_SID    # use +ASM2 on rac2

ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/11.2.0/db_1; export DB_HOME

ORACLE_HOME=$GRID_HOME;export ORACLE_HOME

NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS";export NLS_DATE_FORMAT

BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi

 

Environment variables for the oracle user:

#oracle Settings
TMP=/tmp;export TMP
TMPDIR=$TMP;export TMPDIR

ORACLE_SID=RAC1;export ORACLE_SID    # set per node; dbca below names the instances RAC11 and RAC12, so adjust to match

ORACLE_BASE=/u01/app/oracle;export ORACLE_BASE
GRID_HOME=/u01/app/11.2.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/11.2.0/db_1; export DB_HOME

ORACLE_HOME=$DB_HOME;export ORACLE_HOME

NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS";export NLS_DATE_FORMAT

BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi

 

  • Check the configuration of each node

 

[root@node1 ~]# su - grid

[grid@node1 ~]$ cd grid_installation

[grid@node1 grid_installation]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

 

  • Install Grid Infrastructure

Log in to rac1 as the grid user and run ./runInstaller.

 

(Screenshots: step through the OUI wizard screens.)

Many of the flagged prerequisites are 32-bit packages; during the checks, a failure can be ignored if a newer version or the 64-bit variant of the package is already installed.


 

Fixing the CRS startup failure (rootcrs.pl line 443)

When the root.sh script runs at the end, it fails with:

Adding daemon to inittab
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.
 
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start: Inappropriate ioctl for device at /u01/app/11.2.0/grid/crs/install/roothas.pl line 296.

 

The fix:

While root.sh is running, as soon as "Adding daemon to inittab" appears, run the following as root in another window (once root.sh finishes successfully, the dd command can be cancelled with Ctrl+C):

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1
 
Then create the following upstart job so ohasd is respawned at boot:
#vi /etc/init/oracle-ohasd.conf

start on runlevel [35]
stop on runlevel [!35]
respawn
exec /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
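After saving the file, the job can be started right away instead of rebooting (a small sketch using upstart's initctl):

initctl start oracle-ohasd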
 
The rac-scan verification error can be ignored: it is caused by the absence of DNS (the SCAN name resolves through /etc/hosts), and every node can already ping it.

 

Fixing the ntpd time synchronization check

 

[root@node1 ~]#  vi /etc/ntp.conf  
    
 #New ntp server added by Robinson  
 server  127.127.1.0 prefer    # prefer the local clock as the time source  
 restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap    # only allow clients on the 192.168.0.* network to sync  
 broadcastdelay 0.008  
  
 [root@node2 ~]# vi /etc/ntp.conf  
    
 #New ntp server added by Robinson  
 server 192.168.0.130 prefer  
 broadcastdelay 0.008  
  
Edit the ntpd options on both nodes:  
 [root@node1 ~]# vi /etc/sysconfig/ntpd  
 # The following items added by Robinson  
 # Set to 'yes' to sync the hardware clock after a successful ntpdate  
 SYNC_HWCLOCK=yes      # keeps the hardware clock aligned with the system clock  
 OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"    # -x makes ntpd slew rather than step the clock, which the Oracle cluster checks require  
  
 # Note the two clock types in Linux: the system clock (kept by the kernel) and the hardware clock (the battery-backed BIOS clock).  
 # At boot, the system clock is initialized from the hardware clock; after that the system clock runs independently of it.  
  
 [root@node2 ~]# vi /etc/sysconfig/ntpd  
 # The following items added by Robinson  
 SYNC_HWCLOCK=yes  
 OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"    
  
Enable the ntpd service at boot:  
 [root@node1 ~]# chkconfig ntpd on  
 [root@node2 ~]# chkconfig ntpd on  
  
  
Start the NTP service on both nodes:  
 [root@node1 ~]# service ntpd stop  
 Shutting down ntpd: [FAILED]  
 [root@node1 ~]# service ntpd start  
 ntpd: Synchronizing with time server: [FAILED]  
 Starting ntpd: [  OK  ]  
  
 [root@node2 ~]# service ntpd restart  
 Shutting down ntpd: [  OK  ]  
 ntpd: Synchronizing with time server: [  OK  ]  
 Syncing hardware clock to system time [  OK  ]  
 Starting ntpd: [  OK  ]    
  
Check the NTP status:  
 [root@node1 ~]# ntpq -p  
      remote          refid      st t when poll reach  delay  offset  jitter  
 ==============================================================================  
  LOCAL(0)        .LOCL.          10 l  40  64    1    0.000    0.000  0.001  
    
 [root@node2 ~]# ntpq -p  
      remote          refid      st t when poll reach  delay  offset  jitter  
 ==============================================================================  
  node1.szdb.com  .INIT.          16 u  60  64    0    0.000    0.000  0.000  
  LOCAL(0)        .LOCL.          10 l  59  64    1    0.000    0.000  0.001  

 

Verify the Grid Infrastructure installation:

[root@rac1 oraInventory]# su - grid
[grid@rac1 ~]$ crsctl status resource -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.OCRVOTE.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.eons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.oc4j
1 OFFLINE OFFLINE
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1

 

  • Install the database software

Log in to rac1 as the oracle user and install the database software.

(Steps omitted here.)

 

  • Create the database with dbca

 

Before running dbca, run netca as the grid user to add a listener, then verify it:

crs_stat -t 

lsnrctl start 

lsnrctl status 

srvctl status listener 

 

Log in as the oracle user and run dbca. On the node selection screen, choose Select All (rac1 and rac2).

  

If dbca cannot find the ASM disks, change the group of the ASM devices to asmdba, or add the oracle user to the asmadmin group:

usermod -a -G asmadmin oracle

 

If datafile creation fails, fix the ownership and permissions under /dev/raw/:

chmod 660 /dev/raw/*

chown grid:asmadmin /dev/raw/*

 

Then reboot and run dbca again.


 

After the installation completes, log in to rac1 as oracle and check the status:

[oracle@rac1 ~]$ srvctl config database -d RAC1
Database unique name: RAC1
Database name: RAC1
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +OCRVOTE/RAC1/spfileRAC1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: RAC1
Database instances: RAC11,RAC12
Disk Groups: OCRVOTE
Services:
Database is administrator managed

 

[oracle@rac1 ~]$ srvctl status database -d RAC1
Instance RAC11 is running on node rac1
Instance RAC12 is running on node rac2

 

[oracle@rac2 ~]$ export ORACLE_SID=RAC12
[oracle@rac2 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sun Oct 23 22:17:36 2016

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select inst_name from v$active_instances;

INST_NAME
--------------------------------------------------------------------------------
rac1:RAC11
rac2:RAC12

 

 

[grid@rac1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE ONLINE rac1
ora....N1.lsnr ora....er.type ONLINE ONLINE rac2
ora.OCRVOTE.dg ora....up.type ONLINE ONLINE rac1
ora.asm ora.asm.type ONLINE ONLINE rac1
ora.eons ora.eons.type ONLINE ONLINE rac1
ora.gsd ora.gsd.type OFFLINE OFFLINE
ora....network ora....rk.type ONLINE ONLINE rac1
ora.oc4j ora.oc4j.type OFFLINE OFFLINE
ora.ons ora.ons.type ONLINE ONLINE rac1
ora....SM1.asm application ONLINE ONLINE rac1
ora....C1.lsnr application ONLINE ONLINE rac1
ora.rac1.db ora....se.type ONLINE ONLINE rac1
ora.rac1.gsd application OFFLINE OFFLINE
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip ora....t1.type ONLINE ONLINE rac1
ora....SM2.asm application ONLINE ONLINE rac2
ora....C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application OFFLINE OFFLINE
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip ora....t1.type ONLINE ONLINE rac2
ora.scan1.vip ora....ip.type ONLINE ONLINE rac2

 

 

  • Shutting down RAC

 

Log in to rac1 as oracle and run:

$ . oraenv
ORACLE_SID = [oracle] ? RAC1
The Oracle base has been set to /u01/app/oracle

$ srvctl stop database -d RAC1
$


Log in to rac1 as root and run:

# . oraenv
ORACLE_SID = [ractp1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle

# crsctl stop crs
...
CRS-4133: Oracle High Availability Services has been stopped.
#

Log in to rac2 as root and run:
# . oraenv
ORACLE_SID = [ractp1] ? +ASM1
The Oracle base remains unchanged with value /u01/app/oracle

# crsctl stop crs
...
CRS-4133: Oracle High Availability Services has been stopped.
#
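To bring everything back up, reverse the order (a sketch; CRS normally also autostarts at boot):

# as root, on each node
crsctl start crs

# then, as oracle on either node, once the cluster stack is up
srvctl start database -d RAC1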

References:

http://blog.itpub.net/30220976/viewspace-1766420/

http://www.cnblogs.com/mawanglin2008/articles/3529102.html

http://jingyan.baidu.com/article/455a99509facd9a167277850.html

 

 

Original article: https://www.cnblogs.com/ericnie/p/5986388.html
