RAC install error
while running: /u01/app/oracle/product/10.2.0/db_1/root.sh
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
PROT-1: Failed to initialize ocrconfig
Failed to upgrade Oracle Cluster Registry configuration
dd if=/dev/zero of=/dev/rdsk/V1064_vote_01_20m.dbf bs=8192 count=2560
dd if=/dev/zero of=/dev/rdsk/ocrV1064_100m.ora bs=8192 count=12800
Solution 1
Failed to upgrade Oracle Cluster Registry configuration
While installing CRS, running ./root.sh on the second node produced the output below; it completed normally on the first node. Any pointers would be much appreciated, thanks!
# ./root.sh
WARNING: directory '/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/app/oracle/product' is not owned by root
WARNING: directory '/app/oracle' is not owned by root
WARNING: directory '/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
PROT-1: Failed to initialize ocrconfig
Failed to upgrade Oracle Cluster Registry configuration
Cause:
The permissions on the devices holding the CRS files are wrong. In my case the OCR and voting disk sit on raw devices, so both the underlying devices and the symlinks pointing at them need the right ownership and mode. Here is my environment:
#
lrwxrwxrwx 1 root root 13 Jan 27 12:49 ocr.crs -> /dev/raw/raw1
lrwxrwxrwx 1 root root 13 Jan 26 13:31 vote.crs -> /dev/raw/raw2
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
Here /dev/sdb1 holds the OCR and /dev/sdb2 holds the voting disk.
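The `service rawdevices reload` step below works because RHEL reads the bindings from /etc/sysconfig/rawdevices; the chown/chmod fixes also need to be reapplied after every reboot, since raw device nodes come back owned by root:root. A minimal sketch of both pieces, written to scratch files here rather than the live /etc paths:

```shell
# Sketch: persist the raw bindings and permissions used above.
# (Writing to scratch files; the real targets are /etc/sysconfig/rawdevices
# and a boot script such as /etc/rc.local.)
CFG=./rawdevices.example

cat > "$CFG" <<'EOF'
# raw device bindings for OCR and voting disk
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
EOF

# Ownership/mode commands to reapply at every boot:
cat > ./rc.local.example <<'EOF'
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
EOF

echo "wrote $CFG and ./rc.local.example"
```

With the bindings in /etc/sysconfig/rawdevices, `service rawdevices reload` re-creates them exactly as shown in the output below.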
# service rawdevices reload
Assigning devices:
/dev/raw/raw1 --> /dev/sdb1
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2 --> /dev/sdb2
/dev/raw/raw2: bound to major 8, minor 18
Done
Running root.sh again then succeeds:
# /oracle/app/oracle/product/crs/root.sh
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 priv1 rac1
node 2: rac2 priv2 rac2
clscfg: Arguments check out successfully.
Oracle gave me a patch that allowed us to format the OCR and voting disk. Now the problem may just be that I need to run root.sh on node1, or wipe both clean and start fresh.
Devices formatted; now we need to get the CRS daemons up and running on both nodes.
# dd if=/dev/zero of=/dev/raw/raw2 bs=1048576 count=1000
1000+0 records in
1000+0 records out
# /orahome/app/oracle/product/10.1.2.0.2/CRS/root.sh
WARNING: directory '/orahome/app/oracle/product/10.1.2.0.2' is not owned by root
WARNING: directory '/orahome/app/oracle/product' is not owned by root
WARNING: directory '/orahome/app/oracle' is not owned by root
WARNING: directory '/orahome/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/orahome/app/oracle/product/10.1.2.0.2' is not owned by root
WARNING: directory '/orahome/app/oracle/product' is not owned by root
WARNING: directory '/orahome/app/oracle' is not owned by root
WARNING: directory '/orahome/app' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: dr2db2 dr2db2-eth2 dr2db2
node 2: dr2db1 dr2db1-eth2 dr2db1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw6
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
dr2db2
CSS is inactive on these nodes.
dr2db1
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
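root.sh starts the stack by adding respawn entries to /etc/inittab ("Adding daemons to inittab" above). When CSS stays inactive on a node, one quick sanity check is that all three entries really landed there. A hypothetical helper, run here against a sample fragment rather than the live /etc/inittab (the entry text is the typical 10gR2 form and may differ slightly per install):

```shell
# Hypothetical helper: confirm root.sh added the three CRS respawn
# entries (init.evmd, init.cssd, init.crsd) to an inittab file.
check_crs_inittab() {
    for d in init.evmd init.cssd init.crsd; do
        grep -q "$d" "$1" || { echo "missing $d"; return 1; }
    done
    echo "all CRS inittab entries present"
}

# Demo against a sample fragment (real usage: check_crs_inittab /etc/inittab):
cat > ./inittab.sample <<'EOF'
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
EOF
check_crs_inittab ./inittab.sample
```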
# ps -ef | grep ora
root      6572  2526  0 13:39 ?      00:00:00 sshd: oracle
oracle    6574  6572  0 13:39 ?      00:00:00 sshd: oracle@pts/1
oracle    6575  6574  0 13:39 pts/1  00:00:00 -bash
root     14172  2526  0 17:54 ?      00:00:00 sshd: oracle
oracle   14176 14172  0 17:55 ?      00:00:00 sshd: oracle@pts/2
oracle   14177 14176  0 17:55 pts/2  00:00:00 -bash
root     14682     1  0 17:57 ?      00:00:00 /bin/su -l oracle -c sh -c 'ulimit -c unlimited; cd /orahome/app/oracle/product/10.1.2.0.2/CRS/log/dr2db2/evmd; exec /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/evmd '
root     14686     1  0 17:57 ?      00:00:00 /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/crsd.bin reboot
oracle   14959 14682  0 17:58 ?      00:00:00 /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/evmd.bin
root     15018 14942  0 17:58 ?      00:00:00 /bin/su -l oracle -c /bin/sh -c 'ulimit -c unlimited; cd /orahome/app/oracle/product/10.1.2.0.2/CRS/log/dr2db2/cssd; /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/ocssd || exit $?'
oracle   15021 15018  0 17:58 ?      00:00:00 /bin/sh -c ulimit -c unlimited; cd /orahome/app/oracle/product/10.1.2.0.2/CRS/log/dr2db2/cssd; /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/ocssd || exit $?
oracle   15057 15021  0 17:58 ?      00:00:00 /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/ocssd.bin
root     15996 14206  0 17:59 pts/2  00:00:00 grep ora
crs_setperm  crs_setperm.bin  crs_start  crs_start.bin  crs_stat  crs_stat.bin  crs_stop  crs_stop.bin
# /orahome/app/oracle/product/10.1.2.0.2/CRS/bin/crs_stat -t
CRS-0184: Cannot communicate with the CRS daemon.
On the next node:
# /orahome/app/oracle/product/10.1.2.0.2/CRS/root.sh
WARNING: directory '/orahome/app/oracle/product/10.1.2.0.2' is not owned by root
WARNING: directory '/orahome/app/oracle/product' is not owned by root
WARNING: directory '/orahome/app/oracle' is not owned by root
WARNING: directory '/orahome/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/orahome/app/oracle/product/10.1.2.0.2' is not owned by root
WARNING: directory '/orahome/app/oracle/product' is not owned by root
WARNING: directory '/orahome/app/oracle' is not owned by root
WARNING: directory '/orahome/app' is not owned by root
/orahome/app/oracle/product/10.1.2.0.2/CRS/bin/crsctl.bin: error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory
Failure initializing entries in /etc/oracle/scls_scr/dr2db1.
I think what's needed is a clean install.
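Before resorting to a clean install, note that the libstdc++.so.5 failure above only means the 10g binaries want the old GCC 3.x C++ runtime, which on RHEL comes from the compat-libstdc++-33 package (other distros name it differently). A small check, written as a hypothetical helper and demonstrated against a scratch directory so it runs anywhere:

```shell
# Hypothetical helper: does a lib directory contain the legacy GCC 3.x
# C++ runtime that the 10g CRS binaries link against?
have_libstdc5() {
    [ -e "$1/libstdc++.so.5" ]
}

# Real usage would check /usr/lib and /usr/lib64; demo with a scratch dir:
mkdir -p ./libdemo && touch ./libdemo/libstdc++.so.5
if have_libstdc5 ./libdemo; then
    echo "libstdc++.so.5 present"
else
    echo "libstdc++.so.5 missing -- install compat-libstdc++-33 (RHEL) or the distro equivalent"
fi
```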
http://www.puschitz.com/InstallingOracle10gRAC.shtml#CreatingPartitionsForRawDevices
Run on node1: /opt/ora10g/product/10.2.0/crs_1/root.sh
Run on node2: /opt/ora10g/product/10.2.0/crs_1/root.sh
The error usually appears when root.sh runs on the last node; in our case that is node2.

Tip: three errors come up most often:
A). If you hit this error:
/opt/ora10g/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
Fix it as follows:
===============================
Edit the vipca file:
# vi /opt/ora10g/product/10.2.0/crs_1/bin/vipca
Find this block:
Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
fi
#End workaround
and add a new line right after the fi:
unset LD_ASSUME_KERNEL

Then do the same to the srvctl file:
# vi /opt/ora10g/product/10.2.0/crs_1/bin/srvctl
Find:
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
and likewise add a new line after it:
unset LD_ASSUME_KERNEL

Save and exit, then rerun root.sh on node2.
Of course, now that we know this problem exists, it is better to edit vipca before running root.sh on node2 in the first place.

The same change is also needed in the $ORACLE_HOME/bin/srvctl file; otherwise, once the database is installed, the srvctl command will keep throwing this same error. srvctl is used so often that having it fail every time would be crippling. That said, at this point you have only installed CRS and are still far from create db, so you can safely wait until the database is created and you actually need to manage it before editing that file.
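The manual edits above can also be scripted: append `unset LD_ASSUME_KERNEL` after each `export LD_ASSUME_KERNEL` line. A sketch, demonstrated on a sample file (the real targets are $CRS_HOME/bin/vipca, $CRS_HOME/bin/srvctl, and later $ORACLE_HOME/bin/srvctl); GNU sed is assumed:

```shell
# Build a sample file with the same LD_ASSUME_KERNEL lines the real
# vipca/srvctl scripts contain (demo target, not the live file):
cat > ./srvctl.sample <<'EOF'
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
EOF

# Append "unset LD_ASSUME_KERNEL" immediately after each export line.
sed -i 's/^export LD_ASSUME_KERNEL$/&\nunset LD_ASSUME_KERNEL/' ./srvctl.sample
cat ./srvctl.sample
```

Back up the originals before editing them this way; the same sed line then works on all three files.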

B). If you hit this error:
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.

Fix it as follows:
==============================
Run $CRS_HOME/bin/vipca in a graphical session and reconfigure rac1-vip and rac2-vip by hand:
# xhost +
# /opt/ora10g/product/10.2.0/crs_1/bin/vipca
http://space.itpub.net/attachments/2008/06/7607759_200806231318402.jpg
Click Next as prompted.
http://space.itpub.net/attachments/2008/06/7607759_200806231318403.jpg
http://space.itpub.net/attachments/2008/06/7607759_200806231318404.jpg
http://space.itpub.net/attachments/2008/06/7607759_200806231318405.jpg
Click Finish.
http://space.itpub.net/attachments/2008/06/7607759_200806231318406.jpg
vipca then configures everything automatically.
http://space.itpub.net/attachments/2008/06/7607759_200806231318407.jpg
When the configuration completes, click Exit to close the window.
http://space.itpub.net/attachments/2008/06/7607759_200806231318408.jpg
C). If you hit this error:
Error 0(Native: listNetInterfaces:)
)]

Fix it as follows:
===============================
# ./oifcfg iflist
eth1 10.10.17.0
virbr0 192.168.122.0
eth0 192.168.100.0
# ./oifcfg setif -global eth0/192.168.100.0:public
# ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
# ./oifcfg getif
eth0 192.168.100.0 global public
eth1 10.10.10.0 global cluster_interconnect

Then rerun vipca from the graphical session, as shown in case B above.
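One detail to watch in case C: the subnet passed to setif should match what iflist reports for that NIC (above, iflist shows eth1 on 10.10.17.0 while the setif command uses 10.10.10.0; only one of those can be right for a given box). A hypothetical helper that derives the setif commands directly from iflist-style output, so the subnets cannot drift, demoed on the text above:

```shell
# Hypothetical helper: turn "oifcfg iflist" output into the matching
# "oifcfg setif" commands. Args: $1 = public NIC, $2 = interconnect NIC;
# iflist text is read from stdin. Prints the commands; does not run them.
gen_setif() {
    while read -r nic subnet; do
        case "$nic" in
            "$1") echo "oifcfg setif -global $nic/$subnet:public" ;;
            "$2") echo "oifcfg setif -global $nic/$subnet:cluster_interconnect" ;;
        esac
    done
}

# Demo with the iflist output shown above:
printf 'eth1 10.10.17.0\nvirbr0 192.168.122.0\neth0 192.168.100.0\n' \
    | gen_setif eth0 eth1
```

Unmatched interfaces (like virbr0, a libvirt bridge that should not be registered) are simply skipped.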