reference:http://www.ixdba.net/hbcms/article/5b/266.html
Symptom 1:
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /webdata
mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdb1 on /webdata. Check 'dmesg' for more information on this error.
Possible causes:
1. The firewall is enabled and is blocking the heartbeat port.
2. The /etc/init.d/o2cb configure settings differ between the nodes.
3. One node already had the volume mounted while the other node had just been configured and restarted its ocfs2 service; restarting the service on both nodes lets the mount complete.
4. SELinux has not been disabled.
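The four causes above can be checked with a few commands. This is a minimal sketch assuming bash, root access, and a second node named rac2 reachable over ssh (the node name is an example, not from the original); the default o2cb heartbeat port is 7777/tcp.

```shell
# Quick checklist for the four causes above (run as root)
service iptables status                 # cause 1: firewall may block the heartbeat port (7777/tcp by default)
getenforce                              # cause 4: should print "Disabled" or "Permissive"
# cause 2: the o2cb settings must be identical on every node
diff /etc/sysconfig/o2cb <(ssh rac2 cat /etc/sysconfig/o2cb)
# cause 3: restart the stack on BOTH nodes, then retry the mount
/etc/init.d/ocfs2 stop && /etc/init.d/o2cb restart && /etc/init.d/ocfs2 start
```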
Here is an example case:
[root@test02 ~]# mount -t ocfs2 /dev/vg_ocfs/lv_u02 /u02
mount.ocfs2: Transport endpoint is not connected while mounting /dev/vg_ocfs/lv_u02 on /u02. Check 'dmesg' for more information on this error.
This error was caused by the nodes having different values of O2CB_HEARTBEAT_THRESHOLD when OCFS was configured. By the time I ran /etc/init.d/o2cb configure, the values were in fact identical on every node, but I had forgotten to restart o2cb on the first node, and it took a long time to track that down. The next step, of course, was to umount the OCFS directory that was already mounted, which failed as well:
[root@test01 u02]# umount -f /u02
umount2: Device or resource busy
umount: /u02: device is busy
umount2: Device or resource busy
umount: /u02: device is busy
At this point you have to stop OCFS2 and O2CB with /etc/init.d/ocfs2 stop and /etc/init.d/o2cb stop before the umount will succeed; after starting OCFS2 and O2CB again, the other nodes can mount the OCFS volume normally.
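The recovery sequence just described, as a sketch (the mount point and device are the ones from this example):

```shell
# Stop the OCFS2/O2CB stack so the busy mount can be released
/etc/init.d/ocfs2 stop
/etc/init.d/o2cb stop
umount /u02                             # now succeeds; the heartbeat no longer pins the mount
# Bring the stack back up; other nodes can then mount normally
/etc/init.d/o2cb start
/etc/init.d/ocfs2 start
mount -t ocfs2 /dev/vg_ocfs/lv_u02 /u02
```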
Symptom 2:
# /etc/init.d/o2cb online ocfs2
Starting cluster ocfs2: Failed
Cluster ocfs2 created
o2cb_ctl: Configuration error discovered while populating cluster ocfs2. None of its nodes were considered local. A node is considered local when its node name in the configuration matches this machine's host name.
Stopping cluster ocfs2: OK
This is a host-name problem. Check the contents of /etc/ocfs2/cluster.conf and /etc/hosts and correct the host names accordingly.
Note: for the ocfs2 file system to be mounted automatically at boot, besides adding the mount entry to /etc/fstab, /etc/hosts must contain name-to-IP entries for both nodes, and those host names must be identical to the ones configured in /etc/ocfs2/cluster.conf.
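A small sketch of that consistency check; check_local_node and its messages are hypothetical helpers for illustration, not part of the ocfs2 tools:

```shell
# Verify that a given host name is declared as a node in cluster.conf
check_local_node() {
    conf=$1; host=$2
    if grep -q "name = ${host}\$" "$conf"; then
        echo "OK: ${host} declared in ${conf}"
    else
        echo "MISMATCH: ${host} not found in ${conf}"
        return 1
    fi
}
# On a real node you would run:
#   check_local_node /etc/ocfs2/cluster.conf "$(hostname)"
```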
Symptom 3
Starting O2CB cluster ocfs2: Failed
After installing ocfs2, configuring o2cb fails:
[root@rac1 ocfs2]# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [7]:
Writing O2CB configuration: OK
Starting O2CB cluster ocfs2: Failed
Cluster ocfs2 created
o2cb_ctl: Configuration error discovered while populating cluster ocfs2. None of its nodes were considered local. A node is considered local when its node name in the configuration matches this machine's host name.
Stopping O2CB cluster ocfs2: OK
When this happens, the OCFS cluster has simply not been configured yet. There is a graphical ocfs2 configuration tool; configure the cluster with it first, and it is safer to use IP addresses rather than host names.
In other words, the ocfs2 node configuration file must be set up correctly before starting ocfs2, or it will fail like this. Also, when configuring through the graphical tool, /etc/ocfs2/cluster.conf should preferably be empty beforehand, otherwise the tool may report errors as well.
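A sketch of starting from a clean configuration before using the graphical tool (ocfs2console is the GUI shipped with ocfs2-tools; the backup path is illustrative):

```shell
# Move any stale node configuration aside before using the GUI tool
[ -f /etc/ocfs2/cluster.conf ] && mv /etc/ocfs2/cluster.conf /etc/ocfs2/cluster.conf.bak
ocfs2console &                          # Cluster -> Configure Nodes..., then propagate to all nodes
/etc/init.d/o2cb configure              # re-run once cluster.conf is in place
```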
Symptom 4
Mounting an ocfs2 file system fails with:
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
mount -t ocfs2 -o datavolume /dev/sdb1 /u02/oradata/orcl
ocfs2_hb_ctl: Bad magic number in superblock while reading uuid
mount.ocfs2: Error when attempting to run /sbin/ocfs2_hb_ctl: "Operation not permitted"
This error occurs because the partition holding the ocfs2 file system was never formatted. Before an ocfs2 file system can be mounted, its partition must be formatted.
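A formatting sketch; the block size, cluster size, slot count, and label below are illustrative values, not requirements:

```shell
# Format the partition as ocfs2, then mount it
mkfs.ocfs2 -b 4K -C 32K -N 4 -L oradata /dev/sdb1   # -N: number of node slots
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /u02/oradata/orcl
```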
Symptom 5:
Configuration assistant "Oracle Cluster Verification Utility" failed
A 10g RAC installation question: Oracle 10.2.0.1 on Solaris 5.9, two nodes. The last step of the CRS installation fails; how can this be resolved?
Log output:
INFO: Configuration assistant "Oracle Cluster Verification Utility" failed
-----------------------------------------------------------------------------
*** Starting OUICA ***
Oracle Home set to /orabase/product/10.2
Configuration directory is set to /orabase/product/10.2/cfgtoollogs. All xml files under the directory will be processed
INFO: The "/orabase/product/10.2/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
-----------------------------------------------------------------------------
SEVERE: OUI-25031:Some of the configuration assistants failed. It is strongly recommended that you retry the configuration assistants at this time. Not successfully running any "Recommended" assistants means your system will not be correctly configured.
1. Check the Details panel on the Configuration Assistant Screen to see the errors resulting in the failures.
2. Fix the errors causing these failures.
3. Select the failed assistants and click the 'Retry' button to retry them.
INFO: User Selected: Yes/OK
This is caused by the VIP addresses not being up. After running orainstRoot.sh and root.sh, open a new window and run vipca; once all the CRS services are up, run the final verify step again.
Go to the CRS bin directory and run crs_stat -t to see whether all the services are up; in this situation it is usually the VIP that is down.
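A sketch of that check; all_online is a hypothetical helper that scans crs_stat -t style output for resources that are not up:

```shell
# Succeeds only if no resource in `crs_stat -t` output is OFFLINE
all_online() {
    ! grep -q OFFLINE
}
# On a real cluster:
#   $ORA_CRS_HOME/bin/crs_stat -t | all_online && echo "all resources up"
```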
Symptom 6:
Failed to upgrade Oracle Cluster Registry configuration
While installing CRS, running ./root.sh on the second node produced the output below, although it completed normally on the first node. Any pointers would be much appreciated!
[root@RACtest2 crs]# ./root.sh
WARNING: directory '/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/app/oracle/product' is not owned by root
WARNING: directory '/app/oracle' is not owned by root
WARNING: directory '/app' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
PROT-1: Failed to initialize ocrconfig
Failed to upgrade Oracle Cluster Registry configuration
Cause:
The permissions on the devices used for CRS are wrong. For example, my setup uses raw devices to hold the OCR and the voting disk, so the permissions on those devices and on the symlinks pointing at them must be set correctly. Here is my environment:
[root@rac2 oracrs]#
lrwxrwxrwx 1 root root 13 Jan 27 12:49 ocr.crs -> /dev/raw/raw1
lrwxrwxrwx 1 root root 13 Jan 26 13:31 vote.crs -> /dev/raw/raw2
chown root:oinstall /dev/raw/raw1
chown root:oinstall /dev/raw/raw2
chmod 660 /dev/raw/raw1
chmod 660 /dev/raw/raw2
Here /dev/sdb1 holds the OCR and /dev/sdb2 holds the voting disk.
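To make the raw-device bindings and permissions above survive a reboot, a common sketch on this vintage of Red Hat (paths as in this example; whether udev resets ownership depends on the distribution) is:

```shell
# /etc/sysconfig/rawdevices -- bindings applied by `service rawdevices start`:
#   /dev/raw/raw1 /dev/sdb1
#   /dev/raw/raw2 /dev/sdb2
# Re-apply ownership at boot, e.g. from /etc/rc.local, in case udev resets it
chown root:oinstall /dev/raw/raw1 /dev/raw/raw2
chmod 660 /dev/raw/raw1 /dev/raw/raw2
```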
[root@rac2 oracrs]# service rawdevices reload
Assigning devices:
/dev/raw/raw1 --> /dev/sdb1
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2 --> /dev/sdb2
/dev/raw/raw2: bound to major 8, minor 18
Done
After that, running root.sh again succeeds:
[root@rac2 oracrs]# /oracle/app/oracle/product/crs/root.sh
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/app/oracle/product' is not owned by root
WARNING: directory '/oracle/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 priv1 rac1
node 2: rac2 priv2 rac2
clscfg: Arguments check out successfully