

CentOS4.4+GFS+Oracle10g RAC+VMWARE

#1 · Posted 2006-12-24 17:31
I have spent the last few days setting up a RAC test environment with CentOS 4.4 + GFS 6.1 + Oracle10g RAC + VMware Server 1.0.1. After much effort, and with the help of articles on ChinaUnix, the installation is finally complete. I followed Oracle_GFS.pdf, which can be downloaded from the Red Hat website. I now have the following questions:

1. In the test environment, if one node goes down, the other node can no longer access the shared disk (the GFS filesystem). Why is that? I am using lock_dlm.
2. Since I have no fence device on hand, I chose fence_manual when building the cluster. Does the IBM HS21 BladeCenter already provide fence device functionality? I plan to use the HS21 in the production environment.

#2 · Posted 2006-12-25 16:25
1: GFS (the cluster) has a quorum restriction on the number of running nodes; the minimum is 2, so when you took one node down the remaining node naturally lost quorum.
2: The BladeCenter does have fence functionality; as I recall you configure it by connecting to the management module at 192.168.70.125.
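
A minimal sketch of the usual fix for this, not shown in this thread: cman supports a dedicated two-node mode so that a single surviving node keeps quorum. With both node votes set to 1, the <cman/> element in cluster.conf becomes:

        <cman two_node="1" expected_votes="1"/>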

[ Last edited by fuumax on 2006-12-25 16:28 ]

#3 · Posted 2006-12-26 09:26
Could you share your installation steps?

#4 · Posted 2006-12-26 13:00

CentOS4.4 + RHCS(DLM) + GFS + Oracle10gR2 RAC + VMWare Server 1.0.1 Installation

This write-up draws on many articles from this forum; my thanks to their authors!

************************************************************************************
* CentOS4.4 + RHCS(DLM) + GFS + Oracle10gR2 RAC + VMWare Server 1.0.1 Installation *
************************************************************************************

1. Test environment
        Host: one PC with a 64-bit AMD CPU and 4 GB of RAM, running CentOS-4.4-x86_64.
        Two virtual machines were created on this host, both running CentOS-4.4-x86_64, with no kernel customization, updated online to the latest packages.

2. Install VMware Server 1.0.1 for Linux

3. Create the shared disks

        vmware-vdiskmanager -c -s 6Gb -a lsilogic -t 2 "/vmware/share/ohome.vmdk"   | for the shared Oracle Home
        vmware-vdiskmanager -c -s 10Gb -a lsilogic -t 2 "/vmware/share/odata.vmdk"  | for datafiles and indexes
        vmware-vdiskmanager -c -s 3Gb -a lsilogic -t 2 "/vmware/share/oundo1.vmdk"  | for node1 redo logs and undo tablespace
        vmware-vdiskmanager -c -s 3Gb -a lsilogic -t 2 "/vmware/share/oundo2.vmdk"  | for node2 redo logs and undo tablespace
        vmware-vdiskmanager -c -s 512Mb -a lsilogic -t 2 "/vmware/share/oraw.vmdk"  | for the Oracle Cluster Registry file and the CRS voting disk

        The two virtual machines share this one set of disks.
       
4. Install the virtual machines
        1. In the VMware console, create a VMware guest OS named gfs-node01: choose Custom create -> Red Hat Enterprise Linux 4 64-bit and leave everything else at the defaults.
           Give it 1 GB of memory (above 800 MB you won't see the low-memory warning) and a 12 GB disk; do not choose pre-allocated.

        2. Once the guest OS has been created, add a second NIC (network card) to the guest.

        3. Close the VMware console, open gfs-node1.vmx in the node1 directory, and append the following at the end:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"

scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.filename = "/vmware/share/ohome.vmdk"
scsi1:1.deviceType = "disk"

scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.filename = "/vmware/share/odata.vmdk"
scsi1:2.deviceType = "disk"

scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.filename = "/vmware/share/oundo1.vmdk"
scsi1:3.deviceType = "disk"

scsi1:4.present = "TRUE"
scsi1:4.mode = "independent-persistent"
scsi1:4.filename = "/vmware/share/oundo2.vmdk"
scsi1:4.deviceType = "disk"

scsi1:5.present = "TRUE"
scsi1:5.mode = "independent-persistent"
scsi1:5.filename = "/vmware/share/oraw.vmdk"
scsi1:5.deviceType = "disk"

disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"

        This block tells VMware how to share the disks between the guests. Most people know to set disk.locking = "false" but miss the dataCache settings.

        After saving and reopening the VMware console, you will see these disks in the configuration of each guest OS.
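
        A quick sanity check inside each guest (illustrative only; the exact device names depend on how the guest enumerates the new SCSI bus, /dev/sdb through /dev/sdf is the typical result for the five shared disks):

        dmesg | grep -i scsi                         # the second LSI Logic controller and its disks should show up
        fdisk -l 2>/dev/null | grep '^Disk /dev/sd'  # expect /dev/sda (local) plus /dev/sdb../dev/sdf (shared)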


5. Packages to install, and the order

        You can install with yum:
        1. Update CentOS 4.4
                yum update
        2. Install csgfs
                yum install yumex
                cd /etc/yum.repos.d
                wget http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo
                yumex

        Or install the packages manually with rpm:
        Package download location: http://mirror.centos.org/centos/4/csgfs/x86_64/RPMS/

        1. Install the required packages on all nodes; for the complete list see the GFS 6.1 user guide:

rgmanager             — Manages cluster services and resources
system-config-cluster — Contains the Cluster Configuration Tool, used to graphically configure the cluster and display the current status of nodes, resources, fencing agents, and cluster services
ccsd                  — Contains the cluster configuration services daemon (ccsd) and associated files
magma                 — Contains an interface library for cluster lock management
magma-plugins         — Contains plugins for the magma library
cman                  — Contains the Cluster Manager (CMAN), which is used for managing cluster membership, messaging, and notification
cman-kernel           — Contains the required CMAN kernel modules
dlm                   — Contains the distributed lock manager (DLM) library
dlm-kernel            — Contains the required DLM kernel modules
fence                 — The cluster I/O fencing system, which allows cluster nodes to connect to a variety of network power switches, fibre channel switches, and integrated power management interfaces
iddev                 — Contains libraries used to identify the file system (or volume manager) with which a device is formatted

Optionally, you can also install Red Hat GFS on top of Red Hat Cluster Suite. Red Hat GFS consists of the following RPMs:

GFS                   — The Red Hat GFS module
GFS-kernel            — The Red Hat GFS kernel module
lvm2-cluster          — Cluster extensions for the logical volume manager
GFS-kernheaders       — GFS kernel header files


        2. Install the packages in the following order
Installation script, install.sh:
#!/bin/bash

rpm -ivh kernel-smp-2.6.9-42.EL.x86_64.rpm
rpm -ivh kernel-smp-devel-2.6.9-42.EL.x86_64.rpm

rpm -ivh perl-Net-Telnet-3.03-3.noarch.rpm
rpm -ivh magma-1.0.6-0.x86_64.rpm

rpm -ivh magma-devel-1.0.6-0.x86_64.rpm

rpm -ivh ccs-1.0.7-0.x86_64.rpm
rpm -ivh ccs-devel-1.0.7-0.x86_64.rpm

rpm -ivh cman-kernel-2.6.9-45.4.centos4.x86_64.rpm
rpm -ivh cman-kernheaders-2.6.9-45.4.centos4.x86_64.rpm
rpm -ivh cman-1.0.11-0.x86_64.rpm
rpm -ivh cman-devel-1.0.11-0.x86_64.rpm

rpm -ivh dlm-kernel-2.6.9-42.12.centos4.x86_64.rpm
rpm -ivh dlm-kernheaders-2.6.9-42.12.centos4.x86_64.rpm
rpm -ivh dlm-1.0.1-1.x86_64.rpm
rpm -ivh dlm-devel-1.0.1-1.x86_64.rpm


rpm -ivh fence-1.32.25-1.x86_64.rpm

rpm -ivh GFS-6.1.6-1.x86_64.rpm
rpm -ivh GFS-kernel-2.6.9-58.2.centos4.x86_64.rpm
rpm -ivh GFS-kernheaders-2.6.9-58.2.centos4.x86_64.rpm

rpm -ivh iddev-2.0.0-3.x86_64.rpm
rpm -ivh iddev-devel-2.0.0-3.x86_64.rpm

rpm -ivh magma-plugins-1.0.9-0.x86_64.rpm

rpm -ivh rgmanager-1.9.53-0.x86_64.rpm

rpm -ivh system-config-cluster-1.0.25-1.0.noarch.rpm

rpm -ivh ipvsadm-1.24-6.x86_64.rpm

rpm -ivh piranha-0.8.2-1.x86_64.rpm --nodeps


Note: some of these packages depend on each other; where needed, install them with the --nodeps switch.


        3. Edit the /etc/hosts file on each node (identical on every node), as follows:
        [root@gfs-node1 etc]# cat hosts
        # Do not remove the following line, or various programs
        # that require network functionality will fail.
        127.0.0.1        localhost.localdomain localhost

        192.168.154.211 gfs-node1
        192.168.154.212 gfs-node2

        192.168.10.1    node1-prv
        192.168.10.2    node2-prv

        192.168.154.201 node1-vip
        192.168.154.202 node2-vip

        Note: the host name, the cluster node name, and the public node name given to Oracle should preferably all be identical.
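
        A quick way to confirm that name resolution is consistent (purely illustrative, not part of the original steps); run it on both nodes:

        for h in gfs-node1 gfs-node2 node1-prv node2-prv; do
                ping -c 1 $h > /dev/null && echo "$h ok" || echo "$h FAILED"
        done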


6. Run system-config-cluster to configure the cluster

Add the two nodes and set each node's weight (votes) to 1, so each node contributes one vote toward quorum.

The two nodes are named:
gfs-node1
gfs-node2

Then edit the cluster.conf file so it reads as follows:

[root@gfs-node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="alpha_cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="gfs-node1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="F-Man" nodename="gfs-node1" ipaddr="192.168.10.1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="gfs-node2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="F-Man" nodename="gfs-node2" ipaddr="192.168.10.2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
               
        <cman/>

        <fencedevices>
                <fencedevice agent="fence_manual" name="F-Man"/>
        </fencedevices>
        
        <rm>
                <failoverdomains>
                        <failoverdomain name="web_failover" ordered="1" restricted="0">
                                <failoverdomainnode name="gfs-node1" priority="1"/>
                                <failoverdomainnode name="gfs-node2" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
        </rm>
</cluster>

[注意] Use fence_bladecenter.  This will require that you have telnet enabled on
        your management module (may require a firmware update)

Use the scp command to copy this configuration file to node2.
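
For example (assuming /etc/cluster already exists on node2; create it first if it does not):

        [root@gfs-node1 ~]# scp /etc/cluster/cluster.conf gfs-node2:/etc/cluster/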

7. Start the dlm, ccsd, fence, and related services on nodes 01/02
        These services may already be running after the installation and configuration steps above.

        1. Load the dlm module on both nodes

        [root@gfs-node1 cluster]# modprobe lock_dlm
        [root@gfs-node2 cluster]# modprobe lock_dlm

        2. Start the ccsd service
        [root@gfs-node1 cluster]# ccsd
        [root@gfs-node2 cluster]# ccsd

        3. Start the cluster manager (cman)
        root@gfs-node1 # /sbin/cman_tool join  
        root@gfs-node2 # /sbin/cman_tool join  

        4. Test the ccsd service
        (Note: only test ccsd after cman has finished starting; then run the following.)

        [root@gfs-node1 cluster]# ccs_test connect
        [root@gfs-node2 cluster]# ccs_test connect

        # ccs_test connect returns the following on each node:
        node 1:
        [root@gfs-node1 cluster]# ccs_test connect
        Connect successful.
        Connection descriptor = 0
        node 2:
        [root@gfs-node2 cluster]# ccs_test connect
        Connect successful.
        Connection descriptor = 30

        5. Check node status
        cat /proc/cluster/nodes should return:
        [root@gfs-node1 cluster]# cat /proc/cluster/nodes
        Node  Votes Exp Sts  Name
          1    1    3   M   gfs-node1
          2    1    3   M   gfs-node2

        [root@gfs-node1 cluster]#

8. Join the fence domain:
[root@gfs-node1 cluster]# /sbin/fence_tool join
[root@gfs-node2 cluster]# /sbin/fence_tool join
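
To confirm that both nodes joined the fence domain (not an original step here; on Cluster Suite 4 the state is exposed under /proc/cluster):

[root@gfs-node1 cluster]# cat /proc/cluster/services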


9. Check the cluster status
Node 1:
[root@gfs-node1 cluster]# cat /proc/cluster/status
Protocol version: 5.0.1
Config version: 1
Cluster name: alpha_cluster
Cluster ID: 50356
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 3
Expected_votes: 3
Total_votes: 3
Quorum: 2   
Active subsystems: 1
Node name: gfs-node1
Node ID: 1
Node addresses: 192.168.10.1

Node 2
[root@gfs-node2 cluster]# cat /proc/cluster/status
Protocol version: 5.0.1
Config version: 1
Cluster name: alpha_cluster
Cluster ID: 50356
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 3
Expected_votes: 3
Total_votes: 3
Quorum: 2   
Active subsystems: 1
Node name: gfs-node2
Node ID: 2
Node addresses: 192.168.10.2
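
If you want the cluster services to start automatically at boot (not covered above; the init scripts below come from the RPMs installed in step 5), run this on both nodes:

        chkconfig ccsd on
        chkconfig cman on
        chkconfig fenced on
        chkconfig gfs on
        chkconfig rgmanager on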

10. Partition the shared disks on node-1
        Check the SCSI devices with dmesg | grep scsi:
    [root@gfs-node1 ~]# dmesg | grep scsi
        scsi0 : ioc0: LSI53C1030, FwRev=00000000h, Ports=1, MaxQ=128, IRQ=169
        Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
        [root@gfs-node1 ~]# pvcreate /dev/sdb
                Physical volume "/dev/sdb" successfully created
        [root@gfs-node1 ~]# pvcreate /dev/sdc
                Physical volume "/dev/sdc" successfully created
        [root@gfs-node1 ~]# pvcreate /dev/sdd
                Physical volume "/dev/sdd" successfully created
        [root@gfs-node1 ~]# pvcreate /dev/sde
                Physical volume "/dev/sde" successfully created
       
        [root@gfs-node1 ~]# system-config-lvm
                Change the physical extent size to 128k
                sdb -> VG common  -> LV ohome
                sdc -> VG oradata -> LV datafiles
                sdd -> VG redo1   -> LV log1
                sde -> VG redo2   -> LV log2

[Note] You can also partition the disks with fdisk before running pvcreate.
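
For reference, a rough command-line equivalent of the system-config-lvm steps above (a sketch only; the LV sizes are assumptions, chosen to fit the virtual disks created in step 3):

        vgcreate -s 128k common  /dev/sdb
        vgcreate -s 128k oradata /dev/sdc
        vgcreate -s 128k redo1   /dev/sdd
        vgcreate -s 128k redo2   /dev/sde
        lvcreate -n ohome     -L 5600M common
        lvcreate -n datafiles -L 9600M oradata
        lvcreate -n log1      -L 2800M redo1
        lvcreate -n log2      -L 2800M redo2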

11. Create the GFS filesystems
        [root@gfs-node1 ~]# mkfs.gfs -J 32 -j 4 -p lock_dlm -t alpha_cluster:log1 /dev/redo1/log1
        [root@gfs-node1 ~]# mkfs.gfs -J 32 -j 4 -p lock_dlm -t alpha_cluster:log2 /dev/redo2/log2
        [root@gfs-node1 ~]# mkfs.gfs -J 32 -j 4 -p lock_dlm -t alpha_cluster:ohome /dev/common/ohome
        [root@gfs-node1 ~]# mkfs.gfs -J 32 -j 4 -p lock_dlm -t alpha_cluster:datafiles /dev/oradata/datafiles

        Check with:
        dmesg | grep scsi
        lvscan

        Edit /etc/fstab: append the following at the end of the file:
        /dev/common/ohome       /dbms/ohome     gfs _netdev 0 0
        /dev/oradata/datafiles  /dbms/oradata   gfs _netdev 0 0
        /dev/redo1/log1         /dbms/log1      gfs _netdev 0 0
        /dev/redo2/log2         /dbms/log2      gfs _netdev 0 0

        The _netdev option is also useful, as it ensures the filesystems are unmounted before the cluster services shut down.
        Edit fstab on both nodes. The /dbms directory and its subdirectories must be created by hand beforehand.
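
        For example, on each node (illustrative; the mount points match the fstab entries above):

        mkdir -p /dbms/ohome /dbms/oradata /dbms/log1 /dbms/log2
        mount -a -t gfs      # or: service gfs start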

12. Create the raw partitions
        The certified version of Oracle 10g on GFS requires that the two clusterware files be located on shared raw partitions and
        be visible by all RAC nodes in the cluster.
        [root@gfs-node1 ~]# fdisk /dev/sdg
        Create two 256 MB partitions to use as raw devices.

        If the other nodes were already up and running while you created these partitions, these other nodes must re-read the partition
        table from disk:
        [root@gfs-node2 ~]# blockdev --rereadpt /dev/sdg

        Make sure the rawdevices service is enabled on both RAC nodes for the run levels that will be used. This example enables
it for both run levels. Run:
[root@gfs-node1 ~]# chkconfig --level 35 rawdevices on
The mapping is configured in /etc/sysconfig/rawdevices:
# raw device bindings
# format: <rawdev> <major> <minor>
#         <rawdev> <blockdev>
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5
/dev/raw/raw1 /dev/sdg1
/dev/raw/raw2 /dev/sdg2
These raw devices must always be owned by the oracle user that will install the software (oracle). A 10-second
delay is needed to ensure that the rawdevices service has had a chance to configure the /dev/raw directory. Add
these lines to the /etc/rc.local file (which is symbolically linked to /etc/rc?.d/S99local):
echo "Sleep a bit first and then set the permissions on raw"
sleep 10
chown oracle:dba /dev/raw/raw1
chown oracle:dba /dev/raw/raw2
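
A quick way to verify the bindings afterwards (not in the original steps):

service rawdevices restart
raw -qa            # should show raw1 and raw2 bound to the sdg1/sdg2 major:minor numbers
ls -l /dev/raw/    # ownership should be oracle:dba once rc.local has run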

13. Edit /etc/sysctl.conf

kernel.shmmax = 4047483648
kernel.shmmni = 4096
kernel.shmall = 2097152
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 1024 65000
fs.file-max = 65536
#
# This is for the Oracle RAC core GCS services
#
net.core.rmem_default = 1048576
net.core.rmem_max = 1048576
net.core.wmem_default = 1048576
net.core.wmem_max = 1048576
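
Apply the settings on both nodes without rebooting (standard procedure, not spelled out above):

sysctl -p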

14. Create the oracle user
[root@gfs-node1 ~]# groupadd oinstall
[root@gfs-node1 ~]# groupadd dba
[root@gfs-node1 ~]# useradd oracle -g oinstall -G dba

Configure the /etc/sudoers file so that the oracle admin users can safely execute root commands:
# User alias specification
User_Alias SYSAD=oracle, oinstall
User_Alias USERADM=oracle, oinstall
# User privilege specification
SYSAD ALL=(ALL) ALL
USERADM ALL=(root) NOPASSWD:/usr/local/etc/yanis.client
root ALL=(ALL) ALL

Repeat all of the above on every node.
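
A sketch of the oracle user's login environment (not from the original post; the directory layout is an assumption based on the shared /dbms/ohome GFS mount above, and ORACLE_SID differs per node):

# append to /home/oracle/.bash_profile on gfs-node1 (use a different SID, e.g. orcl2, on gfs-node2)
export ORACLE_BASE=/dbms/ohome/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export CRS_HOME=$ORACLE_BASE/product/10.2.0/crs
export ORACLE_SID=orcl1
export PATH=$ORACLE_HOME/bin:$CRS_HOME/bin:$PATH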

15. Create a clean ssh connection environment for the oracle user
        1. On each node, run ssh-keygen -t dsa as the oracle user and press Enter at every prompt.
        2. On node1, collect all the ~/.ssh/id_dsa.pub files into a single ~/.ssh/authorized_keys file and distribute it to the other node:
                ssh gfs-node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
                ssh gfs-node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
                scp ~/.ssh/authorized_keys gfs-node2:~/.ssh

        3. Run each of the following once (to accept the host keys):
                 [oracle@gfs-node1 ~]$ ssh gfs-node1 date
                 [oracle@gfs-node1 ~]$ ssh node1-prv date
                 [oracle@gfs-node1 ~]$ ssh gfs-node2 date
                 [oracle@gfs-node1 ~]$ ssh node2-prv date

                Do the same on node2.


After that, install Oracle 10g, making sure to select the cluster installation option.

#5 · Posted 2006-12-26 13:03

Reply to post #2 (fuumax)

Is it really true that the GFS filesystem requires at least two servers? Then if one machine fails in production, the other can't work either? What would be the point of a two-node setup?

#6 · Posted 2006-12-27 08:58
Failover should happen automatically through the heartbeat mechanism.

#7 · Posted 2007-01-05 23:33
There is no failover issue with RAC in the first place!
RAC is designed for high availability.

#8 · Posted 2007-01-29 00:09
I'm about to do this for real, so the OP's experience is very useful to me. Thanks!

#9 · Posted 2007-04-11 21:09
What does this fence thing do? Does it have to be configured in cluster.conf?
