Solaris 8.0+OPS Installation

Post #1, posted 2002-10-15 15:35

Summary: Updated Oracle Parallel Server (OPS) Cluster (SC 2.2) Build
Mr.Venkat D mailto:venki21@hotmail.com
Fri, 08 Dec 2000 16:54:47


--------------------------------------------------------------------------------


sunmanagers@sunmanagers.org
hi all,

     Here is the updated and almost complete Oracle Parallel Server
     build for Sun Cluster 2.2.

     Hope this document (*.txt) comes in handy for all the sysadmins.

     Please let me know if you have problems downloading the attachment.

     Thanks .. cheers

     /v

[ Attachment: opscluster.txt ]



                       ORACLE PARALLEL SERVER (OPS) ON SUN CLUSTER 2.2

----------------------------------------------------------------------------------------
UPDATED : DEC 08 2000 , VENKAT.D  VENKI21@HOTMAIL.COM

I have tried to capture all the information on building an OPS cluster. There
could be some things which I might have missed or omitted.
----------------------------------------------------------------------------------------

These are the high-level steps to install OPS in a Sun Cluster configuration.
If you are installing Oracle7 Parallel Server, refer to the Oracle7 for Sun SPARC
Solaris 2.x Installation and Configuration Guide, Release 7.x. If you are
installing Oracle8 Parallel Server, refer to the Oracle8 Parallel Server Concepts
and Administration, Release 8.0 Guide. If you are installing Oracle8i Parallel
Server, refer to your Oracle8i installation documentation.

A. Configure the UNIX kernel Interprocess Communication (IPC) parameters to
   accommodate the Shared Global Area (SGA) structure of Oracle8i. Reboot all nodes.

B. Set up the Oracle user environment on all nodes, using your Oracle documentation.

C. Use the scinstall(1M) command to install Sun Cluster 2.2. Specify OPS when
   prompted to select a data service, and use VxVM with the cluster feature
   enabled as the volume manager.

D. Reboot all nodes.

E. Install the ORCLudlm package on all nodes. This package is located on the
   Oracle product CD-ROM, in the ops_patch directory.

F. On all nodes, install VERITAS Volume Manager (VxVM) with the cluster
   feature enabled.

G. Reboot all nodes.

H. Create the VERITAS root disk group (rootdg). See your VERITAS documentation
   for detailed instructions about creating the rootdg.

I. Start the cluster on one node only.

J. Create Oracle disk groups and raw volumes for the OPS database. See your
   VERITAS and Oracle8i installation documentation for details.

K. Configure the quorum devices.

L. Stop the cluster.

M. Restart the cluster on all nodes.

N. Install Oracle with the OPS option on all nodes. See your Oracle/OPS
   documentation for detailed instructions about installing OPS.

O. For OPS versions 8.0.5 and earlier, configure the Oracle GMS daemon. See
   "Starting the Oracle GMS Daemon" on page 392 for details. This step is not
   necessary for OPS 8.1.6 or later versions.

P. Configure the Public Network Management (PNM) group ( pnmset ).

Q. Configure the logical hosts.

R. Put entries in the cluster-controlled vfstab.

S. Create a copy of the CCD as a file.

---------------------------------------------------------------------------------------------------------------------


PLEASE FOLLOW ALL THE STEPS IN THE SAME ORDER FOR A SMOOTH INSTALLATION


SYSTEM INFO :

SYSTEM  :  Two E-4500s, each with 2 * I/O boards / 2 * DWSI / 2 * GBIC (FCAL) / 1 * QFE
Storage :  D1000 and A5200 ( one of each per server )
O/S     :  Solaris 2.6, patch level 105181-22
Cluster Volume Manager (CVM) ver. 2.2.1
Sun Cluster (SC) ver. 2.2

The servers boot off the D1000s; the A5200s are shared between the servers.
There is a NAFO group on only one QFE card.
There is a logical host configured, and its disk group can switch between the
two servers.


  Schematic block diagram

                              [D1000]
                                |  |
                             c0 |  | c1
                  |----------[SERVER]----------|
                  |          hme0  hme1        |  .......> Fiber loop
                  |            |     |         |
               [A5200]         |     |      [A5200]
                  |            |     |         |
                  |          hme0  hme1        |
                  |----------[SERVER]----------|
                             c0 |  | c1
                                |  |
                              [D1000]

The setup used was :

1.) Install Solaris and the Recommended Patches ONLY !!!!
    Please note: do not install any other utilities, shells, or GNU tools,
    as you will otherwise have to spend time setting the correct paths etc.
    The OPS cluster scripts are written to run in the Korn shell ( I had a
    tough time .. )

2.) Configure the UNIX kernel Interprocess Communication (IPC) parameters to
    accommodate the Shared Global Area (SGA) structure of Oracle.

Example :

Edit /etc/system and add the following entries
( these values depend on the system memory and the database used; consult your
DBA ):

set shmsys:shminfo_shmmax=4000000000
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=200
set shmsys:shminfo_shmseg=200
set semsys:seminfo_semmap=250
set semsys:seminfo_semmni=500
set semsys:seminfo_semmns=2010
set semsys:seminfo_semmnu=600
set semsys:seminfo_semume=600
set semsys:seminfo_semmsl=1000
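
After the reboot in step 5 you can sanity-check that the new IPC limits took
effect. This is only a quick sketch (not part of the original notes); sysdef is
a standard Solaris command, though its output format varies slightly by release:

# sysdef | grep -i shm     ( shows the shmsys tunables )
# sysdef | grep -i sem     ( shows the semsys tunables )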

3.) Add host entries on both servers in /etc/hosts ( it is a good idea to add
the logical hosts too ).
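
For illustration only (the addresses below are placeholders, as in the rest of
this document; use your real IPs), the extra /etc/hosts entries on each node
might look like:

# physical hosts
10.1.XX.XX abcluster1
10.1.XX.XX abcluster2
# logical hosts ( used later in step 22 )
10.1.XX.XX ab-l1
10.1.XX.XX ab-l2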

4.) Put these entries in /etc/system for the cluster

* For CLUSTER - PLEASE DO NOT REMOVE
exclude:lofs
set ip:ip_enable_group_ifs=0

5.) Reboot all nodes.

6.) Set up the Oracle user, groups, environment and home directories
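
A minimal sketch of that user setup (the group name dba matches the ORCLudlm
default used in step 9; the home directory and shell are assumptions, so follow
your Oracle documentation for the exact requirements):

# groupadd dba
# useradd -g dba -d /local/homes/oracle -m -s /bin/ksh oracle
# passwd oracle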

7.) Use the scinstall(1M) command to install Sun Cluster 2.2. Specify OPS
    when prompted to select a data service, and use VxVM with the cluster
    feature enabled as the volume manager.

Example of the Scinstall script ...
-------------------------------------------------------------------------------------------
root@abcluster1:/tmp/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.6/Tools 0#
./scinstall

Checking on installed package state
..........

None of the Sun Cluster software has been installed

<<Press return to continue>>


==== Install/Upgrade Software Selection Menu =======================
Upgrade to the latest Sun Cluster Server packages or select package
sets for installation. The list of package sets depends on the Sun
Cluster packages that are currently installed.

Choose one:
1) Upgrade            Upgrade to Sun Cluster 2.2 Server packages
2) Server             Install the Sun Cluster packages needed on a server
3) Client             Install the admin tools needed on an admin workstation
4) Server and Client  Install both Client and Server packages

5) Close      Exit this Menu
6) Quit               Quit the Program

Enter the number of the package set [6]:  2

What is the path to the CD-ROM image [/cdrom/cdrom0]:  
/tmp/suncluster_sc_2_2/

Installing Server packages

< stuff deleted ....>

Install mode [manual automatic] [automatic]:  Select Automatic

< stuff deleted >

Checking on installed package state
..........

Volume Manager Selection

Please choose the Volume Manager that will be used
on this node:

1) Cluster Volume Manager (CVM)
2) Sun StorEdge Volume Manager (SSVM)
3) Solstice DiskSuite (SDS)
Choose the Volume Manager: 1

     ---------WARNING---------
The Cluster Volume Manager (CVM) will need to be installed
before Oracle Parallel Database (OPS) can be started.


<<Press return to continue>>


What is the name of the cluster? ab-cluster

How many potential nodes will ab-cluster have [4]? 2

How many of the initially configured nodes will be active [2]? 2

What type of network interface will be used for this configuration
(ether|SCI)
[SCI]? ether

What is the hostname of node 0 [node0]? abcluster1

What is abcluster1's first private network interface [hme0]?

What is abcluster1's second private network interface [hme1]?

You will now be prompted for ethernet addresses of
the host. There is only one ethernet address for each host
regardless of the number of interfaces a host has. You can get
this information in one of several ways:

1. use the 'banner' command at the ok prompt,
2. use the 'ifconfig -a' command (need to be root),
3. use ping, arp and grep commands. ('ping exxon; arp -a | grep exxon')

Ethernet addresses are given as six hexadecimal bytes separated
by colons. (ie, 01:23:45:67:89:ab)


What is abcluster1's ethernet address? 08:00:20:XX:XX:XX

What is the hostname of node 1 [node1]? abcluster2

What is abcluster2's first private network interface [hme0]?

What is abcluster2's second private network interface [hme1]?

You will now be prompted for ethernet addresses of
the host. There is only one ethernet address for each host
regardless of the number of interfaces a host has. You can get
this information in one of several ways:

1. use the 'banner' command at the ok prompt,
2. use the 'ifconfig -a' command (need to be root),
3. use ping, arp and grep commands. ('ping exxon; arp -a | grep exxon')

Ethernet addresses are given as six hexadecimal bytes separated
by colons. (ie, 01:23:45:67:89:ab)


What is abcluster2's ethernet address? 08:00:20:XX:XX:XX

Will this cluster support any HA data services (yes/no) [yes]?  yes
Okay to set up the logical hosts for those HA services now (yes/no) [yes]?  
no


Performing Quorum Selection

Checking node status...
Checking host abcluster2 ...abcluster2.XYZ.com is alive

One of the specified hosts is either unreachable or permissions
are not set up or the appropriate software has not been installed
on it. In this situation, it is not possible to determine the set
of devices attached to both the hosts.
Note that both private and shared devices for this host will be
displayed in this node and the administrator must exercise extreme
caution in choosing a suitable quorum device.

Getting device information for host abcluster1
This may take a few seconds to a few minutes...done
Select quorum device for abcluster1 and abcluster2.
Type the number corresponding to the desired selection.

You do not see any quorum device as veritas is not installed ...
Finished Quorum Selection

( Please note: if the quorum disks don't show up, they can be configured later
  on. There is no need to panic. )

Installing ethernet Network Interface packages.

What is the path to the CD-ROM image [/tmp/suncluster_sc_2_2/]:

Installing the following packages: SUNWsma

< stuff deleted ..>

Installation of <SUNWsma> was successful.

Checking on installed package state
..........

..........

==== Select Data Services Menu ==========================

Please select which of the following data services
are to be installed onto this cluster. Select singly,
or in a space separated list.
Note: Sun Cluster HA for NFS and Sun Cluster for Informix XPS
are installed automatically with the Server Framework.

You may de-select a data service by selecting it
a second time.

Select DONE when finished selecting the configuration.

        1) Sun Cluster HA for Oracle
        2) Sun Cluster HA for Informix
        3) Sun Cluster HA for Sybase
        4) Sun Cluster HA for Netscape
        5) Sun Cluster HA for Netscape LDAP
        6) Sun Cluster HA for Lotus
        7) Sun Cluster HA for Tivoli
        8) Sun Cluster HA for SAP
        9) Sun Cluster HA for DNS
        10) Sun Cluster for Oracle Parallel Server

INSTALL 11) No Data Services
        12) DONE

Choose a data service: 10


==== Select Data Services Menu ==========================

Please select which of the following data services
are to be installed onto this cluster. Select singly,
or in a space separated list.
Note: Sun Cluster HA for NFS and Sun Cluster for Informix XPS
are installed automatically with the Server Framework.

You may de-select a data service by selecting it
a second time.

Select DONE when finished selecting the configuration.

        1) Sun Cluster HA for Oracle
        2) Sun Cluster HA for Informix
        3) Sun Cluster HA for Sybase
        4) Sun Cluster HA for Netscape
        5) Sun Cluster HA for Netscape LDAP
        6) Sun Cluster HA for Lotus
        7) Sun Cluster HA for Tivoli
        8) Sun Cluster HA for SAP
        9) Sun Cluster HA for DNS
INSTALL 10) Sun Cluster for Oracle Parallel Server

        11) No Data Services
        12) DONE

Choose a data service: 12

What is the path to the CD-ROM image [/tmp/suncluster_sc_2_2/]:

Installing Data Service packages.

< stuff deleted >

Install mode [manual automatic] [automatic]:  Select automatic

  < stuff deleted >

Checking on installed package state
....................

============ Main Menu =================

1) Install/Upgrade - Install or Upgrade Server
             Packages or Install Client Packages.
2) Remove  - Remove Server or Client Packages.
3) Change  - Modify cluster or data service configuration
4) Verify  - Verify installed package sets.
5) List    - List installed package sets.

6) Quit    - Quit this program.
7) Help    - The help screen for this menu.

Please choose one of the menu items: [7]:  6

==== Verify Package Installation ==========================
Installation
    All  of the install      packages have been installed
Framework
    None of the client       packages have been installed
    All  of the server       packages have been installed
Communications
    All  of the SMA          packages have been installed
Data Services
    All  of the Sun Cluster HA for Oracle packages have been installed
    None of the Sun Cluster HA for Informix packages have been installed
    None of the Sun Cluster HA for Sybase packages have been installed
    None of the Sun Cluster HA for Netscape packages have been installed
    None of the Sun Cluster HA for Netscape LDAP packages have been
installed
    None of the Sun Cluster HA for Lotus packages have been installed
    None of the Sun Cluster HA for Tivoli packages have been installed
    None of the Sun Cluster HA for SAP packages have been installed
    None of the Sun Cluster HA for DNS packages have been installed
    None of the Sun Cluster for Oracle Parallel Server packages have been
installed

root@abcluster1:/tmp/suncluster_sc_2_2/Sun_Cluster_2_2/Sol_2.6/Tools 0#


8). Reboot all nodes.

9). Install the ORCLudlm package on all nodes. This package is located on
    the Oracle product CD-ROM, in the ops_patch directory.
    It is just a pkgadd command; when it prompts for the group, please
    select the default group "dba".
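
For example (a sketch only; the CD-ROM mount point is an assumption):

# cd /cdrom/cdrom0/ops_patch
# pkgadd -d . ORCLudlm        ( accept the default group "dba" when prompted )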

10). On all nodes, install VERITAS Volume Manager (VxVM) with the cluster
    feature enabled ( Cluster Volume Manager ). Do NOT forget to install the
    CVM patch cluster ( 107958-06.tar.Z ).
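
A sketch of applying and verifying the patch on each node (the extract
directory name is an assumption; patchadd and showrev are standard Solaris
commands):

# cd /var/tmp
# zcat 107958-06.tar.Z | tar xf -
# patchadd 107958-06
# showrev -p | grep 107958     ( confirm the patch shows up on every node )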

11). Reboot all nodes.
     # vxdctl license must show this output, else the patches have not been
     applied

# vxdctl license

All features are available:
Mirroring
Concatenation
Disk-spanning
Striping
RAID-5
VxSmartSync


12). Create the VERITAS root disk group (rootdg). Encapsulate the root disk.
     Change the owner:group for the raw partitions.

Create a disk group, initialize the disks, and create spare disks
( NO NEED TO INITIALIZE CCD DISKS )

# vxdiskadd disk1, disk2 ...

Create Volumes

Example:

vxassist -g oracledg   make <volume> <size>m  alloc="oracled09"
vxassist -g oracledg   make <volume> <size>m layout=stripe alloc="oracled14 oracled18"

Mirror the Volumes

Example

# vxassist -g oracledg mirror <volume> layout=stripe alloc="oracled25 oracled26 oracled39"

Change permissions on the Volumes ( raw partitions)

# vxedit -g oracledg set user=oracle group=dba mode=0600 <volume name>
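
To double-check the result, list the raw device nodes the Oracle instance will
open (a quick sketch; -L follows the symbolic links so the real device
ownership and mode are shown):

# ls -lL /dev/vx/rdsk/oracledg/<volume name>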

13). Start the cluster on one node only.

   # /opt/SUNWcluster/bin/scadmin startcluster <nodename> <cluster name>

Ex:

  #/opt/SUNWcluster/bin/scadmin startcluster abcluster1 ab-cluster

To check that it is working:

# /opt/SUNWcluster/bin/get_node_status

14). Create Oracle disk groups and raw volumes for the OPS database. Deport the
     diskgroup and then import it with the shared option:

     # vxdg deport oracledg
     # vxdg -s import oracledg

Now vxdisk list must show that the disks are shared.
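
For example, a quick check could be:

# vxdisk list | grep shared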

15). Stop the cluster.

16). Restart the cluster on all nodes.

  On the second node use the command

# /opt/SUNWcluster/bin/scadmin startnode


# /opt/SUNWcluster/bin/get_node_status

sc: included in running cluster
node id: 0
membership: 0 1
interconnect0: selected
interconnect1: up
vm_type: cvm
vm_on_node: master
vm: up
db: down

17.) Stop the cluster on both servers and reboot.
     After the reboot the shared oracledg disk group doesn't come up, as the
     cluster would have deported it.

     WE HAVE TO START THE CLUSTER TO SEE THE SHARED DISK GROUP
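
A sketch of that stop/reboot/start sequence (using scadmin stopnode as the
per-node stop command is an assumption; adjust to your environment):

# /opt/SUNWcluster/bin/scadmin stopnode        ( run on each node )
# init 6                                       ( reboot both servers )
# /opt/SUNWcluster/bin/scadmin startcluster abcluster1 ab-cluster
# /opt/SUNWcluster/bin/scadmin startnode       ( on the second node )
# vxdisk list                                  ( oracledg shows up shared again )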

18.) To get HA status


# /opt/SUNWcluster/bin/get_ha_status

HASTAT_CONFIG_STATE:
     Configuration State on abcluster1: Stable
HASTAT_CONFIG_STATE:
HASTAT_CURRENT_MEMBERSHIP
     abcluster1 is a cluster member
HASTAT_CURRENT_MEMBERSHIP
HASTAT_UPTIME_STATE:
     uptime of abcluster1:   4:27pm  up 22:56,  3 users,  load average:
0.01, 0.01, 0.01
HASTAT_UPTIME_STATE:
HASTAT_PRIV_NET_STATUS
     Status of Interconnects on abcluster1:
        interconnect0: selected
        interconnect1: up
     Status of private nets on abcluster1:
        To abcluster1 - UP
        To abcluster2 - UP
HASTAT_PRIV_NET_STATUS
HASTAT_LOGICAL_HOSTS_MASTERED
Logical Hosts Mastered on abcluster1:
        None
Logical Hosts for which abcluster1 is Backup Node:
        None
HASTAT_LOGICAL_HOSTS_MASTERED
HASTAT_PUBLIC_NET_STATUS
Status of Public Network On abcluster1:

HASTAT_PUBLIC_NET_STATUS
HASTAT_SERVICE_STATUS
Status Of Data Services Running On abcluster1
       None running.
HASTAT_SERVICE_STATUS
HASTAT_RECENT_ERR_MSGS
Recent Error Messages on abcluster1

Sep 20 14:18:54 abcluster1 ftpd[3431]: pam_authenticate: error
Authentication failed
Sep 20 14:19:01 abcluster1 sshd[3440]: log: Connection from 10.1.67.170 port
1018
Sep 20 14:19:01 abcluster1 sshd[3440]: log: Could not reverse map address
10.1.67.170.
Sep 20 14:33:26 abcluster1 sshd[816]: log: Generating new 768 bit RSA key.
Sep 20 15:33:31 abcluster1 sshd[816]: log: RSA key generation complete.
HASTAT_RECENT_ERR_MSGS
hastatver
#
---------------------------------------------------------------------------------

19 ) CHECKING THE CLUSTER AND CONFIGURING THE QUORUM DEVICE

#/opt/SUNWcluster/bin# ./scconf ab-cluster -p

( scconf <cluster-name> -p )

Checking node status...
Current Configuration for Cluster ab-cluster

   Hosts in cluster: abcluster1 abcluster2

   Private Network Interfaces for

      abcluster1:      hme0 hme1
      abcluster2:      hme0 hme1

  Quorum Device Information

  Logical Host Timeout Value :
        Step10            :720
        Step11            :720
        Logical Host              :180
#
As we do not see any quorum device, we have to configure it.

20)   CONFIGURING QUORUM DEVICE

#/opt/SUNWcluster/bin# ./scconf ab-cluster -q abcluster1 abcluster2

( scconf <cluster-name> -q nodename1 nodename2 )

Checking node status...
Checking host abcluster2 ...abcluster2 is alive

One of the specified hosts is either unreachable or permissions
are not set up or the appropriate software has not been installed
on it. In this situation, it is not possible to determine the set
of devices attached to both the hosts.
Note that both private and shared devices for this host will be
displayed in this node and the administrator must exercise extreme
caution in choosing a suitable quorum device.

Getting device information for host abcluster1
This may take a few seconds to a few minutes...done
Select quorum device for abcluster1 and abcluster2.
Type the number corresponding to the desired selection.
For example: 1
 1) DISK:c0t0d0s2:0003B13198
 2) DISK:c0t10d0s2:0003A28249
 3) DISK:c0t16d0s2:0003932406
 4) DISK:c0t17d0s2:0003B02386
 5) DISK:c0t18d0s2:0003B22153
 6) DISK:c0t19d0s2:0003B15653
 7) DISK:c0t1d0s2:0003B32528
 8) DISK:c0t20d0s2:0003B29419
 9) DISK:c0t21d0s2:0003B24192
10) DISK:c0t22d0s2:0003A88525
11) DISK:c0t23d0s2:0003B27421
12) DISK:c0t24d0s2:0003B14283
13) DISK:c0t25d0s2:0003B27169
14) DISK:c0t26d0s2:0003B30885
15) DISK:c0t2d0s2:0003A96520
16) DISK:c0t3d0s2:0003B35054
17) DISK:c0t4d0s2:0003B32145
18) DISK:c0t5d0s2:0003B35768
19) DISK:c0t6d0s2:0003A95311
20) DISK:c0t7d0s2:0003A74375
21) DISK:c0t8d0s2:0003B17793
22) DISK:c0t9d0s2:0003B10212
23) DISK:c2t10d0s2:0002476890
24) DISK:c2t8d0s2:9948434305
25) DISK:c2t9d0s2:9948432090
26) DISK:c3t32d0s2:0001918411
27) DISK:c3t33d0s2:0001919550
28) DISK:c3t34d0s2:0001943527
29) DISK:c3t35d0s2:0001A12717
30) DISK:c3t36d0s2:0001940607
31) DISK:c3t37d0s2:0001941811
32) DISK:c3t38d0s2:0001946007
33) DISK:c3t39d0s2:0001945279
34) DISK:c3t40d0s2:0001920447
35) DISK:c3t41d0s2:0001943193
36) DISK:c3t42d0s2:0001940666
37) DISK:c3t48d0s2:0003B39049
38) DISK:c3t49d0s2:0003B46831
39) DISK:c3t50d0s2:0003A37373
40) DISK:c3t51d0s2:0003B37909
41) DISK:c3t52d0s2:0003B31175
42) DISK:c3t53d0s2:0003B34390
43) DISK:c3t54d0s2:0003A88504
44) DISK:c3t55d0s2:0003A90841
45) DISK:c3t56d0s2:0003B30606
46) DISK:c3t57d0s2:0003B35124
47) DISK:c3t58d0s2:0003B20026
48) DISK:c4t4d0s2:0022422732
49) DISK:c4t5d0s2:0002476791
Quorum device: 13
Disk c0t25d0s2 with serial id 0003B27169 has been chosen
as the quorum device.

Finished Quorum Selection

Now we will be able to see the Quorum device

Checking node status...
Current Configuration for Cluster ab-cluster

   Hosts in cluster: abcluster1 abcluster2

   Private Network Interfaces for

      abcluster1:      hme0 hme1
      abcluster2:      hme0 hme1


  Quorum Device Information
Quorum device for hosts abcluster1 and abcluster2: 0003B27169

  Logical Host Timeout Value :
        Step10            :720
        Step11            :720
        Logical Host              :180


-----------------------------------------------------------------------------------

21 ) To configure shared CCD device

execute these commands on both the servers while the cluster is up and
running

# /opt/SUNWcluster/bin/scconf ab-cluster -S ccdvol
Checking node status...
Purified ccd file written to
/etc/opt/SUNWcluster/conf/ccd.database.init.pure
There were 0 errors found.
#
Please note there is no such file ccd.database.init.pure even though it says
so (BUG).

To Configure the Shared CCD ..

# /opt/SUNWcluster/bin/confccdssa ab-cluster
The disk group sc_dg does not exist.
Will continue with the sc_dg setup.

On a 2-node configured cluster you may select two disks
that are shared between the 2 nodes to store the CCD
database in case of a single node failure.

Please, select the disks you want to use from the following list:

Select devices from list.
Type the number corresponding to the desired selection.
For example: 1<CR>

1) DISK:c0t25d0s2:0003B27169
2) DISK:c2t10d0s2:0002476890
3) DISK:c3t57d0s2:0003B35124
4) DISK:c4t5d0s2:0002476791
Device 1: 1

Disk c0t25d0s2 with serial id 0003B27169 has been selected
as device 1.


Select devices from list.
Type the number corresponding to the desired selection.
For example: 1<CR>

1) DISK:c2t10d0s2:0002476890
2) DISK:c3t57d0s2:0003B35124
3) DISK:c4t5d0s2:0002476791
Device 2: 2

Disk c3t57d0s2 with serial id 0003B35124 has been selected
as device 2.

newfs: construct a new file system /dev/vx/rdsk/sc_dg/ccdvol: (y/n)? y
Warning: 480 sector(s) in last cylinder unallocated
/dev/vx/rdsk/sc_dg/ccdvol:      20000 sectors in 10 cylinders of 32 tracks,
64 sectors
        9.8MB in 1 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32,
#

NOTE:

After the cluster is brought down and up again the ccd will be mounted on
one node

/dev/vx/dsk/sc_dg/ccdvol
                        9007      17     8090      0%   
/etc/opt/SUNWcluster/conf/ccdssa

Also the vxdisk list will show

.
.
.
<stuff deleted>
c0t23d0s2    sliced    oracled19    oracledg     online shared
c0t24d0s2    sliced    oracled20    oracledg     online shared
c0t25d0s2    sliced    Combo1@25.0  sc_dg        online
c0t26d0s2    sliced    oracled41    oracledg     online shared spare
.
<stuff deleted>
.
.
c3t55d0s2    sliced    oracled39    oracledg     online shared
c3t56d0s2    sliced    oracled40    oracledg     online shared
c3t57d0s2    sliced    Combo2@57.0  sc_dg        online
c3t58d0s2    sliced    oracled42    oracledg     online shared spare
.
<stuff deleted>

-------------------------------------------------------------------------------------
22.) To create a logical host and NAFO group ...

Create separate disk groups with the disks required ( Ex. oraarchdg1, oraarchdg2 );
do not share these disk groups.
Create the mount points / filesystems, and change the permissions on the
partitions using vxedit ( a short sketch follows below ).

NOTE: leave about 10MB of space for the Logical Host Administrative Filesystem.
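
A minimal sketch of preparing one such disk group and filesystem (the disk is
the one oraarchdg1 actually uses in this setup; the volume name and size are
examples only):

# vxdiskadd c0t10d0                              ( add the disk to oraarchdg1 when prompted )
# vxassist -g oraarchdg1 make abi1 16g
# newfs /dev/vx/rdsk/oraarchdg1/abi1
# vxedit -g oraarchdg1 set user=oracle group=dba abi1
# mkdir -p /local/homes/oracle/arch/abi1         ( mount point; mounted later via the HA vfstab )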

The steps involved ( These have to be done on Both the servers )


1.) Select logical host names and add entries to /etc/hosts on both of the
nodes with valid IP addresses


# Logical host entries
10.1.XX.XX ab-l1
10.1.XX.XX ab-l2


23.)  Configure Public Network Management (PNM) using pnmset

# /local/opt/SUNWpnm/bin/pnmset

In the following, you will be prompted to do
configuration for network adapter failover

do you want to continue ... [y/n]: y

How many NAFO backup groups on the host [1]: < since we have only one qfe, we
select one and press ENTER >

Enter backup group number [0]: 113 < enter a number; any number can be used >

Please enter all network adapters under nafo113
qfe0 ( enter the interfaces used )

The following test will evaluate the correctness
of the customer NAFO configuration...
name duplication test passed

Check nafo113... < 20 seconds
qfe0 is active remote address = 10.1.80.132
nafo113 test passed
#

24.) Next configure the Logical host


Syntax

scconf <cluster name> -L <logical host name> -n <node list> -g <dg list>
       -i <interface list>,<logical node name> -m

nodelist is the list of nodes the logical host can run on; interfacelist is the
interfaces (qfe0 in this setup) the logical host uses for network access; the
logical node name at the end of -i is the name of the logical host itself.

scconf <cluster name> -F <logical host>

This command creates the admin file system for the logical host
administration files.

a.) from cluster1

/opt/SUNWcluster/bin/scconf ab-cluster -L ab-l1 -n abcluster1,abcluster2 -g oraarchdg1 -i qfe0,qfe0,ab-l1 -m
Checking node status...
#
/opt/SUNWcluster/bin/scconf ab-cluster -F ab-l1
Checking node status...

b.) From cluster2

#/opt/SUNWcluster/bin/scconf ab-cluster -L ab-l2 -n abcluster2,abcluster1 -g oraarchdg2 -i qfe0,qfe0,ab-l2 -m
Checking node status...
#

/opt/SUNWcluster/bin/scconf ab-cluster -F ab-l2
Checking node status...
#

25.) Mounting of the filesystems

Put entries in the

/etc/opt/SUNWcluster/conf/hanfs/vfstab on both of the servers

a.) on server1

/etc/opt/SUNWcluster/conf/hanfs/vfstab

-rw-r--r-- 1 root other 26 Oct 20 18:15 dfstab.ab-l1
-rw-r--r-- 1 root other 207 Oct 20 18:47 vfstab.ab-l1
-rw-r--r-- 1 root other 207 Oct 20 18:46 vfstab.ab-l2


# cat vfstab.ab-l1  ( the first line is put in by the cluster; we only have to
add the second line )

/dev/vx/dsk/oraarchdg1/oraarchdg1-stat
/dev/vx/rdsk/oraarchdg1/oraarchdg1-stat /ab-l1 ufs 1 no -
/dev/vx/dsk/oraarchdg1/abi1 /dev/vx/rdsk/oraarchdg1/arbi1
/local/homes/oracle/arch/abi1 ufs 1 no -

# cat vfstab.ab-l2

/dev/vx/dsk/oraarchdg2/oraarchdg2-stat
/dev/vx/rdsk/oraarchdg2/oraarchdg2-stat /ab-l2 ufs 1 no -
/dev/vx/dsk/oraarchdg2/abi2 /dev/vx/rdsk/oraarchdg2/arbi2
/local/homes/oracle/arch/abi2 ufs 1 no -


b.) on server2

/etc/opt/SUNWcluster/conf/hanfs/vfstab

-rw-r--r-- 1 root other 26 Oct 20 18:15 dfstab.ab-l2
-rw-r--r-- 1 root other 207 Oct 20 18:47 vfstab.ab-l1
-rw-r--r-- 1 root other 207 Oct 20 18:46 vfstab.ab-l2


# cat vfstab.ab-l1  ( the first line is put in by the cluster; we only have to
add the second line )

/dev/vx/dsk/oraarchdg1/oraarchdg1-stat
/dev/vx/rdsk/oraarchdg1/oraarchdg1-stat /ab-l1 ufs 1 no -
/dev/vx/dsk/oraarchdg1/abi1 /dev/vx/rdsk/oraarchdg1/arbi1
/local/homes/oracle/arch/abi1 ufs 1 no -

# cat vfstab.ab-l2

/dev/vx/dsk/oraarchdg2/oraarchdg2-stat
/dev/vx/rdsk/oraarchdg2/oraarchdg2-stat /ab-l2 ufs 1 no -
/dev/vx/dsk/oraarchdg2/arbi2 /dev/vx/rdsk/oraarchdg2/arbi2
/local/homes/oracle/arch/abi2 ufs 1 no -

#
26.) Issue the haswitch command to initiate the mount; the respective
directories are then mounted.

/opt/SUNWcluster/bin/haswitch abcluster1 ab-l1

/opt/SUNWcluster/bin/haswitch abcluster2 ab-l2

df -k must show the mounted filesystems on both nodes, along with the
administrative filesystem.

/dev/vx/dsk/oraarchdg1/oraarchdg1-stat 1055 9 941 1% /ab-l1

/dev/vx/dsk/oraarchdg1/arbi1 16128634 1452286 14515062 10%
/local/homes/oracle/arch/arbi1


27.) Create a copy of the CCD in a file.


ccdadm -c <clustername> <filename>

It creates a backup of the current ccd to a file named <filename>.

This is a good thing to do before any changes are made to the cluster that
could potentially corrupt the ccd. That way if anything does go wrong, a
clean copy of the ccd can be created from the backup.
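
For example, with the cluster name used throughout this document (the
destination file name is just an example):

# ccdadm -c ab-cluster /var/tmp/ccd.backup.`date +%Y%m%d`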

-------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------
A glance at the system:

# vxdisk list ( from node 1)

DEVICE       TYPE      DISK         GROUP        STATUS
c0t0d0s2     sliced    oracled01    oracledg     online shared
c0t1d0s2     sliced    oracled02    oracledg     online shared
c0t2d0s2     sliced    oracled03    oracledg     online shared
c0t3d0s2     sliced    oracled04    oracledg     online shared
c0t4d0s2     sliced    oracled05    oracledg     online shared
c0t5d0s2     sliced    oracled06    oracledg     online shared
c0t6d0s2     sliced    oracled07    oracledg     online shared
c0t7d0s2     sliced    oracled08    oracledg     online shared
c0t8d0s2     sliced    oracled09    oracledg     online shared
c0t9d0s2     sliced    oracled10    oracledg     online shared
c0t10d0s2    sliced    oracled11    oraarchdg1   online
c0t16d0s2    sliced    -            -            online
c0t17d0s2    sliced    oracled13    oracledg     online shared
c0t18d0s2    sliced    oracled14    oracledg     online shared
c0t19d0s2    sliced    oracled15    oracledg     online shared
c0t20d0s2    sliced    oracled16    oracledg     online shared
c0t21d0s2    sliced    oracled17    oracledg     online shared
c0t22d0s2    sliced    oracled18    oracledg     online shared
c0t23d0s2    sliced    oracled19    oracledg     online shared
c0t24d0s2    sliced    oracled20    oracledg     online shared
c0t25d0s2    sliced    Combo1@25.0  sc_dg        online
c0t26d0s2    sliced    oracled41    oracledg     online shared spare
c2t8d0s2     sliced    rootdisk     rootdg       online
c2t9d0s2     sliced    orahome01    orahome      online
c2t10d0s2    sliced    orabkp01     orabkp       online
c2t11d0s2    sliced    orabkp02     orabkp       online
c2t12d0s2    sliced    orabkp03     orabkp       online
c2t13d0s2    sliced    orabkp04     orabkp       online
c3t32d0s2    sliced    oracled21    oracledg     online shared
c3t33d0s2    sliced    oracled22    oracledg     online shared
c3t34d0s2    sliced    oracled23    oracledg     online shared
c3t35d0s2    sliced    oracled24    oracledg     online shared
c3t36d0s2    sliced    oracled25    oracledg     online shared
c3t37d0s2    sliced    oracled26    oracledg     online shared
c3t38d0s2    sliced    oracled27    oracledg     online shared
c3t39d0s2    sliced    oracled28    oracledg     online shared
c3t40d0s2    sliced    oracled29    oracledg     online shared
c3t41d0s2    sliced    oracled30    oracledg     online shared
c3t42d0s2    sliced    oracled31    oraarchdg1   online
c3t48d0s2    sliced    -            -            online
c3t49d0s2    sliced    oracled33    oracledg     online shared
c3t50d0s2    sliced    oracled34    oracledg     online shared
c3t51d0s2    sliced    oracled35    oracledg     online shared
c3t52d0s2    sliced    oracled36    oracledg     online shared
c3t53d0s2    sliced    oracled37    oracledg     online shared
c3t54d0s2    sliced    oracled38    oracledg     online shared
c3t55d0s2    sliced    oracled39    oracledg     online shared
c3t56d0s2    sliced    oracled40    oracledg     online shared
c3t57d0s2    sliced    Combo2@57.0  sc_dg        online
c3t58d0s2    sliced    oracled42    oracledg     online shared spare
c4t0d0s2     sliced    orabkp05     orabkp       online
c4t1d0s2     sliced    orabkp06     orabkp       online
c4t2d0s2     sliced    orabkp07     orabkp       online
c4t3d0s2     sliced    rootdisk2    rootdg       online
c4t4d0s2     sliced    orahome02    orahome      online
c4t5d0s2     sliced    orabkp08     orabkp       online


# vxdg list ( from cluster1 )

NAME STATE ID
rootdg enabled 969313189.1025.abcluster.com          # which is on the D1000
oraarchdg1 enabled 971969219.1932.abcluster1.com     # is on a5200
orabkp enabled 971288916.1879.abcluster1.com         # is on D1000
oracledg enabled,shared 968779293.1629.abcluster1    # is on A5200
orahome enabled 969377297.1488.abcluster1.com        # is on D1000
sc_dg enabled 969397558.1566.abcluster1.com          # is on A5200; the cluster creates this

# vxdg list ( from cluster2 )

NAME STATE ID
rootdg enabled 969313241.1025.abcluster2.com
oraarchdg2 enabled 971969352.1939.abcluster1.com
orabkp enabled 971288081.2017.abcluster2.com
oracledg enabled,shared 968779293.1629.abcluster1
orahome enabled 969377672.1480.abcluster2.com

------------------------------------------------------------------







Post #2, posted 2002-10-15 16:45

Solaris 8.0+OPS Installation

Wow, where did you get this from? It is even more detailed than Metalink. Much respect. The only pity is that chances to practice it are too few.