Honestly, I'm very interested in VIO, but I don't have enough time. Even if I did, I'd probably only get to practice on AIX. It looks like Linux is also part of the POWER strategy -- promising...
Introduction
Virtualisation is a hot topic, and the computing industry has varying views on which technologies and solutions to recommend, develop, and use. Through examples that apply equally to pSeries, p5, and eServer OpenPower systems, this article shows how to set up and use the SUSE Linux Virtual I/O Server (VIO Server). The SUSE Linux VIO Server is an alternative to the VIO Server software offered by IBM. It provides:
- A more familiar Linux™ command environment
- The option to run systems management tasks or applications alongside the VIO Server
- Different device support options
- Alternative costs and support
Background
The POWER5-based machines have inherited the "know how" from IBM mainframes to provide opportunities for a significant reduction in operating costs for complex environments. Unlike software solutions available from other vendors, the POWER5 implementation uses advanced processor features, firmware (called the POWER Hypervisor™), and hardware features to create efficient and flexible virtualisation capabilities. Uniquely, these capabilities are offered from the top to the bottom of the server range -- from a powerful 64-way SMP machine down to a two-way, desk-side system. The key to this virtualisation is the VIO Server.
What is a VIO Server?
Since October 2001, pSeries servers from IBM have allowed you to divide a machine into logical partitions (LPARs), with each LPAR running a different operating system image -- effectively a server within a server. You achieve this by logically splitting up a large machine into smaller units with independent CPU, memory, and PCI adapter slot allocations.
The new POWER5 machines (pSeries, p5, and OpenPower servers) can also run an LPAR with less than one whole CPU -- up to 10 LPARs per CPU. So on a four CPU machine, 20 LPARs can easily be running. With each LPAR needing a minimum of one SCSI adapter for disk I/O and one Ethernet adapter for networking, the 20 LPARs would require the server to have at least 40 PCI adapters. This is where the VIO Server helps.
The VIO Server owns real PCI adapters (Ethernet, SCSI, or SAN) but lets other LPARs share them remotely using the built-in Hypervisor services. These other LPARs are Virtual I/O client partitions (VIO client) and, because they don't need real physical disks or Ethernet adapters to run, you can create them quickly and cheaply.
VIO Server implementations
You can have many different VIO Server implementations:
- The Advanced Power Virtualisation (APV) VIO Server from IBM for pSeries, p5, and the Advanced Open Power Virtualisation (AOPV) VIO Server from IBM for OpenPower machines are possible implementations. Both of these are special-purpose, single-function appliances and are not intended to run general applications.
- The Linux VIO Server for pSeries, p5, or OpenPower hardware first became available with the SUSE SLES 9 distribution. Unlike the APV and AOPV, this is a full copy of the Linux operating system. This means it can run other central services such as:
- NFS
- Network installation
- DNS
- Web site (Apache)
- Samba services
You need to make sure that these functions do not interfere with the performance of the VIO Server service. This software is also available on the Debian Linux for POWER distribution.
There are four different VIO client implementations (actually, these are just the regular operating systems, but they include the device drivers for running a VIO client):
- AIX® 5.3 (only supported by the APV or AOPV VIO Server)
- Linux -- SUSE SLES 9
- Linux -- Red Hat 3 (update 3 onwards) and Red Hat 4
- Linux -- Debian for POWER
This article only covers the SUSE Linux VIO clients. It should also apply to the Debian version, but this has not been explicitly tested.
Virtual SCSI disks
The VIO Server provides a virtual SCSI disk service, as shown in Figure 1 below.
Figure 1. Virtual SCSI disk service
Figure 1 shows a single VIO Server providing virtual SCSI services to multiple VIO client partitions. Each VIO client operates as if it had a dedicated SCSI device but, in fact, each client device is a real disk partition (like a partition created using the Linux fdisk command) on the VIO Server. Alternatively, on the VIO Server, it could use a loopback device driver to a file in a regular file system. The VIO Server and VIO client communicate using the internal pSeries Hypervisor firmware (PHYP) feature, which efficiently allows disk I/O requests to be transferred between the LPARs using a message-passing protocol. The VIO Server in the figure has four disks, which could be SCSI or Fibre Channel SAN disks. Data protection can be provided by hardware or software (Linux) mirroring or RAID 5. The VIO clients use the VIO client device driver just as they would a regular local disk device to communicate with the matching VIO Server device driver; the VIO Server then actually does the disk transfers on behalf of the VIO client. Note that there is a strict client/server relationship between the VIO client and the VIO Server.
Virtual Ethernet
The LPARs in the machine can use the virtual Ethernet switch service (in the Hypervisor) in a number of different ways.
- Case one -- internal-only networks: You can use the virtual Ethernet to allow TCP/IP (Transmission Control Protocol/Internet Protocol) communication between the LPARs -- see Figure 2 below. This provides high-speed data transfer without any hardware adapters, starting at roughly one GB per second, but it can be much higher -- especially using larger block sizes. You'll also notice that there is no client/server relationship between the LPARs -- all use the virtual Ethernet equally. There can be many virtual Ethernets in one machine, where groups of LPARs can communicate only within the virtual Ethernet they connect to. This allows fast communication and complete security, without purchasing additional Ethernet adapters, cables, hubs, or routers.
Figure 2. Virtual Ethernet -- Private/internal only networks
- Case two -- routing to a physical LAN: One LPAR on the virtual Ethernet can also communicate externally to other machines using a real physical network on behalf of all the LPARs. In this case, this special LPAR is used to route Ethernet packets between the internal virtual Ethernet and the external physical Ethernet network. Standard Linux features can be used to do this. This works well but involves setting up TCP/IP routes between the two networks (internal and external) and can take time to set up. Figure 3 below shows one LPAR with a real physical Ethernet adapter providing standard network routing between the two Ethernets. Note: This does not use any VIO Server features.
Figure 3. Internal virtual Ethernet with route to external LAN
- Case three -- bridging to a physical LAN: Here, the VIO Server is used to bridge Ethernet packets between the internal virtual Ethernet and the external physical Ethernet network, so that all the LPARs appear as regular machines on the physical network. This is simple to set up and is the option used in the example in this article. In Figure 4 below, the VIO Server is used to join the two networks using the SUSE bridging utilities package. Strictly speaking, this is a completely separate function from the virtual SCSI server, but it's common practice to run the virtual SCSI server and TCP/IP bridging on the same LPAR.
Figure 4. Internal virtual Ethernet with bridge to external LAN
- Case four -- bridging with VLANs (virtual LANs): This is the same as case three, except there are a number of VLANs within the machine (using virtual Ethernet). These are connected to VLANs on the external network using a bridging LPAR and a network router that supports VLANs. This complex scenario is beyond the scope of this article, but it is supported and some hints are included.
Why use the SUSE VIO Server?
You can use a VIO Server in any number of scenarios. Here are five typical examples that would make good use of a VIO Server. You might find your environment needs are similar to one of these:
Small machine with limited PCI slots
The OpenPower 710 or 720 or the p5 models 510, 520, or 550 have one to four CPUs but limited adapter slots (three or five), in addition to two built-in Ethernets and one built-in SCSI adapter. These machines run out of PCI slots as you add LPARs.
- You have one set of internal SCSI disks, or you can split the SCSI disks into two four-packs on the OpenPower 720 or p5-550. This gives you two LPARs at most using the internal disks. So, you might run a VIO Server to support the other LPARs.
- You can use a VIO Server (0.5 of a CPU) plus four to six clients (0.1 to 1 CPU). Typically, clients might be small, for example, four to 16 GB virtual SCSI disks and one virtual Ethernet for the whole machine.
Figure 5 shows multiple LPARs running on a single disk pack.
Figure 5. Multiple LPARs
Mid-range machines with extra small workloads
This might be an eight or 16 CPU machine with large partitions for production use. But many system administrators also want a small number of extra LPARs rather than purchase an extra machine. A VIO Server can easily host a half dozen smaller LPARs.
- For example, larger production LPARs might have one to four larger dedicated CPUs, disk I/O, and networks each.
- The VIO Server is used for "bits and bobs" LPARs such as test, development, training, upgrade practice, new application trials, and so forth. Typically, VIO clients might have 4 GB to 8 GB virtual SCSI disks and one or two virtual Ethernets.
Figure 6 below shows three large production LPARs running (they would have dedicated disks and Ethernet) with a few extra small VIO clients and one VIO Server on the machine using "spare" capacity. This spare capacity could be demanded by the production LPARs during peaks in their workload.
Figure 6. Three large production LPARs
Ranch or server farm style
Lots of small server consolidation workloads from smaller/older machines or many small servers are required, but they are unlikely to peak at the same time.
- The machine is to run lots of LPARs (for example, 10 to 20 clients on a four-way machine or many times that on larger machines). Each LPAR is for small applications but not high demand (0.2, or 0.5 CPU up to 2 CPU).
- This could be server consolidation or, for example, a collection of small Web servers where isolation of the data is important.
- VIO Server has one or two CPUs and probably RAID 5 SCSI disks or SAN disks.
- Typically, clients have one or more 4 GB virtual SCSI disks each and might have different groups of LPARs around a different virtual Ethernet.
Figure 7. Different groups of LPARs
Figure 7 shows dozens of VIO clients with a medium-size VIO Server supporting them on what might be several disk packs.
Serious I/O setup only once (to reduce setup and management)
- For example, the VIO Server has SAN disks connected using two to four Fibre Channel adapters and two Ethernet adapters to run EtherChannel for redundancy and additional bandwidth.
- The VIO Server has load balancing and fail over, but VIO clients have much simpler disk and Ethernet setup.
- Typically, the VIO Server could have one to three CPUs, but the VIO clients are larger too. For example, The VIO client might have one to eight CPUs to run large applications. They can have over 100 GB of virtual SCSI disks and many virtual Ethernets.
- This complex setup is not covered in this article, as it's an advanced topic.
Figure 8 shows one regular LPAR (it would have dedicated disks and Ethernet) and a large VIO Server configured with multiple paths to disks and Ethernet. This supports some large VIO client LPARs.
Figure 8. Regular LPAR
Serious with high availability backup
Same as above, but use a second VIO Server for availability/throughput.
- There is an argument that, for very high availability, you should spread your access to virtual SCSI and virtual Ethernet across two VIO Servers so that if you lose one VIO Server, you can carry on running.
- The counter-argument is that the VIO Server is only running a few device drivers. Device drivers are extremely reliable. Also, anything that would crash one VIO Server could crash the second one too!
Figure 9 shows that instead of using a local physical device driver, the VIO client uses the virtual resource device drivers to communicate with the VIO Server, which does the real I/O. There is very little code running on the VIO Server except the virtual VIO Server device drivers and the physical resource device drivers. This means there is little to go wrong on the VIO Server side.
Figure 9. VIO Server
This article does not cover duplicated VIO Servers. Further details can be found in the IBM Redbook, Advanced POWER Virtualisation on IBM p5 Servers (see Resources).
Prerequisites before you start
Software
Your first question might be: Where do I get the SUSE VIO Server, and how do I know if the SUSE VIO Server software is installed?
Answer:
- It's always installed with SUSE itself, as it's an integral function of SUSE SLES 9.
Notes:
- It's not available with SUSE SLES 8, as Linux 2.6 kernel features are used.
- It's not available with Red Hat EL3 or Red Hat EL4.
- Install all the available SUSE Service Packs.
- The VIO Server is also available with the Debian distribution of Linux on POWER.
Hardware
You need:
- An OpenPower, pSeries, or p5 machine with spare resources:
- Some CPU resources (can be less than one CPU)
- Memory: 512 MB per LPAR (can be just 256 MB)
- Real Ethernet adapter.
- Some time with the CD drive (Unless you prefer network installation, but that is not covered here.)
- SCSI adapter and a SCSI disk (A SAN disk could equally be used, but the details are not included here.)
- The hardware virtualisation feature, which is needed for LPAR and Virtual Server features but optional on some POWER5 machines.
- SUSE SLES 9 for POWER set of CDs.
- There is also a bridge utilities package you need to install, but it is on the standard SUSE SLES 9 CDs.
Skills
This document does not intend to take a "boil-the-ocean" approach and will not show you screen-by-screen detail for every input field.
You should already understand the following:
- Basic Linux systems administration, such as installing an RPM (rpm -Uvh package.rpm), configuring an Ethernet network adapter (ifconfig eth0 with an IP address and netmask 255.255.255.0), and managing a filesystem (mount /dev/sda5 /mnt). As these tasks are identical to working on the Intel platform, there are many books, training courses, and Internet materials covering these regular system administration commands and tasks.
- How to install SUSE Linux in either text mode (on a dumb/ASCII screen) or using a VNC session -- Once you have installed Linux a couple of times, this becomes a simple "follow your nose" task. For the VNC install, the extra boot prompt command is "vnc=1 password=abc123". Note the six-character password; you get prompted for the other details.
- The HMC:
- How to install the HMC hardware and software
- How to set it up (it's assumed this has already been done)
- How to use the HMC to create and start a simple LPAR and its profiles
- The pSeries, p5, and OpenPower range of machine internals, such as the names of the adapter positions -- for example, the Tn names for internal adapters and Cn for real adapters in a PCI slot. You are expected to create the VIO Server LPAR with the right SCSI disk and Ethernet resources on the HMC, with and without the CD. Note: Above, "n" is the number of the slot; details can be found in the hardware manuals, IBM Redbooks, or on the large sticker on the outside of the machine covers.
Network
The SUSE VIO Server must be able to communicate directly with the HMC for advanced functions and error reporting, and this is easily forgotten. The following network setup is recommended:
Figure 10. The network
Many sites also have other dedicated purpose networks in addition to the above. Examples might be: a network for remote backup or a network dedicated for systems administration. These would be in addition to the networks in the diagram.
Getting started
Here is the process to get you started with the VIO Server, covered in detail below.
Step 1. Logical diagram of the example
Step 2. Planning your setup
Step 3. Create the SUSE SLES 9 VIO Server LPAR
Step 4. Install SUSE SLES 9 VIO Server
Step 5. Install bridge utilities
Step 6. HMC defining the VIO Server -- virtual Ethernet
Step 7. HMC defining the VIO Server -- virtual SCSI
Step 8. HMC creating the VIO client LPARs
Step 9. Clean up the HMC
Step 10. Preparing VIO Server for clients
- Virtual Ethernet
- Virtual SCSI using an fdisk-type real disk partition for Client A
- Virtual SCSI using loopback and a filesystem file for Client B
Step 11. VIO client LPAR installations
And three common tasks that are recommended or useful:
Step 12. Backing up a VIO Server and VIO client
Step 13. Cloning a client
Step 14. Dynamic LPARs (DLPARs) and RAS
I discuss these topics in detail below, but please note it takes longer to describe than actually implement!
Step 1. Logical diagram of the example setup
Figure 11 below is a diagram of the SUSE VIO Server LPAR and the two Virtual I/O client LPARs that are going to be set up for this article.
Figure 11. The SUSE SLES 9 Server LPAR
Ethernet
For simplicity, the VIO client LPARs are given Ethernet IP addresses within the address range of the regular physical Ethernet network in this computer room. The VIO Server bridges between the physical and virtual networks. This means that the client LPARs appear like any other computer to users. This is the option most likely to be implemented and hides the virtual Ethernet network completely from users, so it's simple to access the client LPARs.
Disks
For the disks, you are going to use the internal SCSI adapter in the VIO Server and one disk.
- The first client's (called Client A) virtual disk connects to a disk partition created with fdisk on the VIO Server.
- The second client's (called Client B) virtual disk is supported using a 4 GB file in a filesystem and makes use of a loopback driver on the VIO Server.
This shows all of the common types of setup, such as bridging networks, disk partitions, and loopback. In practice, most people use disk partitions or loopback, and not both.
Step 2. Planning your setup
The first task is to do some planning of the VIO Server and client LPARs. Experience has shown that just creating LPARs without some planning causes problems and can waste a lot of time.
Below is the planning I've done for this example, which uses an OpenPower 720. Except for the references to PCI slots like C3, T6, and T14, which are machine dependent, this could be any pSeries, p5, or OpenPower machine.
Table 1. OpenPower 720
|  | SUSE VIO Server | Client A | Client B |
| --- | --- | --- | --- |
| Hostname | op24 | op26 | op27 |
| Ethernet adapter | C3 (bridging) | virtual | virtual |
| IP address | 9.137.62.24 | 9.137.62.26 | 9.137.62.27 |
| VLAN ID (port) | 1 | 1 | 1 |
| Mask | 255.255.255.0 | 255.255.255.0 | 255.255.255.0 |
| Gateway | 9.137.62.1 | 9.137.62.1 | 9.137.62.1 |
| DNS | 9.137.62.2 | 9.137.62.2 | 9.137.62.2 |
| CD adapter | T6 for install only | T6 for install only | T6 for install only |
| SCSI adapter | T14 | Virtual | Virtual |
| Disk size | SDA 73 GB Linux disk, SDB 73 GB for client partitions | 4 GB | 4 GB |
| Device on VIO Server | - | /dev/sdb6 | /dev/loop0 |
| Virtual SCSI slots | Slot three for Client A, slot four for Client B | Slot three to server slot three | Slot three to server slot four |
| Profile names | Normal, Normal with CD | Normal, Normal with CD | Normal, Normal with CD |
| Dedicated/shared CPU | Shared | Shared | Shared |
| CPU desired | 0.4 | 0.3 | 0.3 |
| CPU min | 0.2 | 0.1 | 0.1 |
| CPU max | 1 | 2 | 2 |
| Virtual processors | 1 | 2 | 2 |
| Memory | 512 MB | 2048 MB | 256 MB |
Step 3. Create the SUSE SLES 9 Server LPAR
Next, you need to create the SUSE VIO Server LPAR. You do this on the HMC by creating a special VIO Server LPAR, but initially with no extra virtualisation features (you will add the virtual features later on). There is just one feature that is different from a regular Linux LPAR -- the LPAR partition environment on the first panel of the Create LPAR wizard. Here you must not select the "AIX or Linux" option but must select the "VIO Server" option, as shown in Figure 12 below.
Figure 12. The SUSE SLES 9 Server LPAR
Create the LPAR and the first profile with the details in Table 1 above.
This document assumes you are familiar with the HMC and creating LPARs -- if not, take a look at the Do you want more information? section at the end of this article for other documents and InfoCenter manuals that describe how to create LPARs. You can give the LPAR profile that is normally used a name such as "Normal".
Hints:
- A VIO Server LPAR can use dedicated CPUs -- This is a good idea if you have plenty of CPUs or are expecting to do lots of I/O for many VIO client LPARs and avoids any delay in starting the I/O on the real adapters. Dedicated CPUs are running the VIO Server all the time.
- A VIO Server LPAR can use shared CPUs -- This is a good idea if you don't have whole CPUs that can be assigned. This also means unused CPU cycles are given back to the shared pool for other LPARs to use. If the machine becomes heavily loaded, this can introduce tiny delays in starting the I/O on the real adapters. Shared CPU partitions are time sliced on to the CPU, along with other LPARs. Setting the VIO Server partition uncapped and with a high weight is generally a good idea.
- If you want a simple CPU rule of thumb for those CPUs which are going to be used for the VIO Server and client partitions, assign at least 10 percent of those CPUs to the VIO Server. For example, for 5 CPUs in the shared pool being used for both VIO Server and VIO clients, allocate 0.5 of a CPU to the VIO Server.
- If you want a simple memory rule of thumb, use 512 MB of memory.
- It's recommended to also have an LPAR profile with the adapter connected to the CD drive included to make installing SUSE SLES 9 from CD straightforward. Copy the "Normal" profile and rename it "Normal with CD", then change the new profile properties to include the CD SCSI adapter. This will be used to initially boot the LPAR with a DVD/CD drive for installing SUSE SLES 9.
- If this is a new machine and you are the only user, it's worth noting that installations go much faster if you assign the LPAR a whole CPU or more. If the LPAR is going to be assigned less than this in production, it can always be reduced later but this simple "trick" might save you 10 minutes per LPAR installation.
Step 4. Install SUSE SLES 9 VIO Server
Next, you need to install SUSE SLES 9 into this partition as a normal SUSE Linux install.
Note that there are no special features for the SUSE-based VIO Server -- it's just a regular SUSE SLES 9 operating system, but with the bridging utilities installed (see the Install the bridge utilities section).
Warning: If you want to use disk partitions for your VIO client LPARs, then you will need some disk space that is not allocated to the basic SUSE partitions.
- If you have just one disk when installing SUSE SLES 9, make sure that the default disk partitions do not use the whole disk or all the disks -- you might have to reduce the root file system disk partition size to do this. It's recommended that you have at least 4 GB per VIO client, and don't forget you can run out of primary disk partitions. An extended disk partition can be used to allow more disk partitions to be created and used for VIO clients.
- If you have more than one disk, you can install on the first disk and use the second disk for hosting the VIO client disk space.
Fortunately, if you end up with no spare raw disk space, you can always use the loopback scheme to a file in the file system approach (assuming you have the space there).
If you have the SUSE SLES 9 Service Pack, boot from the first CD of the Service Pack and follow the instructions. But note that it will also require the original CDs to install, and it's easy to put the wrong CD in the drive when requested (as there are two CD1s and so forth).
Don't install fancy features like firewalls, enhanced security, printers, and so forth to keep the install simple. If required, these can be added later.
Also, it's recommended that you don't get your VIO Server LPAR working on the network at this point -- this will be done using the bridging utilities later.
Assuming that you now have SUSE SLES 9 up and running, add and set up the VIO Server virtualisation features.
Step 5. Install bridge utilities
The only feature that you need, in addition to the standard SUSE SLES 9 installation for Ethernet bridging, is a package on the standard CDs called the bridging utilities.
You will find the commands and tools you need on CD3 as follows:
/SUSE/ppc64/bridge-utils-0.9.6-121.1.ppc64.rpm
Assuming that you have an entry in /etc/fstab like:
/dev/dvd /media/dvd subfs fs=cdfss,ro,procuid,nosuid,nodev,exec,iocharset=utf8 0 0
Then:
# mount /media/dvd
# cd /media/dvd
Install this RPM with:
# rpm -ivh SUSE/ppc64/bridge-utils-0.9.6-121.1.ppc64.rpm
Check to see they are installed with:
# rpm -qa | grep bridge
Hints:
If you have the DLPAR change software installed and working, it's possible to dynamically add virtual Ethernet and virtual SCSI. But in practice, I recommend you shut down your VIO Server and VIO client LPARs during this initial setup to make sure it works fine the first time. If you set up DLPAR later on, you can then experiment, but remember that DLPAR changes also have to be made identically in your LPAR profile if you want the same configuration the next time the LPAR is restarted.
In this article, I take the simple and safe approach of shutting down the VIO Server, making changes to the VIO Server profile and restarting it to avoid any confusion and complications.
If you make changes to the LPAR profile, you need to shut down the LPAR and restart it from the HMC in order to pick up those changes. If you use:
shutdown -fr now
(-r means reboot within the LPAR), you will have only the same resources that were available when the LPAR was previously started from the HMC.
Step 6. HMC defining the virtual server -- virtual Ethernet
On the HMC, you can now define the virtual Ethernet. First, shut down the SUSE VIO Server LPAR (as root: shutdown -fh now). Change your "Normal" profile properties on the HMC (right-click the profile and select Properties) and select the Virtual I/O tab. Then select Ethernet at the bottom and click Create. By default, this will be allocated to slot number two and port VLAN ID one. Any LPARs with the same port VLAN ID will be able to communicate with each other. As this is going to be set up as a bridging VIO Server, set the two options as below:
- Check the Trunk adapter option (that is, select it).
- Leave the IEEE 802.1Q compatible adapter unchecked -- this is only needed if you are using VLANs internally.
Note: If you want different virtual Ethernet LANs so that different groups of LPARS can communicate with each other, all you need to do is have different port VLAN ID numbers. These more complex configurations are not covered in this article.
In Figure 13 below, you should see the VIO Server in the lower half and the VIO client in the top half. It shows that if the port VLAN IDs are the same, then the LPARs can communicate. It also shows the additional settings for the VIO Server (Trunk is selected and IEEE 802.1Q is not selected). These additional settings are really for the bridging feature, as virtual Ethernet does not really have a client/server relationship -- all LPARs are equal on the network.
Figure 13 below shows the virtual Ethernet settings. At the bottom is the VIO Server (or any LPAR that will be doing the bridging to the real Ethernet adapter) and at the top is the VIO client (any LPAR that only uses the virtual Ethernet).
Figure 13. Virtual Ethernet settings
On the other non-bridging virtual Ethernet LPARs, you can use the ifconfig command to set up your network just as you would any network (if using SUSE, use the YaST tool), and it will find the virtual Ethernet adapter just like any other, as shown in Figure 14 below.
Figure 14. Non-bridging virtual Ethernet LPARs
Step 7. HMC defining the virtual server -- virtual SCSI
On the HMC, you can now define two different virtual SCSI devices. These two types of virtual disk (fdisk-type disk partitions and the loop back to a file) will appear identical on the HMC. It's only on the actual VIO Server LPAR that they are set up any differently.
If not done already, shut down the SUSE VIO Server LPAR (as root: shutdown -fh now). On the HMC, select the VIO Server and change your "Normal" profile properties (right-click the profile and select Properties) and select the Virtual I/O tab.
- Then select SCSI at the bottom and click on Create.
- This is the VIO Server, so select Adapter Type: Server.
- Then select the "Any remote partition and slot can connect" option. Ideally, this should name the specific LPAR and slot to eliminate the risk of a wrong connection between server and client but, at this point, you have not created the client partitions, so you can't name them yet. This is fixed up later on (see the Clean up the HMC section).
- Then select OK.
- Do this a second time for the second virtual SCSI.
The client LPARs are going to use the SUSE VIO Server slots three and four. Any more SCSI adapters are optional in this example. In practice, the writer typically sets up a handful of extra virtual devices so they can be used in the future, without stopping the VIO Server or having to do dynamic changes. Unused virtual adapters cost very little, so it's not a waste.
Figure 15 illustrates the eventual configuration, showing how the VIO Server (shown at the bottom) and client (shown at the top) both explicitly refer to each other to eliminate errors.
Figure 15. Configuration of VIO Server and client
Step 8. Create the VIO client LPARs
Now you can create the two VIO client LPARs for the two different types of virtual SCSI being used in the example. It's assumed that you already know the procedure for creating a regular LPAR. Listed below are the additional things that you need to consider.
- This is a bit obvious, but you don't need real adapters for your disks or Ethernet connection because you are going to use virtual resources for these.
- It's recommended that you install the client LPAR using a CD because it's very simple and straightforward, so you will want to have the CD SCSI adapter within your LPAR. Once installed, you can remove it from the LPAR profile.
- Create two identical LPAR profiles, one with and one without the CD. Once installed, the writer uses NFS to remotely mount a filesystem containing the Linux CDs, so you don't need the CD drive from then on.
- Add the virtual Ethernet adapter on the VIO screen with the same port VLAN ID, which is one in this example. Do not select the Trunk or IEEE 802.1Q compatible adapter options -- these are just for the VIO Server partition.
- Add the virtual SCSI adapter.
- Set the adapter type to client.
- Explicitly name the remote partition -- that is, the LPAR in which you have the SUSE VIO Server.
- Explicitly name the remote partition virtual slot number -- this is slot three for the first client LPAR (Client A) and slot four for the second client LPAR (Client B).
Don't forget: You have two client LPARs to create with the two different SCSI remote partition virtual slot numbers, but the same virtual Ethernet Port VLAN ID.
Step 9. Clean up the HMC
Now that you have created the client LPARs, you can go back to the SUSE VIO Server LPAR and connect up the virtual SCSI adapters explicitly to their virtual client LPARs and slots.
This ensures that only the right client LPAR connects to the right virtual SCSI disk. This is a safety precaution but still worth doing.
See the two diagrams above to check they are right.
So, on the HMC, highlight the VIO Server LPAR profile and bring up its properties. In the Virtual I/O tab, select each Server SCSI resource, then select the Properties button and set the:
- "Only selected remote partition and slot can connect" option
- Correct remote partition name
- Correct remote partition virtual slot number
Remember, in this example you have two virtual SCSI adapters on the VIO Server to "clean up". This is very easy to get wrong and confused about, which is why you should plan in advance (see the planning table).
Step 10. Preparing VIO Server for clients
You now have all the connections between the virtual server and virtual clients, but you still have to connect the:
- Virtual SCSI disk to a piece of real disk space
- Virtual and real Ethernets using the bridge utilities
This is done on the VIO Server only as follows:
Step 10.1. Virtual Ethernet
Once the VIO Server is running and no network is set up (if one is, stop the network before continuing with: ifconfig eth0 0.0.0.0), do the following:
- Give your machine a hostname: hostname <name> (op24 in the planning table for this example).
- Install the IBM virtual Ethernet device driver: modprobe ibmveth.
- Check that it's running: lsmod | grep ibmveth.
- Install the bridge device driver: modprobe bridge.
- Check that it's running: lsmod | grep bridge.
- Create the bridge: brctl addbr br0.
- Add the real Ethernet interface: brctl addif br0 eth0.
- Add the virtual Ethernet interface: brctl addif br0 eth1.
- Let the bridge settle down: sleep 1.
Note: If your real Ethernet adapter has, for example, two ports, it can be confusing. In this case, the virtual Ethernet name might be eth2, as eth1 can be the second port on the real adapter. The bridge software needs a little time to logically connect the networks. If you are typing the commands, this is not a problem. If it's a script, the sleep command gives it that time before continuing.
You now have a network called br0. The configuration of this network is just like any other network, and it sets up the real and virtual adapters all in one go.
ifconfig br0 9.137.62.170 netmask 255.255.255.0
sleep 1
The bridge software needs a little time to settle down and explore the network topology. If you are typing the commands, this is not a problem. If it's a script, the sleep command gives the bridge software some time before continuing.
The following might not be required, but I found that in some cases the networks were not actually started, and it does no harm.
ifconfig br0 up
ifconfig eth0 up
ifconfig eth1 up
You will probably need to add a default gateway:
route add default gw 9.137.62.1
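If you put these commands in a boot-time script, they only need to be worked out once. Below is a minimal sketch that collects the bridging commands above into one script; the interface names (eth0 real, eth1 virtual), the IP address used above, and the op24 hostname from the planning table are assumptions you would adjust for your own machine.
#!/bin/sh
# Sketch only: bring up the Ethernet bridge on the SUSE VIO Server.
hostname op24                      # name from the planning table (adjust)
modprobe ibmveth                   # IBM virtual Ethernet device driver
modprobe bridge                    # bridging device driver
brctl addbr br0                    # create the bridge
brctl addif br0 eth0               # add the real Ethernet interface
brctl addif br0 eth1               # add the virtual Ethernet interface
sleep 1                            # let the bridge settle down
ifconfig br0 9.137.62.170 netmask 255.255.255.0
sleep 1                            # let the bridge explore the topology
ifconfig br0 up
ifconfig eth0 up
ifconfig eth1 up
route add default gw 9.137.62.1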
Explaining the device driver directory
To configure the partition, you have to get to the right directory and change three files (type, device, and active). The directory is: /sys/devices/vio/XXXXXXXX/bus0/target0.
Where XXXXXXXX is the virtual SCSI adapter slot number plus 30000000.
For example:
- Slot three is 30000003.
- Slot four is 30000004.
- Slot 42 is 30000042.
This directory contains the following files:
- active -- Write 1 for active or 0 for not active (do this last).
- type -- Write b (for binary), the only option currently.
- device -- Write the name of the disk device that you wish to associate with the virtual SCSI adapter, such as /dev/sdc, /dev/sdc2, or /dev/loop4.
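As a quick check, you can derive the directory name from the slot number on the command line. This is only a sketch of the arithmetic described above; the slot number 3 is just an example.
SLOT=3                                             # virtual SCSI server slot number
VIODIR=/sys/devices/vio/$((30000000 + SLOT))/bus0/target0
echo $VIODIR                                       # /sys/devices/vio/30000003/bus0/target0
ls $VIODIR                                         # should list: active device type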
Step 10.2. Virtual SCSI using an fdisk-type real disk partition for Client A
Once the VIO Server is running and before the virtual Clients are started:
Initial setup
First, create the disk partition that the client uses. Linux has many ways to create a partition on a disk. All you need to do is:
- Create the partition.
- Know its name.
- Make it big enough.
A 4 GB minimum is recommended -- this allows for Linux, the boot partition, some swap space, and at least 1 GB of spare disk space to install an application and data. If you have more disk space available, you can allocate a much larger size. You can also allocate more than one virtual SCSI disk to a single client LPAR.
Use the fdisk tool to create a partition on a spare disk, but you might want to use the YaST or DiskDruid tools to create mirrored or software RAID partitions. I'm assuming you know how to do this, but please be aware it's a high-risk operation, as getting this wrong can destroy disk contents or worse.
The fdisk and other partition management commands and tools can cause damage to your running system, operating system, filesystems, files, and data. These commands are not covered here. You need to check the manual pages, other documentation, and how-to files. Using them is strictly at your own risk.
To list your partitions and double check the name and size, you can use: fdisk -l.
Note: The fdisk option is lowercase L.
In this example, the device created is /dev/sdb6.
An alternative is that you allocate a whole unused disk to your LPAR. To do this, simply use the whole disk name instead of the partitions name. For example, /dev/sdc for the third whole disk (this is what the c means).
Initial setup and each time the VIO Server is rebooted
Install the IBM virtual SCSI server device driver: modprobe ibmvscsis.
Check that it's running: lsmod | grep ibmvscsis.
The VIO Server profile (on the HMC) includes the virtual SCSI adapter for Client A, and its slot number was three. This "three" is vital to get right. It's the server slot number -- not the slot number in the client. For example, the directory name below uses 30000003.
As the root user:
- Switch off the device in case it's active.
# cd /sys/devices/vio/30000003/bus0/target0
# echo 0 >active
- Program it with new values:
# cd /sys/devices/vio/30000003/bus0/target0
# echo /dev/sdb6 >device
# echo b >type
# echo 1 >active
It would be best to script these actions so they are performed correctly every time. There is an example script in Listing 1 below.
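For instance, here is a minimal sketch of such a script for Client A, using the server slot (three) and disk partition (/dev/sdb6) from this example; adjust both for your own configuration.
#!/bin/sh
# Sketch only: attach /dev/sdb6 to the virtual SCSI server adapter in slot 3.
modprobe ibmvscsis                         # virtual SCSI server device driver
cd /sys/devices/vio/30000003/bus0/target0 || exit 1
echo 0 > active                            # switch the device off first
echo /dev/sdb6 > device                    # real disk partition for Client A
echo b > type
echo 1 > active                            # switch it back on last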
Step 10.3. Virtual SCSI using loopback and a filesystem file for Client B
Once the VIO Server is running and before the virtual clients are started:
Initial setup
Create the file that the client uses.
You can create the "disk" file, assuming you want to do this in the directory or filesystem /clients and you have 4 GB free space in the filesystem.
As root:
Listing 1. Example script
# cd /clients
# Use the command bc to work out one million
# bc
1024*1024
1048576
Control-D
# dd if=/dev/zero of=/clients/B bs=4096 count=1048576
# ls -l /clients/B
Next, check the length of the file with ls -l. It should be 4294967296 bytes (4 GB).
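The dd command above uses 4 KB blocks. A larger block size (as recommended later in this article for the backup copies) creates the same 4 GB file with fewer, bigger writes. This is just an alternative sketch using the same /clients/B file name.
# dd if=/dev/zero of=/clients/B bs=64k count=65536
# ls -l /clients/B
Again, the file should be 4294967296 bytes (65536 x 65536).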
Initial setup and each time the VIO Server is rebooted
Install the IBM virtual SCSI server device driver: modprobe ibmvscsis.
Check that it's running: lsmod | grep ibmvscsis
The VIO Server profile (on the HMC) includes the virtual SCSI adapter for Client B, and its slot number was four. This "four" is vital to get right. It's the server slot number -- not the slot number in the client. For example, the directory name below uses 30000004.
As the root user:
- Switch off the device (just in case it's active).
# cd /sys/devices/vio/30000004/bus0/target0
# echo 0 >active
- Now program it with new values:
# cd /sys/devices/vio/30000004/bus0/target0
# losetup /dev/loop0 /clients/B
# echo /dev/loop0 >device
# echo b >type
# echo 1 >active
You can check the details of the loopback command (losetup) in the manual pages.
Note that you have to use a different loopback resource name (such as loop0, loop1, loop2, and so forth) for each virtual SCSI adapter.
It would be best to script these actions so they are performed correctly every time. There is an example script in Listing 2 below.
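As with Client A, here is a minimal sketch of such a script for Client B, using the server slot (four), loop device (/dev/loop0), and file (/clients/B) from this example; adjust these for your own configuration.
#!/bin/sh
# Sketch only: attach the /clients/B file to the virtual SCSI server adapter in slot 4.
modprobe ibmvscsis                         # virtual SCSI server device driver
losetup /dev/loop0 /clients/B              # attach the file to a loopback device
cd /sys/devices/vio/30000004/bus0/target0 || exit 1
echo 0 > active                            # switch the device off first
echo /dev/loop0 > device                   # loopback device for Client B
echo b > type
echo 1 > active                            # switch it back on last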
Step 11. VIO client LPAR installations
Now you can start up your virtual I/O clients and install them. This can be SUSE SLES 9, Red Hat 3 (update 3 onwards), or Debian. They should find both the:
- Virtual Ethernet -- It will be identified as a virtual Ethernet, much as a real Ethernet adapter is given a name such as the "Intel 100 pro" adapter.
- Virtual SCSI disk -- It's presented just like a SCSI disk, but it will only be the size of the underlying disk partition or file.
These should install just like a regular real Ethernet and SCSI disk.
Once running, the virtual Ethernet looks and behaves like a very fast 1GB real adapter.
Listing 2. Example script
clienta:~ # ifconfig
eth0 Link encap:Ethernet HWaddr AE:38:00:00:D0:02
inet addr:9.137.62.178 Bcast:9.137.62.255 Mask:255.255.255.0
inet6 addr: fe80::ac38:ff:fe00:d002/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1075 errors:0 dropped:0 overruns:0 frame:0
TX packets:350 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:113566 (110.9 Kb) TX bytes:40940 (39.9 Kb)
Interrupt:184
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:10 errors:0 dropped:0 overruns:0 frame:0
TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:641 (641.0 b) TX bytes:641 (641.0 b)
Once running, you can see how the virtual SCSI disk is being treated just like a regular disk.
clienta:~ # fdisk -l
Disk /dev/sda: 4194 MB, 4194304000 bytes
130 heads, 62 sectors/track, 1016 cylinders
Units = cylinders of 8060 * 512 = 4126720 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 1 3999 41 PPC PReP Boot
/dev/sda2 6 132 511810 82 Linux swap
/dev/sda3 133 1016 3562520 83 Linux
Network installation might be trickier, as you might have to activate the device drivers for the installation tools to find the virtual adapters. Note that network installations are not covered in this article.
Depending on which release you use, the installer might give you a series of menus and options to install the IBM virtual SCSI client and virtual Ethernet drivers before you begin installation. Later releases fully understand and install the device drivers for these virtual resources without manual intervention.
Step 12. Backing up a VIO Server and VIO client
Once you create your client LPAR and set it up the way you like, you should consider backing up the operating system images. Backups are a large subject and many books have been written in this area. There are many backup solutions, both commercial applications and freely available tools in the Linux world. One of the popular freely available tools is AMANDA (Advanced Maryland Automatic Network Disk Archive). This tool provides remote backup with disk caching and tape library management, and it's worth a look. Also, there is a "Linux Backup and Recovery How To" on the Internet for more information.
This article only covers the special considerations for VIO Servers and clients. Backups are important for the following reasons:
- Recovery of files that are accidentally removed.
- Disk failure -- assuming your disks are not already protected with a mirror or RAID 5, or you are very unlucky and lose more than one disk.
- Recreating the entire system for a disaster (total machine loss) from backups held off-site.
These reasons also apply to VIO systems.
HMC
The HMC data includes:
- Definitions of the LPAR physical resources such as CPU, memory, and PCI slots
- Definitions of the LPAR virtual resources such as the connections between VIO Server and clients
If the HMC fails, the data is still held in the service processor and can be read by a replacement or recovered HMC. It's vital that the configuration details are available in case of a disaster. HMC backups are documented in the manuals, the InfoCenter help files, and IBM Redbooks. It's also recommended that details of the LPARs are documented on paper -- for example, something similar to the planning sheet used to create the LPARs in this article.
VIO Server
The VIO Server itself needs to be backed up. If the VIO Server is purely being used for virtual I/O, then you need to:
- Back up the details of the client physical disk partitions or the loopback files and filesystem layout. The details include the number of partitions/files, their sizes, and the disk layout.
- Back up the initialisation scripts, which include the disk links and Ethernet details.
- Back up the contents of the client physical disk partitions or loopback files.
To recover the VIO Server, you can simply re-install SUSE SLES 9 and recover using the above.
If you are using the VIO Server for other purposes in addition to VIO, you need to also back up the VIO Server system itself, just as you would any Linux system. This might mean a back up to a local or remote tape drive, over NFS to another server, or using one of the automated backup services in a client server mode.
There are different approaches to backing up the VIO client images from the VIO Server. First, you have the option of doing hot or cold backups:
- Hot backup is while the VIO client is running. This is not recommended.
- Cold backup is the only sensible way to back up from the VIO Server. This is simply a matter of shutting down the VIO client -- as root on the client: shutdown -fh now.
- For the disk partitions method, you have to use the dd command to copy the disk partition images to a file.
- For the loopback adapter method, you have to create a copy of the file.
Note: The cp command is not a good idea, as it copies a file using small blocks. This is very inefficient and slow. A better command is dd with a large block size; for example, for 64 KB blocks use the bs=64k option.
- Loopback example: dd bs=64k if=/clients/B of=/backup/B
- Disk partition example: dd bs=64k if=/dev/sdb6 of=/backup/B
Alternatively, you can back up straight to a tape drive using a command like tar or backup. Some machines support a writeable DVD device that can also be used as a backup medium.
As SUSE SLES 9 can perform DLPAR changes of PCI slots, a single tape drive and its associated SCSI adapter can be moved to the VIO Server for the backup period and then removed (so it can be used in other LPARs).
Recovery of a VIO client involves getting the disk image back in the right place and starting the VIO client LPAR again.
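For example, a minimal sketch of restoring the two example clients from backup files held on the VIO Server (the /backup/A and /backup/B file names are illustrative; shut down the VIO client LPAR first):
# dd bs=64k if=/backup/A of=/dev/sdb6
# dd bs=64k if=/backup/B of=/clients/B
Once the image is back in place, restart the VIO client LPAR from the HMC.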
VIO client
The VIO client can back up its own data, just like any other copy of Linux.
It's unlikely that VIO client LPARs will have a tape drive, as the purpose of a VIO client is to share physical resources and reduce hardware requirements. As with the VIO Server, you can use DLPAR changes of PCI slots to temporarily introduce a tape drive to the client so it can back up its own data. Automating this process can be hard to coordinate between multiple client LPARs, but it can be done using scripts on a central machine. Some machines support a writeable DVD device that can also be used as a backup medium.
A second option is for the client to use another LPAR (possibly the VIO Server) or another machine to save the data using either a: