Document revision date: 15 July 2002
Multiple LAN adapters are supported. The adapters can be for different LAN types or for different adapter models for the same LAN type.
Multiple LAN adapters can be used to provide the following:
When multiple node-to-node LAN paths are available, the OpenVMS Cluster software chooses the set of paths to use based on the following criteria, which are evaluated in strict precedence order:
Packet transmissions are distributed in round-robin fashion across all
communication paths between local and remote adapters that meet the
preceding criteria.
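The round-robin distribution described above can be sketched as follows. This is an illustrative model only, not OpenVMS Cluster code; the function and path names are invented for the example:

```python
from itertools import cycle

def distribute_packets(packets, paths):
    """Assign each packet to the next eligible path in turn (round-robin)."""
    rotation = cycle(paths)  # endless rotation over the eligible paths
    return [(packet, next(rotation)) for packet in packets]

# Two eligible local-to-remote adapter paths, four packets to send.
assignments = distribute_packets(
    ["p1", "p2", "p3", "p4"],
    ["adapterA->remoteX", "adapterB->remoteX"],
)
```

Successive transmissions alternate between the two paths, so no single adapter carries all of the cluster traffic.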
4.11.1.2 Increased LAN Path Availability
Because LANs are ideal for spanning great distances, you may want to complement an intersite link's throughput with high availability. You can do this by configuring critical nodes with multiple LAN adapters, each connected to a different intersite LAN link.
A common cause of intersite link failure is mechanical destruction of
the link. You can guard against this with path diversity, that is, by
physically separating the routes of the multiple intersite links. Path
diversity helps to ensure that a single disaster is unlikely to sever
every link between sites.
4.11.2 Configuration Guidelines for LAN-Based Clusters
The following guidelines apply to all LAN-based OpenVMS Cluster systems:
The Ethernet (10/100) interconnect is typically the lowest cost of all OpenVMS Cluster interconnects.
Gigabit Ethernet interconnects offer the following advantages in addition to the advantages listed in Section 4.11:
The Ethernet technology offers a range of baseband transmission speeds:
Ethernet adapters do not provide hardware assistance, so processor overhead is higher than for CI or DSSI.
Consider the capacity of the total network design when you configure an OpenVMS Cluster system with many Ethernet-connected nodes or when the Ethernet also supports a large number of PCs or printers. General network traffic on an Ethernet can reduce the throughput available for OpenVMS Cluster communication. Fast Ethernet and Gigabit Ethernet can significantly improve throughput. Multiple Ethernet adapters can be used to improve cluster performance by offloading general network traffic.
Reference: For LAN configuration guidelines, see
Section 4.11.2.
4.11.5 Ethernet Adapters and Buses
The following Ethernet adapters and their internal buses are supported in an OpenVMS Cluster configuration:
Reference: For complete information about each adapter's features and order numbers, access the Compaq web site at:
http://www.compaq.com/
Under Products, select Servers, then AlphaServers, then the Alpha
system of interest. You can then obtain detailed information about all
options supported on that system.
4.11.6 Ethernet-to-FDDI Bridges and Switches
You can use transparent Ethernet-to-FDDI translating bridges to provide an interconnect between a 10-Mb/s Ethernet segment and a 100-Mb/s FDDI ring. These Ethernet-to-FDDI bridges are also called 10/100 bridges. They perform high-speed translation of network data packets between the FDDI and Ethernet frame formats.
Reference: See Figure 10-21 for an example of these bridges.
You can use switches to isolate traffic and to aggregate bandwidth,
which can result in greater throughput.
4.11.7 Configuration Guidelines for Gigabit Ethernet Clusters
Use the following guidelines when configuring systems in a Gigabit Ethernet cluster:
Figure 4-1 Point-to-Point Gigabit Ethernet OpenVMS Cluster
Figure 4-2 Switched Gigabit Ethernet OpenVMS Cluster
ATM offers the following advantages, in addition to those listed in Section 4.11:
The ATM interconnect transmits up to 622 Mb/s. The adapter that
supports this throughput is the DAPCA.
4.11.10 ATM Adapters
ATM adapters supported in an OpenVMS Cluster system and the internal buses on which they are supported are shown in the following list:
FDDI is an ANSI standard LAN interconnect that uses fiber-optic or
copper cable.
4.12.1 FDDI Advantages
FDDI offers the following advantages in addition to the LAN advantages listed in Section 4.11:
The FDDI standards define the following two types of nodes:
FDDI limits the total fiber path to 200 km (125 miles). The maximum
distance between adjacent FDDI devices is 40 km with single-mode fiber
and 2 km with multimode fiber. In order to control communication delay,
however, it is advisable to limit the maximum distance between any two
OpenVMS Cluster nodes on an FDDI ring to 40 km.
4.12.4 FDDI Throughput
The maximum throughput of the FDDI interconnect (100 Mb/s) is 10 times higher than that of Ethernet.
In addition, FDDI supports transfers using large packets (up to 4468 bytes). Only FDDI nodes connected exclusively by FDDI can make use of large packets.
Because FDDI adapters do not provide processing assistance for OpenVMS
Cluster protocols, more processing power is required than for CI or
DSSI.
4.12.5 FDDI Adapters and Bus Types
Following is a list of supported FDDI adapters and the buses they support:
Reference: For complete information about each adapter's features and order numbers, access the Compaq web site at:
http://www.compaq.com/
Under Products, select Servers, then AlphaServers, then the Alpha
system of interest. You can then obtain detailed information about all
options supported on that system.
4.12.6 Storage Servers for FDDI-Based Clusters
FDDI-based configurations use FDDI for node-to-node communication. The HS1xx and HS2xx families of storage servers provide FDDI-based storage access to OpenVMS Cluster nodes.
This chapter describes how to design a storage subsystem. The design process involves the following steps:
The rest of this chapter contains sections that explain these steps in
detail.
5.1 Understanding Storage Product Choices
In an OpenVMS Cluster, storage choices include the StorageWorks family of products, a modular storage expansion system based on the Small Computer Systems Interface (SCSI-2) standard. StorageWorks helps you configure complex storage subsystems by choosing from the following modular elements:
Consider the following criteria when choosing storage devices:
One of the benefits of OpenVMS Cluster systems is that you can connect storage devices directly to OpenVMS Cluster interconnects to give member systems access to storage.
In an OpenVMS Cluster system, the following storage devices and adapters can be connected to OpenVMS Cluster interconnects:
Table 5-1 lists the kinds of storage devices that you can attach to specific interconnects.
Storage Interconnect | Storage Devices |
---|---|
CI | HSJ and HSC controllers and SCSI storage |
DSSI | HSD controllers, ISEs, and SCSI storage |
SCSI | HSZ controllers and SCSI storage |
Fibre Channel | HSG and HSV controllers and SCSI storage |
FDDI | HSxxx controllers and SCSI storage |
5.1.3 How Floor Space Affects Storage Choices
If the cost of floor space is high and you want to minimize the floor
space used for storage devices, consider these options:
Storage capacity is the amount of space needed on storage devices to
hold system, application, and user files. Knowing your storage capacity
can help you to determine the amount of storage needed for your OpenVMS
Cluster configuration.
5.2.1 Estimating Disk Capacity Requirements
To estimate your online storage capacity requirements, add together the storage requirements for your OpenVMS Cluster system's software, as explained in Table 5-2.
Software Component | Description |
---|---|
OpenVMS operating system | Estimate the number of blocks¹ required by the OpenVMS operating system. Reference: Your OpenVMS installation documentation and Software Product Description (SPD) contain this information. |
Page, swap, and dump files | Use AUTOGEN to determine the amount of disk space required for page, swap, and dump files. Reference: The OpenVMS System Manager's Manual provides information about calculating and modifying these file sizes. |
Site-specific utilities and data | Estimate the disk storage requirements for site-specific utilities, command procedures, online documents, and associated files. |
Application programs | Estimate the space required for each application to be installed on your OpenVMS Cluster system, using information from the application suppliers. Reference: Consult the appropriate Software Product Description (SPD) to estimate the space required for normal operation of any layered product you need to use. |
User-written programs | Estimate the space required for user-written programs and their associated databases. |
Databases | Estimate the size of each database. This information should be available in the documentation pertaining to the application-specific database. |
User data | Estimate user disk-space requirements according to these guidelines: |
Total requirements | The sum of the preceding estimates is the approximate amount of disk storage presently needed for your OpenVMS Cluster system configuration. |
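The summation that Table 5-2 describes can be sketched as simple arithmetic. All of the figures below are hypothetical placeholders; the real values come from your SPDs, AUTOGEN output, and application suppliers:

```python
# Hypothetical block counts per component of Table 5-2; real figures come
# from your SPDs, AUTOGEN output, and application suppliers.
estimates = {
    "OpenVMS operating system": 700_000,
    "Page, swap, and dump files": 400_000,
    "Site-specific utilities and data": 150_000,
    "Application programs": 900_000,
    "User-written programs": 120_000,
    "Databases": 2_000_000,
    "User data": 3_500_000,
}

total_blocks = sum(estimates.values())
total_gb = total_blocks * 512 / 1024**3  # an OpenVMS disk block is 512 bytes
print(f"{total_blocks} blocks is about {total_gb:.1f} GB")
```

The sum is the approximate online capacity needed today; the next paragraphs explain how to adjust it for growth and backup storage.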
Before you finish determining your total disk capacity requirements, you may also want to consider future growth for online storage and for backup storage.
For example, at what rate are new files created in your OpenVMS Cluster system? By estimating this number and adding it to the total disk storage requirements that you calculated using Table 5-2, you can obtain a total that more accurately represents your current and future needs for online storage.
To determine backup storage requirements, consider how you deal with obsolete or archival data. In most storage subsystems, old files become unused while new files come into active use. Moving old files from online to backup storage on a regular basis frees online storage for new files and keeps online storage requirements under control.
Planning for adequate backup storage capacity can make archiving
procedures more effective and reduce the capacity requirements for
online storage.
5.3 Choosing Disk Performance Optimizers
Estimating your anticipated disk performance work load and analyzing the work load data can help you determine your disk performance requirements.
You can use the Monitor utility and DECamds to help you determine which
performance optimizer best meets your application and business needs.
5.3.1 Performance Optimizers
Performance optimizers are software or hardware products that improve storage performance for applications and data. Table 5-3 explains how various performance optimizers work.
Optimizer | Description |
---|---|
DECram for OpenVMS | A disk device driver that enables system managers to create logical disks in memory to improve I/O performance. Data on an in-memory DECram disk can be accessed at a faster rate than data on hardware disks. DECram disks are capable of being shadowed with Volume Shadowing for OpenVMS and of being served with the MSCP server. 1 |
Solid-state disks | In many systems, approximately 80% of the I/O requests demand information from approximately 20% of the data stored online. Solid-state devices can yield the rapid access needed for this subset of the data. |
Disk striping | Disk striping (RAID level 0) lets applications access an array of disk drives in parallel for higher throughput. Disk striping works by grouping several disks into a "stripe set" and then dividing the application data into "chunks" that are spread equally across the disks in the stripe set in round-robin fashion.
By reducing access time, disk striping can improve performance, especially if the application:
Two independent types of disk striping are available:
Note: You can use Volume Shadowing for OpenVMS software in combination with disk striping to make stripe set members redundant. You can shadow controller-based stripe sets, and you can shadow host-based disk stripe sets. |
Extended file cache (XFC) | OpenVMS Alpha Version 7.3 offers improved host-based caching with XFC, which can replace and can coexist with virtual I/O cache (VIOC). XFC is a clusterwide, file-system data cache that offers several features not available with VIOC, including read-ahead caching and automatic resizing of the cache to improve performance. |
Controllers with disk cache | Some storage technologies use memory to form disk caches. Accesses that can be satisfied from the cache can be done almost immediately and without any seek time or rotational latency. For these accesses, the two largest components of the I/O response time are eliminated. The HSC, HSJ, HSD, HSZ, and HSG controllers contain caches. Every RF and RZ disk has a disk cache as part of its embedded controller. |
Reference: See Section 10.8 for more information about how these performance optimizers increase an OpenVMS Cluster's ability to scale I/Os.
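The round-robin chunk placement that Table 5-3 describes for disk striping can be sketched as follows. This is a conceptual model of RAID level 0 layout, not any particular controller's implementation:

```python
def chunk_location(chunk_index, num_disks):
    """RAID level 0: chunk i of the application data lands on disk i mod N."""
    return chunk_index % num_disks

# A 3-disk stripe set: consecutive chunks rotate across disks 0, 1, 2, ...
layout = [chunk_location(i, num_disks=3) for i in range(6)]
```

Because consecutive chunks live on different spindles, a large sequential transfer can keep all members of the stripe set busy at once.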