Document revision date: 19 July 1999

Guidelines for OpenVMS Cluster Configurations



4.10.8 Configuration Guidelines for DSSI Clusters

The following configuration guidelines apply to all DSSI clusters:

Reference: For more information about DSSI, see the DSSI OpenVMS Cluster Installation and Troubleshooting Manual.

4.11 Ethernet, Fast Ethernet, and Gigabit Ethernet Interconnects

The Ethernet, Fast Ethernet, and Gigabit Ethernet interconnects provide single-path connections within an OpenVMS Cluster system and to a local area network (LAN). Multiple Ethernet adapters or multichannel Ethernet adapters can be used to provide multiple paths.

Ethernet (including Fast Ethernet and Gigabit Ethernet) and FDDI are LAN-based interconnects. See Section 4.12 for information about FDDI and for general LAN-based cluster guidelines.

4.11.1 Advantages

The Ethernet, Fast Ethernet, and Gigabit Ethernet interconnects offer the following advantages:

4.11.2 Throughput

The Ethernet technology offers a range of baseband transmission speeds:

  • Ethernet: 10 Mb/s
  • Fast Ethernet: 100 Mb/s
  • Gigabit Ethernet: 1 Gb/s

Ethernet adapters do not provide hardware assistance, so processor overhead is higher than for CI or DSSI.

Consider the capacity of the total network design when you configure an OpenVMS Cluster system with many Ethernet-connected nodes or when the Ethernet also supports a large number of PCs or printers. General network traffic on an Ethernet can reduce the throughput available for OpenVMS Cluster communication. Fast Ethernet and Gigabit Ethernet can significantly improve throughput. Multiple Ethernet adapters can be used to improve cluster performance by offloading general network traffic.

Reference: For extended LAN configuration guidelines, see Section 10.7.7.

4.11.3 Multiple Ethernet Load Balancing

If only Ethernet paths are available, the OpenVMS Cluster software bases the choice of which path to use on latency (computed network delay). The software chooses the channel with the least latency; if delays are equal, either path can be used. The network delay across each segment is recalculated approximately every 3 seconds, and traffic is then balanced across all communication paths between local and remote adapters.
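
The selection rule can be sketched as follows. This is a minimal Python illustration of the logic described above, not the actual OpenVMS Cluster software; the channel names and delay figures are hypothetical.

    # Minimal sketch of latency-based channel selection between local and
    # remote LAN adapters. Channel names and delay figures are hypothetical;
    # the real software recomputes delays approximately every 3 seconds.

    def pick_channels(channels):
        """Return the channel(s) with the least computed network delay.

        channels -- mapping of channel name to latency in microseconds.
        If several channels tie for the lowest latency, all of them are
        returned and traffic is balanced across them.
        """
        best = min(channels.values())
        return [name for name, delay in channels.items() if delay == best]

    # Two Ethernet paths with equal delay: traffic is balanced across both.
    print(pick_channels({"ESA0->remote": 120, "ESA1->remote": 120}))

    # Unequal delays: only the lower-latency path is used.
    print(pick_channels({"ESA0->remote": 120, "ESA1->remote": 300}))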

4.11.4 Supported Adapters and Buses

The following are Ethernet adapters and the internal bus that each supports:

Reference: For complete information about each adapter's features and order numbers, see the DIGITAL Systems and Options Catalog.

To access the most recent DIGITAL Systems and Options Catalog on the World Wide Web, use the following URL:


http://www.digital.com:80/info/soc/ 

4.11.5 Ethernet-to-FDDI Bridges

You can use transparent Ethernet-to-FDDI translating bridges to provide an interconnect between a 10-Mb/s Ethernet segment and a 100-Mb/s FDDI ring. These Ethernet-to-FDDI bridges are also called "10/100" bridges. They perform high-speed translation of network data packets between the FDDI and Ethernet frame formats.

Reference: See Figure 10-21 for an example of these bridges.

4.11.6 Configuration Guidelines for Gigabit Ethernet Clusters

Use the following guidelines when configuring systems in a Gigabit Ethernet cluster:

4.12 Fiber Distributed Data Interface (FDDI)

FDDI is an ANSI standard LAN interconnect that uses fiber-optic cable. FDDI supports OpenVMS Cluster functionality over greater distances than other interconnects. FDDI also augments the Ethernet by providing a high-speed interconnect for multiple Ethernet segments in a single OpenVMS Cluster system.

4.12.1 Advantages

FDDI offers the following advantages:

4.12.2 Types of FDDI Nodes

The FDDI standards define the following two types of nodes:

4.12.3 Distance

FDDI limits the total fiber path to 200 km (125 miles). The maximum distance between adjacent FDDI devices is 40 km with single-mode fiber and 2 km with multimode fiber. To control communication delay, however, it is advisable to limit the maximum distance between any two OpenVMS Cluster nodes on an FDDI ring to 40 km.
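
As a worked illustration of these limits, the following Python sketch checks a ring layout against the 200 km total-path limit and the per-link maximums; the link list is invented for the example.

    # Check a proposed FDDI ring against the distance limits described above:
    # total fiber path <= 200 km, and each link <= 40 km (single-mode) or
    # <= 2 km (multimode). The example links are hypothetical.

    LINK_LIMIT_KM = {"single-mode": 40.0, "multimode": 2.0}
    TOTAL_LIMIT_KM = 200.0

    def check_ring(links):
        """links -- list of (length_km, fiber_type) tuples, one per hop."""
        for length, fiber in links:
            if length > LINK_LIMIT_KM[fiber]:
                return False, f"{length} km exceeds the {fiber} limit"
        total = sum(length for length, _ in links)
        if total > TOTAL_LIMIT_KM:
            return False, f"total path {total} km exceeds {TOTAL_LIMIT_KM} km"
        return True, f"ring OK, total path {total} km"

    print(check_ring([(35.0, "single-mode"), (1.5, "multimode"),
                      (40.0, "single-mode")]))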

4.12.4 Throughput

The maximum throughput of the FDDI interconnect (100 Mb/s) is 10 times that of Ethernet (10 Mb/s).

In addition, FDDI supports transfers using large packets (up to 4468 bytes). Only nodes whose communication paths consist entirely of FDDI can use large packets.
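
The following back-of-the-envelope Python comparison shows why the higher bit rate and larger packets matter. It assumes the standard 1500-byte Ethernet payload, ignores all protocol overhead and contention, and is not a measured result.

    # Idealized comparison of moving 1 MB over Ethernet vs. FDDI, ignoring
    # protocol overhead, framing, and contention. The Ethernet payload size
    # assumes the standard 1500-byte maximum; FDDI uses the 4468-byte packet
    # size cited above.

    DATA_BYTES = 1_000_000

    for name, rate_mbps, packet_bytes in [("Ethernet", 10, 1500),
                                          ("FDDI", 100, 4468)]:
        packets = -(-DATA_BYTES // packet_bytes)            # ceiling division
        seconds = DATA_BYTES * 8 / (rate_mbps * 1_000_000)
        print(f"{name}: {packets} packets, {seconds:.2f} s raw transfer time")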

Because FDDI adapters do not provide processing assistance for OpenVMS Cluster protocols, more processing power is required than for CI or DSSI.

4.12.5 Supported Adapters and Bus Types

Following is a list of FDDI adapters and the buses they support:

Reference: For complete information about each adapter's features and order numbers, see the DIGITAL Systems and Options Catalog.

To access the most recent DIGITAL Systems and Options Catalog on the World Wide Web, use the following URL:


http://www.digital.com:80/info/soc/ 

4.12.6 Configuration Guidelines for FDDI-Based Clusters

FDDI-based configurations use FDDI for node-to-node communication. The following general guidelines apply to FDDI configurations:

4.12.7 Multiple FDDI Adapters

Because FDDI is ideal for spanning great distances, you may want to supplement its high throughput with high availability by ensuring that critical nodes are connected to multiple FDDI rings. Physical separation of the two FDDI paths helps ensure that the configuration is disaster tolerant.

4.12.8 Multiple FDDI Load Balancing

If only FDDI paths are available, the OpenVMS Cluster software bases the choice of which path to use on latency (computed network delay). The software chooses the channel with the least latency; if delays are equal, either path can be used. The network delay across each segment is recalculated approximately every 3 seconds, and traffic is balanced across all communication paths between local and remote adapters.


Chapter 5
Choosing OpenVMS Cluster Storage Subsystems

This chapter describes how to design a storage subsystem. The design process involves the following steps:

  1. Understanding storage product choices
  2. Estimating storage capacity requirements
  3. Choosing disk performance optimizers
  4. Determining disk availability requirements
  5. Understanding advantages and tradeoffs for:

The rest of this chapter contains sections that explain these steps in detail.

5.1 Understanding Storage Product Choices

In an OpenVMS Cluster, storage choices include the StorageWorks family of products, a modular storage expansion system based on the Small Computer Systems Interface (SCSI-2) standard. StorageWorks helps you configure complex storage subsystems by choosing from the following modular elements:

5.1.1 Criteria for Choosing Devices

Consider the following criteria when choosing storage devices:

5.1.2 How Interconnects Affect Storage Choices

One of the benefits of OpenVMS Cluster systems is that you can connect storage devices directly to OpenVMS Cluster interconnects to give member systems access to storage.

In an OpenVMS Cluster system, the following storage devices and adapters can be connected to OpenVMS Cluster interconnects:

Table 5-1 lists the kinds of storage devices that you can attach to specific interconnects.

Table 5-1 Interconnects and Corresponding Storage Devices

Storage Interconnect    Storage Devices
CI                      HSJ and HSC controllers and SCSI storage
DSSI                    HSD controllers, ISEs, and SCSI storage
SCSI                    HSZ controllers and SCSI storage
Fibre Channel           HSG controllers and SCSI storage
FDDI                    HSxxx controllers and SCSI storage

5.1.3 How Floor Space Affects Storage Choices

If the cost of floor space is high and you want to minimize the floor space used for storage devices, consider these options:

5.2 Determining Storage Capacity Requirements

Storage capacity is the amount of space needed on storage devices to hold system, application, and user files. Knowing your storage capacity requirements helps you determine the amount of storage needed for your OpenVMS Cluster configuration.

5.2.1 Estimating Disk Capacity Requirements

To estimate your online storage capacity requirements, add together the storage requirements for your OpenVMS Cluster system's software, as explained in Table 5-2.

Table 5-2 Estimating Disk Capacity Requirements

OpenVMS operating system: Estimate the number of blocks (1) required by the OpenVMS operating system.

Reference: Your OpenVMS installation documentation and Software Product Description (SPD) contain this information.

Page, swap, and dump files: Use AUTOGEN to determine the amount of disk space required for page, swap, and dump files.

Reference: The OpenVMS System Manager's Manual provides information about calculating and modifying these file sizes.

Site-specific utilities and data: Estimate the disk storage requirements for site-specific utilities, command procedures, online documents, and associated files.

Application programs: Estimate the space required for each application to be installed on your OpenVMS Cluster system, using information from the application suppliers.

Reference: Consult the appropriate Software Product Description (SPD) to estimate the space required for normal operation of any layered product you need to use.

User-written programs: Estimate the space required for user-written programs and their associated databases.

Databases: Estimate the size of each database. This information should be available in the documentation for the application-specific database.

User data: Estimate user disk-space requirements according to these guidelines:

  • Allocate from 10,000 to 100,000 blocks for each occasional user.

    An occasional user reads, writes, and deletes electronic mail; has few, if any, programs; and has little need to keep files for long periods.

  • Allocate from 250,000 to 1,000,000 blocks for each moderate user.

    A moderate user uses the system extensively for electronic communications, keeps information online, and has a few programs for private use.

  • Allocate from 1,000,000 to 3,000,000 blocks for each extensive user.

    An extensive user can require a significant amount of storage space for programs under development and data files, in addition to normal system use for electronic mail. This user may require several hundred thousand blocks of storage, depending on the number of projects and programs being developed and maintained.

Total requirements: The sum of the preceding estimates is the approximate amount of disk storage currently needed for your OpenVMS Cluster configuration.

(1) Storage capacity is measured in blocks. Each block contains 512 bytes.
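
As a worked example of the procedure in Table 5-2, the following Python sketch totals per-component estimates in 512-byte blocks and converts the result to bytes. Every figure is an illustrative placeholder, not a recommendation.

    # Worked example of the Table 5-2 procedure: sum per-component disk
    # estimates (in 512-byte blocks) to get a total capacity figure.
    # All component figures below are hypothetical placeholders.

    BYTES_PER_BLOCK = 512

    estimates = {
        "OpenVMS operating system":   700_000,
        "Page, swap, and dump files": 400_000,   # from AUTOGEN
        "Site-specific utilities":    100_000,
        "Application programs":       500_000,
        "User data (10 moderate users @ 500,000 blocks)": 10 * 500_000,
    }

    total_blocks = sum(estimates.values())
    print(f"Total: {total_blocks:,} blocks "
          f"(~{total_blocks * BYTES_PER_BLOCK / 1e9:.1f} GB)")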

5.2.2 Additional Disk Capacity Requirements

Before you finish determining your total disk capacity requirements, you may also want to consider future growth for online storage and for backup storage.

For example, at what rate are new files created in your OpenVMS Cluster system? By estimating this number and adding it to the total disk storage requirements that you calculated using Table 5-2, you can obtain a total that more accurately represents your current and future needs for online storage.
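
A simple projection of that growth might look like the following Python sketch; the current total, file-creation rate, and planning horizon are all hypothetical.

    # Hypothetical projection of online storage growth, as described above:
    # add the expected new-file growth to the Table 5-2 total.

    current_blocks = 12_000_000        # total from Table 5-2 (hypothetical)
    new_blocks_per_month = 200_000     # estimated file-creation rate (hypothetical)
    planning_horizon_months = 24

    future_blocks = current_blocks + new_blocks_per_month * planning_horizon_months
    print(f"Plan for {future_blocks:,} blocks "
          f"(~{future_blocks * 512 / 1e9:.1f} GB) "
          f"over {planning_horizon_months} months")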

To determine backup storage requirements, consider how you deal with obsolete or archival data. In most storage subsystems, old files become unused while new files come into active use. Moving old files from online to backup storage on a regular basis frees online storage for new files and keeps online storage requirements under control.

Planning for adequate backup storage capacity can make archiving procedures more effective and reduce the capacity requirements for online storage.

5.3 Choosing Disk Performance Optimizers

Estimating your anticipated disk performance work load and analyzing the work load data can help you determine your disk performance requirements.

You can use the Monitor utility and DECamds to help you determine which performance optimizer best meets your application and business needs.

5.3.1 Performance Optimizers

Performance optimizers are software or hardware products that improve storage performance for applications and data. Table 5-3 explains how various performance optimizers work.

Table 5-3 Disk Performance Optimizers

DECram for OpenVMS: A disk device driver that enables system managers to create logical disks in memory to improve I/O performance. Data on an in-memory DECram disk can be accessed at a faster rate than data on hardware disks. DECram disks can be shadowed with Volume Shadowing for OpenVMS and served with the MSCP server. (1)

Solid-state disks: In many systems, approximately 80% of the I/O requests demand information from approximately 20% of the data stored online. Solid-state devices can yield the rapid access needed for this subset of the data.

Disk striping: Disk striping (RAID level 0) lets applications access an array of disk drives in parallel for higher throughput. Disk striping works by grouping several disks into a "stripe set" and dividing the application data into "chunks," which are spread equally across the disks in the stripe set in round-robin fashion.

By reducing access time, disk striping can improve performance, especially if the application:

  • Performs large data transfers in parallel.
  • Requires load balancing across drives.

Two independent types of disk striping are available:

  • Controller-based striping, in which HSJ and HSD controllers combine several disks into a single stripe set. This stripe set is presented to OpenVMS as a single volume. This type of disk striping is hardware based.
  • Host-based striping, which creates stripe sets on an OpenVMS host. The OpenVMS software breaks up an I/O request into several simultaneous requests that it sends to the disks of the stripe set. This type of disk striping is software based.

Note: You can use Volume Shadowing for OpenVMS software in combination with disk striping to make stripe set members redundant. You can shadow controller-based stripe sets, or you can stripe host-based shadow sets. (A sketch of the stripe-set chunk layout follows this table.)

Virtual I/O cache (VIOC): OpenVMS offers host-based caching in the form of VIOC, a clusterwide, file-oriented disk cache. VIOC reduces I/O bottlenecks within OpenVMS Cluster systems by reducing the number of I/Os from the system to the disk subsystem.

Controllers with disk cache: Some storage technologies use memory to form disk caches. Accesses that can be satisfied from the cache complete almost immediately, without seek time or rotational latency; for these accesses, the two largest components of I/O response time are eliminated. The HSC, HSJ, HSD, and HSZ controllers contain caches. Every RF and RZ disk has a disk cache as part of its embedded controller.

(1) The MSCP server makes locally connected disks, to which it has direct access, available to other systems in the OpenVMS Cluster.

Reference: See Section 10.8 for more information about how these performance optimizers increase an OpenVMS Cluster's ability to scale I/Os.
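
The round-robin chunk layout described in the disk striping entry of Table 5-3 can be sketched as follows. The chunk size and stripe-set width are hypothetical, and this Python sketch illustrates the general RAID level 0 mapping rather than any particular controller's implementation.

    # Sketch of how RAID level 0 striping maps a logical block to a member
    # disk, as described in Table 5-3. The chunk size and disk count are
    # hypothetical.

    CHUNK_BLOCKS = 8      # blocks per chunk (hypothetical)
    NUM_DISKS = 4         # members in the stripe set (hypothetical)

    def locate(logical_block):
        """Return (disk_index, block_within_disk) for a logical block number."""
        chunk = logical_block // CHUNK_BLOCKS
        offset = logical_block % CHUNK_BLOCKS
        disk = chunk % NUM_DISKS                    # round-robin across disks
        block_on_disk = (chunk // NUM_DISKS) * CHUNK_BLOCKS + offset
        return disk, block_on_disk

    # Consecutive chunks land on successive disks, so large transfers
    # engage all members of the stripe set in parallel.
    for lb in (0, 8, 16, 24, 32):
        print(lb, "->", locate(lb))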

