Document revision date: 19 July 1999

Guidelines for OpenVMS Cluster Configurations




Chapter 3
Choosing OpenVMS Cluster Systems

This chapter provides information to help you select systems for your OpenVMS Cluster to satisfy your business and application requirements.

3.1 Alpha and VAX Architectures

An OpenVMS Cluster can include systems running OpenVMS Alpha, OpenVMS VAX, or both. Compaq provides a full range of systems for both the Alpha and VAX architectures.

3.2 Types of Systems

Alpha and VAX systems span a range of computing environments, including:

  • Workstations
  • Departmental systems
  • Data center systems

3.3 Choosing Systems

Your choice of systems depends on your business, your application needs, and your budget. With a high-level understanding of systems and their characteristics, you can make better choices.

Table 3-1 compares recently shipped OpenVMS Cluster systems. Although laptop and personal computers can be configured in an OpenVMS Cluster as client satellites, they are not discussed extensively in this manual. For more information about configuring PCs and laptops, see the PATHWORKS Version 6.0 for DOS and Windows: Installation and Configuration Guide.

Table 3-1 System Types
System type: Workstations
Useful for: Users who require their own systems with high processor performance. Examples include users running mechanical computer-aided design, scientific analysis, and data-reduction and display applications. Workstations offer the following features:
  • Lower cost than departmental and data center systems
  • Small footprint
  • Useful for modeling and imaging
  • 2D and 3D graphics capabilities
Examples: DIGITAL Personal Workstation DPWau series, AlphaStation 200, AlphaStation 500, AlphaStation 600, VAXstation 4000, MicroVAX 3100

System type: Departmental systems
Useful for: Midrange office computing. Departmental systems offer the following capabilities:
  • High processor and I/O performance
  • Support for a moderate number of users, client PCs, and workstations
Examples: AlphaServer 400, AlphaServer 1000, AlphaServer 1000A, AlphaServer 1200, AlphaServer 2000, AlphaServer 2100, AlphaServer 4100, VAX 4000

System type: Data center systems
Useful for: Large-capacity configurations and highly available technical and commercial applications. Data center systems have a high degree of expandability and flexibility and offer the following features:
  • Highest CPU and I/O performance
  • Ability to support thousands of terminal users, hundreds of PC clients, and up to 95 workstations
Examples: AlphaServer 8400, AlphaServer 8200, VAX 7800

3.4 Scalability Considerations

When you choose a system based on scalability, consider the following:

The OpenVMS environment offers a wide range of ways to grow and expand the processing capabilities of a data center, including the following:

Reference: For more information about scalability, see Chapter 10.

3.5 Availability Considerations

An OpenVMS Cluster system is a highly integrated environment in which multiple systems share access to resources. This resource sharing increases the availability of services and data. OpenVMS Cluster systems also offer failover mechanisms that are transparent and automatic, and require little intervention by the system manager or the user.

Reference: See Chapter 8 and Chapter 9 for more information about these failover mechanisms and about availability.

3.6 Performance Considerations

The following factors affect the performance of systems:

With these requirements in mind, compare processor performance, I/O throughput, memory capacity, and disk capacity in the Alpha and VAX specifications that follow.

3.7 System Specifications

The DIGITAL Systems and Options Catalog provides ordering and configuration information for Intel, Alpha, and VAX workstations and servers. It also contains detailed information about storage devices, printers, and network application support.

To access the most recent DIGITAL Systems and Options Catalog on the World Wide Web, use the following URL:


http://www.digital.com:80/info/soc/ 


Chapter 4
Choosing OpenVMS Cluster Interconnects

An interconnect is a hardware connection between OpenVMS Cluster nodes over which the nodes can communicate. This chapter contains information about the following interconnects and how they are used in OpenVMS Clusters:

The software that enables OpenVMS Cluster systems to communicate over an interconnect is the System Communications Services (SCS).

4.1 Characteristics

The interconnects described in this chapter share some general characteristics, which Table 4-1 describes.

Table 4-1 Interconnect Characteristics
Characteristic: Throughput
The quantity of data transferred across the interconnect.

Some interconnects require more processor overhead than others. For example, Ethernet and FDDI interconnects require more processor overhead than do CI or DSSI.

Larger packet sizes allow higher data-transfer rates (throughput) than do smaller packet sizes. (A worked example follows this table.)

Characteristic: Cable length
Interconnects range in length from 3 m to 40 km.

Characteristic: Maximum number of nodes
The number of nodes that can connect to an interconnect varies among interconnect types. Be sure to consider this when configuring your OpenVMS Cluster system.

Characteristic: Supported systems and storage
Each OpenVMS Cluster node and storage subsystem requires an adapter to connect the internal system bus to the interconnect. First consider the storage and processor I/O performance, then the adapter performance, when choosing an interconnect type.
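To illustrate why larger packets raise effective throughput, the following Python sketch computes payload throughput when every packet carries a fixed per-packet cost. The line rate and overhead figures are hypothetical values chosen only to show the trend; they are not measured characteristics of any interconnect described in this chapter.

# Illustrative only: effective throughput versus packet size when each packet
# carries a fixed overhead (headers plus per-packet processing).
LINE_RATE_MBPS = 100.0            # hypothetical raw interconnect rate, in Mb/s
PER_PACKET_OVERHEAD_BITS = 800    # hypothetical fixed cost per packet, in bits

def effective_throughput_mbps(payload_bytes):
    """Payload throughput after accounting for the fixed per-packet overhead."""
    payload_bits = payload_bytes * 8
    return LINE_RATE_MBPS * payload_bits / (payload_bits + PER_PACKET_OVERHEAD_BITS)

for size in (512, 1518, 4468):    # small, Ethernet-sized, and FDDI-sized payloads
    print("%5d-byte packets -> ~%.1f Mb/s effective" % (size, effective_throughput_mbps(size)))

With these example numbers, the 4468-byte packets deliver roughly 98 Mb/s of payload, while the 512-byte packets deliver only about 84 Mb/s over the same link.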

4.2 Comparison of Interconnect Types

Table 4-2 shows key statistics for a variety of interconnects.

Table 4-2 Comparison of Interconnect Types
Attribute values are listed in the following column order: CI | DSSI | FDDI | SCSI | MEMORY CHANNEL | Ethernet/Fast Ethernet/Gigabit Ethernet | Fibre Channel.

Maximum throughput (Mb/s): 140 | 32 | 100 | 160 | 800 | 10/100/1000 | 1000
Hardware-assisted data link (1): Yes | Yes | No | No | No | No | No
Connection to storage: Direct and MSCP served | Direct and MSCP served | MSCP served | Direct and MSCP served | MSCP served | MSCP served | Direct and MSCP served
Topology: Radial coaxial cable | Bus | Dual ring of trees | Bus or radial to a hub | Radial copper or fiber cable | Linear coaxial or fiber cable, radial to a hub or switch | Radial to a switch
Maximum nodes: 32 (2) | 8 (3) | 96 (4) | 8-16 (5) | 4 | 96 (4) | 8 (7)
Maximum length: 45 m | 6 m (6) | 40 km | 25 m | 3 m | 2800 m | 400 m


(1) Hardware-assisted data link reduces the processor overhead required.
(2) Up to 16 OpenVMS Cluster computers; up to 31 HSC controllers.
(3) Up to 4 OpenVMS Cluster computers; up to 7 storage devices.
(4) OpenVMS Cluster computers.
(5) Up to 3 OpenVMS Cluster computers, up to 4 with the DWZZH-05 and fair arbitration; up to 15 storage devices.
(6) DSSI cabling lengths vary based on cabinet cables.
(7) Up to 4 OpenVMS Cluster computers; up to 4 storage ports (larger numbers are planned).

4.3 Multiple Interconnects

You can use multiple interconnects to achieve the following benefits:

4.4 Mixed Interconnects

A mixed interconnect is a combination of two or more different types of interconnects in an OpenVMS Cluster system. You can use mixed interconnects to combine the advantages of each type and to expand your OpenVMS Cluster system. For example, an Ethernet cluster that requires more storage can expand with the addition of CI, DSSI, or SCSI connections.

4.5 Interconnects Supported by Alpha and VAX Systems

Table 4-3 shows the OpenVMS Cluster interconnects supported by Alpha and VAX systems. You can also refer to the most recent OpenVMS Cluster SPD to see the latest information on supported interconnects.

Table 4-3 System Support for Cluster Interconnects
The interconnects supported by each system are listed below; numbers in parentheses refer to the footnotes that follow the table.

  AlphaServer GS60, GS140: CI, DSSI, SCSI, FDDI (1), Ethernet, MEMORY CHANNEL, Fibre Channel
  AlphaServer ES40: CI, DSSI, SCSI, FDDI, Ethernet, MEMORY CHANNEL (2)
  AlphaServer DS20: CI, DSSI, SCSI, FDDI, Ethernet, MEMORY CHANNEL (2)
  AlphaServer DS10: DSSI, SCSI, FDDI, Ethernet, MEMORY CHANNEL (2)
  AlphaStation XP900: DSSI, SCSI, FDDI, Ethernet, MEMORY CHANNEL (2)
  AlphaServer 8400, 8200: CI, DSSI, SCSI, FDDI (1), Ethernet, MEMORY CHANNEL, Fibre Channel
  AlphaServer 4100, 2100, 2000: CI, DSSI, SCSI, FDDI (1), Ethernet (1), MEMORY CHANNEL, Fibre Channel (3)
  AlphaServer 1000: DSSI, SCSI, FDDI, Ethernet (1), MEMORY CHANNEL
  AlphaServer 400: DSSI, SCSI, FDDI, Ethernet (1)
  AlphaStation series: SCSI, FDDI, Ethernet (1)
  DEC 7000/10000: CI, DSSI, FDDI (1), Ethernet
  DEC 4000: DSSI, FDDI, Ethernet (1)
  DEC 3000: SCSI, FDDI (1), Ethernet (1)
  DEC 2000: FDDI, Ethernet (1)
  VAX 6000/7000/10000: CI, DSSI, FDDI, Ethernet
  VAX 4000, MicroVAX 3100: DSSI, FDDI, Ethernet (1)
  VAXstation 4000: FDDI, Ethernet (1)


(1) Able to boot over the interconnect as a satellite node.
(2) Support for MEMORY CHANNEL Version 2.0 hardware only.
(3) Support on AlphaServer 4100; support on additional AlphaServer systems in the future.

As Table 4-3 shows, OpenVMS Clusters support a wide range of interconnects: CI, DSSI, SCSI, FDDI, Ethernet, MEMORY CHANNEL, and Fibre Channel. This power and flexibility means that almost any combination of supported interconnects will work well. The most important factor to consider is how much I/O capacity you need, as explained in Chapter 2.

In most cases, the I/O requirements will be less than the capabilities of any one OpenVMS Cluster interconnect. Ensure that you have a reasonable surplus of I/O capacity, and then choose your interconnects based on the other features you need.
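As a rough illustration of this sizing step, the following Python sketch compares a hypothetical aggregate I/O load against the maximum throughput figures from Table 4-2. The load estimate and the headroom policy are invented for the example; they are not recommendations from this manual.

# Hypothetical sizing sketch: flag interconnects whose Table 4-2 throughput
# leaves a comfortable surplus over an estimated aggregate I/O load.
INTERCONNECT_MBPS = {          # maximum throughput figures from Table 4-2 (Mb/s)
    "CI": 140,
    "DSSI": 32,
    "FDDI": 100,
    "SCSI": 160,
    "MEMORY CHANNEL": 800,
    "Gigabit Ethernet": 1000,
    "Fibre Channel": 1000,
}

ESTIMATED_LOAD_MBPS = 60       # hypothetical aggregate cluster I/O load
HEADROOM_FACTOR = 2.0          # example policy: keep at least 2x surplus capacity

for name, capacity in sorted(INTERCONNECT_MBPS.items()):
    verdict = "adequate surplus" if capacity >= ESTIMATED_LOAD_MBPS * HEADROOM_FACTOR else "marginal"
    print("%-16s %5d Mb/s  %s" % (name, capacity, verdict))

With these example numbers, only the DSSI and FDDI figures fall below the 2x headroom target, which matches the guidance above: size for surplus capacity first, then decide among the remaining interconnects on other features such as distance, node count, and storage connectivity.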

4.6 Fibre Channel Interconnect

Fibre Channel is a high-performance ANSI standard network and storage interconnect for PCI-based Alpha systems. It is a full-duplex serial interconnect that can simultaneously transmit and receive 100 megabytes per second. For the initial release, Fibre Channel will support simultaneous access to SCSI storage by multiple nodes connected to a Fibre Channel switch. A second type of interconnect is needed for node-to-node communications; node-to-node communication over Fibre Channel is planned for a future release.

For multihost access to Fibre Channel storage, the following components are required:

4.6.1 Advantages

The Fibre Channel interconnect offers the following advantages:

4.6.2 Throughput

The Fibre Channel interconnect transmits up to 1.06 Gb/s. It is a full-duplex serial interconnect that can simultaneously transmit and receive 100 MB/s.
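The relationship between the 1.06 Gb/s line rate and the 100 MB/s data rate can be checked with a short calculation. The sketch below assumes the 8B/10B encoding used by Gigabit Fibre Channel; it is offered as an illustration rather than a statement from the original manual.

# Fibre Channel arithmetic: a 1.0625 Gbaud line with 8B/10B encoding carries
# 8 data bits in every 10 transmitted bits, in each direction of the full-duplex link.
line_rate_gbaud = 1.0625                               # approximately the 1.06 Gb/s cited above
payload_gbps = line_rate_gbaud * 8.0 / 10.0            # 0.85 Gb/s of data per direction
payload_mbytes_per_sec = payload_gbps * 1000.0 / 8.0   # 106.25 MB/s, quoted as roughly 100 MB/s

print("Payload rate per direction: %.2f Gb/s = %.1f MB/s" % (payload_gbps, payload_mbytes_per_sec))

Because the link is full duplex, that roughly 100 MB/s figure applies simultaneously in each direction, which is how the text above arrives at transmit and receive rates of 100 MB/s.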

4.6.3 Supported Adapter

The Fibre Channel adapter, the KGPSA, connects to the PCI bus.

Reference: For complete information about each adapter's features and order numbers, see the DIGITAL Systems and Options Catalog.

To access the most recent DIGITAL Systems and Options Catalog on the World Wide Web, use the following URL:


http://www.digital.com:80/info/soc/ 

4.7 MEMORY CHANNEL Interconnect

MEMORY CHANNEL is a high-performance cluster interconnect technology for PCI-based Alpha systems. With the benefits of very low latency, high bandwidth, and direct memory access, MEMORY CHANNEL complements and extends the unique ability of OpenVMS Clusters to work as a single, virtual system.

Three hardware components are required by a node to support a MEMORY CHANNEL connection:

  • A PCI-to-MEMORY CHANNEL adapter
  • A link cable
  • A port in a MEMORY CHANNEL hub (except in a two-node configuration, in which the cable connects just two PCI adapters)

A MEMORY CHANNEL hub is a PC-sized unit that provides a connection among systems. MEMORY CHANNEL can support up to four Alpha nodes per hub. You can configure systems with two MEMORY CHANNEL adapters to provide failover if an adapter fails. Each adapter must be connected to a different hub.

A MEMORY CHANNEL hub is not required in clusters that comprise only two nodes. In a two-node configuration, one PCI adapter is configured, using module jumpers, as a virtual hub.

