Document revision date: 30 March 2001

Guidelines for OpenVMS Cluster Configurations



4.6 Fibre Channel Interconnect

Fibre Channel is a high-performance ANSI standard network and storage interconnect for PCI-based Alpha systems. It is a full-duplex serial interconnect that can simultaneously transmit and receive 100 megabytes per second. For the initial release, Fibre Channel supports simultaneous access of SCSI storage by multiple nodes connected to a Fibre Channel switch. A second type of interconnect is needed for node-to-node communications. Node-to-node communication over Fibre Channel is planned for a future release.

For multihost access to Fibre Channel storage, the following components are required:

4.6.1 Advantages

The Fibre Channel interconnect offers the following advantages:

4.6.2 Throughput

The Fibre Channel interconnect transmits up to 1.06 Gb/s. It is a full-duplex serial interconnect that can simultaneously transmit and receive 100 MB/s.
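The relationship between the 1.06-Gb/s line rate and the 100-MB/s figure can be shown with a short calculation. The following Python sketch assumes the standard 8b/10b line encoding used by Fibre Channel (10 line bits per data byte) and ignores framing and protocol overhead, so it is an approximation rather than a specification.

    # Approximate conversion of the Fibre Channel line rate to usable bandwidth.
    # Assumes 8b/10b encoding (10 line bits per data byte); frame and protocol
    # overhead are ignored, which is why the documented figure is 100 MB/s.
    line_rate_gbps = 1.0625            # gigabits per second on the wire
    line_bits_per_data_byte = 10       # 8b/10b encoding
    payload_mb_per_s = line_rate_gbps * 1000 / line_bits_per_data_byte
    print(f"Approximately {payload_mb_per_s:.0f} MB/s in each direction")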

4.6.3 Supported Adapter

The Fibre Channel adapters, the KGPSA and the KSPSA, connect to the PCI bus.

Reference: For complete information about each adapter's features and order numbers, access the Compaq website at:


http://www.compaq.com/ 

Under Products, select Servers, then AlphaServers, then the Alpha system of interest. You can then obtain detailed information about all options supported on that system.

4.7 MEMORY CHANNEL Interconnect

MEMORY CHANNEL is a high-performance cluster interconnect technology for PCI-based Alpha systems. With the benefits of very low latency, high bandwidth, and direct memory access, MEMORY CHANNEL complements and extends the unique ability of OpenVMS Clusters to work as a single, virtual system.

Three hardware components are required by a node to support a MEMORY CHANNEL connection:

A MEMORY CHANNEL hub is a PC-sized unit that provides a connection among systems. MEMORY CHANNEL can support up to four Alpha nodes per hub. You can configure systems with two MEMORY CHANNEL adapters to provide failover in case an adapter fails. Each adapter must be connected to a different hub.

A MEMORY CHANNEL hub is not required in clusters that comprise only two nodes. In a two-node configuration, one PCI adapter is configured, using module jumpers, as a virtual hub.

4.7.1 Advantages

MEMORY CHANNEL technology provides the following features:

4.7.2 Throughput

The MEMORY CHANNEL interconnect has a very high maximum throughput of 100 MB/s. If a single MEMORY CHANNEL is not sufficient, up to two interconnects (and two MEMORY CHANNEL hubs) can share throughput.

4.7.3 Supported Adapter

The MEMORY CHANNEL adapter connects to the PCI bus. The MEMORY CHANNEL adapter, CCMAA-BA, provides improved performance over the earlier adapter.

Reference: For complete information about the adapter's features and order number, access the Compaq website at:


http://www.compaq.com/ 

Under Products, select Servers, then AlphaServers, then the Alpha system of interest. You can then obtain detailed information about all options supported on that system.

4.8 SCSI Interconnect

The SCSI interconnect is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components. SCSI is a single-path, daisy-chained, multidrop bus. It is a single 8-bit or 16-bit data path with byte parity for error detection. Two signaling variants are available: inexpensive single-ended signaling and differential signaling for longer distances.
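The byte parity mentioned above can be illustrated with a minimal sketch. SCSI uses odd parity on the data path: the parity bit is chosen so that the total number of 1 bits, data plus parity, is odd. The Python sketch below is conceptual only and does not describe any particular adapter's implementation.

    # Conceptual sketch of odd byte parity, the error-detection scheme used
    # on the SCSI data path. The parity bit makes the total number of 1 bits
    # odd, so any single-bit error in a byte is detectable.
    def odd_parity_bit(data_byte: int) -> int:
        ones = bin(data_byte & 0xFF).count("1")
        return 0 if ones % 2 == 1 else 1

    for value in (0x00, 0x5A, 0xFF):
        print(f"data {value:#04x} -> parity bit {odd_parity_bit(value)}")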

In an OpenVMS Cluster, multiple Alpha computers on a single SCSI interconnect can simultaneously access SCSI disks. This type of configuration is called multihost SCSI connectivity. A second type of interconnect is required for node-to-node communication. For multihost access to SCSI storage, the following components are required:

For larger configurations, the following components are available:

Reference: For a detailed description of how to connect SCSI configurations, see Appendix A.

4.8.1 Advantages

The SCSI interconnect offers the following advantages:

4.8.2 Throughput

Table 4-4 shows throughput for the SCSI interconnect.

Table 4-4 Maximum Data Transfer Rates in Megabytes per Second
Mode        Narrow (8-Bit)    Wide (16-Bit)
Standard    5                 10
Fast        10                20
Ultra       20                40
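The figures in Table 4-4 follow directly from the bus clock rate and the bus width: the data rate in megabytes per second is the transfer rate in megatransfers per second multiplied by the bus width in bytes. The short Python sketch below reproduces the table from those two parameters; the clock rates of 5, 10, and 20 MHz for standard, fast, and ultra mode are the conventional SCSI values.

    # Reproduce Table 4-4: SCSI data rate = transfer rate (MT/s) * bus width (bytes).
    transfer_rates_mts = {"Standard": 5, "Fast": 10, "Ultra": 20}
    bus_widths_bytes = {"Narrow (8-bit)": 1, "Wide (16-bit)": 2}

    for mode, rate in transfer_rates_mts.items():
        cells = [rate * width for width in bus_widths_bytes.values()]
        print(f"{mode:<9} narrow {cells[0]:>3} MB/s   wide {cells[1]:>3} MB/s")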

4.8.3 SCSI Interconnect Distances

The maximum length of the SCSI interconnect is determined by the signaling method used in the configuration and, for single-ended signaling, by the data transfer rate.

There are two types of electrical signaling for SCSI interconnects: single ended and differential. Both types can operate in standard mode, fast mode, or ultra mode. For differential signaling, the maximum SCSI cable length possible is the same for standard mode and fast mode.

Table 4-5 summarizes how the type of signaling method affects SCSI interconnect distances.

Table 4-5 Maximum SCSI Interconnect Distances
Signaling Technique    Rate of Data Transfer    Maximum Cable Length
Single-ended           Standard                 6 m (1)
Single-ended           Fast                     3 m
Single-ended           Ultra                    20.5 m (2)
Differential           Standard or Fast         25 m
Differential           Ultra                    25.5 m (3)


(1) The SCSI standard specifies a maximum length of 6 m for this interconnect. However, it is advisable, where possible, to limit the cable length to 4 m to ensure the highest level of data integrity.
(2) This length is attainable if devices are attached only at each end of the interconnect. If devices are spaced along the interconnect, they must be at least 1 m apart, and the interconnect cannot exceed 4 m.
(3) More than two devices can be supported.
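For configuration checking, Table 4-5 can also be expressed as a small lookup, as in the hypothetical Python sketch below. The distances come straight from the table; the function name and structure are illustrative only, and the footnoted restrictions (recommended shorter lengths, device spacing) are not encoded.

    # Hypothetical lookup of Table 4-5: maximum SCSI cable length in metres,
    # keyed by signaling technique and data transfer mode. The footnote
    # restrictions (4 m recommendation, 1 m device spacing) are not modeled.
    MAX_CABLE_LENGTH_M = {
        ("single-ended", "standard"): 6.0,
        ("single-ended", "fast"): 3.0,
        ("single-ended", "ultra"): 20.5,
        ("differential", "standard"): 25.0,
        ("differential", "fast"): 25.0,
        ("differential", "ultra"): 25.5,
    }

    def max_scsi_cable_length(signaling: str, mode: str) -> float:
        return MAX_CABLE_LENGTH_M[(signaling.lower(), mode.lower())]

    print(max_scsi_cable_length("Differential", "Ultra"))   # 25.5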

4.8.4 Supported Adapters, Bus Types, and Computers

Table 4-6 shows SCSI adapters with the internal buses and computers they support.

Table 4-6 SCSI Adapters
Adapter                              Internal Bus    Supported Computers
Embedded (NCR-810 based)/KZPAA (1)   PCI             See the options specifications for your system.
KZPSA (2)                            PCI             Supported on all Alpha computers that support KZPSA in single-host configurations. (3)
KZTSA (2)                            TURBOchannel    DEC 3000
KZPBA-CB (4)                         PCI             Supported on all Alpha computers that support KZPBA in single-host configurations. (3)


(1) Single-ended.
(2) Fast-wide differential (FWD).
(3) See the system-specific hardware manual.
(4) Ultra differential. The ultra single-ended adapter (KZPBA-CA) does not support multihost systems.

Reference: For complete information about each adapter's features and order numbers, access the Compaq website at:


http://www.compaq.com/ 

Under Products, select Servers, then AlphaServers, then the Alpha system of interest. You can then obtain detailed information about all options supported on that system.

4.9 CI Interconnect

The CI interconnect is a radial bus through which OpenVMS Cluster systems communicate. It comprises the following components:

4.9.1 Advantages

The CI interconnect offers the following advantages:

4.9.2 Throughput

The CI interconnect has a high maximum throughput. CI adapters use high-performance microprocessors that perform many of the processing activities usually performed by the CPU. As a result, they consume minimal CPU processing power.

Because the effective throughput of the CI bus is high, a single CI interconnect is not likely to be a bottleneck in a large OpenVMS Cluster configuration. If a single CI is not sufficient, multiple CI interconnects can increase throughput.

4.9.3 Supported Adapters and Bus Types

The following are CI adapters and internal buses that each supports:

Reference: For complete information about each adapter's features and order numbers, access the Compaq website at:


http://www.compaq.com/ 

Under Products, select Servers, then AlphaServers, then the Alpha system of interest. You can then obtain detailed information about all options supported on that system.

4.9.4 Multiple CI Adapters

You can configure multiple CI adapters on some OpenVMS nodes. Multiple star couplers can be used in the same OpenVMS Cluster.

With multiple CI adapters on a node, adapters can share the traffic load. This reduces I/O bottlenecks and increases the total system I/O throughput.
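The load-sharing idea can be pictured as distributing I/O requests across the available adapters, as in the Python sketch below. This round-robin distribution is an illustration of the concept only; the adapter names are hypothetical, and actual path selection in an OpenVMS Cluster is performed by the operating system and is not a simple round robin.

    # Illustration only: spreading I/O requests across two CI adapters in
    # round-robin fashion to show how multiple adapters share the traffic load.
    # Adapter names are hypothetical; OpenVMS path selection works differently.
    from itertools import cycle

    adapters = cycle(["adapter-A", "adapter-B"])
    for n in range(6):
        print(f"I/O request {n} -> {next(adapters)}")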

For the maximum number of CI adapters supported on your system, check the options list for your system in your hardware manual or on the AlphaServer web pages.

4.9.5 Configuration Guidelines for CI Clusters

Use the following guidelines when configuring systems in a CI cluster:

4.10 Digital Storage Systems Interconnect (DSSI)

DSSI is a single-path, daisy-chained, multidrop bus. It provides a single, 8-bit parallel data path with both byte parity and packet checksum for error detection.

4.10.1 Advantages

DSSI offers the following advantages:

4.10.2 Maintenance Consideration

DSSI storage often resides in the same cabinet as the CPUs. In these configurations, the whole system may need to be shut down for service, unlike configurations in which the systems and storage devices are housed separately.

4.10.3 Throughput

The maximum throughput is 32 Mb/s.

DSSI has highly intelligent adapters that require minimal CPU processing overhead.

4.10.4 DSSI Adapter Types

There are two types of DSSI adapters:

4.10.5 Supported Adapters and Bus Types

The following are DSSI adapters and the internal bus that each supports:

Reference: For complete information about each adapter's features and order numbers, access the Compaq website at:


http://www.compaq.com/ 

Under Products, select Servers, then AlphaServers, then the Alpha system of interest. You can then obtain detailed information about all options supported on that system.

4.10.6 DSSI-Connected Storage

DSSI configurations use HSD intelligent controllers to connect disk drives to an OpenVMS Cluster. HSD controllers serve the same purpose with DSSI as HSJ controllers serve with CI: they enable you to configure more storage.

Alternatively, DSSI configurations use integrated storage elements (ISEs) connected directly to the DSSI bus. Each ISE contains either a disk and disk controller or a tape and tape controller.

4.10.7 Multiple DSSI Adapters

Multiple DSSI adapters are supported for some systems, enabling higher throughput than with a single DSSI bus.

For the maximum number of DSSI adapters supported on a system, check the options list for the system of interest on the AlphaServer web pages.

4.10.8 Configuration Guidelines for DSSI Clusters

The following configuration guidelines apply to all DSSI clusters:

Reference: For more information about DSSI, see the DSSI OpenVMS Cluster Installation and Troubleshooting Manual.

4.11 LAN Interconnects

Ethernet (including Fast Ethernet and Gigabit Ethernet), ATM, and FDDI are LAN-based interconnects. OpenVMS supports LAN emulation over ATM.

These interconnects provide the following features:

Following the discussion of multiple LAN adapters, information specific to each supported LAN interconnect (Ethernet, ATM, and FDDI) is provided.

