
Guidelines for OpenVMS Cluster Configurations



B.1.5.2 Large-Memory Systems' Use of NPAGEVIR Parameter

On systems containing very large amounts of nonpaged pool memory, MEMORY CHANNEL may be unable to complete initialization. If this happens, the console displays the following message repeatedly:


Hub timeout - reinitializing adapter 

To correct this problem, examine the value of the SYSGEN parameter NPAGEVIR. If its value is greater than 1 gigabyte, consider lowering it to approximately half that value. After you lower the value and reboot the system, MEMORY CHANNEL should complete initialization.
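
For example, the following SYSGEN sequence is one way to examine and lower the parameter. The value 536870912 (512 MB, half of 1 gigabyte) is only illustrative; a change written with WRITE CURRENT takes effect at the next reboot:


$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SHOW NPAGEVIR
SYSGEN> SET NPAGEVIR 536870912
SYSGEN> WRITE CURRENT
SYSGEN> EXIT


To preserve such a change across future AUTOGEN runs, you can also add the line NPAGEVIR = 536870912 to SYS$SYSTEM:MODPARAMS.DAT.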

B.1.6 Configurations

Figure B-4 shows a basic MEMORY CHANNEL cluster that uses the SCSI interconnect for storage. This configuration provides two advantages: high performance on the MEMORY CHANNEL interconnect and low cost on the SCSI interconnect.

Figure B-4 MEMORY CHANNEL- and SCSI-Based Cluster


In a configuration like the one shown in Figure B-4, the MEMORY CHANNEL interconnect handles internode communication while the SCSI bus handles storage communication.

You can integrate MEMORY CHANNEL with your current systems. Figure B-5 shows an example of how to add MEMORY CHANNEL to a mixed-architecture CI- and SCSI-based cluster. In this example, the BI- and XMI-based VAX systems are joined in the same CI cluster with the PCI-based Alpha MEMORY CHANNEL systems.

Figure B-5 MEMORY CHANNEL CI- and SCSI-Based Cluster


Because the MEMORY CHANNEL interconnect is not used for storage and booting, you must provide access to a boot device through one of the other interconnects. In Figure B-5, for example, one of the CI-based disks would be a good choice for a boot device because all nodes have direct access to it over the CI.

MEMORY CHANNEL can also be integrated into an existing DSSI cluster, as shown in Figure B-6.

Figure B-6 MEMORY CHANNEL DSSI-Based Cluster


As Figure B-6 shows, the three MEMORY CHANNEL systems and the VAX system have access to the storage that is directly connected to the DSSI interconnect as well as to the SCSI storage attached to the HSD controller. In this configuration, MEMORY CHANNEL handles the Alpha internode traffic, while the DSSI handles the storage traffic.

B.1.6.1 Configuration Support

MEMORY CHANNEL supports the platforms and configurations shown in Table B-1.

Table B-1 MEMORY CHANNEL Configuration Support
Requirement Description
Configuration MEMORY CHANNEL supports the following configurations:
  • Up to eight nodes per MEMORY CHANNEL hub.
  • For two-hub configurations, up to two PCI adapters per node; each adapter must be connected to a different hub.
  • For two-node configurations, no hub is required.
Cables MEMORY CHANNEL supports the following cables:
  • Copper cables up to a 10-m (32.8 ft) radial topology
  • Fiber-optic cables from Compaq up to a 30-m (98.4 ft) radial topology; fiber-optic cables from other vendors, up to a 3-km (1.8 miles) radial topology
Host systems MEMORY CHANNEL supports the following systems:
  • AlphaServer 8400
  • AlphaServer 8200
  • AlphaServer 4100
  • AlphaServer 2100A
  • AlphaServer 1200
  • AlphaServer 800

Note

You can configure a computer in an OpenVMS Cluster system with both a MEMORY CHANNEL Version 1.5 hub and a MEMORY CHANNEL Version 2.0 hub. However, the version number of the adapter and the cables must match the hub's version number for MEMORY CHANNEL to function properly.

In other words, you must use MEMORY CHANNEL Version 1.5 adapters with the MEMORY CHANNEL Version 1.5 hub and MEMORY CHANNEL Version 1.5 cables. Similarly, you must use MEMORY CHANNEL Version 2.0 adapters with the MEMORY CHANNEL Version 2.0 hub and MEMORY CHANNEL Version 2.0 cables.

B.2 Technical Overview

This section describes in more technical detail how MEMORY CHANNEL works.

B.2.1 Comparison With Traditional Networks and SMP

You can think of MEMORY CHANNEL as a form of "stretched SMP bus" that supports enough physical distance to interconnect up to eight systems. However, MEMORY CHANNEL differs from SMP, in which multiple CPUs directly access the same physical memory: each MEMORY CHANNEL node maintains its own physical memory, even though the nodes share MEMORY CHANNEL global address space.

MEMORY CHANNEL fills a price/performance gap between the high performance of SMP systems and traditional packet-based networks. Table B-2 shows a comparison among the characteristics of SMP, MEMORY CHANNEL, and standard networks.

Table B-2 Comparison of SMP, MEMORY CHANNEL, and Standard Networks
Characteristics SMP MEMORY CHANNEL Standard Networking
Bandwidth (MB/s) 1000+ 100+ 10+
Latency (µs/simplest message) 0.5 Less than 5 About 300
Overhead (µs/simplest message) 0.5 Less than 5 About 250
Hardware communication model Shared memory Memory-mapped Message passing
Hardware communication primitive Store to memory Store to memory Network packet
Hardware support for broadcast n/a Yes Sometimes
Hardware support for synchronization Yes Yes No
Hardware support for node hot swap No Yes Yes
Software communication model Shared memory Fast messages, shared memory Messages
Communication model for errors Not recoverable Recoverable Recoverable
Supports direct user mode communication Yes Yes No
Typical physical interconnect technology Backplane etch Parallel copper cables Serial fiber optics
Physical interconnect error rate Extremely low (on the order of less than one per year) Extremely low (on the order of less than one per year) Low (on the order of several per day)
Hardware interconnect method Special purpose connector and logic Standard I/O bus adapter (PCI) Standard I/O bus adapter (PCI and others)
Distance between nodes (m) 0.3 20 (copper) or 60 (fiber-optic) in a hub configuration and 10 (copper) or 30 (fiber-optic) in a two-node configuration 50--1000
Number of nodes 1 8 Hundreds
Number of processors 6--12 8 times the maximum number of CPUs in an SMP system Thousands
Failure model Fail together Fail separately Fail separately

B.2.2 MEMORY CHANNEL in the OpenVMS Cluster Architecture

As Figure B-7 shows, MEMORY CHANNEL functionality has been implemented in the OpenVMS Cluster architecture just below the System Communication Services layer. This design ensures that no changes are required to existing applications because higher layers of OpenVMS Cluster software are unchanged.

Figure B-7 OpenVMS Cluster Architecture and MEMORY CHANNEL


MEMORY CHANNEL software consists of two new drivers:
Driver Description
PMDRIVER Emulates a cluster port driver.
MCDRIVER Provides MEMORY CHANNEL services and an interface to MEMORY CHANNEL hardware.

B.2.3 MEMORY CHANNEL Addressing

In a MEMORY CHANNEL configuration, a section of system physical address space is shared among all nodes. When a system writes data to this address space, the MEMORY CHANNEL hardware also performs a global write so that this data is stored in the memories of other systems. In other words, when a node's CPU writes data to the PCI address space occupied by the MEMORY CHANNEL adapter, the data is sent across the MEMORY CHANNEL interconnect to the other nodes. The other nodes' PCI adapters map this data into their own memory. This infrastructure enables a write to an I/O address on one system to be mapped to a physical address on the other systems. The next two figures explain this in more detail.

Figure B-8 shows how MEMORY CHANNEL global address space is addressed in physical memory.

Figure B-8 Physical Memory and I/O Address Space


Figure B-8 shows the typical address space of a system, divided into physical memory and I/O address space. Within the PCI I/O address space, MEMORY CHANNEL consumes 128 to 512 MB of address space. Therefore, the MEMORY CHANNEL PCI adapter can be addressed within this space, and the CPU can write data to it.

Every system in a MEMORY CHANNEL cluster allocates this address space for MEMORY CHANNEL data and communication. By using this address space, a CPU can perform global writes to the memories of other nodes.

To explain global writes more fully, Figure B-9 shows the internal bus architecture of two nodes, node A and node B.

Figure B-9 MEMORY CHANNEL Bus Architecture


In the example shown in Figure B-9, node A is performing a global write to node B's memory, in the following sequence:

  1. Node A's CPU performs a write to MEMORY CHANNEL address space, which is part of PCI address space. The write makes its way through the PCI bus to the PCI/MEMORY CHANNEL adapter and out on the MEMORY CHANNEL interconnect.
  2. Node B's PCI adapter receives the data, which is picked up by its PCI bus and DMA-mapped to memory.

If all nodes in the cluster agree to address MEMORY CHANNEL global address space in the same way, they can virtually "share" the same address space and the same data. This is why MEMORY CHANNEL address space is depicted as a common, central address space in Figure B-9.

MEMORY CHANNEL global address space is divided into pages of 8 KB (8,192 bytes). These are called MC pages. These 8 KB pages can be mapped similarly among systems.
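
For example, at the maximum MEMORY CHANNEL allocation of 512 MB of address space, the global address space comprises 65,536 MC pages (512 MB divided by 8 KB); the 128 MB minimum corresponds to 16,384 MC pages.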

The "shared" aspect of MEMORY CHANNEL global address space is set up using the page control table, or PCT, in the PCI adapter. The PCT has attributes that can be set for each MC page. Table B-3 explains these attributes.

Table B-3 MEMORY CHANNEL Page Attributes
Attribute Description
Broadcast Data is sent to all systems or, with a node ID, data is sent to only the specified system.
Loopback Data that is sent to the other nodes in a cluster is also written to memory by the PCI adapter in the transmitting node. This provides message order guarantees and a greater ability to detect errors.
Interrupt Specifies that if a location is written in this MC page, it generates an interrupt to the CPU. This can be used for notifying other nodes.
Suppress transmit/receive after error Specifies that if an error occurs on this page, transmit and receive operations are not allowed until the error condition is cleared.
ACK A write to a page causes each receiving system's adapter to respond with an ACK (acknowledge), ensuring that a write (or other operation) has occurred on remote nodes without interrupting their hosts. This is used for error checking and error recovery.

B.2.4 MEMORY CHANNEL Implementation

MEMORY CHANNEL software comes bundled with the OpenVMS Cluster software. After setting up the hardware, you configure the MEMORY CHANNEL software by responding to prompts in the CLUSTER_CONFIG.COM procedure. A prompt asks whether you want to enable MEMORY CHANNEL for node-to-node communications for the local computer. If you respond "Yes", the system parameter MC_SERVICES_P2, which controls whether MEMORY CHANNEL is in effect, is set to 1. This setting causes the driver, PMDRIVER, to be loaded and the default values for the other MEMORY CHANNEL system parameters to take effect.
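
For example, you can display the value of this parameter in effect on the running system with the SYSGEN utility; this brief sequence is illustrative only:


$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW MC_SERVICES_P2
SYSGEN> EXIT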

For a description of all the MEMORY CHANNEL system parameters, refer to the OpenVMS Cluster Systems manual.

For more detailed information about setting up the MEMORY CHANNEL hub, link cables, and PCI adapters, see the MEMORY CHANNEL User's Guide, order number EK-PCIRM-UG.


Appendix C
CI-to-PCI Adapter (CIPCA) Support

This appendix describes the CI-to-PCI adapter (CIPCA), which was introduced in OpenVMS Alpha Version 6.2--1H2 and is supported on all subsequent versions except OpenVMS Version 7.0. The CIPCA adapter supports specific Alpha servers and OpenVMS Cluster configurations.

This appendix contains the following sections:

  • Section C.1, CIPCA Overview
  • Section C.2, Technical Specifications
  • Section C.3, Configuration Support and Restrictions

C.1 CIPCA Overview

The CIPCA adapter, developed in partnership with CMD Technologies, enables Alpha servers with PCI buses or with a PCI bus and an EISA bus to connect to the CI. The CIPCA adapter provides the following features and benefits:
Feature Benefit
Lower entry cost and more configuration choices If you require midrange compute power for your business needs, CIPCA enables you to integrate midrange Alpha servers into your existing CI cluster.
High-end Alpha speed and power If you require maximum compute power, you can use the CIPCA with both the AlphaServer 8200 systems and AlphaServer 8400 systems that have PCI and EISA I/O subsystems.
Cost-effective Alpha migration path If you want to add Alpha servers to an existing CI VAXcluster, CIPCA provides a cost-effective way to start migrating to a mixed-architecture cluster in the price/performance range that you need.
Advantages of the CI The CIPCA connects to the CI, which offers the following advantages:
  • High speed to accommodate larger processors and I/O-intensive applications.
  • Efficient, direct access to large amounts of storage.
  • Minimal CPU overhead for communication. CI adapters are intelligent interfaces that perform much of the communication work in OpenVMS Cluster systems.
  • High availability through redundant, independent data paths, because each CI adapter is connected with two pairs of CI cables.
  • Multiple access paths to disks and tapes.

Figure C-1 shows an example of a mixed-architecture CI OpenVMS Cluster that has two servers: an Alpha and a VAX.

Figure C-1 CIPCA in a Mixed-Architecture OpenVMS Cluster


As Figure C-1 shows, you can use the CIPCA adapter to connect an Alpha server to a CI OpenVMS Cluster that contains a VAX server with a CIXCD (or CIBCA-B) adapter. This enables you to smoothly integrate an Alpha server into a cluster that previously comprised only high-end VAX systems.

Figure C-2 shows another example of a configuration that uses the CIPCA to connect systems with the CI. In this example, each Alpha has two CIPCA adapters that allow connectivity to multiple CI star couplers and HSJ storage controllers for I/O load balancing or for OpenVMS shadow-set member isolation. Also, the Alpha systems are connected to a high-speed FDDI interconnect that provides additional connectivity for PC clients and OpenVMS satellites.

Figure C-2 CIPCA in an Alpha OpenVMS Cluster


Figure C-1 and Figure C-2 show that the CIPCA makes the performance, availability, and large storage access of the CI available to a wide variety of users. The CI has a high maximum throughput. Both the PCI-based CIPCA and the XMI-based CIXCD are highly intelligent microprocessor-controlled adapters that consume minimal CPU overhead.

Because the effective throughput of the CI bus is high, the CI interconnect is not likely to be a bottleneck. In large configurations like the one shown in Figure C-2, multiple adapters and CI connections provide excellent availability and throughput.

Although not shown in Figure C-1 and Figure C-2, you can increase availability by placing disks on a SCSI interconnect between a pair of HSJ controllers and connecting each HSJ to the CI.

C.2 Technical Specifications

The CIPCA is a two-slot optional adapter. Two CIPCA models are available, the CIPCA-AA and the CIPCA-BA.

The CIPCA-AA was introduced first. It requires one PCI backplane slot and one EISA backplane slot. The EISA slot supplies only power (not bus signals) to the CIPCA. The CIPCA-AA is suitable for older systems with a limited number of PCI slots.

The CIPCA-BA requires two PCI slots and is intended for newer systems with a limited number of EISA slots.

The CIPCA driver is named SYS$PCAdriver. It is included in the OpenVMS operating system software.

Table C-1 shows the performance of the CIPCA in relation to the CIXCD adapter.

Table C-1 CIPCA and CIXCD Performance
Performance Metric CIPCA CIXCD
Read request rate (I/Os per second) 4900 5500
Read data rate (MB/s) 10.6 10.5
Write request rate (I/Os per second) 4900 4500
Write data rate (MB/s) 9.8 5.8
Mixed request rate (I/Os per second) 4800 5400
Mixed data rate (MB/s) 10.8 9.2

For information about installing and operating the CIPCA, refer to the hardware manual that came with your CIPCA adapter: CIPCA PCI-CI Adapter User's Guide.

C.3 Configuration Support and Restrictions

The CIPCA adapter is supported by AlphaServers with PCI buses, by CI-connected VAX host systems, by storage controllers, and by the CI star coupler expander.

C.3.1 AlphaServer Support

Table C-2 describes CIPCA support on AlphaServer systems with PCI buses, including the maximum number of CIPCAs supported on each system.

Table C-2 AlphaServer Support for CIPCAs
System Maximum CIPCAs Comments
AlphaServer 8400 26 Can use a combination of CIPCA and CIXCD adapters, not to exceed 26. Prior to OpenVMS Version 7.1, the maximum is 10.
AlphaServer 8200 26 Prior to OpenVMS Version 7.1, the maximum is 10.
AlphaServer 4000, 4100 3 When using three CIPCAs, one must be a CIPCA-AA and two must be CIPCA-BA.
AlphaServer 4000 plus I/O expansion module 6 When using six CIPCAs, only three can be CIPCA-AA.
AlphaServer 1200 2 First supported in OpenVMS Version 7.1-1H1.
AlphaServer 2100A 3  
AlphaServer 2000, 2100 2 Only one can be a CIPCA-BA.


