A.7.6.4 Procedure for Hot Plugging StorageWorks SBB Disks
To remove an SBB (storage building block) disk from an active SCSI bus,
use the following procedure:
- Use an ESD grounding strap that is attached either to a grounding
stud or to unpainted metal on one of the cabinets in the system. Refer
to the system installation procedures for guidance.
- Follow the procedure in Section A.7.6.3 to make the disk inactive.
- Squeeze the clips on the side of the SBB, and slide the disk out of
the StorageWorks shelf.
To plug an SBB disk into an active SCSI bus, use the following
procedure:
- Use an ESD grounding strap that is attached either to a grounding
stud or to unpainted metal on one of the cabinets in the system. Refer
to the system installation procedures for guidance.
- Ensure that the SCSI ID associated with the device (either by
jumpers or by the slot in the StorageWorks shelf) conforms to the
following:
- The SCSI ID is unique for the logical SCSI bus.
- The SCSI ID is already configured as a DK device on all of the
following:
- Any member of the OpenVMS Cluster system that already has that ID
configured
- Any OpenVMS processor on the same SCSI bus that is running the
MSCP server
- Slide the SBB into the StorageWorks shelf.
- Configure the disk on OpenVMS Cluster members, if required, using
SYSMAN IO commands.
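For example, the following minimal DCL sketch shows one way to configure a
newly plugged-in disk clusterwide with the SYSMAN IO commands. The device
name $1$DKB300 is illustrative only; substitute the name appropriate to your
configuration.
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> IO AUTOCONFIGURE
SYSMAN> EXIT
$ SHOW DEVICE $1$DKB300:   ! verify that the new disk is now visible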
A.7.6.5 Procedure for Hot Plugging HSZxx
To remove an HSZxx controller from an active SCSI bus:
- Use an ESD grounding strap that is attached either to a grounding
stud or to unpainted metal on one of the cabinets in the system. Refer
to the system installation procedures for guidance.
- Follow the procedure in Section A.7.6.3 to make the HSZxx
inactive.
- The HSZxx can be powered down, but it must remain plugged
in to the power distribution system to maintain grounding.
- Unscrew and remove the differential triconnector from the
HSZxx.
- Protect all exposed connector pins from ESD and from contacting any
electrical conductor while they are disconnected.
To plug an HSZxx controller into an active SCSI bus:
- Use an ESD grounding strap that is attached either to a grounding
stud or to unpainted metal on one of the cabinets in the system. Refer
to the system installation procedures for guidance. Also, ensure that
the ground offset voltages between the HSZxx and all
components that will be attached to it are within the limits specified
in Section A.7.8.
- Protect all exposed connector pins from ESD and from contacting any
electrical conductor while they are disconnected.
- Power up the HSZxx and ensure that the disk units
associated with the HSZxx conform to the following:
- The disk unit numbers are unique for the logical SCSI bus.
- The disk units are already configured as DK devices on all of the
following:
- Any member of the OpenVMS Cluster system that already has that ID
configured
- Any OpenVMS processor on the same SCSI bus that is running the MSCP
server
- Ensure that the HSZxx will make a legal stubbing
connection to the active segment. (The connection is legal when the
triconnector is attached directly to the HSZxx controller
module, with no intervening cable.)
- Attach the differential triconnector to the HSZxx, using
care to ensure that it is properly aligned. Tighten the screws.
- Configure the HSZxx virtual disks on OpenVMS Cluster
members, as required, using SYSMAN IO commands.
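After the SYSMAN IO AUTOCONFIGURE step (see the sketch in Section A.7.6.4),
you can confirm that an HSZxx virtual disk is visible on a cluster member.
The device name $1$DKA100 below is hypothetical.
$ SHOW DEVICE/FULL $1$DKA100:   ! check device status and characteristics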
A.7.6.6 Procedure for Hot Plugging Host Adapters
To remove a host adapter from an active SCSI bus:
- Use an ESD grounding strap that is attached either to a grounding
stud or to unpainted metal on one of the cabinets in the system. Refer
to the system installation procedures for guidance.
- Verify that the connection to be broken is a stubbing connection.
If it is not, then do not perform the hot plugging procedure.
- Follow the procedure in Section A.7.6.3 to make the host adapter
inactive.
- The system can be powered down, but it must remain plugged in to
the power distribution system to maintain grounding.
- Remove the "Y" cable from the host adapter's single-ended
connector.
- Protect all exposed connector pins from ESD and from contacting any
electrical conductor while they are disconnected.
- Do not unplug the adapter from the host's internal bus
while the host remains powered up.
At this point, the adapter has been disconnected from the SCSI bus. To
remove the adapter from the host,
first power down the host, then remove the adapter from the host's
internal bus.
To plug a host adapter into an active SCSI bus:
- Use an ESD grounding strap that is attached either to a grounding
stud or to unpainted metal on one of the cabinets in the system. Refer
to the system installation procedures for guidance. Also, ensure that
the ground offset voltages between the host and all components that
will be attached to it are within the limits specified in Section A.7.8.
- Protect all exposed connector pins from ESD and from contacting any
electrical conductor while they are disconnected.
- Ensure that the host adapter will make a legal stubbing connection
to the active segment (the stub length must be within allowed limits,
and the host adapter must not provide termination to the active
segment).
- Plug the adapter into the host (if it is unplugged).
- Plug the system into the power distribution system to ensure proper
grounding. Power up, if desired.
- Attach the "Y" cable to the host adapter, using care to
ensure that it is properly aligned.
A.7.6.7 Procedure for Hot Plugging DWZZx Controllers
Use the following procedure to remove a DWZZx from an active
SCSI bus:
- Use an ESD grounding strap that is attached either to a grounding
stud or to unpainted metal on one of the cabinets in the system. Refer
to the system installation procedures for guidance.
- Verify that the connection to be broken is a stubbing connection.
If it is not, then do not perform the hot plugging procedure.
- Do not power down the DWZZx. This can disrupt the
operation of the attached SCSI bus segments.
- Determine which SCSI bus segment will remain active after the
disconnection. Follow the procedure in Section A.7.6.3 to make the other
segment inactive.
When the DWZZx is removed from the
active segment, the inactive segment must remain inactive until the
DWZZx is also removed from the inactive segment, or until
proper termination is restored to the DWZZx port that was
disconnected from the active segment.
- The next step depends on the type of DWZZx and the segment
that is being hot plugged, as follows:
DWZZx Type | Condition | Action
SBB (1) | Single-ended segment will remain active. | Squeeze the clips on the side of the SBB, and slide the DWZZx out of the StorageWorks shelf.
SBB (1) | Differential segment will remain active. | Unscrew and remove the differential triconnector from the DWZZx.
Table top | Single-ended segment will remain active. | Remove the "Y" cable from the DWZZx's single-ended connector.
Table top | Differential segment will remain active. | Unscrew and remove the differential triconnector from the DWZZx.
(1) SBB is the StorageWorks abbreviation for storage building block.
- Protect all exposed connector pins from ESD and from contacting any
electrical conductor while they are disconnected.
To plug a DWZZx into an active SCSI bus:
- Use an ESD grounding strap that is attached either to a grounding
stud or to unpainted metal on one of the cabinets in the system. Refer
to the system installation procedures for guidance. Also, ensure that
the ground offset voltages between the DWZZx and all
components that will be attached to it are within the limits specified
in Section A.7.8.
- Protect all exposed connector pins from ESD and from contacting any
electrical conductor while they are disconnected.
- Ensure that the DWZZx will make a legal stubbing
connection to the active segment (the stub length must be within
allowed limits, and the DWZZx must not provide termination to
the active segment).
- The DWZZx must be powered up. The SCSI segment that is
being added must be attached and properly terminated. All devices on
this segment must be inactive.
- The next step depends on the type of DWZZx, and which
segment is being hot plugged, as follows:
DWZZx Type | Condition | Action
SBB (1) | Single-ended segment is being hot plugged. | Slide the DWZZx into the StorageWorks shelf.
SBB (1) | Differential segment is being hot plugged. | Attach the differential triconnector to the DWZZx, using care to ensure that it is properly aligned. Tighten the screws.
Table top | Single-ended segment is being hot plugged. | Attach the "Y" cable to the DWZZx, using care to ensure that it is properly aligned.
Table top | Differential segment is being hot plugged. | Attach the differential triconnector to the DWZZx, using care to ensure that it is properly aligned. Tighten the screws.
(1) SBB is the StorageWorks abbreviation for storage building block.
- If the newly attached segment has storage devices on it, then
configure them on OpenVMS Cluster members, if required, using SYSMAN IO
commands.
A.7.7 OpenVMS Requirements for Devices Used on Multihost SCSI OpenVMS Cluster Systems
At this time, the only devices approved for use on multihost SCSI
OpenVMS Cluster systems are those listed in Table A-2. While not
specifically approved for use, other disk devices might be used in a
multihost OpenVMS Cluster system when they conform to the following
requirements:
- Support for concurrent multi-initiator I/O.
- Proper management of the following states or conditions on a
per-initiator basis:
- Synchronous negotiated state and speed
- Width negotiated state
- Contingent Allegiance and Unit Attention conditions
- Tagged command queuing. This is needed to provide an ordering
guarantee used in OpenVMS Cluster systems to ensure that I/O has been
flushed. The drive must implement queuing that complies with Section
7.8.2 of the SCSI-2 standard, which says (in part):
- "...All commands received with a simple queue tag message
prior to a command received with an ordered queue tag message,
regardless of initiator, shall be executed before that command
with the ordered queue tag message." (Emphasis added.)
- Support for command disconnect.
- A reselection timeout procedure compliant with Option b of Section
6.1.4.2 of the SCSI-2 standard. Furthermore, the device shall
implement a reselection retry algorithm that limits the amount of bus
time spent attempting to reselect a nonresponsive initiator.
- Automatic read reallocation enabled (ARRE) and automatic write
reallocation enabled (AWRE) (that is, drive-based bad block
revectoring) to prevent multiple hosts from unnecessarily revectoring
the same block. To avoid data corruption, it is essential that the
drive comply with Section 9.3.3.6 of the SCSI-2 standard, which says
(in part):
- "...The automatic reallocation shall then be performed only if
the target successfully recovers the data." (Emphasis
added.)
- Storage devices should not supply TERMPWR. If they do, then it is
necessary to apply configuration rules to ensure that there are no more
than four sources of TERMPWR on a segment.
Finally, if the device or any other device on the same segment will be
hot plugged, then the device must meet the electrical requirements
described in Section A.7.6.2.
A.7.8 Grounding Requirements
This section describes the grounding requirements for electrical
systems in a SCSI OpenVMS Cluster system.
Improper grounding can result in voltage differentials, called ground
offset voltages, between the enclosures in the configuration. Even
small ground offset voltages across the SCSI interconnect (as shown in
step 3 of Table A-8) can disrupt the configuration and cause system
performance degradation or data corruption.
Table A-8 describes important considerations to ensure proper
grounding.
Table A-8 Steps for Ensuring Proper Grounding
Step 1: Ensure that site power distribution meets all local electrical codes.
Step 2: Inspect the entire site power distribution system to ensure that:
- All outlets have power ground connections.
- A grounding prong is present on all computer equipment power cables.
- Power-outlet neutral connections are not actual ground connections.
- All grounds for the power outlets are connected to the same power
distribution panel.
- All devices that are connected to the same circuit breaker as the
computer equipment are UL® or IEC approved.
Step 3: If you have difficulty verifying these conditions, you can use a
hand-held multimeter to measure the ground offset voltage between any
two cabinets. To measure the voltage, connect the multimeter leads to
unpainted metal on each enclosure. Then determine whether the voltage
exceeds the following allowable ground offset limits:
- Single-ended signaling: 50 millivolts (maximum allowable offset)
- Differential signaling: 800 millivolts (maximum allowable offset)
The multimeter method provides data for only the moment it is
measured. The ground offset values may change over time as additional
devices are activated or plugged into the same power source. To ensure
that the ground offsets remain within acceptable limits over time,
Compaq recommends that you have a power survey performed by a qualified
electrician.
Step 4: If you are uncertain about the grounding situation or if the measured
offset exceeds the allowed limit, Compaq recommends that a qualified
electrician correct the problem. It may be necessary to install
grounding cables between enclosures to reduce the measured offset.
Step 5: If an unacceptable offset voltage was measured and a ground cable was
installed, then measure the voltage again to ensure it is less than the
allowed limits. If not, an electrician must determine the source of the
ground offset voltage and reduce or eliminate it.
Appendix B
MEMORY CHANNEL Technical Summary
This appendix contains information about MEMORY CHANNEL, a
high-performance cluster interconnect technology. MEMORY CHANNEL, which
was introduced in OpenVMS Alpha Version 7.1, supports several
configurations.
This appendix contains the following sections:
Section | Content
Product Overview | High-level introduction to the MEMORY CHANNEL product and its benefits, hardware components, and configurations.
Technical Overview | More in-depth technical information about how MEMORY CHANNEL works.
B.1 Product Overview
MEMORY CHANNEL is a high-performance cluster interconnect technology
for PCI-based Alpha systems. With the benefits of very low latency,
high bandwidth, and direct memory access, MEMORY CHANNEL complements
and extends the unique ability of an OpenVMS Cluster to work as a
single, virtual system.
MEMORY CHANNEL offloads internode cluster traffic (such as lock
management communication) from existing interconnects (CI, DSSI, FDDI,
and Ethernet) so that they can process storage and network traffic
more effectively. MEMORY CHANNEL significantly increases throughput and
decreases the latency associated with traditional I/O processing.
Any application that must move large amounts of data among nodes will
benefit from MEMORY CHANNEL. It is an optimal solution for applications
that need to pass data quickly, such as real-time and transaction
processing. MEMORY CHANNEL also improves throughput in high-performance
databases and other applications that generate heavy OpenVMS Lock
Manager traffic.
B.1.1 MEMORY CHANNEL Features
MEMORY CHANNEL technology provides the following features:
- Offers excellent price/performance.
With
several times the CI bandwidth, MEMORY CHANNEL provides a 100 MB/s
interconnect with minimal latency. MEMORY CHANNEL architecture is
designed for the industry-standard PCI bus.
- Requires no change to existing applications.
MEMORY CHANNEL works seamlessly with existing cluster software, so
that no change is necessary for existing applications. The new MEMORY
CHANNEL drivers, PMDRIVER and MCDRIVER, integrate with the Systems
Communication Services layer of OpenVMS Clusters in the same way as
existing port drivers. Higher layers of cluster software are unaffected.
- Offloads CI, DSSI, and the LAN in SCSI clusters.
You cannot connect storage directly to MEMORY CHANNEL.
While
MEMORY CHANNEL is not a replacement for CI and DSSI, when used in
combination with those interconnects, it offloads their node-to-node
traffic. This enables them to be dedicated to storage traffic,
optimizing communications in the entire cluster.
When used in a
cluster with SCSI and LAN interconnects, MEMORY CHANNEL offloads
node-to-node traffic from the LAN, enabling it to handle more TCP/IP or
DECnet traffic.
- Provides fail-separately behavior.
When a
system failure occurs, MEMORY CHANNEL nodes behave like any failed node
in an OpenVMS Cluster. The rest of the cluster continues to perform
until the failed node can rejoin the cluster.
B.1.2 MEMORY CHANNEL Version 2.0 Features
When first introduced in OpenVMS Version 7.1, MEMORY CHANNEL supported
a maximum of four nodes in a 10-foot radial topology. Communication
occurred between one sender-receiver pair at a time. MEMORY CHANNEL
Version 1.5 introduced support for eight nodes, a new adapter
(CCMAA-BA), time stamps on all messages, and more robust performance.
MEMORY CHANNEL Version 2.0 provides the following new capabilities:
- Support for a new adapter (CCMAB-AA) and new hubs (CCMHB-AA and
CCMHB-BA)
- Support for simultaneous communication between four sender-receiver
pairs
- Support for longer cables for a radial topology up to 3 km
B.1.3 Hardware Components
A MEMORY CHANNEL cluster is joined together by a hub, a desktop-PC-sized
unit that provides a connection among systems. The hub is
connected to a system's PCI adapter by a link cable. Figure B-1
shows all three hardware components required by a node to support
MEMORY CHANNEL:
- A PCI-to-MEMORY CHANNEL adapter
- A link cable
- A port in a MEMORY CHANNEL hub (except in a two-node configuration,
in which the cable connects just two PCI adapters).
Figure B-1 MEMORY CHANNEL Hardware Components
The PCI adapter pictured in Figure B-1 has memory mapping logic that
enables each system to communicate with the others in the MEMORY
CHANNEL cluster.
Figure B-2 shows an example of a four-node MEMORY CHANNEL cluster with
a hub at its center.
Figure B-2 Four-Node MEMORY CHANNEL Cluster
A MEMORY CHANNEL hub is not required in clusters that contain only two
nodes. In a two-node configuration like the one shown in Figure B-3,
the same adapters and cable are used, and one of the PCI adapters
serves as a virtual hub. You can continue to use the adapters and cable
if you expand to a larger configuration later.
Figure B-3 Virtual Hub MEMORY CHANNEL Cluster
B.1.4 Backup Interconnect for High-Availability Configurations
MEMORY CHANNEL requires a central hub in configurations of three or
more nodes. The MEMORY CHANNEL hub contains active, powered electronic
components. In the event of a hub failure, resulting from either a
power shutdown or component failure, the MEMORY CHANNEL interconnect
ceases operation. This type of failure does not occur with the other
cluster interconnects, such as CI, DSSI, and most LAN configurations.
Compaq therefore recommends that customers with MEMORY CHANNEL
configurations who have high availability requirements consider using
one of the following configurations to provide a second backup
interconnect:
- In most cases a second interconnect can easily be configured by
enabling the LAN (Ethernet or FDDI) for clustering. FDDI and 100 Mb/s
Ethernet usually provide acceptable interconnect performance in the
event of MEMORY CHANNEL failure. (See OpenVMS Cluster Systems and Guidelines for OpenVMS Cluster Configurations
for details about how to enable the LAN for clustering; a brief sketch
follows this list.)
- CI and DSSI interconnects automatically act as a backup for MEMORY
CHANNEL.
- A configuration with two MEMORY CHANNEL interconnects provides the
highest possible performance as well as continued operation if one
MEMORY CHANNEL interconnect fails.
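The simplest backup interconnect is usually the LAN. The following DCL
sketch outlines one common way to enable LAN cluster communication; it is a
minimal example that assumes the LAN hardware is already installed, and the
parameter values shown are illustrative. See OpenVMS Cluster Systems for the
authoritative procedure.
$ ! Run the interactive cluster configuration procedure:
$ @SYS$MANAGER:CLUSTER_CONFIG.COM
$ !
$ ! Alternatively, add the relevant parameters to SYS$SYSTEM:MODPARAMS.DAT
$ ! and rerun AUTOGEN, for example:
$ !     NISCS_LOAD_PCA0 = 1   ! load PEDRIVER, the LAN cluster port driver
$ !     VAXCLUSTER = 2        ! node always joins the cluster
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT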
B.1.5 Software Requirements
The use of MEMORY CHANNEL imposes certain requirements on memory and on
your choice of diagnostic tools.
B.1.5.1 Memory Requirements
MEMORY CHANNEL consumes memory during normal operations. Each system in
your MEMORY CHANNEL cluster must have at least 128 MB of memory.
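As a quick check of this requirement, the standard DCL SHOW MEMORY command
(an ordinary OpenVMS command, not a MEMORY CHANNEL utility) reports the
system's physical memory in its Physical Memory Usage section. On a system
with 8 KB Alpha pages, 128 MB corresponds to 16,384 pages.
$ SHOW MEMORY   ! see the Physical Memory Usage section of the display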