[OpenVMS documentation]
Updated: 11 December 1998

Guidelines for OpenVMS Cluster Configurations



11.11 Tape Backup

Tape is the least expensive backup storage medium. Tapes are the most common medium for offline storage and offer a range of capacities, costs, and shelf lives. Tape storage is removable and generally kept offline.

11.11.1 For More Information

Backup procedures are described in detail in the following manuals:

11.11.2 Benefits of Unattended Backup

With current tape-drive technology, you can initiate a large backup operation that completes without operator intervention (such as changing tapes). Such unattended backups save significant time and reduce staffing costs. Cartridge tape loaders with tape magazines, such as the Tx8x7 or the TA91, allow unattended backups of nearly 42 GB of online storage. Backups can also be performed on robot-accessible media, such as the StorageTek® 4400 ACS through the TC44 interconnect adapter, which provides terabyte capacity for backup archives.

11.11.3 Archive/Backup System for OpenVMS

Archive/Backup System for OpenVMS is a replacement for the Storage Library System (SLS). Archive/Backup provides lower system management costs, reduced equipment costs, and data security. It uses the POLYCENTER Media Library Manager (MLM) and the POLYCENTER Media Robot Manager (MRM) to move data to inexpensive tapes, and allows you to find and restore backed up and archived data easily. POLYCENTER MLM and MRM are the first Compaq products to provide OpenVMS users secure, highly reliable, fully automated access to tape and optical removable media through cost-effective media robots, such as the Odetics 5480 and the Tx8x7 family.

11.11.4 StorageTek 4400 ACS

You can attach the StorageTek 4400 ACS, a storage silo, either to an HSC using the TC44 adapter or directly to the XMI bus of a system using a KCM44 adapter. The StorageTek Silo automates access to a library of IBM® 3480-compatible cartridge tapes. The library can contain up to 16 library storage modules. Each module can hold up to 1.2 TB of data in 6000 tape cartridges. A robotic arm can find and mount a requested tape within 45 to 90 seconds. Data movement for tape applications, such as the OpenVMS Backup utility, is performed the same way as with a TA90 tape drive.

11.11.5 Tape-Drive Performance and Capacity

Table 11-8 describes the performance and capacity of various tape drives and the interconnects to which they attach.

Table 11-8 Tape-Drive Performance and Capacity
Interconnect Description
CI (STI tapes) The TA92 can transfer at a rate of 2.6 MB/s. Its magazine of IBM 3480-compatible cartridge tapes lets it back up 38 GB unattended. To achieve highest performance, connect the TA92 through a KDM70 controller or configure it with multiple CI adapters, so that the path to the tape drives is separate from the path to the disk drives.
DSSI The TF867 offers the best tape performance available on DSSI. Its magazine of half-inch cartridge tapes can hold up to 42 GB of data for unattended backup. Its transfer rate is 0.8 MB/s. The TF857 can read TK50 and TK70 tapes, and its magazine can hold up to 18 GB of data.
SCSI The TSZ07 allows SCSI configurations to access 9-track reel-to-reel tapes. It has a capacity of 140 MB per reel and a 750 KB/s transfer rate. The TZK10 offers a less expensive but slower-performing tape solution for SCSI configurations. It uses a quarter-inch cartridge that holds 525 MB and can transfer at a rate of 200 KB/s.
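The drive capacities and transfer rates above determine the minimum duration of an unattended backup. As a rough illustration (a sketch, not a sizing tool; it ignores tape-change, seek, and software overhead), the following Python helper estimates backup time from the figures in Table 11-8:

```python
def backup_hours(capacity_gb: float, rate_mb_s: float) -> float:
    """Minimum time to move capacity_gb of data at rate_mb_s,
    ignoring tape-change, seek, and software overhead."""
    return (capacity_gb * 1024) / rate_mb_s / 3600

# TA92: 38 GB magazine at 2.6 MB/s -> roughly 4.2 hours
ta92_hours = backup_hours(38, 2.6)

# TF867: 42 GB magazine at 0.8 MB/s -> roughly 15 hours
tf867_hours = backup_hours(42, 0.8)
```

Estimates like these show why transfer rate, not just capacity, matters when planning overnight backup windows.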


Appendix A
SCSI as an OpenVMS Cluster Interconnect

One of the benefits of OpenVMS Cluster systems is that multiple computers can simultaneously access storage devices connected to an OpenVMS Cluster storage interconnect. Together, these systems provide high performance and highly available access to storage.

This appendix describes how OpenVMS Cluster systems support the Small Computer System Interface (SCSI) as a storage interconnect. Multiple Alpha computers, also referred to as hosts or nodes, can simultaneously access SCSI disks over a SCSI interconnect. Such a configuration is called a SCSI multihost OpenVMS Cluster. A SCSI interconnect, also called a SCSI bus, is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components.

The discussions in this appendix assume that you already understand the concept of sharing storage resources in an OpenVMS Cluster environment. OpenVMS Cluster concepts and configuration requirements are also described in the following OpenVMS Cluster documentation:

This appendix includes two primary parts:

A.1 Conventions Used in This Appendix

Certain conventions are used throughout this appendix to identify the ANSI Standard and the elements in figures.

A.1.1 SCSI ANSI Standard

OpenVMS Cluster systems configured with the SCSI interconnect must use standard SCSI-2 or SCSI-3 components. The SCSI-2 components must be compliant with the architecture defined in the American National Standards Institute (ANSI) Standard SCSI-2, X3T9.2, Rev. 10L. The SCSI-3 components must be compliant with approved versions of the SCSI-3 Architecture and Command standards. For ease of discussion, this appendix uses the term SCSI to refer to both SCSI-2 and SCSI-3.

A.1.2 Symbols Used in Figures

Figure A-1 is a key to the symbols used in figures throughout this appendix.

Figure A-1 Key to Symbols Used in Figures


A.2 Accessing SCSI Storage

In OpenVMS Cluster configurations, multiple VAX and Alpha hosts can directly access SCSI devices in any of the following ways:

You can also access SCSI devices indirectly using the OpenVMS MSCP server.

The following sections describe single-host and multihost access to SCSI storage devices.

A.2.1 Single-Host SCSI Access in OpenVMS Cluster Systems

Prior to OpenVMS Version 6.2, OpenVMS Cluster systems provided support for SCSI storage devices connected to a single host using an embedded SCSI adapter, an optional external SCSI adapter, or a special-purpose RAID (redundant arrays of independent disks) controller. Only one host could be connected to a SCSI bus.

A.2.2 Multihost SCSI Access in OpenVMS Cluster Systems

Beginning with OpenVMS Alpha Version 6.2, multiple Alpha hosts in an OpenVMS Cluster system can be connected to a single SCSI bus to share access to SCSI storage devices directly. This capability allows you to build highly available servers using shared access to SCSI storage.

Figure A-2 shows an OpenVMS Cluster configuration that uses a SCSI interconnect for shared access to SCSI devices. Note that another interconnect (for example, a local area network [LAN]) is required for host-to-host OpenVMS Cluster (System Communications Architecture [SCA]) communications.

Figure A-2 Highly Available Servers for Shared SCSI Access


You can build a three-node OpenVMS Cluster system using the shared SCSI bus as the storage interconnect, or you can include shared SCSI buses within a larger OpenVMS Cluster configuration. A quorum disk can be used on the SCSI bus to improve the availability of two- or three-node configurations. Host-based RAID (including host-based shadowing) and the MSCP server are supported for shared SCSI storage devices.

A.3 Configuration Requirements and Hardware Support

This section lists the configuration requirements and supported hardware for multihost SCSI OpenVMS Cluster systems.

A.3.1 Configuration Requirements

Table A-1 shows the requirements and capabilities of the basic software and hardware components you can configure in a SCSI OpenVMS Cluster system.

Table A-1 Requirements for SCSI Multihost OpenVMS Cluster Configurations
Requirement Description
Software All Alpha hosts sharing access to storage on a SCSI interconnect must be running:
  • OpenVMS Alpha Version 6.2 or later
  • OpenVMS Cluster Software for OpenVMS Alpha Version 6.2 or later
Hardware Table A-2 lists the supported hardware components for SCSI OpenVMS Cluster systems. See also Section A.7.7 for information about other hardware devices that might be used in a SCSI OpenVMS Cluster configuration.
SCSI tape, floppy, and CD-ROM drives You cannot configure SCSI tape drives, floppy drives, or CD-ROM drives on multihost SCSI interconnects. If your configuration requires SCSI tape, floppy, or CD-ROM drives, configure them on single-host SCSI interconnects. Note that SCSI tape, floppy, or CD-ROM drives may be MSCP or TMSCP served to other hosts in the OpenVMS Cluster configuration.
Maximum hosts on a SCSI bus You can connect up to three hosts on a multihost SCSI bus. You can configure any mix of the hosts listed in Table A-2 on the same shared SCSI interconnect.
Maximum SCSI buses per host You can connect each host to a maximum of six multihost SCSI buses. The number of nonshared (single-host) SCSI buses that can be configured is limited only by the number of available slots on the host bus.
Host-to-host communication All members of the cluster must be connected by an interconnect that can be used for host-to-host (SCA) communication; for example, DSSI, CI, Ethernet, FDDI, or MEMORY CHANNEL.
Host-based RAID (including host-based shadowing) Supported in SCSI OpenVMS Cluster configurations.
SCSI device naming The name of each SCSI device must be unique throughout the OpenVMS Cluster system. When configuring devices on systems that include a multihost SCSI bus, adhere to the following requirements:
  • A host can have, at most, one adapter attached to a particular SCSI interconnect.
  • All host controllers attached to a given SCSI interconnect must have the same OpenVMS device name (for example, PKA0), unless port allocation classes are used (see OpenVMS Cluster Systems).
  • Each system attached to a SCSI interconnect must have the same nonzero node allocation class, regardless of whether port allocation classes are used (see OpenVMS Cluster Systems).
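The naming rules in Table A-1 lend themselves to a simple pre-configuration check: every controller on a shared bus must carry the same OpenVMS device name, and every attached system must use the same nonzero node allocation class. The following Python sketch is purely illustrative (it is a hypothetical helper, not an OpenVMS utility; the dictionary keys are invented for this example):

```python
def check_scsi_naming(hosts):
    """Validate naming rules for one shared SCSI interconnect.

    hosts: list of dicts, one per attached system, with invented keys
    'controller' (OpenVMS device name, e.g. 'PKA0') and
    'alloc_class' (node allocation class, an integer).
    Returns a list of rule violations (empty if the bus is consistent).
    """
    controllers = {h["controller"] for h in hosts}
    classes = {h["alloc_class"] for h in hosts}
    errors = []
    if len(controllers) > 1:
        # Rule: all controllers on the bus need the same device name
        # (unless port allocation classes are used).
        errors.append("controller names differ: " + ", ".join(sorted(controllers)))
    if len(classes) > 1 or 0 in classes:
        # Rule: node allocation classes must match and be nonzero.
        errors.append("node allocation classes must match and be nonzero")
    return errors
```

On a real system you would gather the equivalent information with DCL commands such as SHOW DEVICE and by examining each node's allocation class parameters.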

A.3.2 Hardware Support

Table A-2 shows the supported hardware components for SCSI OpenVMS Cluster systems; it also lists the minimum required revision for these hardware components. That is, for any component, you must use either the version listed in Table A-2 or a subsequent version. For host support information, refer to the DIGITAL Systems and Options Catalog on the World Wide Web at the following address:


http://www.digital.com:80/info/soc/ 

You can also access the DIGITAL Systems and Options Catalog from the OpenVMS web site by selecting Publications and then selecting this catalog.

For disk support information, refer to StorageWorks documentation. You can access the StorageWorks web site at the following address:


http://www.storage.digital.com/ 

You can also access the StorageWorks web site from the OpenVMS web site by selecting Hardware and then selecting Storage.

The SCSI interconnect configuration and all devices on the SCSI interconnect must meet the requirements defined in the ANSI Standard SCSI-2 document, or the SCSI-3 Architecture and Command standards, and the requirements described in this appendix. See also Section A.7.7 for information about other hardware devices that might be used in a SCSI OpenVMS Cluster configuration.

Table A-2 Supported Hardware for SCSI OpenVMS Cluster Systems
Component Supported Item Minimum Firmware (FW) Version1
Controller HSZ40-B 2.5 (FW)
  HSZ50  
  HSZ70  
  HSZ80 8.3 (FW)
Adapters 2 Embedded (NCR-810 based)  
  KZPAA (PCI to SCSI)  
  KZPSA (PCI to SCSI) A11 (FW)
  KZPBA-CB (PCI to SCSI) 5.53 (FW)
  KZTSA (TURBOchannel to SCSI) A10-1 (FW)


1Unless stated in this column, the minimum firmware version for a device is the same as required for the operating system version you are running. There are no additional firmware requirements for a SCSI multihost OpenVMS Cluster configuration.
2You can configure other types of SCSI adapters in a system for single-host access to local storage.

A.4 SCSI Interconnect Concepts

The SCSI standard defines a set of rules governing the interactions between initiators (typically, host systems) and SCSI targets (typically, peripheral devices). This standard allows the host to communicate with SCSI devices (such as disk drives, tape drives, printers, and optical media devices) without having to manage the device-specific characteristics.

The following sections describe the SCSI standard and the default modes of operation. The discussions also describe some optional mechanisms you can implement to enhance the default SCSI capabilities in areas such as capacity, performance, availability, and distance.

A.4.1 Number of Devices

The SCSI bus is an I/O interconnect that can support up to 16 devices. A narrow SCSI bus supports up to 8 devices; a wide SCSI bus supports up to 16 devices. The devices can include host adapters, peripheral controllers, and discrete peripheral devices such as disk or tape drives. Each device is addressed by a unique ID number from 0 through 15. You assign device IDs by entering console commands, by setting jumpers or switches, or by selecting a slot in a StorageWorks enclosure.

Note

To connect 16 devices to a wide SCSI bus, the devices themselves must also support wide addressing. Narrow devices cannot respond to IDs above 7. At present, the HSZ40 does not support addresses above 7. Host adapters that support wide addressing are the KZTSA, the KZPSA, and the QLogic wide adapters (KZPBA, KZPDA, ITIOP, P1SE, and P2SE). Only the KZPBA-CB is supported in a multihost SCSI OpenVMS Cluster configuration.

When configuring more devices than the previous limit of eight, make sure that you observe the bus length requirements (see Table A-4).

To configure wide IDs on a BA356 box, refer to the StorageWorks Solutions BA356-SB 16-Bit Shelf User's Guide (order number EK-BA356-UG). Do not configure a narrow device in a BA356 box whose slot addresses start at 8.

To increase the number of devices on the SCSI interconnect, some devices implement a second level of device addressing using logical unit numbers (LUNs). For each device ID, up to eight LUNs (0-7) can be used to address a single SCSI device as multiple units.
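The two-level ID/LUN scheme fixes the size of the address space on a single bus. As a quick illustration (the helper name and defaults are invented for this example), the total number of addressable units is simply the ID count times the LUNs per ID:

```python
def addressable_units(wide: bool, luns_per_id: int = 8) -> int:
    """Total (ID, LUN) pairs on one SCSI bus.

    A wide bus has 16 device IDs, a narrow bus 8; each ID can expose
    up to 8 LUNs.  Host adapters also consume IDs, so the number of
    usable storage units is somewhat lower in practice.
    """
    ids = 16 if wide else 8
    return ids * luns_per_id

wide_units = addressable_units(wide=True)     # 16 IDs x 8 LUNs = 128
narrow_units = addressable_units(wide=False)  # 8 IDs x 8 LUNs = 64
```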

Note

When connecting devices to a SCSI interconnect, each device on the interconnect must have a unique device ID. You may need to change a device's default device ID to make it unique. For information about setting a single device's ID, refer to the owner's guide for the device.

A.4.2 Performance

The default mode of operation for all SCSI devices is 8-bit asynchronous mode. This mode, sometimes referred to as narrow mode, transfers 8 bits of data from one device to another. Each data transfer is acknowledged by the device receiving the data. Because the performance of the default mode is limited, the SCSI standard defines optional mechanisms to enhance performance. Two optional methods for achieving higher performance are:

  • Synchronous data transfer (fast and ultra modes), which increases the rate at which data moves across the bus
  • Wide (16-bit) data transfer, which doubles the amount of data moved in each transfer cycle

Because all communications on a SCSI interconnect occur between two devices at a time, each pair of devices must negotiate to determine which of the optional features they will use. Most, if not all, SCSI devices implement one or more of these options.

Table A-3 shows data rates when using 8- and 16-bit transfers with standard, fast, and ultra synchronous modes.

Table A-3 Maximum Data Transfer Rates (MB/s)
Mode Narrow (8-bit) Wide (16-bit)
Standard 5 10
Fast 10 20
Ultra 20 40
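The rates in Table A-3 follow a simple pattern: the peak rate is the transfer clock (5, 10, or 20 megatransfers per second for standard, fast, and ultra modes) multiplied by the bus width in bytes. A minimal Python sketch of that relationship (the function and dictionary names are invented for this illustration):

```python
# Transfer clock in megatransfers per second for each synchronous mode.
CLOCK_MT_S = {"Standard": 5, "Fast": 10, "Ultra": 20}

def transfer_rate(mode: str, wide: bool) -> int:
    """Peak data rate in MB/s: transfer clock x bus width in bytes
    (1 byte for narrow 8-bit, 2 bytes for wide 16-bit)."""
    return CLOCK_MT_S[mode] * (2 if wide else 1)

# Reproduces Table A-3, e.g. fast-wide = 20 MB/s, ultra-wide = 40 MB/s.
```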

A.4.3 Distance

The maximum length of the SCSI interconnect is determined by the signaling method used in the configuration and by the data transfer rate. There are two types of electrical signaling for SCSI interconnects:

  • Single-ended signaling
  • Differential signaling

Table A-4 summarizes how the type of signaling method affects SCSI interconnect distances.

Table A-4 Maximum SCSI Interconnect Distances
Signaling Technique Rate of Data Transfer Maximum Cable Length
Single ended Standard 6 m 1
Single ended Fast 3 m
Single ended Ultra 20.5 m 2
Differential Standard or fast 25 m
Differential Ultra 25.5 m 2


1The SCSI standard specifies a maximum length of 6 m for this type of interconnect. However, where possible, it is advisable to limit the cable length to 4 m to ensure the highest level of data integrity.
2For more information, refer to the StorageWorks UltraSCSI Configuration Guidelines, order number EK-ULTRA-CG.

The DWZZA, DWZZB, and DWZZC converters are single-ended to differential converters that you can use to connect single-ended and differential SCSI interconnect segments. The DWZZA is for narrow (8-bit) SCSI buses, the DWZZB is for wide (16-bit) SCSI buses, and the DWZZC is for wide Ultra SCSI buses.

The differential segments are useful for the following:

Because the DWZZA, the DWZZB, and the DWZZC are strictly signal converters, you cannot assign a SCSI device ID to them. You can configure a maximum of two DWZZA or two DWZZB converters in the path between any two SCSI devices. Refer to the StorageWorks UltraSCSI Configuration Guidelines for information on configuring the DWZZC.




Copyright © Compaq Computer Corporation 1998. All rights reserved.
