
Guidelines for OpenVMS Cluster Configurations



11.6 State Transition Strategies

OpenVMS Cluster state transitions occur when a system joins or leaves an OpenVMS Cluster system and when the OpenVMS Cluster recognizes a quorum-disk state change. The connection manager handles these events to ensure the preservation of data integrity throughout the OpenVMS Cluster.

State transitions should be a concern only if systems are joining or leaving an OpenVMS Cluster system frequently enough to cause disruption.

A state transition's duration and its effect on users and applications are determined by the reason for the transition, the configuration, and the applications in use. By managing transitions effectively, system managers can minimize both the transition time and its side effects on users and applications.

11.6.1 Dealing with State Transitions

The following guidelines describe effective ways of dealing with transitions so that you can minimize the actual transition time as well as the side effects after the transition.

Reference: For more detailed information about OpenVMS Cluster transitions and their phases, system parameters, and quorum management, see OpenVMS Cluster Systems.
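As a concrete illustration, you can observe cluster membership and quorum from DCL while a transition is under way, and adjust the expected votes after a node is permanently removed. The commands below are standard DCL; the vote count shown is only an example.

  $ SHOW CLUSTER                        ! Snapshot of current members, votes, and quorum
  $ SHOW CLUSTER/CONTINUOUS             ! Continuously updated display during a transition
  $ SET CLUSTER/EXPECTED_VOTES=3        ! Adjust expected votes after a permanent configuration change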

11.7 Migration and Warranted Support for Multiple Versions

Compaq provides two levels of support for mixed-version and mixed-architecture OpenVMS Cluster systems: warranted support and migration support. Warranted support means that Compaq has fully qualified the two versions coexisting in an OpenVMS Cluster and will answer all problems identified by customers using these configurations.

Migration support is a superset of the Rolling Upgrade support provided in earlier releases of OpenVMS and is available for mixes that are not warranted. Migration support means that Compaq has qualified the versions for use together in configurations that are migrating in a staged fashion to a newer version of OpenVMS VAX or to OpenVMS Alpha. Problem reports submitted against these configurations will be answered by Compaq. However, in exceptional cases, Compaq may request that you move to a warranted configuration as part of answering the problem.

Migration support helps customers move to warranted OpenVMS Cluster version mixes with minimal impact on their cluster environments. Table 11-6 shows the level of support provided for all possible version pairings.

Table 11-6 OpenVMS Cluster Warranted and Migration Support
  Alpha V6.2--xxx Alpha V7.1--xxx Alpha V7.2
VAX V6.2--xxx Warranted Migration Migration
VAX V7.1--xxx Migration Warranted Migration
VAX V7.2 Migration Migration Warranted

For OpenVMS Version 6.2 nodes to participate in a cluster with systems running either Version 7.1 or Version 7.2, the cluster compatibility kit must be installed on each Version 6.2 node. For more information about this kit, see the OpenVMS Version 7.2 Release Notes.

Compaq does not support the use of Version 7.2 with Version 6.1 (or earlier versions) in an OpenVMS Cluster. In many cases, mixing Version 7.2 with versions prior to Version 6.2 will operate successfully, but Compaq cannot commit to resolving problems experienced with such configurations.

Note

Nodes running OpenVMS VAX Version 5.5--2 or earlier, or OpenVMS Alpha Version 1.0 or 1.5, cannot participate in a cluster with one or more OpenVMS Version 7.2 nodes. For more information, see the OpenVMS Version 7.2 Release Notes.

11.8 Alpha and VAX Systems in the Same OpenVMS Cluster

OpenVMS Alpha and OpenVMS VAX systems can work together in the same OpenVMS Cluster to provide both flexibility and migration capability. You can add Alpha processing power to an existing VAXcluster, enabling you to utilize applications that are system specific or hardware specific.

Table 11-6 depicts the OpenVMS version pairs for which Compaq provides migration and warranted support.

11.8.1 OpenVMS Cluster Satellite Booting Across Architectures

OpenVMS Alpha Version 7.1 and OpenVMS VAX Version 7.1 enable VAX boot nodes to provide boot service to Alpha satellites and Alpha boot nodes to provide boot service to VAX satellites. This support, called cross-architecture booting, increases configuration flexibility and provides higher availability of boot servers for satellites.
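Satellites of either architecture are normally added with the cluster configuration command procedure on a suitable boot server. The following invocation is only a sketch; the menu-driven procedure prompts for the satellite's node name, hardware address, and system root.

  $ @SYS$MANAGER:CLUSTER_CONFIG.COM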

Two configuration scenarios make cross-architecture booting desirable:

11.8.2 Restrictions

You cannot perform OpenVMS operating system and layered product installations and upgrades across architectures. For example, you must install and upgrade OpenVMS Alpha software using an Alpha system. When you configure OpenVMS Cluster systems that take advantage of cross-architecture booting, ensure that at least one system from each architecture is configured with a disk that can be used for installations and upgrades.

System disks can contain only a single version of the OpenVMS operating system and are architecture specific. For example, OpenVMS VAX Version 7.1 cannot coexist on a system disk with OpenVMS Alpha Version 7.1.

11.9 Determining Backup and Storage Management Strategies

In any system, hardware and electrical failures as well as human errors occur. All important data must be backed up to limit the effects of these errors. You can do this in a number of ways, depending on the time and resources available.

11.9.1 Steps for Determining a Backup Strategy

Follow these steps to determine a backup strategy:
Step Description
1 Decide how much lost work is acceptable in the event of a failure. This determines how often the data needs to be backed up.
2 Decide how long the data can remain unavailable while it is being backed up. This determines the methods of backup.
3 Establish a backup schedule, including the frequency and times of the day and week that backups will occur.

Determine:

  • How much data will be backed up daily, weekly, and monthly?
  • Will you conduct full or incremental backups? How often for each?
4 Make sure that sufficient backup media are available. Determine both the initial amount of backup media needed and its growth rate.
5 Determine if your backup strategy requires backup media to be stored off site.
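For example, the schedule decided in step 3 can be implemented as a self-resubmitting batch job. The queue, device names, file names, and times below are hypothetical.

  $ SUBMIT/QUEUE=SYS$BATCH/AFTER="TOMORROW+01:00" NIGHTLY_BACKUP.COM

where NIGHTLY_BACKUP.COM might contain:

  $ ! NIGHTLY_BACKUP.COM -- incremental backup of files changed since the last backup
  $ MOUNT/FOREIGN MKA500:
  $ BACKUP/RECORD/SINCE=BACKUP/LOG DKA100:[000000...] MKA500:NIGHTLY.BCK/SAVE_SET
  $ DISMOUNT MKA500:
  $ ! Resubmit this procedure for the next night
  $ SUBMIT/QUEUE=SYS$BATCH/AFTER="TOMORROW+01:00" NIGHTLY_BACKUP.COM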

11.10 Disk Backup

Table 11-7 describes ways to provide a copy of data for backup.

Table 11-7 Backup Methods for Data
Type of Data Backup Method
Database is continually changing; transactions cannot be lost. Use a combination of database backup (at a time when it is known to be static) and journaling transactions to the database.

Reference: See the following manuals for additional information:

  • RMS Journaling for OpenVMS Manual
  • Guide to OpenVMS File Applications
  • DEC Rdb Guide to Database Design and Definition
  • DEC DBMS Database Design Guide
  • DEC DBMS Database Maintenance and Performance Guide
Data must be accessible at all times, including nights and weekends. Use Volume Shadowing for OpenVMS software to accomplish rapid disk backup. Remove a member from a three-member shadow set by dismounting the shadow set, remounting the shadow set with two members, and copying the third disk to magnetic tape. After this, the third disk can be included again in the shadow set.
Data can be unavailable for an extended period of time for backup. Use the OpenVMS Backup utility (BACKUP) to make an image backup of a volume or a file-by-file copy of specified sets of files. BACKUP can make a copy to another disk (or set of disks) or to magnetic tape. Restoring from an image copy requires that the entire image be written to a disk. When you restore specific files, they are copied from the restored disk to the intended destination.

On the other hand, an image copy is faster than a file-by-file copy, which copies files one at a time. With a file-by-file backup, restoring a single file from the backup copy is easy, and a file-by-file restore greatly reduces fragmentation of the restored disk.

Data is static. Archiving copies of the data on magnetic tape and excluding the online files from other backup procedures may be sufficient. Examples are program sources, documentation files, and distribution kits.
Scratch files and intermediate files. You can choose not to provide any backup for these files.
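As an illustration of the shadow-set and image-backup methods in Table 11-7, the following DCL sketch removes one member from a three-member shadow set, copies it to tape, and then returns it to the set. The virtual unit DSA1:, the member devices $1$DKA100:, $1$DKA200:, and $1$DKA300:, the volume label DATA_DISK, and the tape drive MKA500: are all hypothetical; adapt them to your configuration.

  $ DISMOUNT/CLUSTER DSA1:                                         ! Dismount the three-member shadow set
  $ MOUNT/SYSTEM DSA1:/SHADOW=($1$DKA100:,$1$DKA200:) DATA_DISK    ! Remount with two members
  $ MOUNT/OVERRIDE=SHADOW_MEMBERSHIP $1$DKA300: DATA_DISK          ! Mount the removed member
  $ MOUNT/FOREIGN MKA500:                                          ! Mount the backup tape
  $ BACKUP/IMAGE/LOG $1$DKA300: MKA500:DATA_DISK.BCK/SAVE_SET/REWIND
  $ DISMOUNT $1$DKA300:
  $ DISMOUNT MKA500:
  $ MOUNT/SYSTEM DSA1:/SHADOW=($1$DKA300:) DATA_DISK               ! Return the member to the shadow set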

11.11 Tape Backup

Backup to tape provides the least expensive storage medium. Tapes are the most common medium for offline storage and offer a range of capacities, costs, and shelf lives. Tape storage is removable and is generally kept off line.

11.11.1 For More Information

Backup procedures are described in detail in the following manuals:

11.11.2 Benefits of Unattended Backup

With current tape-drive technology, you can initiate a large backup operation that completes without operator intervention (that is, without manual tape changes). Such unattended backups can save significant time and reduce staffing costs. Cartridge tape loaders with tape magazines, such as the Tx8x7 or the TA91, allow unattended backups of nearly 42 GB of online storage. Backups can also be performed on robot-accessible media, such as the StorageTek® 4400 ACS through the TC44 interconnect adapter, which provides terabyte capacity for backup archives.
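As a minimal sketch of an unattended, multi-volume image backup, the commands below back up one disk to a sequence of magazine tapes; the disk DKA100:, tape drive MKA500:, save-set name, and volume labels are hypothetical, and a loader operating in sequential mode is assumed to feed the next cartridge when one fills.

  $ MOUNT/FOREIGN MKA500:
  $ BACKUP/IMAGE/LOG DKA100: MKA500:FULL.BCK/SAVE_SET/REWIND/LABEL=(TAPE01,TAPE02,TAPE03)
  $ DISMOUNT MKA500: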

11.11.3 Archive/Backup System for OpenVMS

Archive/Backup System for OpenVMS is a replacement for the Storage Library System (SLS). Archive/Backup provides lower system management costs, reduced equipment costs, and data security. It uses the POLYCENTER Media Library Manager (MLM) and the POLYCENTER Media Robot Manager (MRM) to move data to inexpensive tapes, and allows you to find and restore backed up and archived data easily. POLYCENTER MLM and MRM are the first Compaq products to provide OpenVMS users secure, highly reliable, fully automated access to tape and optical removable media through cost-effective media robots, such as the Odetics 5480 and the Tx8x7 family.

11.11.4 StorageTek 4400 ACS

You can attach the StorageTek 4400 ACS, a storage silo, to either an HSC using the TC44 adapter or directly to the XMI bus of a system using a KCM44 adapter. The StorageTek Silo automates access to a library of IBM® 3480 compatible cartridge tapes. The library can contain up to 16 library storage modules. Each module can hold up to 1.2 TB of data in 6000 tape cartridges. A robotic arm can find and mount a requested tape within 45 to 90 seconds. Data movement for tape applications, such as the OpenVMS Backup utility, is performed the same way as with a TA90 tape drive.

11.11.5 Tape-Drive Performance and Capacity

Table 11-8 describes the performance and capacity of various tape drives and the interconnects to which they attach.

Table 11-8 Tape-Drive Performance and Capacity
Interconnect Description
CI (STI tapes) The TA92 can transfer at a rate of 2.6 MB/s. Its magazine of IBM 3480 compatible cartridge tapes lets it back up 38 GB unattended. To achieve highest performance, connect the TA92 through a KDM70 controller or configure it with multiple CI adapters, so that the path to the tape drives is separate from the path to the disk drives.
DSSI The TF867 offers the best tape performance. Its magazine of half-inch cartridge tapes can hold up to 42 GB of data for unattended backup. Its transfer rate is 0.8 MB/s. The TF857 can read TK50 and TK70 tapes, and its magazine can hold up to 18 GB of data.
SCSI The TSZ07 allows SCSI configurations to access 9-track reel-to-reel tapes. It has a capacity of 140 MB per reel and a 750 KB/s transfer rate. The TZK10 offers a less expensive but slower-performing tape solution for SCSI configurations. It uses a quarter-inch cartridge that holds 525 MB and can transfer at a rate of 200 KB/s.
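As a rough capacity-planning example based on the figures in Table 11-8, and assuming the drive sustains its rated speed with no tape-change or startup overhead, a full 38-GB TA92 magazine takes approximately 38,000 MB / 2.6 MB/s, or about 14,600 seconds (roughly 4 hours), to back up; a 42-GB TF867 magazine at 0.8 MB/s takes roughly 15 hours.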


Appendix A
SCSI as an OpenVMS Cluster Interconnect

One of the benefits of OpenVMS Cluster systems is that multiple computers can simultaneously access storage devices connected to an OpenVMS Cluster storage interconnect. Together, these systems provide high-performance, highly available access to storage.

This appendix describes how OpenVMS Cluster systems support the Small Computer Systems Interface (SCSI) as a storage interconnect. Multiple Alpha computers, also referred to as hosts or nodes, can simultaneously access SCSI disks over a SCSI interconnect. Such a configuration is called a SCSI multihost OpenVMS Cluster. A SCSI interconnect, also called a SCSI bus, is an industry-standard interconnect that supports one or more computers, peripheral devices, and interconnecting components.

The discussions in this appendix assume that you already understand the concept of sharing storage resources in an OpenVMS Cluster environment. OpenVMS Cluster concepts and configuration requirements are also described in the following OpenVMS Cluster documentation:

This appendix includes two primary parts:

A.1 Conventions Used in This Appendix

Certain conventions are used throughout this appendix to identify the ANSI Standard and for elements in figures.

A.1.1 SCSI ANSI Standard

OpenVMS Cluster systems configured with the SCSI interconnect must use standard SCSI--2 or SCSI--3 components. The SCSI--2 components must be compliant with the architecture defined in the American National Standards Institute (ANSI) Standard SCSI--2, X3T9.2, Rev. 10L. The SCSI--3 components must be compliant with approved versions of the SCSI--3 Architecture and Command standards. For ease of discussion, this appendix uses the term SCSI to refer to both SCSI--2 and SCSI--3.

A.1.2 Symbols Used in Figures

Figure A-1 is a key to the symbols used in figures throughout this appendix.

Figure A-1 Key to Symbols Used in Figures


A.2 Accessing SCSI Storage

In OpenVMS Cluster configurations, multiple VAX and Alpha hosts can directly access SCSI devices in any of the following ways:

You can also access SCSI devices indirectly using the OpenVMS MSCP server.
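For instance, indirect access through the MSCP server is controlled by the MSCP_LOAD and MSCP_SERVE_ALL system parameters. A minimal sketch of the MODPARAMS.DAT entries follows (apply them with AUTOGEN); the values shown are examples only.

  MSCP_LOAD = 1          ! Load the MSCP server at system startup
  MSCP_SERVE_ALL = 1     ! Serve all available disks to other cluster members

  $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS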

The following sections describe single-host and multihost access to SCSI storage devices.

A.2.1 Single-Host SCSI Access in OpenVMS Cluster Systems

Prior to OpenVMS Version 6.2, OpenVMS Cluster systems provided support for SCSI storage devices connected to a single host using an embedded SCSI adapter, an optional external SCSI adapter, or a special-purpose RAID (redundant arrays of independent disks) controller. Only one host could be connected to a SCSI bus.

A.2.2 Multihost SCSI Access in OpenVMS Cluster Systems

Beginning with OpenVMS Alpha Version 6.2, multiple Alpha hosts in an OpenVMS Cluster system can be connected to a single SCSI bus to share access to SCSI storage devices directly. This capability allows you to build highly available servers using shared access to SCSI storage.

Figure A-2 shows an OpenVMS Cluster configuration that uses a SCSI interconnect for shared access to SCSI devices. Note that another interconnect (for example, a local area network [LAN]) is required for host-to-host OpenVMS Cluster (System Communications Architecture [SCA]) communications.

Figure A-2 Highly Available Servers for Shared SCSI Access


You can build a three-node OpenVMS Cluster system using the shared SCSI bus as the storage interconnect, or you can include shared SCSI buses within a larger OpenVMS Cluster configuration. A quorum disk can be used on the SCSI bus to improve the availability of two- or three-node configurations. Host-based RAID (including host-based shadowing) and the MSCP server are supported for shared SCSI storage devices.
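For example, a quorum disk on the shared SCSI bus is defined through system parameters on each attached host. The following SYSGEN sketch is illustrative only; the device name and vote counts are hypothetical, such changes are normally made through MODPARAMS.DAT and AUTOGEN, and they take effect at the next reboot.

  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> USE CURRENT
  SYSGEN> SET DISK_QUORUM "$1$DKA500"    ! Quorum disk on the shared SCSI bus
  SYSGEN> SET QDSKVOTES 1                ! Votes contributed by the quorum disk
  SYSGEN> SET EXPECTED_VOTES 3           ! Two hosts with one vote each plus the quorum disk
  SYSGEN> WRITE CURRENT
  SYSGEN> EXIT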

A.3 Configuration Requirements and Hardware Support

This section lists the configuration requirements and supported hardware for multihost SCSI OpenVMS Cluster systems.

A.3.1 Configuration Requirements

Table A-1 shows the requirements and capabilities of the basic software and hardware components you can configure in a SCSI OpenVMS Cluster system.

Table A-1 Requirements for SCSI Multihost OpenVMS Cluster Configurations
Requirement Description
Software All Alpha hosts sharing access to storage on a SCSI interconnect must be running:
  • OpenVMS Alpha Version 6.2 or later
  • OpenVMS Cluster Software for OpenVMS Alpha Version 6.2 or later
Hardware Table A-2 lists the supported hardware components for SCSI OpenVMS Cluster systems. See also Section A.7.7 for information about other hardware devices that might be used in a SCSI OpenVMS Cluster configuration.
SCSI tape, floppy, and CD-ROM drives You cannot configure SCSI tape drives, floppy drives, or CD-ROM drives on multihost SCSI interconnects. If your configuration requires SCSI tape, floppy, or CD-ROM drives, configure them on single-host SCSI interconnects. Note that SCSI tape, floppy, and CD-ROM drives may be MSCP or TMSCP served to other hosts in the OpenVMS Cluster configuration.
Maximum hosts on a SCSI bus You can connect up to three hosts on a multihost SCSI bus. You can configure any mix of the hosts listed in Table A-2 on the same shared SCSI interconnect.
Maximum SCSI buses per host You can connect each host to a maximum of six multihost SCSI buses. The number of nonshared (single-host) SCSI buses that can be configured is limited only by the number of available slots on the host bus.
Host-to-host communication All members of the cluster must be connected by an interconnect that can be used for host-to-host (SCA) communication; for example, DSSI, CI, Ethernet, FDDI, or MEMORY CHANNEL.
Host-based RAID (including host-based shadowing) Supported in SCSI OpenVMS Cluster configurations.
SCSI device naming The name of each SCSI device must be unique throughout the OpenVMS Cluster system. When configuring devices on systems that include a multihost SCSI bus, adhere to the following requirements:
  • A host can have, at most, one adapter attached to a particular SCSI interconnect.
  • All host controllers attached to a given SCSI interconnect must have the same OpenVMS device name (for example, PKA0), unless port allocation classes are used (see OpenVMS Cluster Systems).
  • Each system attached to a SCSI interconnect must have the same nonzero node allocation class, regardless of whether port allocation classes are used (see OpenVMS Cluster Systems).
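A minimal sketch of setting the node allocation class with SYSGEN follows. The class value 1 is only an example; the same nonzero value must be set on every host attached to the shared SCSI interconnect, and the change takes effect at the next reboot. With ALLOCLASS set to 1 on each host, a disk at SCSI ID 3 on adapter PKA0 is named $1$DKA300 on all of them.

  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> USE CURRENT
  SYSGEN> SET ALLOCLASS 1
  SYSGEN> WRITE CURRENT
  SYSGEN> EXIT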

