Document revision date: 19 July 1999
OpenVMS Cluster Systems

Order Number: AA-PV5WD-TK

January 1999

This manual describes procedures and guidelines for configuring and managing OpenVMS Cluster systems. Except where noted, the procedures and guidelines apply equally to VAX and Alpha computers. This manual also includes information for providing high availability, building-block growth, and unified system management across coupled systems.

Revision/Update Information: This manual supersedes VMScluster Systems for OpenVMS, OpenVMS Alpha Version 7.1 and OpenVMS VAX Version 7.1.

Software Version: OpenVMS Alpha Version 7.2; OpenVMS VAX Version 7.2

Compaq Computer Corporation
Houston, Texas

January 1999

Compaq Computer Corporation makes no representations that the use of its products in the manner described in this publication will not infringe on existing or future patent rights, nor do the descriptions contained in this publication imply the granting of licenses to make, use, or sell equipment or software in accordance with the description.

Possession, use, or copying of the software described in this publication is authorized only pursuant to a valid written license from Compaq or an authorized sublicensor.

Compaq conducts its business in a manner that conserves the environment and protects the safety and health of its employees, customers, and the community.

© Compaq Computer Corporation 1999. All rights reserved.

The following are trademarks of Compaq Computer Corporation: Alpha, BI, CI, DEC, DECdtm, DECmcc, DECnet, DECram, DELNI, DELUA, DEMPR, DEQNA, DESTA, DIGITAL, DIGITAL UNIX, HSC, HSD, HSJ, HSZ, KDA, LAN Bridge 200, LAT, MASSBUS, MicroVAX, MicroVAX II, MSCP, OpenVMS, POLYCENTER, Q-bus, RA, RK, RL, RZ, SDI, STI, StorageWorks, TA, ThinWire, TK, TMSCP, TU, UDA, UNIBUS, VAX, VAX DOCUMENT, VAX 11/750, VAX 11/780, VAX 6000, VAX 8200, VAX 8250, VAX 8300, VAX 8350, VAX 8600, VAX 9000, VAX RMS, VAXcluster, VAXft, VAXsimPLUS, VAXstation, VAXstation 4000 VLC, VMS, VMScluster, VT, VT200, VT300, VT320, VT330, VT340, VT420, XMI, XUI, and the Compaq logo.

The following are third-party trademarks:

Adobe, Adobe Illustrator, Display PostScript, and PostScript are registered trademarks of Adobe Systems Incorporated.

Internet is a registered trademark of Internet, Inc.

Hewlett-Packard, HP, and HP 4927A LAN Protocol Analyzer are registered trademarks of the Hewlett-Packard Company.

Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation.

NT is a trademark of Microsoft Corporation.

All other trademarks and registered trademarks are the property of their respective holders.


The OpenVMS documentation set is available on CD-ROM.

This document was prepared using VAX DOCUMENT, Version 3.2n.

Preface



OpenVMS Cluster Systems describes system management for OpenVMS Cluster systems. Although the OpenVMS Cluster software for VAX and Alpha computers is separately purchased, licensed, and installed, the difference between the two architectures lies mainly in the hardware used. Essentially, system management for VAX and Alpha computers in an OpenVMS Cluster is identical. Exceptions are pointed out.

Who Should Use This Manual

This document is intended for anyone responsible for setting up and managing OpenVMS Cluster systems. To use the document as a guide to cluster management, you must have a thorough understanding of system management concepts and procedures, as described in the OpenVMS System Manager's Manual.

How This Manual Is Organized

OpenVMS Cluster Systems contains ten chapters and seven appendixes.

Chapter 1 introduces OpenVMS Cluster systems.

Chapter 2 presents the software concepts integral to maintaining OpenVMS Cluster membership and integrity.

Chapter 3 describes various OpenVMS Cluster configurations and the ways they are interconnected.

Chapter 4 explains how to set up an OpenVMS Cluster system and coordinate system files.

Chapter 5 explains how to set up an environment in which resources can be shared across nodes in the OpenVMS Cluster system.

Chapter 6 discusses disk and tape management concepts and procedures and how to use Volume Shadowing for OpenVMS to prevent data unavailability.

Chapter 7 discusses queue management concepts and procedures.

Chapter 8 explains how to build an OpenVMS Cluster system once the necessary preparations are made, and how to reconfigure and maintain the cluster.

Chapter 9 provides guidelines for configuring and building large OpenVMS Cluster systems, booting satellite nodes, and cross-architecture booting.

Chapter 10 describes ongoing OpenVMS Cluster system maintenance.

Appendix A lists and defines OpenVMS Cluster system parameters.

Appendix B provides guidelines for building a cluster common user authorization file.

Appendix C provides troubleshooting information.

Appendix D presents three sample programs for LAN control and explains how to use the Local Area OpenVMS Cluster Network Failure Analysis Program.

Appendix E describes the subroutine package used with local area OpenVMS Cluster sample programs.

Appendix F provides techniques for troubleshooting network problems related to the NISCA transport protocol.

Appendix G describes how the interactions of workload distribution and network topology affect OpenVMS Cluster system performance, and discusses transmit channel selection by PEDRIVER.

Associated Documents

This document is not a one-volume reference manual. The utilities and commands are described in detail in the OpenVMS System Manager's Manual, the OpenVMS System Management Utilities Reference Manual, and the OpenVMS DCL Dictionary.

For additional information on the topics covered in this manual, refer to the following documents:

For additional information on the Open Systems Software Group (OSSG) products and services, access the following Digital OpenVMS World Wide Web address: 


This manual has been archived but is available in PostScript and DECW$BOOK (Bookreader) formats on the OpenVMS Documentation CD-ROM. A printed book can be ordered through DECdirect (800-354-4825).

Reader's Comments

Compaq welcomes your comments on this manual.

Print or edit the online form SYS$HELP:OPENVMSDOC_COMMENTS.TXT and send us your comments by:
Fax:  603-884-0120, Attention: OSSG Documentation, ZKO3-4/U08
Mail: Compaq Computer Corporation
      OSSG Documentation Group, ZKO3-4/U08
      110 Spit Brook Rd.
      Nashua, NH 03062-2698

How To Order Additional Documentation

Use the following World Wide Web address to order additional documentation:

If you need help deciding which documentation best meets your needs, call 800-DIGITAL (800-344-4825).

Conventions


VMScluster systems are now referred to as OpenVMS Cluster systems. Unless otherwise specified, references in this document to OpenVMS Clusters or clusters are synonymous with VMSclusters.

Note: Discussions that refer to OpenVMS Cluster environments apply to both VAXcluster systems that include only VAX nodes and OpenVMS Cluster systems that include at least one Alpha node, unless indicated otherwise. When the behavior differs significantly between a VAXcluster system and an OpenVMS Cluster system, that behavior is described in text and is marked with the Alpha or VAX icon, as appropriate.

The following conventions are also used in this manual:
Ctrl/x A sequence such as Ctrl/x indicates that you must hold down the key labeled Ctrl while you press another key or a pointing device button.
[Return] In examples, a key name enclosed in a box indicates that you press a key on the keyboard. (In text, a key name is not enclosed in a box.)
... Horizontal ellipsis points in examples indicate one of the following possibilities:
  • Additional optional arguments in a statement have been omitted.
  • The preceding item or items can be repeated one or more times.
  • Additional parameters, values, or other information can be entered.
Vertical ellipsis points indicate the omission of items from a code example or command format; the items are omitted because they are not important to the topic being discussed.
( ) In command format descriptions, parentheses indicate that, if you choose more than one option, you must enclose the choices in parentheses.
[ ] In command format descriptions, brackets indicate optional elements. You can choose one, none, or all of the options. (Brackets are not optional, however, in the syntax of a directory name in an OpenVMS file specification or in the syntax of a substring specification in an assignment statement.)
{ } In command format descriptions, braces surround a required choice of options; you must choose one of the options listed.
bold text This text style represents the introduction of a new term or the name of an argument, an attribute, or a reason.
italic text Italic text indicates important information, complete titles of manuals, or variables. Variables include information that varies in system output (Internal error number), in command lines (/PRODUCER= name), and in command parameters in text (where dd represents the predefined code for the device type).
UPPERCASE TEXT Uppercase text indicates a command, the name of a routine, the name of a file, or the abbreviation for a system privilege.
Monospace text Monospace type indicates code examples and interactive screen displays.
- A hyphen in code examples indicates that additional arguments to the request are provided on the line that follows.
numbers All numbers in text are assumed to be decimal unless otherwise noted. Nondecimal radixes (binary, octal, or hexadecimal) are explicitly indicated.

Chapter 1
Introduction to OpenVMS Cluster System Management

Digital Equipment Corporation pioneered "cluster" technology in 1983 with the VAXcluster system, which was built using multiple standard VAX computing systems and the VMS operating system. The initial VAXcluster system offered the power and manageability of a centralized system and the flexibility of many physically distributed computing systems.

Through the years, the technology has evolved into OpenVMS Cluster systems, which support both the OpenVMS Alpha and the OpenVMS VAX operating systems and hardware, as well as a multitude of additional features and options.

1.1 Overview

An OpenVMS Cluster system is a highly integrated organization of OpenVMS software, Alpha and VAX computers, and storage devices that operate as a single system. The OpenVMS Cluster acts as a single virtual system, even though it is made up of many distributed systems. As members of an OpenVMS Cluster system, Alpha and VAX computers can share processing resources, data storage, and queues under a single security and management domain, yet they can boot or shut down independently.

The distance between the computers in an OpenVMS Cluster system depends on the interconnects that you use. The computers can be located in one computer lab, on two floors of a building, between buildings on a campus, or on two different sites up to 500 kilometers apart.

An OpenVMS Cluster system with computers located on two different sites is known as a multiple-site OpenVMS Cluster system. A multiple-site OpenVMS Cluster forms the basis of a disaster tolerant OpenVMS Cluster system. For more information about multiple site clusters, refer to Guidelines for OpenVMS Cluster Configurations.

Disaster Tolerant Cluster Services for OpenVMS is a system management and software package for configuring and managing OpenVMS disaster tolerant clusters. For more information about Disaster Tolerant Cluster Services for OpenVMS, contact your Compaq Services representative.

1.1.1 Uses

OpenVMS Cluster systems are an ideal environment for developing high-availability applications, such as transaction processing systems, servers for network client/server applications, and data-sharing applications.

1.1.2 Benefits

Computers in an OpenVMS Cluster system interact to form a cooperative, distributed operating system and derive a number of benefits, as shown in the following table.
Benefit Description
Resource sharing OpenVMS Cluster software automatically synchronizes and load balances batch and print queues, storage devices, and other resources among all cluster members.
Flexibility Application programmers do not have to change their application code, and users do not have to know anything about the OpenVMS Cluster environment to take advantage of common resources.
High availability System designers can configure redundant hardware components to create highly available systems that eliminate or withstand single points of failure.
Nonstop processing The OpenVMS operating system, which runs on each node in an OpenVMS Cluster, facilitates dynamic adjustments to changes in the configuration.
Scalability Organizations can dynamically expand computing and storage resources as business needs grow or change without shutting down the system or applications running on the system.
Performance An OpenVMS Cluster system can provide high performance.
Management Rather than repeating the same system management operation on multiple OpenVMS systems, management tasks can be performed concurrently for one or more nodes.
Security Computers in an OpenVMS Cluster share a single security database that can be accessed by all nodes in a cluster.
Load balancing Distributes work across cluster members based on the current load of each member.

1.2 Hardware Components

OpenVMS Cluster system configurations consist of hardware components from the following general groups:

References: Detailed OpenVMS Cluster configuration guidelines can be found in the OpenVMS Cluster Software Software Product Description (SPD) and in Guidelines for OpenVMS Cluster Configurations.

1.2.1 Computers

Up to 96 computers, ranging from desktop to mainframe systems, can be members of an OpenVMS Cluster system. Active members that run the OpenVMS Alpha or OpenVMS VAX operating system and participate fully in OpenVMS Cluster negotiations can include:

1.2.2 Physical Interconnects

An interconnect is a physical path that connects computers to other computers, and to storage subsystems. OpenVMS Cluster systems support a variety of interconnects, also referred to as buses, so that members can communicate using the most appropriate and effective method possible:

1.2.3 Storage Devices

A shared storage device is a disk or tape that is accessed by multiple computers in the cluster. Nodes access remote disks and tapes by means of the MSCP and TMSCP server software (described in Section 1.3.1).
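The serving model can be illustrated with a small conceptual sketch. This is Python used purely for illustration, with hypothetical names (Disk, access_path); it is not the actual MSCP/TMSCP protocol, only the routing idea: a node with a direct physical path accesses the device locally, and any other node reaches the device through a node that serves it.

```python
class Disk:
    """A cluster disk and the set of nodes with a direct physical path to it."""
    def __init__(self, name, direct_nodes):
        self.name = name
        self.direct_nodes = set(direct_nodes)

def access_path(node, disk):
    """Describe how 'node' reaches 'disk': directly, or served by another node."""
    if node in disk.direct_nodes:
        return "direct"
    # Any node with a direct path can serve the disk to the other members.
    server = sorted(disk.direct_nodes)[0]
    return "served via " + server
```

For example, for a disk physically connected only to a node named ALPHA1, that node sees a direct path, while every other member accesses the disk served via ALPHA1.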

Systems within an OpenVMS Cluster support a wide range of storage devices:

1.3 Software Components

The OpenVMS operating system, which runs on each node in the OpenVMS Cluster, includes several software components that facilitate resource sharing and dynamic adjustments to changes in the underlying hardware configuration.

If one computer becomes unavailable, the OpenVMS Cluster system continues operating because OpenVMS is still running on the remaining computers.

1.3.1 OpenVMS Cluster Software Functions

The following table describes the software components and their main function.
Component Facilitates Function
Connection manager Member integrity Coordinates participation of computers in the cluster and maintains cluster integrity when computers join or leave the cluster.
Distributed lock manager Resource synchronization Synchronizes operations of the distributed file system, job controller, device allocation, and other cluster facilities. If an OpenVMS Cluster computer shuts down, all locks that it holds are released so processing can continue on the remaining computers.
Distributed file system Resource sharing Allows all computers to share access to mass storage and file records, regardless of the type of storage device (DSA, RF, SCSI, and solid state subsystem) or its location.
Distributed job controller Queuing Makes generic and execution queues available across the cluster.
MSCP server Disk serving Implements the proprietary mass storage control protocol in order to make disks available to all nodes that do not have direct access to those disks.
TMSCP server Tape serving Implements the proprietary tape mass storage control protocol in order to make tape drives available to all nodes that do not have direct access to those tape drives.
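The connection manager's integrity guarantee rests on a quorum scheme, described in Chapter 2: each member contributes votes, and the cluster continues processing only while the members present hold at least a quorum of votes. As a minimal sketch of that arithmetic (Python used purely for illustration; the function names are hypothetical), quorum is derived from the EXPECTED_VOTES system parameter:

```python
def quorum(expected_votes):
    """Estimated quorum derived from EXPECTED_VOTES:
    (EXPECTED_VOTES + 2) // 2, with any remainder discarded."""
    return (expected_votes + 2) // 2

def cluster_has_quorum(current_votes, expected_votes):
    """The cluster continues processing only while the votes contributed
    by the current members are at least the quorum value."""
    return current_votes >= quorum(expected_votes)
```

For a three-node cluster with one vote per node, quorum is 2: the cluster survives the loss of one member but suspends processing if two are lost, which prevents a partitioned cluster from operating as two independent halves.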

