OpenVMS Cluster Systems

Order Number: AA-PV5WE-TK


April 2001

This manual describes procedures and guidelines for configuring and managing OpenVMS Cluster systems. Except where noted, the procedures and guidelines apply equally to VAX and Alpha computers. This manual also includes information for providing high availability, building-block growth, and unified system management across coupled systems.

Revision/Update Information: This manual supersedes OpenVMS Cluster Systems, OpenVMS Alpha Version 7.2 and OpenVMS VAX Version 7.2.

Software Version: OpenVMS Alpha Version 7.3 and OpenVMS VAX Version 7.3




Compaq Computer Corporation
Houston, Texas


© 2001 Compaq Computer Corporation

Compaq, VAX, VMS, and the Compaq logo are registered in the U.S. Patent and Trademark Office.

OpenVMS is a trademark of Compaq Information Technologies Group, L.P. in the United States and other countries.

Motif, OSF/1, and UNIX are trademarks of The Open Group in the United States and other countries.

All other product names mentioned herein may be trademarks of their respective companies.

Confidential computer software. Valid license from Compaq required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

The information in this document is provided "as is" without warranty of any kind and is subject to change without notice. The warranties for Compaq products are set forth in the express limited warranty statements accompanying such products. Nothing herein should be construed as constituting an additional warranty.

ZK4477

The Compaq OpenVMS documentation set is available on CD-ROM.



Preface

Introduction

OpenVMS Cluster Systems describes system management for OpenVMS Cluster systems. Although the OpenVMS Cluster software for VAX and Alpha computers is separately purchased, licensed, and installed, the difference between the two architectures lies mainly in the hardware used. Essentially, system management for VAX and Alpha computers in an OpenVMS Cluster is identical. Exceptions are pointed out.

Who Should Use This Manual

This document is intended for anyone responsible for setting up and managing OpenVMS Cluster systems. To use the document as a guide to cluster management, you must have a thorough understanding of system management concepts and procedures, as described in the OpenVMS System Manager's Manual.

How This Manual Is Organized

OpenVMS Cluster Systems contains ten chapters and seven appendixes.

Chapter 1 introduces OpenVMS Cluster systems.

Chapter 2 presents the software concepts integral to maintaining OpenVMS Cluster membership and integrity.

Chapter 3 describes various OpenVMS Cluster configurations and the ways they are interconnected.

Chapter 4 explains how to set up an OpenVMS Cluster system and coordinate system files.

Chapter 5 explains how to set up an environment in which resources can be shared across nodes in the OpenVMS Cluster system.

Chapter 6 discusses disk and tape management concepts and procedures and how to use Volume Shadowing for OpenVMS to prevent data unavailability.

Chapter 7 discusses queue management concepts and procedures.

Chapter 8 explains how to build an OpenVMS Cluster system once the necessary preparations are made, and how to reconfigure and maintain the cluster.

Chapter 9 provides guidelines for configuring and building large OpenVMS Cluster systems, booting satellite nodes, and cross-architecture booting.

Chapter 10 describes ongoing OpenVMS Cluster system maintenance.

Appendix A lists and defines OpenVMS Cluster system parameters.

Appendix B provides guidelines for building a cluster common user authorization file.

Appendix C provides troubleshooting information.

Appendix D presents three sample programs for LAN control and explains how to use the Local Area OpenVMS Cluster Network Failure Analysis Program.

Appendix E describes the subroutine package used with local area OpenVMS Cluster sample programs.

Appendix F provides techniques for troubleshooting network problems related to the NISCA transport protocol.

Appendix G describes how the interactions of workload distribution and network topology affect OpenVMS Cluster system performance, and discusses transmit channel selection by PEDRIVER.

Associated Documents

This document is not a one-volume reference manual. The utilities and commands are described in detail in the OpenVMS System Manager's Manual, the OpenVMS System Management Utilities Reference Manual, and the OpenVMS DCL Dictionary.

For additional information on the topics covered in this manual, refer to the related manuals in the OpenVMS documentation set.

For additional information about Compaq OpenVMS products and services, access the Compaq website at the following location:


http://www.openvms.compaq.com/ 


Reader's Comments

Compaq welcomes your comments on this manual. Please send comments to either of the following addresses:
Internet: openvmsdoc@compaq.com
Mail: Compaq Computer Corporation
OSSG Documentation Group, ZKO3-4/U08
110 Spit Brook Rd.
Nashua, NH 03062-2698

How To Order Additional Documentation

Visit the following World Wide Web address for information about how to order additional documentation:


http://www.openvms.compaq.com/ 

If you need help deciding which documentation best meets your needs, call 800-282-6672.

Conventions

The following conventions are used in this manual:
[Return] In examples, a key name enclosed in a box indicates that you press a key on the keyboard. (In text, a key name is not enclosed in a box.)

In the HTML version of this document, this convention appears as brackets, rather than a box.

... A horizontal ellipsis in examples indicates one of the following possibilities:
  • Additional optional arguments in a statement have been omitted.
  • The preceding item or items can be repeated one or more times.
  • Additional parameters, values, or other information can be entered.
(vertical ellipsis) A vertical ellipsis indicates the omission of items from a code example or command format; the items are omitted because they are not important to the topic being discussed.
( ) In command format descriptions, parentheses indicate that you must enclose choices in parentheses if you specify more than one.
[ ] In command format descriptions, brackets indicate optional choices. You can choose one or more items or no items. Do not type the brackets on the command line. However, you must include the brackets in the syntax for OpenVMS directory specifications and for a substring specification in an assignment statement.
| In command format descriptions, vertical bars separate choices within brackets or braces. Within brackets, the choices are optional; within braces, at least one choice is required. Do not type the vertical bars on the command line.
{ } In command format descriptions, braces indicate required choices; you must choose at least one of the items listed. Do not type the braces on the command line.
bold text This typeface represents the introduction of a new term. It also represents the name of an argument, an attribute, or a reason.
italic text Italic text indicates important information, complete titles of manuals, or variables. Variables include information that varies in system output (Internal error number), in command lines (/PRODUCER= name), and in command parameters in text (where dd represents the predefined code for the device type).
UPPERCASE TEXT Uppercase text indicates a command, the name of a routine, the name of a file, or the abbreviation for a system privilege.
Monospace text Monospace type indicates code examples and interactive screen displays.

In the C programming language, monospace type in text identifies the following elements: keywords, the names of independently compiled external functions and files, syntax summaries, and references to variables or identifiers introduced in an example.

- A hyphen at the end of a command format description, command line, or code line indicates that the command or statement continues on the following line.
numbers All numbers in text are assumed to be decimal unless otherwise noted. Nondecimal radixes (binary, octal, or hexadecimal) are explicitly indicated.


Chapter 1
Introduction to OpenVMS Cluster System Management

"Cluster" technology was pioneered by Digital Equipment Corporation in 1983 with the VAXcluster system. The VAXcluster system was built using multiple standard VAX computing systems and the VMS operating system. The initial VAXcluster system offered the power and manageability of a centralized system and the flexibility of many physically distributed computing systems.

Through the years, the technology has evolved into OpenVMS Cluster systems, which support both the OpenVMS Alpha and the OpenVMS VAX operating systems and hardware, as well as a multitude of additional features and options. When Compaq Computer Corporation acquired Digital Equipment Corporation in 1998, it acquired the most advanced cluster technology available. Compaq continues to enhance and expand OpenVMS Cluster capabilities.

1.1 Overview

An OpenVMS Cluster system is a highly integrated organization of OpenVMS software, Alpha and VAX computers, and storage devices that operate as a single system. The OpenVMS Cluster acts as a single virtual system, even though it is made up of many distributed systems. As members of an OpenVMS Cluster system, Alpha and VAX computers can share processing resources, data storage, and queues under a single security and management domain, yet they can boot or shut down independently.
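
A quick way to see this single-system view is the SHOW CLUSTER utility, which you can run from any member. The commands below are a minimal sketch; the nodes and fields displayed depend on your configuration:

  $ SHOW CLUSTER              ! Display a snapshot of current cluster membership
  $ SHOW CLUSTER/CONTINUOUS   ! Keep the display updated as members join or leave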

The distance between the computers in an OpenVMS Cluster system depends on the interconnects that you use. The computers can be located in one computer lab, on two floors of a building, between buildings on a campus, or on two different sites hundreds of miles apart.

An OpenVMS Cluster system with computers located at two different sites is known as a multiple-site OpenVMS Cluster system. A multiple-site OpenVMS Cluster forms the basis of a disaster-tolerant OpenVMS Cluster system. For more information about multiple-site clusters, refer to Guidelines for OpenVMS Cluster Configurations.

Disaster Tolerant Cluster Services for OpenVMS is a Compaq Services system management and software package for configuring and managing OpenVMS disaster tolerant clusters. For more information about Disaster Tolerant Cluster Services for OpenVMS, contact your Compaq Services representative.

1.1.1 Uses

OpenVMS Cluster systems are an ideal environment for developing high-availability applications, such as transaction processing systems, servers for network client/server applications, and data-sharing applications.

1.1.2 Benefits

Computers in an OpenVMS Cluster system interact to form a cooperative, distributed operating system and derive a number of benefits, described in the following list.

Resource sharing: OpenVMS Cluster software automatically synchronizes and load balances batch and print queues, storage devices, and other resources among all cluster members.
Flexibility: Application programmers do not have to change their application code, and users do not have to know anything about the OpenVMS Cluster environment to take advantage of common resources.
High availability: System designers can configure redundant hardware components to create highly available systems that eliminate or withstand single points of failure.
Nonstop processing: The OpenVMS operating system, which runs on each node in an OpenVMS Cluster, facilitates dynamic adjustments to changes in the configuration.
Scalability: Organizations can dynamically expand computing and storage resources as business needs grow or change, without shutting down the system or the applications running on it.
Performance: An OpenVMS Cluster system can provide high performance.
Management: Rather than repeating the same system management operation on each OpenVMS system, management tasks can be performed concurrently for one or more nodes (see the SYSMAN example following this list).
Security: Computers in an OpenVMS Cluster share a single security database that is accessible from every node in the cluster.
Load balancing: Work is distributed across cluster members based on the current load of each member.
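
The SYSMAN utility is one way to act on several nodes at once. The following session is a minimal sketch, assuming you have suitable privileges; SHOW TIME stands in for whatever DCL command you actually need to run:

  $ RUN SYS$SYSTEM:SYSMAN
  SYSMAN> SET ENVIRONMENT/CLUSTER    ! Make the whole cluster the management environment
  SYSMAN> DO SHOW TIME               ! Execute the DCL command on every cluster member
  SYSMAN> EXIT

Each node executes the command in turn and returns its output, so one session replaces a series of logins to the individual systems.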

1.2 Hardware Components

OpenVMS Cluster system configurations consist of hardware components from three general groups: computers, physical interconnects, and storage devices, described in the sections that follow.

References: Detailed OpenVMS Cluster configuration guidelines can be found in the Software Product Description (SPD) for OpenVMS Cluster Software and in Guidelines for OpenVMS Cluster Configurations.

1.2.1 Computers

Up to 96 computers, ranging from desktop to mainframe systems, can be members of an OpenVMS Cluster system. Active members run either the OpenVMS Alpha or the OpenVMS VAX operating system and participate fully in OpenVMS Cluster negotiations.

1.2.2 Physical Interconnects

An interconnect is a physical path that connects computers to other computers and to storage subsystems. OpenVMS Cluster systems support a variety of interconnects (also referred to as buses) so that members can communicate using the most appropriate and effective method possible. Supported interconnects include CI, DSSI, Ethernet (10/100 and Gigabit), ATM, FDDI, and MEMORY CHANNEL (see Section 1.4).

1.2.3 Storage Devices

A shared storage device is a disk or tape that is accessed by multiple computers in the cluster. Nodes access remote disks and tapes by means of the MSCP and TMSCP server software (described in Section 1.3.1).
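
As a sketch of how serving might be enabled, a system manager could add entries such as the following to SYS$SYSTEM:MODPARAMS.DAT and then run AUTOGEN. The values shown are illustrative only; Appendix A and Chapter 6 describe these parameters and the settings appropriate to a given configuration.

  ! Additions to SYS$SYSTEM:MODPARAMS.DAT (illustrative values)
  MSCP_LOAD = 1         ! Load the MSCP server when the node boots
  MSCP_SERVE_ALL = 1    ! Serve locally accessible disks to other cluster members
  TMSCP_LOAD = 1        ! Load the TMSCP server to serve local tape drives

  $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT   ! Apply the new parameters and reboot the node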

Systems within an OpenVMS Cluster support a wide range of storage devices.

1.3 Software Components

The OpenVMS operating system, which runs on each node in the OpenVMS Cluster, includes several software components that facilitate resource sharing and dynamic adjustments to changes in the underlying hardware configuration.

If one computer becomes unavailable, the OpenVMS Cluster system continues operating because OpenVMS is still running on the remaining computers.

1.3.1 OpenVMS Cluster Software Functions

The following list describes each software component and its main function; the capability that the component facilitates appears in parentheses.

Connection manager (member integrity): Coordinates participation of computers in the cluster and maintains cluster integrity when computers join or leave the cluster.
Distributed lock manager (resource synchronization): Synchronizes operations of the distributed file system, job controller, device allocation, and other cluster facilities. If an OpenVMS Cluster computer shuts down, all locks that it holds are released so that processing can continue on the remaining computers.
Distributed file system (resource sharing): Allows all computers to share access to mass storage and file records, regardless of the type of storage device (DSA, RF, SCSI, or solid-state subsystem) or its location.
Distributed job controller (queuing): Makes generic and execution queues available across the cluster (see the queue example following this list).
MSCP server (disk serving): Implements the proprietary mass storage control protocol to make disks available to all nodes that do not have direct access to those disks.
TMSCP server (tape serving): Implements the proprietary tape mass storage control protocol to make tape drives available to all nodes that do not have direct access to those tape drives.
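
The following DCL sequence is a sketch of the distributed job controller at work with a clusterwide generic batch queue. The node, queue, and file names (NODEA, NODEB, MYJOB.COM, and so on) are hypothetical, the clusterwide queue database is assumed to be set up already, and Chapter 7 describes queue management in detail:

  $ INITIALIZE/QUEUE/BATCH/START/ON=NODEA:: NODEA_BATCH   ! Execution queue on node NODEA
  $ INITIALIZE/QUEUE/BATCH/START/ON=NODEB:: NODEB_BATCH   ! Execution queue on node NODEB
  $ INITIALIZE/QUEUE/BATCH/START/GENERIC=(NODEA_BATCH,NODEB_BATCH) SYS$BATCH
  $ SUBMIT/QUEUE=SYS$BATCH MYJOB.COM   ! The job runs on whichever execution queue can accept it

Because the queues are visible clusterwide, the SUBMIT command behaves the same way no matter which member it is issued from.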

1.4 Communications

The System Communications Architecture (SCA) defines the communications mechanisms that allow nodes in an OpenVMS Cluster system to cooperate. It governs the sharing of data between resources at the nodes and binds together System Applications (SYSAPs) that run on different Alpha and VAX computers.

SCA consists of the following hierarchy of components:
System applications (SYSAPs): Clusterwide applications (for example, disk and tape class drivers, the connection manager, and the MSCP server) that use SCS software for interprocessor communication.
System Communications Services (SCS): Provides basic connection management and communication services, implemented as a logical path, between SYSAPs on nodes in an OpenVMS Cluster system.
Port drivers: Control the communication paths between local and remote ports.
Physical interconnects: The ports or adapters for CI, DSSI, Ethernet (10/100 and Gigabit), ATM, FDDI, and MEMORY CHANNEL interconnects.
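
To see these layers at work, you can use the SHOW CLUSTER utility to display the SCS circuits and connections that SYSAPs establish between nodes. The session below is a minimal sketch; the classes you add and the data reported depend on your configuration:

  $ SHOW CLUSTER/CONTINUOUS
  Command> ADD CIRCUITS
  Command> ADD CONNECTIONS
  Command> EXIT

ADD CIRCUITS displays the SCS virtual circuits between the local port and remote ports; ADD CONNECTIONS displays the SYSAP-to-SYSAP connections carried on those circuits.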

