
OpenVMS Cluster Systems




Chapter 8
Configuring an OpenVMS Cluster System

This chapter provides an overview of the cluster configuration command procedures and describes the preconfiguration tasks required before running either command procedure. Then it describes each major function of the command procedures and the postconfiguration tasks, including running AUTOGEN.COM.

8.1 Overview of the Cluster Configuration Procedures

Two similar command procedures are provided for configuring and reconfiguring an OpenVMS Cluster system: CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM. The choice depends on whether you use the LANCP utility or DECnet for satellite booting in your cluster. CLUSTER_CONFIG_LAN.COM provides satellite booting services with the LANCP utility; CLUSTER_CONFIG.COM provides satellite booting services with DECnet. See Section 4.5 for the factors to consider when choosing a satellite booting service.

These configuration procedures automate most of the tasks required to configure an OpenVMS Cluster system. When you invoke CLUSTER_CONFIG_LAN.COM or CLUSTER_CONFIG.COM, a menu of configuration options (ADD, REMOVE, CHANGE, CREATE, MAKE, and DELETE) is displayed, as shown in Section 8.1.3.

By selecting the appropriate option, you can configure the cluster easily and reliably without invoking any OpenVMS utilities directly. Table 8-1 summarizes the functions that the configuration procedures perform for each configuration option.

The phrase cluster configuration command procedure, when used in this chapter, refers to both CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM. The questions asked by the two procedures are identical except where they pertain to LANCP or DECnet.

Note: For help on any question in these command procedures, type a question mark (?) at the question.

Table 8-1 Summary of Cluster Configuration Functions
Option Functions Performed
ADD Enables a node as a cluster member:
  • Establishes the new computer's root directory on a cluster common system disk and generates the computer's system parameter files (ALPHAVMSSYS.PAR for Alpha systems or VAXVMSSYS.PAR for VAX systems) and MODPARAMS.DAT in its SYS$SPECIFIC:[SYSEXE] directory.
  • Generates the new computer's page and swap files (PAGEFILE.SYS and SWAPFILE.SYS).
  • Sets up a cluster quorum disk (optional).
  • Sets disk allocation class values, or port allocation class values (Alpha only), or both, with the ALLOCLASS parameter for the new computer, if the computer is being added as a disk server. If the computer is being added as a tape server, sets a tape allocation class value with the TAPE_ALLOCLASS parameter.

    Note: ALLOCLASS must be set to a value greater than zero if you are configuring an Alpha computer on a shared SCSI bus and you are not using a port allocation class.

  • Generates an initial (temporary) startup procedure for the new computer. This initial procedure:
    • Runs NETCONFIG.COM to configure the network.
    • Runs AUTOGEN to set appropriate system parameter values for the computer.
    • Reboots the computer with normal startup procedures.
  • If the new computer is a satellite node, the configuration procedure updates:
    • Network databases for the computer on which the configuration procedure is executed to add the new computer.
    • SYS$MANAGER:NETNODE_UPDATE.COM command procedure on the local computer (as described in Section 10.4.2).
REMOVE Disables a node as a cluster member:
  • Deletes another computer's root directory and its contents from the local computer's system disk. If the computer being removed is a satellite, the cluster configuration command procedure updates SYS$MANAGER:NETNODE_UPDATE.COM on the local computer.
  • Updates the permanent and volatile remote node network databases on the local computer.
  • Removes the quorum disk.
CHANGE Displays the CHANGE menu and prompts for appropriate information to:
  • Enable or disable the local computer as a disk server
  • Enable or disable the local computer as a boot server
  • Enable or disable the Ethernet or FDDI LAN for cluster communications on the local computer
  • Enable or disable a quorum disk on the local computer
  • Change a satellite's Ethernet or FDDI hardware address
  • Enable or disable the local computer as a tape server
  • Change the local computer's ALLOCLASS or TAPE_ALLOCLASS value
  • Change the local computer's shared SCSI port allocation class value
  • Enable or disable MEMORY CHANNEL for node-to-node cluster communications on the local computer
CREATE Duplicates the local computer's system disk and removes all system roots from the new disk.
MAKE Creates a directory structure for a new root on a system disk.
DELETE Deletes a root from a system disk.

8.1.1 Before Configuring the System

Before invoking either the CLUSTER_CONFIG_LAN.COM or the CLUSTER_CONFIG.COM procedure to configure an OpenVMS Cluster system, perform the tasks described in Table 8-2.

Table 8-2 Preconfiguration Tasks
Task Procedure
Determine whether the computer uses DECdtm. When you add a computer to or remove a computer from a cluster that uses DECdtm services, you must perform several tasks to ensure the integrity of your data.

Reference: See the chapter about DECdtm services in the OpenVMS System Manager's Manual for step-by-step instructions on setting up DECdtm in an OpenVMS Cluster system.

If you are not sure whether your cluster uses DECdtm services, enter this command sequence:

$ SET PROCESS /PRIVILEGES=SYSPRV
$ RUN SYS$SYSTEM:LMCP
LMCP> SHOW LOG

If your cluster does not use DECdtm services, the SHOW LOG command displays a "file not found" error message. If your cluster uses DECdtm services, the command displays a list of the files that DECdtm uses to store information about transactions.

Ensure the network software providing the satellite booting service is up and running and all computers are connected to the LAN. For nodes that will use the LANCP utility for satellite booting, run the LANCP utility and enter the LANCP command LIST DEVICE/MOPDLL to display a list of LAN devices on the system:
$ RUN SYS$SYSTEM:LANCP
LANCP> LIST DEVICE/MOPDLL

For nodes running DECnet for OpenVMS, enter the DCL command SHOW NETWORK to determine whether the network is up and running:

$ SHOW NETWORK

VAX/VMS Network status for local node 63.452 VIVID on 5-NOV-1994
This is a nonrouting node, and does not have any network information.
The designated router for VIVID is node 63.1021 SATURN.

This example shows that the node VIVID is running DECnet for OpenVMS. If DECnet has not been started, the message "SHOW-I-NONET, Network Unavailable" is displayed.

For nodes running DECnet-Plus, refer to DECnet for OpenVMS Network Management Utilities for information about determining whether the DECnet-Plus network is up and running.

Select MOP and disk servers. Every OpenVMS Cluster configured with satellite nodes must include at least one Maintenance Operations Protocol (MOP) server and at least one disk server. When possible, select multiple computers as MOP and disk servers. Multiple servers provide better availability and distribute the work load across more LAN adapters.

Follow these guidelines when selecting MOP and disk servers:

  • Ensure that MOP servers have direct access to the system disk.
  • Ensure that disk servers have direct access to the storage that they are serving.
  • Choose the most powerful computers in the cluster. Low-powered computers can become overloaded when serving many busy satellites or when many satellites boot simultaneously. Note, however, that two or more moderately powered servers may provide better performance than a single high-powered server.
  • If you have several computers of roughly comparable power, it is reasonable to use them all as boot servers. This arrangement gives optimal load balancing. In addition, if one computer fails or is shut down, others remain available to serve satellites.
  • After compute power, the most important factor in selecting a server is the speed of its LAN adapter. Servers should be equipped with the highest-bandwidth LAN adapters in the cluster.
Make sure you are logged in to a privileged account. Log in to a privileged account.

Rules: If you are adding a satellite, you must be logged into the system manager's account on a boot server. Note that the process privileges SYSPRV, OPER, CMKRNL, BYPASS, and NETMBX are required, because the procedure performs privileged system operations.
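You can verify your current privileges before invoking the procedure. A minimal sketch using standard DCL commands (enabling privileges requires that they be authorized for your account):

$ SHOW PROCESS/PRIVILEGES
$ ! If authorized, enable the privileges the procedure requires:
$ SET PROCESS/PRIVILEGES=(SYSPRV, OPER, CMKRNL, BYPASS, NETMBX)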

Coordinate cluster common files. If your configuration has two or more system disks, follow the instructions in Chapter 5 to coordinate the cluster common files.
Optionally, disable broadcast messages to your terminal. Many broadcast messages are generated while computers are added and removed. To disable these messages, enter the DCL command REPLY/DISABLE=(NETWORK, CLUSTER), as shown in the example that follows. See Section 10.6 for more information about controlling OPCOM messages.
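For example (a sketch; re-enabling the messages afterward is optional):

$ REPLY/DISABLE=(NETWORK, CLUSTER)
$ ! ... run the cluster configuration procedure ...
$ REPLY/ENABLE=(NETWORK, CLUSTER)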
Predetermine answers to the questions asked by the cluster configuration procedure. Table 8-3 describes the data requested by the cluster configuration command procedures.

8.1.2 Data Requested by the Cluster Configuration Procedures

The following table lists the questions asked by the cluster configuration command procedures and describes how you might answer them. The table is supplied here so that you can determine your answers before you invoke the procedure.

Because many of the questions are configuration specific, Table 8-3 lists the questions according to configuration type, and not in the order they are asked.

Table 8-3 Data Requested by CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM
Information Required How to Specify or Obtain
For all configurations
Device name of cluster system disk on which root directories will be created Press Return to accept the default device name, which is the translation of the SYS$SYSDEVICE: logical name, or specify a logical name that points to the common system disk.
Computer's root directory name on cluster system disk Press Return to accept the procedure-supplied default, or specify a name in the form SYSx:
  • For computers with direct access to the system disk, x is a hexadecimal digit in the range of 1 through 9 or A through D (for example, SYS1 or SYSA).
  • For satellites, x must be in the range of 10 through FFFF.
Workstation windowing system System manager specifies. Workstation software must be installed before workstation satellites are added. If it is not, the procedure indicates that fact.
Location and sizes of page and swap files This information is requested only when you add a computer to the cluster. Press Return to accept the default sizes and location. (The default sizes displayed in brackets by the procedure are minimum values; the default location is the device name of the cluster system disk.)

If your configuration includes satellite nodes, you may realize a performance improvement by locating satellite page and swap files on a satellite's local disk, if such a disk is available. The potential for performance improvement depends on the configuration of your OpenVMS Cluster system disk and network.

To set up page and swap files on a satellite's local disk, the cluster configuration procedure creates a command procedure called SATELLITE_PAGE.COM in the satellite's [SYSn.SYSEXE] directory on the boot server's system disk. The SATELLITE_PAGE.COM procedure performs the following functions:

  • Mounts the satellite's local disk with a volume label that is unique in the cluster in the format node-name_SCSSYSTEMID.

    Reference: Refer to Section 8.6.5 for information about altering the volume label.

  • Installs the page and swap files on the satellite's local disk.

Note: To relocate the satellite's page and swap files (for example, from the satellite's local disk to the boot server's system disk, or the reverse) or to change file sizes, follow these steps (a consolidated example appears after this list):

  1. Create new PAGE and SWAP files on a shared device, as shown:
     $ MCR SYSGEN CREATE device:[directory]PAGEFILE.SYS/SIZE=block-count

    Note: If page and swap files will be created for a shadow set, you must edit SATELLITE_PAGE.COM accordingly.

  2. Rename the SYS$SPECIFIC:[SYSEXE]PAGEFILE.SYS and SWAPFILE.SYS files to PAGEFILE.TMP and SWAPFILE.TMP.
  3. Reboot, and then delete the .TMP files.
  4. Modify the SYS$MANAGER:SYPAGSWPFILES.COM procedure to load the files.
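Taken together, the steps above might look like this (a sketch only; the device name DUA1:, the directory, and the file sizes are hypothetical and must be adapted to your configuration):

$ ! 1. Create new page and swap files on a shared device
$ MCR SYSGEN CREATE DUA1:[SYSEXE]PAGEFILE.SYS/SIZE=100000
$ MCR SYSGEN CREATE DUA1:[SYSEXE]SWAPFILE.SYS/SIZE=20000
$ ! 2. Rename the old files
$ RENAME SYS$SPECIFIC:[SYSEXE]PAGEFILE.SYS PAGEFILE.TMP
$ RENAME SYS$SPECIFIC:[SYSEXE]SWAPFILE.SYS SWAPFILE.TMP
$ ! 3. After rebooting, delete the .TMP files
$ DELETE SYS$SPECIFIC:[SYSEXE]PAGEFILE.TMP;*,SWAPFILE.TMP;*
$ ! 4. Add commands like these to SYS$MANAGER:SYPAGSWPFILES.COM
$ !    so the new files are installed at boot time:
$ !      $ MCR SYSGEN INSTALL DUA1:[SYSEXE]PAGEFILE.SYS/PAGEFILE
$ !      $ MCR SYSGEN INSTALL DUA1:[SYSEXE]SWAPFILE.SYS/SWAPFILE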
Value for local computer's allocation class (ALLOCLASS or TAPE_ALLOCLASS) parameter. The ALLOCLASS parameter can be used for a node allocation class or, on Alpha computers, a port allocation class. Refer to Section 6.2.1 for complete information about specifying allocation classes.
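For reference, the allocation class values chosen here end up as entries in the computer's MODPARAMS.DAT file (see the ADD option in Table 8-1). A hypothetical sketch with illustrative values:

ALLOCLASS = 1          ! node (or, on Alpha, port) allocation class
TAPE_ALLOCLASS = 1     ! tape allocation class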
Physical device name of quorum disk System manager specifies.
For systems running DECnet for OpenVMS
Computer's DECnet node address for Phase IV Obtain this information as follows:
  • If you are adding a computer, the network manager supplies the address.
  • If you are removing a computer, use the SHOW NETWORK command (as shown in Table 8-2).
Computer's DECnet node name Network manager supplies. The name must be from 1 to 6 alphanumeric characters and cannot include dollar signs ($) or underscores (_).
For systems running DECnet-Plus
Computer's DECnet node address for Phase IV (if you need Phase IV compatibility) Obtain this information as follows:
  • If you are adding a computer, the network manager supplies the address.
  • If you are removing a computer, use the SHOW NETWORK command (as shown in Table 8-2).
Node's DECnet full name Determine the full name with the help of your network manager. Enter a string composed of:
  • The namespace name, ending with a colon (:). This is optional.
  • The root directory, designated by a period (.).
  • Zero or more hierarchical directories, designated by a character string followed by a period (.).
  • The simple name, a character string that, combined with the directory names, uniquely identifies the node. For example:
    • .SALES.NETWORKS.MYNODE
    • MEGA:.INDIANA.JONES
    • COLUMBUS:.FLATWORLD
SCS node name for this node Enter the OpenVMS Cluster node name, which is a string of 6 or fewer alphanumeric characters.
DECnet synonym Press Return to define a DECnet synonym, which is a short name for the node's full name. Otherwise, enter N.
Synonym name for this node Enter a string of 6 or fewer alphanumeric characters. By default, it is the first 6 characters of the last simple name in the full name. For example:
  • +Full name: BIGBANG:.GALAXY.NOVA.BLACKHOLE
  • Synonym: BLACKH

Note: The node synonym does not need to be the same as the OpenVMS Cluster node name.

MOP service client name for this node Enter the name for the node's MOP service client when the node is configured as a boot server. By default, it is the OpenVMS Cluster node name (that is, the SCS node name). This name does not need to be the same as the OpenVMS Cluster node name.
For systems running the LANCP Utility for Satellite Booting
Computer's SCS node name and ID See Section 4.2.3.
For LAN configurations
Cluster group number and password This information is requested only when the CHANGE option is chosen. See Section 2.5 for information about assigning cluster group numbers and passwords.
Satellite's LAN hardware address Address has the form xx-xx-xx-xx-xx-xx. You must include the hyphens when you specify a hardware address. Proceed as follows:
  • ++On Alpha systems, enter the following command at the satellite's console:
     >>> SHOW NETWORK

    Note that you can also use the SHOW CONFIG command.

  • +On MicroVAX II and VAXstation II satellite nodes, when the DECnet for OpenVMS network is running on a boot server, enter the following commands at the satellite's console:
     >>> B/100 XQA0
     Bootfile: READ_ADDR
  • +On MicroVAX 2000 and VAXstation 2000 satellite nodes, when the DECnet for OpenVMS network is running on a boot server, enter the following commands at successive console-mode prompts:
     >>> T 53
     2 ?>>> 3
     >>> B/100 ESA0
     Bootfile: READ_ADDR

    If the second prompt appears as 3 ?>>>, press Return.

  • +On MicroVAX 3xxx and 4xxx series satellite nodes, enter the following command at the satellite's console:
     >>> SHOW ETHERNET

+DECnet-Plus full-name functionality is VAX specific.
++Alpha specific.

8.1.3 Invoking the Procedure

Once you have made the necessary preparations, you can invoke the cluster configuration procedure to configure your OpenVMS Cluster system. Log in to the system manager account and make sure your default directory is SYS$MANAGER. Then invoke the procedure at the DCL prompt as follows:


$ @CLUSTER_CONFIG_LAN

or


$ @CLUSTER_CONFIG

Caution: Run only one cluster configuration session at a time; do not invoke multiple sessions simultaneously.

Once invoked, both procedures display the following information and menu. (The only difference between CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM at this point is the command procedure name that is displayed.) Depending on the menu option you select, the procedure interactively requests configuration information from you. (Predetermine your answers as described in Table 8-3.)


                 Cluster Configuration Procedure 
 
    Use CLUSTER_CONFIG.COM to set up or change an OpenVMS Cluster configuration. 
    To ensure that you have the required privileges, invoke this procedure 
    from the system manager's account. 
 
    Enter ? for help at any prompt. 
 
            1. ADD a node to the cluster. 
            2. REMOVE a node from the cluster. 
            3. CHANGE a cluster member's characteristics. 
            4. CREATE a second system disk for JUPITR. 
            5. MAKE a directory structure for a new root on a system disk. 
            6. DELETE a root from a system disk. 
 
    Enter choice [1]: 
   .
   .
   .

This chapter contains a number of sample sessions showing how to run the cluster configuration procedures. Although the CLUSTER_CONFIG_LAN.COM and CLUSTER_CONFIG.COM procedures function the same on both Alpha and VAX systems, the questions and format may appear slightly different depending on the type of computer system.



Copyright © Compaq Computer Corporation 1998. All rights reserved.
