Updated: 12 December 1998
OpenVMS Version 7.2 Release Notes
The following release notes pertain to OpenVMS Cluster systems.
4.14.1 Changes and Enhancements
This section contains notes about changes and enhancements to OpenVMS
Cluster systems.
4.14.1.1 New HSZ Allocation Class (Alpha Only)
V7.2
OpenVMS Alpha Version 7.2 includes a new device-naming option, the HSZ allocation class, for devices on HSZ70 and HSZ80 storage controllers. HSZ allocation classes are documented in the new chapter, Configuring Multiple Paths to SCSI and Fibre Channel Storage, in Guidelines for OpenVMS Cluster Configurations.
The HSZ allocation class is required when an HSZ is used in a multipath configuration. It can optionally be used in non-multipath configurations provided that all the systems with a direct SCSI connection to the HSZ are running OpenVMS Version 7.2.
The HSZ allocation class takes precedence over all other device-naming methods. That is, if an HSZ controller has a valid HSZ allocation class, then the HSZ allocation class is always used to form the device name; the port allocation class (PAC), node allocation class, port letter, and node name are not used to form the device name.
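For example (the values here are hypothetical), an HSZ70 with an HSZ allocation class of 5 presents its logical unit D500 under the same name on every node that has a path to the controller, so a command such as the following refers to the same device clusterwide:
$ SHOW DEVICE $5$DKA500
The controller letter shown is illustrative; the point is that the name is formed from the HSZ allocation class and the unit, not from host-specific values such as the port letter or node name.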
As a result, the SCSI bus configuration algorithm has been modified. Some of the previous restrictions placed on a shared SCSI bus do not apply when an HSZ allocation class is used.
Prior to OpenVMS Version 7.2, the SCSI bus configuration code could configure devices on a shared SCSI bus only if all nodes supplied either matching PACs or matching port letters and matching node allocation classes. Such matching values were required to name a device consistently across all nodes in the cluster. Not only did the configuration code refuse to configure devices that failed these checks, but if the system had been running for less than 20 minutes, a system halt occurred.
The behavior of the configuration code in OpenVMS Version 7.2 is as follows:
Note that certain devices, including those whose names include an HSZ
allocation class, cannot be selectively autoconfigured using the SYSMAN
utility. For more information on selective autoconfiguration, see
Section 6.3.
4.14.1.2 OpenVMS Cluster Compatibility Kits for Version 6.2 Systems
V7.1--1H1
The OpenVMS Cluster Compatibility Kits that shipped with OpenVMS Version 7.1 have been superseded by remedial kits ALPCLUSIOnn_062 for Alpha systems and VAXCLUSIOnn_062 for VAX systems. These kits are required for OpenVMS Version 6.2 systems that are members of an OpenVMS Cluster system that includes systems running one or more of the following OpenVMS versions:
The kits contain the OpenVMS Version 7.1 enhancements to Volume Shadowing, Mount, lock manager, and other quality improvements for OpenVMS Version 6.2 systems, as well as additional enhancements to these subsystems made after the release of OpenVMS Version 7.1. The kits also contain limited support for SCSI device naming using port allocation classes.
These remedial kits are included on the OpenVMS Version 7.2 CD-ROM. You can also obtain these kits from your Compaq support representative or from the following web site:
4.14.1.3 OpenVMS Cluster Client Licensing
V7.1
A change has been made to OpenVMS Cluster Client licensing. Prior to Version 7.1, the OpenVMS Cluster Client license enabled full OpenVMS Cluster functionality, with the following exceptions:
Previously, the first exception regarding voting was not enforced.
Starting with Version 7.1, this exception is enforced.
4.14.2 Problems and Restrictions
This section describes problems and restrictions pertaining to OpenVMS
Cluster systems. Note that most SCSI cluster restrictions are described
in the SCSI OpenVMS Cluster appendix in Guidelines for OpenVMS Cluster Configurations, although some
appear only in these release notes.
4.14.2.1 Mixed-Version Incompatibilities
V7.2
A change to the OpenVMS Cluster Version 7.2 software prevents systems running OpenVMS Version 7.2 from participating in a cluster in which one or more nodes are running any of the following software versions:
If you attempt to boot a Version 7.2 node into a cluster containing a node with any of these older versions, the Version 7.2 node cannot join the cluster, and it crashes with a CLUSWVER bugcheck.
Similarly, if you attempt to boot a node running one of these older versions into a cluster in which one or more nodes are running Version 7.2, the older version node cannot join the cluster, and it crashes with the same CLUSWVER bugcheck.
For information about supported software versions in mixed-version and
mixed-architecture clusters, refer to the OpenVMS Version 7.2 New Features Manual.
4.14.2.2 DECnet-Plus Satellite Boot Restriction (Alpha Only)
V7.2
The DECnet-Plus MOP satellite boot service cannot be used to successfully boot an OpenVMS Cluster satellite system from a system disk that has a positive SCSI Port Allocation Class (PAC) value or an HSZ70/80 controller allocation class.
The failing satellite displays the following messages during boot:
%VMScluster-I-MSCPCONN, Connected to a MSCP server for the system disk, node ALAN
%VMScluster-E-NOT_SERVED, Configuration change, the system disk is no longer served by node ALAN
These messages may repeat indefinitely, possibly with different boot server node names, as the satellite attempts to find a usable boot server.
This problem can be avoided by using the LAN MOP service on any boot server that is running DECnet-Plus and is serving a satellite system disk that has a SCSI PAC or HSZ70/80 controller allocation class. The LAN MOP service can be enabled by using the CLUSTER_CONFIG.COM procedure or the LANCP utility. Refer to the OpenVMS Cluster Systems manual for additional information about LAN MOP booting.
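The following is a minimal sketch of enabling the MOP downline-load service with the LANCP utility on such a boot server. The LAN device name EWA0 is hypothetical, and you should verify the exact qualifiers in the LANCP documentation or let CLUSTER_CONFIG.COM perform the equivalent setup. The DEFINE command updates the permanent device database; SET updates the running system:
$ RUN SYS$SYSTEM:LANCP
LANCP> DEFINE DEVICE EWA0/MOPDLL=ENABLE
LANCP> SET DEVICE EWA0/MOPDLL=ENABLE
LANCP> EXIT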
Note that this restriction does not apply if the boot server is running
DECnet for OpenVMS (Phase IV).
4.14.2.3 MSCP_SERVE_ALL and Mixed-Version Clusters
V7.2
MSCP_SERVE_ALL is a system parameter that controls disk serving in an OpenVMS Cluster system. Starting with OpenVMS Version 7.2, in certain configurations, it is possible to serve disks connected to HSx controllers with a node allocation class different from the system's node allocation class. However, for that to happen, all systems must be running OpenVMS Version 7.2.
In a mixed-version OpenVMS Cluster system, serving "all available disks" is restricted to its pre-Version 7.2 meaning, that is, serving locally attached disks and disks connected to HSx and DSSI controllers whose node allocation class matches that of the system's node allocation class. To serve "all available disks" in a mixed-version cluster, you must specify the value 9.
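As a sketch of how to make this setting permanent (assuming the usual AUTOGEN workflow), add the following line to SYS$SYSTEM:MODPARAMS.DAT on each node that serves disks:
MSCP_SERVE_ALL = 9
Then run AUTOGEN, for example with the command below, and reboot the node so that the new value takes effect:
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS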
For more information about changes to the MSCP_SERVE_ALL system
parameter, see Section 4.21.1.6.
4.14.2.4 SCSI Device-Naming Restrictions When Port Allocation Class Used
V7.2
For OpenVMS Alpha Version 7.2, the internal device-naming model for SCSI devices with a port allocation class greater than 0 has been modified. This modification is part of the solution described in Section 4.14.3.2.
In the new model, the only valid device name that you can use to address a SCSI device with a port allocation class is the fully specified device name, such as $3$DKA500. You can no longer use names like DKA500 and FUZZY$DKA500 to address such devices. In addition, the $GETDVI system service has been changed to return the fully specified device name, including the port allocation class, for SCSI devices with a port allocation class greater than 0.
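For example, with a hypothetical device $3$DKA500, the fully specified name is the form to use in commands, and it is also what the $GETDVI interface (shown here through the F$GETDVI lexical function with the FULLDEVNAM item, as an illustration) returns:
$ SHOW DEVICE $3$DKA500
$ WRITE SYS$OUTPUT F$GETDVI("$3$DKA500","FULLDEVNAM")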
A second restriction concerns using the SYSMAN utility to selectively
autoconfigure devices. Certain devices, including those whose names
include a port allocation class, cannot be selectively autoconfigured
using SYSMAN. For more information, see Section 6.3.
4.14.2.5 Multipath Support for Parallel SCSI and Fibre Channel (Alpha Only)
V7.2
Multipath support for parallel SCSI and Fibre Channel is documented in a new chapter, Configuring Multiple Paths to SCSI and Fibre Channel Storage, in Guidelines for OpenVMS Cluster Configurations. The following three restrictions apply to the initial release of OpenVMS Version 7.2:
These restrictions are also documented in the chapter noted above. The
restrictions will be removed by an update kit shortly after the release
of OpenVMS Version 7.2. Contact your Compaq representative to obtain
this kit.
4.14.2.6 Fibre Channel Support (Alpha Only)
V7.2
Fibre Channel support is latent in OpenVMS Alpha Version 7.2. Fibre Channel support will be available shortly after the release of OpenVMS Alpha Version 7.2.
To help you plan for the use of Fibre Channel in your OpenVMS Cluster system, Fibre Channel support is documented in two new chapters, Configuring Fibre Channel as an OpenVMS Cluster Storage Interconnect and Configuring Multiple Paths to SCSI and Fibre Channel Storage, in Guidelines for OpenVMS Cluster Configurations.
Contact your Compaq support representative for the availability of
Fibre Channel support for OpenVMS Alpha Version 7.2.
4.14.2.7 SCSI Shared Interconnect Requires Same Node Allocation Class (Alpha Only)
V7.1--1H1
Prior to the introduction of port allocation classes in OpenVMS Version 7.1, nodes sharing a SCSI interconnect were required to use the same nonzero node allocation class. This requirement is still in effect, whether or not you are also using port allocation classes.
When port allocation classes were introduced in OpenVMS Version 7.1,
this requirement was mistakenly removed from Guidelines for OpenVMS Cluster Configurations, Table A-1.
4.14.2.8 AlphaServer 4000/4100 Systems Problem in SCSI Clusters
An AlphaServer 4000/4100 system that accesses its system disk through a KZPSA adapter may not boot or write a crash dump file if another system on the SCSI bus is booting or shutting down at the same time. Subsequent attempts to boot should succeed.
Compaq recommends that you not attempt to perform these operations
simultaneously in this configuration until you have updated your
firmware.
4.14.2.9 CI-to-PCI (CIPCA) Adapter (Alpha Only)
The release notes in this section describe restrictions for using the
CIPCA module on OpenVMS Alpha systems. For more information about the
CIPCA adapter, including permanent restrictions, refer to Appendix C in
Guidelines for OpenVMS Cluster Configurations.
4.14.2.9.1 HSJ50 Firmware Version Restriction for Use of 4K CI Packets
V7.2
Do not attempt to enable the use of 4K CI packets by the HSJ50 controller unless the HSJ50 firmware is Version 5.0J--3 or higher. If the HSJ50 firmware version is less than Version 5.0J--3 and 4K CI packets are enabled, data can become corrupted. If your HSJ50 firmware does not meet this requirement, contact your Compaq support representative.
For more information about the use of 4K CI packets by the HSJ50
controller, refer to OpenVMS Cluster Systems.
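As a sketch of how to verify the firmware level before enabling 4K packets, display the controller's version information at the HSJ50 command line interface (reached, for example, through a maintenance terminal or a DUP connection; the prompt shown is typical but may differ in your configuration):
HSJ> SHOW THIS_CONTROLLER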
4.14.2.9.2 Multiprocessor Systems with CIPCAs: CPUSPINWAIT Bugcheck Avoidance
V7.1--1H1
If your multiprocessor system uses a CIPCA adapter, you must reset the value of the SMP_SPINWAIT parameter to 300000 (3 seconds) instead of the default 100000 (1 second).
If you do not change the value of SMP_SPINWAIT, a CIPCA adapter error could generate a CPUSPINWAIT system bugcheck similar to the following:
**** OpenVMS (TM) Alpha Operating System V7.1-1H1 - BUGCHECK ****
** Code=0000078C: CPUSPINWAIT, CPU spinwait timer expired
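A sketch of making the SMP_SPINWAIT change with SYSGEN on the running system follows; the new value takes effect at the next boot, and you should also record it in SYS$SYSTEM:MODPARAMS.DAT so that a later AUTOGEN run does not undo it:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET SMP_SPINWAIT 300000
SYSGEN> WRITE CURRENT
SYSGEN> EXIT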
This restriction will be removed in a future OpenVMS release.
This release note is a revision of a release note that was published in OpenVMS Version 7.1 Release Notes (note 4.15.2.4.5). The SYSTEM_CHECK parameter restriction in that note is incorrect.
4.14.2.10 MEMORY CHANNEL Guidelines and Restrictions
The following sections contain guidelines and restrictions that apply to MEMORY CHANNEL. For detailed information about setting up the MEMORY CHANNEL hardware, see the MEMORY CHANNEL User's Guide (order number EK--PCIMC--UG.A01). You can copy this manual from the OpenVMS Version 7.2 CD-ROM using the following file name:
[DOCUMENTATION]HW_MEMORY_CHANNEL2_UG.PS
4.14.2.10.1 Rolling Upgrades from Version 6.2 with MEMORY CHANNEL Adapters
V7.2
OpenVMS Version 7.2 supports rolling upgrades in an OpenVMS Cluster system from Version 6.2, Version 6.2--1Hx, Version 7.1, and Version 7.1--1Hx to Version 7.2. This note applies only to rolling upgrades from Version 6.2 and Version 6.2--1Hx to Version 7.2.
If MEMORY CHANNEL adapters (CCMAA-xx) have been added to the cluster before upgrading OpenVMS to Version 7.2, an MC_FORCEDCRASH bugcheck occurs on the first system when the second and subsequent systems perform AUTOGEN and SHUTDOWN during their Version 7.2 installation. This problem is caused by conflicting system parameters.
To avoid this problem when upgrading, use one of the following procedures:
4.14.2.11 DECnet-Plus MOP Booting Restriction When Adding a Satellite
V7.1
Compaq recommends the use of the LANCP utility for all MOP booting requirements. If you choose to use DECnet-Plus MOP booting instead of LANCP, note the following restriction when adding a satellite: CLUSTER_CONFIG.COM uses the first circuit configured for MOP in the NET$MOP_CIRCUIT_STARTUP.NCL file.
To use a different circuit, you must edit NET$MOP_CIRCUIT_STARTUP.NCL
before invoking CLUSTER_CONFIG.COM. Place your desired circuit at the
beginning of the NET$MOP_CIRCUIT_STARTUP.NCL file, then invoke
CLUSTER_CONFIG.COM.
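As a sketch (the file location is assumed to be SYS$MANAGER:; verify where your system keeps this script), the sequence is simply:
$ EDIT SYS$MANAGER:NET$MOP_CIRCUIT_STARTUP.NCL
$ @SYS$MANAGER:CLUSTER_CONFIG
In the editing step, move the block of NCL commands for the circuit you want to the top of the file; CLUSTER_CONFIG.COM then uses that circuit when you add the satellite.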
4.14.2.12 System Startup in an OpenVMS Cluster Environment (Alpha Only)
V6.2
In an OpenVMS Cluster environment on Alpha systems, under some circumstances the system startup procedure may fail to write a new copy of the ALPHAVMSSYS.PAR file. If this occurs, the console output from the boot sequence reports the following messages:
%SYSGEN-E-CREPARFIL, unable to create parameter file
-RMS-E-FLK, file currently locked by another user
This error creates an operational problem only when changing system parameters using a conversational boot. For a normal, nonconversational boot, this error message is purely cosmetic because the parameter file has not changed. If a conversational boot is used, and system parameters are changed at boot time, these changed parameters will be correctly used for the current boot of the system. However, since the boot procedure does not successfully write a new copy of the parameter file, these changed parameters will not be used in subsequent boots.
To permanently change system parameters that have been changed by a conversational boot, run SYSGEN after the system has completed booting, and enter the following commands:
SYSGEN> USE ACTIVE
SYSGEN> WRITE CURRENT
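For completeness, a sketch of the full interactive sequence follows. Any parameter value you intend to keep should also be added to SYS$SYSTEM:MODPARAMS.DAT so that a subsequent AUTOGEN run does not undo the change:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> WRITE CURRENT
SYSGEN> EXIT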
4.14.3 Corrections
The following notes describe corrections to OpenVMS Cluster systems.
4.14.3.1 SCSI Device Naming and Quorum Disk Problem Corrected (Alpha Only)
V7.2
Prior to OpenVMS Version 7.2, an OpenVMS system that was booting and attempting to form a new OpenVMS Cluster could not do so if the following conditions existed:
Under these conditions, the booting system hung shortly after displaying the following message on the system console terminal:
%SYSINIT-I- waiting to form or join an OpenVMS Cluster
DIGITAL recommended that you not use SCSI port allocation classes if you needed to rely on a quorum disk, unless you could designate the system disk as the quorum disk.
Compaq has removed this restriction. A system booting under these
conditions now boots successfully.
4.14.3.2 SCSI Device-Naming Problem with PKA Device Corrected
V7.2
Port allocation classes, introduced in OpenVMS Version 7.1, provide a new method for naming SCSI devices attached to Alpha systems in an OpenVMS Cluster. A port allocation class is enabled on a port when its value is set to 0 or a positive integer.
After OpenVMS Version 7.1 was released, a potential problem pertaining to the SCSI port with the OpenVMS device name PKA was discovered. Failure to enable a port allocation class on PKA could cause some I/O operations to be issued to the wrong SCSI device, causing data corruption or loss.
Immediately after this problem was detected, a SCSI device-naming advisory was issued. Customers using port allocation classes for shared SCSI devices were advised to enable a port allocation class on the SCSI port with the OpenVMS device name PKA. This was required in all cases, regardless of whether PKA was connected to a shared (multihost) bus or a private bus. Customers were also advised to use port allocation classes only on systems that were members of an OpenVMS Cluster system.
Starting with OpenVMS Version 7.1--2, this problem has been eliminated. Provided that all systems in an OpenVMS Cluster system are running Version 7.1--2 or later, a port allocation class for the SCSI port with the OpenVMS device name of PKA is no longer required.
OpenVMS Cluster systems with mixed operating system versions must continue to follow the OpenVMS Version 7.1 SCSI device-naming advisory until all systems have been upgraded to OpenVMS Version 7.1--2 or later, or until systems with earlier versions have installed compatibility kits for this change. These compatibility kits are not yet available.
For more information about port allocation classes, see the
OpenVMS Cluster Systems manual.
4.14.3.3 SCSI Device-Naming Conflict That Prevented Satellite Booting Corrected
V7.2
Prior to OpenVMS Version 7.2, a satellite system that used SCSI port allocation classes for its directly attached SCSI ports could not successfully boot into an OpenVMS Cluster system under the following conditions:
For example, these conditions were met if a satellite system attempted to boot from an MSCP served system disk named $100$DKB100 and if its PKB port had a port allocation class of 20.
When these conditions existed, the satellite system displayed the following message on the console and failed to make any further progress:
%VMScluster-I-RETRY, Attempting to reconnect to a system disk server
This problem has been corrected in OpenVMS Version 7.2.
4.14.3.4 MSCP_CMD_TMO System Parameter Corrections
V7.2
In OpenVMS Version 7.1, the default value for the MSCP_CMD_TMO system parameter was inappropriately set at 600. This has been corrected. The default setting is now 0.
Also, when you executed the SYSGEN command SHOW MSCP_CMD_TMO in OpenVMS Version 7.1, the parameter unit was incorrectly listed as CNTLRTMOs (controller timeouts) instead of seconds. This error has also been corrected.
MSCP_CMD_TMO is the time, in seconds, that the OpenVMS MSCP server uses to detect MSCP command timeouts. The MSCP server must complete the command within a built-in time of approximately 40 seconds plus the value of the MSCP_CMD_TMO parameter. For more information about this parameter, refer to the OpenVMS System Management Utilities Reference Manual: M--Z.
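For example (illustrative values only), with the default setting of 0 the server allows approximately 40 seconds for a command to complete; with MSCP_CMD_TMO set to 20, it allows approximately 40 + 20 = 60 seconds.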