Document revision date: 9 May 2001
V7.1
Compaq recommends using external authentication on OpenVMS Cluster systems only if all systems are running OpenVMS Version 7.1 or later.
LOGINOUT on earlier-version systems continues to enforce the normal OpenVMS
password policy (password expiration, password history, and so on) on
all users, including externally authenticated users.
5.3.10 LGI Callout Services Disable External Authentication
V7.1
Starting with Version 7.1, the presence of LOGINOUT (LGI) callouts
disables external authentication.
5.3.11 No Password Expiration Notification on Workstations
V7.1
In the LAN Manager domain, a user cannot log in once a password expires.
Users on personal computers (PCs) receive notification of impending
external user password expiration and can change passwords before they
expire. However, when a user logs in from an OpenVMS workstation using
external authentication, the login process cannot determine if the
external password is about to expire. Therefore, sites that enforce
password expiration, and whose user population does not primarily use
PCs, may elect not to use external authentication for workstation users.
5.4 FDL Utility---Fixing EDIT/FDL Recommended Bucket Size When Disk Cluster Size Is Large
V7.3
Prior to OpenVMS V7.3, when running EDIT/FDL, the calculated bucket sizes were always rounded up to the closest disk-cluster boundary, with a maximum bucket size of 63. This could cause problems when the disk-cluster size was large, but the "natural" bucket size for the file was small, because the bucket size was rounded up to a much larger value than required. Larger bucket sizes increase record and bucket lock contention, and can seriously impact performance.
OpenVMS V7.3 modifies the algorithms for calculating the recommended
bucket size to suggest a more reasonable size when the disk cluster is
large.
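The pre-V7.3 behavior described above can be sketched as follows. The rounding rule and the 63-block maximum come from the note itself; the function is illustrative, not the actual EDIT/FDL algorithm.

```python
import math

MAX_BUCKET_SIZE = 63  # RMS bucket-size limit, in blocks

def old_recommended_bucket(natural_size, cluster_size):
    """Pre-V7.3 behavior (as described above): round the natural
    bucket size up to the next disk-cluster boundary, capped at 63."""
    rounded = math.ceil(natural_size / cluster_size) * cluster_size
    return min(rounded, MAX_BUCKET_SIZE)

# A small "natural" bucket size on a large-cluster disk balloons:
print(old_recommended_bucket(3, 32))   # -> 32: rounded up to the cluster boundary
print(old_recommended_bucket(3, 128))  # -> 63: capped at the maximum bucket size
```

This shows how a file whose natural bucket size is 3 blocks ends up with a 32- or 63-block bucket on a large-cluster disk, which is the lock-contention problem the V7.3 change addresses.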
5.5 OpenVMS Galaxy Version 7.3
This section contains OpenVMS Galaxy release notes for OpenVMS Version
7.3 and notes from OpenVMS Versions 7.2-1H1, 7.2-1, and 7.2 that apply
to this release.
5.5.1 Using Fibre Channel in OpenVMS Galaxy Configurations
Fibre Channel support for OpenVMS Galaxy configurations is included in OpenVMS Alpha Version 7.3 and OpenVMS Alpha Version 7.2-1H1. For OpenVMS Alpha Version 7.2-1, Fibre Channel support for OpenVMS Galaxy configurations is available in Fibre Channel remedial kits, starting with V721_FIBRECHAN-V0200. For the most current information about OpenVMS Fibre Channel configurations, go to:
5.5.2 Compaq Analyze Restriction
The release of the Compaq Analyze service tool that supports the new Compaq AlphaServer GS Series systems includes a Director process that sets hard affinity to a CPU. A CPU with processes hard affinitized to it cannot be reassigned from one Galaxy instance to another.
This is a temporary restriction.
For more information about Compaq Analyze and its operation, contact
your Compaq support representative.
5.5.3 Compatibility of Galaxy Computing Environment and Non-Galaxy Cluster Members
OpenVMS Version 7.2 introduced new security classes that are used in an OpenVMS Galaxy computing environment. The new security classes are not valid on non-Galaxy systems. If your OpenVMS Galaxy is configured in an existing OpenVMS Cluster, you must ensure that all the nodes in the cluster recognize the new security classes as described in this release note.
This situation applies if all of the following conditions are met:
OpenVMS VAX and Alpha systems running OpenVMS Version 6.2 or Version 7.1 will crash if they encounter an unknown security class in the VMS$OBJECTS.DAT file.
To allow VAX and Alpha systems running older versions of OpenVMS to cooperate with Version 7.2 Galaxy instances in the same OpenVMS Cluster environment, a SECURITY.EXE image is provided for each of these versions. The appropriate remedial kit from the following list must be installed on all system disks used by these systems. (Later versions of these remedial kits may be used if available.)
Alpha V7.1 and V7.1-1xx | ALPSYS20_071 |
Alpha V6.2 and V6.2-1xx | ALPSYSB03_062 |
VAX V7.1 | VAXSYSB02_071 |
VAX V6.2 | VAXSYSB03_062 |
Before you create any galaxywide global sections, you must reboot all
cluster members sharing one of the updated system disks.
5.5.4 AlphaServer GS60/GS60E/GS140 Multiple I/O Port Module Configuration Restriction
AlphaServer GS60/GS60E/GS140 configurations with more than a single I/O Port Module, KFTHA-AA or KFTIA-AA, might experience system crashes.
When upgrading OpenVMS Galaxy and non-Galaxy AlphaServer 8200/8400 configurations with multiple I/O Port Modules to GS60/GS60E/GS140 systems, customers must install one minimum revision B02 KN7CG-AB EV6 CPU (E2063-DA/DB rev D01) module as described in Compaq Action Blitz # TD 2632.
For complete details about this restriction and its solution, refer to
Compaq Action Blitz # TD 2632.
5.5.5 MOP Booting Restrictions
In an OpenVMS Galaxy computing environment, MOP (Maintenance Operations
Protocol) booting is supported only on Instance 0. This restriction
will be removed in a future release.
5.5.6 Restriction on KFMSB and CIXCD Adapters in Galaxy Configurations
Permanent Restriction
Due to firmware addressing limitations on driver-adapter control data
structures, KFMSB and CIXCD adapters can only be used on hardware
partitions based at physical address (PA) = 0. In OpenVMS Galaxy
configurations, this restricts their use to Instance 0.
5.6 LAN ATM (Alpha Only)
This section contains a release note pertaining to the local area
network (LAN) asynchronous transfer mode (ATM) software.
5.6.1 Requirements/Restrictions Using DAPBA/DAPCA Adapters for LAN Emulation over ATM (Alpha Only)
The DAPBA (155 Mb/s) and the DAPCA (622 Mb/s) are ATM adapters for PCI-bus systems that are supported by SYS$HWDRIVER4.EXE.
Both adapters require a great deal of nonpaged pool; therefore, take care when configuring them. For each DAPBA, Compaq recommends increasing the SYSGEN parameter NPAGEVIR by 3000000; for each DAPCA, by 6000000. To do this, add the ADD_NPAGEVIR parameter to MODPARAMS.DAT and then run AUTOGEN. For example, on a system with two DAPBAs and one DAPCA, add the following line to MODPARAMS.DAT:
ADD_NPAGEVIR = 12000000
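The arithmetic behind that MODPARAMS.DAT value can be checked with a small sketch. Only the per-adapter increments come from the text; the helper function itself is hypothetical.

```python
# Recommended NPAGEVIR increments, per adapter (from the note above).
DAPBA_NPAGEVIR = 3_000_000   # per 155 Mb/s DAPBA
DAPCA_NPAGEVIR = 6_000_000   # per 622 Mb/s DAPCA

def add_npagevir(n_dapba, n_dapca):
    """Total ADD_NPAGEVIR value for a given mix of ATM adapters."""
    return n_dapba * DAPBA_NPAGEVIR + n_dapca * DAPCA_NPAGEVIR

# Two DAPBAs and one DAPCA, as in the MODPARAMS.DAT example:
print(add_npagevir(2, 1))  # -> 12000000
```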
The following restrictions apply to the DAPBA and DAPCA adapters:
5.7 Lock Manager
This section contains notes pertaining to the lock manager.
5.7.1 Lock Manager System Parameter Renamed (Alpha Only)
V7.3
The OpenVMS Performance Management manual incorrectly refers to the LOCKMGR_CPU system parameter
in its discussion of the dedicated CPU lock manager. The
system parameter name has been changed to LCKMGR_CPUID.
5.7.2 Instituting the Dedicated CPU Lock Manager Functionality (Alpha Only)
V7.3
With OpenVMS Version 7.3, Compaq introduces an alternative locking mode that allows a CPU to be dedicated to the lock manager. The dedicated CPU lock manager can perform better than the traditional lock manager under heavy locking loads. The performance gains are a result of reducing SMP contention and obtaining the benefits of improved CPU cache utilization on the CPU dedicated to the lock manager.
The dedicated CPU lock manager benefits only systems
with a large number of CPUs and heavy SMP contention caused by the lock
manager. By default, no CPU is dedicated to the lock manager.
See the OpenVMS Version 7.3 New Features and Documentation Overview for information and details about enabling the
dedicated CPU lock manager.
5.7.3 Fast Lock Remastering and PE1 (Alpha Only)
V7.3
The OpenVMS Distributed Lock Manager has a feature called lock remastering. A lock remaster is the process of moving the lock mastership of a resource tree to another node in the cluster. The node that masters a lock tree can process local locking requests much faster because communication is not required with another node in the cluster. Having a lock tree reside on the node doing the most locking operations can improve overall system performance.
Prior to OpenVMS Version 7.3, lock remastering resulted in all nodes sending one message per local lock to the new master. For a very large lock tree, it could require a substantial amount of time to perform the lock remastering operation. During the operation, all application locking to the lock tree is stalled.
Starting with OpenVMS Version 7.3, sending lock data to the new master is done with very large transfers. This is a much more efficient process and results in moving a lock tree from 3 to 20 times faster.
Only nodes running Version 7.3 or later can use large transfers for lock remastering. Remastering between OpenVMS Version 7.3 nodes and prior version nodes still requires sending a single message per lock.
If you currently use the PE1 system parameter to limit the size of lock
trees that can be remastered, Compaq recommends that you either try
increasing the value to allow large lock trees to move or try setting
the value to zero (0) to allow any size lock tree to move.
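As a rough illustration of why the V7.3 change matters, the following sketch compares per-lock messaging with batched transfers. The batching factor is a made-up parameter for illustration, not a documented OpenVMS value.

```python
import math

def remaster_messages_old(num_locks):
    """Pre-V7.3 protocol: one message per local lock sent to the new master."""
    return num_locks

def remaster_messages_new(num_locks, locks_per_transfer):
    """V7.3 protocol (illustrative): lock data batched into large transfers.
    locks_per_transfer is a hypothetical batching factor."""
    return math.ceil(num_locks / locks_per_transfer)

locks = 100_000
print(remaster_messages_old(locks))       # -> 100000 messages
print(remaster_messages_new(locks, 100))  # -> 1000 transfers
```

With any batching factor well above 1, the number of messages drops by orders of magnitude, which is why large lock trees that were previously too expensive to remaster (and hence limited via PE1) can now be allowed to move.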
5.7.4 Lock Manager and Nonpaged Pool (Alpha Only)
V7.2
To improve application scalability on OpenVMS Alpha systems, most of the lock manager data structures have been moved from nonpaged pool to S2 space. On many systems, the lock manager data structures accounted for a large percentage of nonpaged pool usage.
Because of this change to nonpaged pool, Compaq recommends the following steps:
The SHOW MEMORY documentation in the OpenVMS DCL Dictionary: N--Z describes the memory
associated with the lock manager.
5.8 OPCOM
This section contains release notes pertaining to the Operator
Communication Manager (OPCOM).
5.8.1 OPCOM Messages Changed (Alpha Only)
V7.2
In OpenVMS Alpha Version 7.2 and later, OPCOM messages from the job controller and the queue manager now display SYSTEM as the user process. For example:
%%%%%%%%%%%  OPCOM  16-NOV-2000 15:07:49.33  %%%%%%%%%%%
Message from user SYSTEM on NODEX
%JBC-E-FAILCREPRC, job controller could not create a process

%%%%%%%%%%%  OPCOM  16-NOV-2000 15:07:49.34  %%%%%%%%%%%
(from node BENN at 16-NOV-2000 15:07:49.34)
Message from user SYSTEM on NODEX
-QMAN-I-QUEAUTOOFF, queue NODEX$BATCH is now autostart inactive
The examples in the OpenVMS System Manager's Manual do not currently reflect this change.
5.8.2 Handling of Invalid Operator Classes
V7.3
Previously, if the OPC$OPA0_CLASSES or OPC$LOGFILE_CLASSES logical names contained an invalid class, OPCOM signaled the error and ran down the process.
This problem has been corrected in OpenVMS Version 7.3.
The following two messages have been added to OPCOM:
%%%%%%%%%%%  OPCOM  18-MAY-2000 13:28:33.12  %%%%%%%%%%%
"BADCLASS" is not a valid class name in OPC$LOGFILE_CLASSES

%%%%%%%%%%%  OPCOM  18-MAY-2000 13:28:33.12  %%%%%%%%%%%
"BADCLASS" is not a valid class name in OPC$OPA0_CLASSES
If an invalid class name is specified in either of the logicals, the appropriate error message is displayed. These messages are displayed on the console at system startup and logged to the OPERATOR.LOG.
The list of all operator classes is:
CARDS
CENTRAL
CLUSTER
DEVICES
DISKS
LICENSE
NETWORK
OPER1 through OPER12
PRINTER
SECURITY
TAPES
When you specify an invalid class, all classes are enabled. This change
causes the error messages listed to reach as many operators as possible.
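A sketch of the kind of validation described above might look like the following. The class list mirrors the one given in this note (including the OPER1 through OPER12 expansion), but the function is illustrative; OPCOM's actual parsing logic is not public.

```python
# Operator classes documented above; OPER1..OPER12 expanded explicitly.
VALID_CLASSES = {
    "CARDS", "CENTRAL", "CLUSTER", "DEVICES", "DISKS", "LICENSE",
    "NETWORK", "PRINTER", "SECURITY", "TAPES",
} | {f"OPER{i}" for i in range(1, 13)}

def invalid_classes(logical_value):
    """Return the class names in a comma-separated logical value
    (e.g. the contents of OPC$LOGFILE_CLASSES) that are not valid."""
    names = [n.strip().upper() for n in logical_value.split(",") if n.strip()]
    return [n for n in names if n not in VALID_CLASSES]

print(invalid_classes("CENTRAL,BADCLASS,OPER3"))  # -> ['BADCLASS']
```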
5.8.3 Handling OPC$ALLOW_INBOUND and OPC$ALLOW_OUTBOUND
V7.3
The algorithm formerly used by OPCOM when OPC$ALLOW_INBOUND and OPC$ALLOW_OUTBOUND were set to FALSE was found to be too restrictive. When set to FALSE, these logical names prevent messages from flowing into or out of the OPCOM process.
When these logicals were used together in an OpenVMS Cluster, it was possible for OPCOM processes on different systems in the cluster to stop communicating. As a result, OPERATOR.LOG files would fill up with messages similar to the following:
%%%%%%%%%%%  OPCOM  29-APR-2000 11:33:31.73  %%%%%%%%%%%
OPCOM on AAAAA is trying again to talk to BBBBB, csid 00010001, system 00001
To correct this problem, the algorithm has been relaxed to allow OPCOM processes in an OpenVMS Cluster to pass communication messages back and forth between one another.
Compaq still recommends caution in the use of these logical names,
which should be used only by individuals who truly understand the
impact to the entire system if OPCOM messages are disabled in one or
both directions.
5.8.4 Workstations in OpenVMS Clusters
V7.3
By default, OPCOM does not enable OPA0: on workstations in clusters, nor does it enable the log file, OPERATOR.LOG, on these systems. The only exception is the first system to join the cluster.
OPCOM determines whether a system is a workstation by testing if it has a graphics device. This test is specifically:
F$DEVICE ("*", "WORKSTATION", "DECW_OUTPUT")
OPCOM treats any system shipped with a graphics device as a workstation. As a result, OPA0: and OPERATOR.LOG are not enabled by default on such systems.
To override the default behavior, define the following logical names in SYS$MANAGER:SYLOGICALS.COM to be TRUE:
5.9 OpenVMS Cluster Systems
The release notes in this section pertain to OpenVMS Cluster systems.
5.9.1 New Error Message About Packet Loss
Prior to OpenVMS Version 7.3, an SCS virtual circuit closure was the first indication that a LAN path had become unusable. In OpenVMS Version 7.3, whenever the last usable LAN path is losing packets at an excessive rate, PEDRIVER displays the following console message:
%PEA0, Excessive packet losses on LAN Path from local-device-name -
_ to device-name on REMOTE NODE node-name
This message is displayed when PEDRIVER had to recently perform an excessively high rate of packet retransmissions on the LAN path consisting of the local device, the intervening network, and the device on the remote node. The message indicates that the LAN path has degraded and is approaching, or has reached, the point where reliable communications with the remote node are no longer possible. It is likely that the virtual circuit to the remote node will close if the losses continue. Furthermore, continued operation with high LAN packet losses can result in significant loss in performance because of the communication delays resulting from the packet loss detection timeouts and packet retransmission.
Take the following corrective steps:
$ SHOW DEVICE local-device-name
$ MC SCACP
SCACP> SHOW LAN device-name
$ MC LANCP
LANCP> SHOW DEVICE device-name/COUNT
For additional PEDRIVER troubleshooting information, see Appendix F of
the OpenVMS Cluster Systems manual.
5.9.2 Class Scheduler in a Mixed Version Cluster
V7.3
When using the new permanent Class Scheduler in a mixed-version cluster environment with nodes running OpenVMS Alpha Version 7.2x, the SMISERVER process on these nodes aborts when you issue any SYSMAN CLASS_SCHEDULE subcommand that involves those nodes.
If this happens, you can quickly restart the SMISERVER process on those nodes with the following command:
@SYS$SYSTEM:STARTUP SMISERVER
A remedial kit will be available from the following web site to correct this problem:
This problem exists only on Alpha platforms running OpenVMS Alpha
Version 7.2x.
5.9.3 Remedial Kits Required for Extended File Cache (XFC) Used in Mixed Version OpenVMS Cluster Systems
V7.3
The Extended File Cache (XFC), introduced in this version of the OpenVMS Alpha operating system, improves I/O performance and gives you control over the choice of cache and cache parameters.
If you have an OpenVMS Cluster system that contains earlier versions of OpenVMS Alpha or OpenVMS VAX and you want to use XFC with OpenVMS Version 7.3, you must install remedial kits on the systems that are running the earlier versions of OpenVMS. See Section 5.9.5 for information on the required kits.
These remedial kits correct errors in the cache locking protocol and allow older versions of the caches to operate safely with the new XFC. Without the remedial kit functionality, the system or processes could hang.
5.9.4 SANworks Data Replication Manager Support
OpenVMS supports SANworks Data Replication Manager (DRM), except when using the DEC-AXPVMS-VMS721_FIBRECHAN-V0300-4.PCSI kit. An incompatibility exists between DRM and this kit that causes hard hangs. This problem has been addressed in two new remedial kits, one for OpenVMS Alpha Version 7.2-1 and one for OpenVMS Alpha Version 7.2-1H1. For kit names, see Section 5.9.5.
Note that the kit name format has changed. The SCSI and Fibre Channel remedial kits have been consolidated into one kit; the new name format reflects this consolidation.
This remedial kit is not required for V7.3 because the relevant fix is
included with the operating system.
5.9.5 Remedial Kits Needed for Cluster Compatibility
Before you introduce an OpenVMS Version 7.3 system into an existing OpenVMS Cluster system, you must apply certain remedial kits to your systems running earlier versions of OpenVMS. If you are using Fibre Channel, XFC, or Volume Shadowing, additional remedial kits are required. Note that these kits are version-specific.
Table 5-1 indicates the facilities that require remedial kits and the file names of the remedial kit files.
You can either download the remedial kits from the following web site, or contact your Compaq support representative to receive the remedial kits on
Remedial kits are periodically updated on an as-needed basis. Always use the most recent remedial kit for the facility, as indicated by the version number in the kit's ReadMe file. The most recent version of each kit is the version posted to the web site.
Table 5-1 Remedial Kits Needed for Cluster Compatibility

Facility | File Name
---|---
OpenVMS Alpha Version 7.2-1H1 |
All facilities except kits named below | DEC-AXPVMS-VMS721H1_UPDATE-V0100--4.PCSI
Fibre Channel | DEC-AXPVMS-VMS721H1_UPDATE-FIBRE_SCSI-V0100--4.PCSI
Volume Shadowing | DEC-AXPVMS-VMS721H1_SHADOWING-V0100--4.PCSI (provides Fibre Channel disaster-tolerant support)
VCC | DEC-AXPVMS-VMS721H1_SYS-V0100--4.PCSI
OpenVMS Alpha Version 7.2-1 |
All facilities except kits named below | DEC-AXPVMS-VMS721_UPDATE-V100-4.PCSI
Fibre Channel | DEC-AXPVMS-VMS721_UPDATE-FIBRE_SCSI-V0100--4.PCSI
Volume Shadowing | DEC-AXPVMS-VMS721_SHADOWING-V0300--4.PCSI (provides Fibre Channel disaster-tolerant support)
VCC | DEC-AXPVMS-VMS721_SYS-V0800--4.PCSI
OpenVMS Versions 7.1 and 7.1-2 |
OpenVMS Cluster | DEC-AXPVMS-VMS712_PORTS-V0100--4.PCSI (Alpha 7.1-2)
Fibre Channel | ALPDRIV11_071 (Alpha 7.1); VAXDRIV05_071 (VAX 7.1); DEC-AXPVMS-VMS712_DRIVER-V0200--4.PCSI (Alpha 7.1-2)
Monitor | ALPMONT02_071 (Alpha 7.1); VAXMONT02_071 (VAX 7.1)
Mount | ALPMOUN07_071 (Alpha 7.1); VAXMOUN05_071 (VAX 7.1); DEC-AXPVMS-VMS712_MOUNT96-V0100--4.PCSI (Alpha 7.1-2)
Volume Shadowing | ALPSHAD07_071 (Alpha 7.1); VAXSHAD06_071 (VAX 7.1)
VCC | DEC-AXPVMS-VMS712_SYS-V0100--4.PCSI (Alpha 7.1-2)