Document revision date: 31 July 2002
Previously, under some conditions, if a multipath member of a shadow set switched paths shortly before being dismounted and no I/O was issued immediately before the DISMOUNT command, the dismount failed and the following error message was displayed:
%DISM-W-CANNOTDMT
This problem has now been corrected.
4.22.14 Multipath Failover Failure (Infrequent) on HSZ70/HSZ80 Controllers---Corrected on HSZ70
Under heavy load, a host-initiated manual or automatic path switch from one controller to another may fail on an HSZ70 or HSZ80 controller. Testing has shown this to occur infrequently.
This problem has been corrected for the HSZ70 in the firmware revision HSOF V7.7 (and higher versions) and will be corrected for the HSZ80 in a future release. It does not occur on the HSG80 controller.
4.22.15 SCSI Medium Changers on Fibre Channel---Manual Path Switching Only
V7.3-1
Automatic path switching is not implemented in OpenVMS Alpha Version 7.3-1 for SCSI medium changers (tape robots) attached to Fibre Channel via a Fibre-to-SCSI tape bridge. Multiple paths can be configured for such devices, but the only way to switch from one path to another is to use manual path switching with the SET DEVICE/SWITCH command.
This restriction will be removed in a future release.
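A manual path switch of the kind described above can be performed with a command of the following form. The device and path names shown here are illustrative only; use SHOW DEVICE/FULL to list the paths actually configured for a device:

$ SHOW DEVICE/FULL $2$MGA3:        ! lists the current and alternate paths
$ SET DEVICE $2$MGA3: /SWITCH /PATH=PGA0.5000-1FE1-0001-0173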
4.22.16 Fibre Channel Multipath Tapes and Third-Party Products
V7.3-1
OpenVMS Alpha Version 7.3 introduced multipath support for SCSI tape devices that are accessed via Fibre Channel adapters.
Third-party products that rely on altering the Driver Dispatch Table (DDT) of the OpenVMS Alpha SCSI tape class driver (SYS$MKDRIVER.EXE) may require changes to work correctly with such multipath fibre channel tape devices.
Manufacturers of such software can contact Compaq at
vms_drivers@zko.dec.com for more information.
4.22.17 SCSI Multipath Incompatibility with Some Third-Party Products
V7.2
OpenVMS Alpha Version 7.2 introduced the SCSI multipath feature, which provides support for failover between the multiple paths that can exist between a system and a SCSI device.
This SCSI multipath feature may be incompatible with some third-party disk caching, disk shadowing, or similar products. Compaq advises you to avoid the use of such software on SCSI devices that are configured for multipath failover (for example, SCSI devices that are connected to HSZ70 and HSZ80 controllers in multibus mode) until this feature is supported by the manufacturer of the software.
Third-party products that rely on altering the Driver Dispatch Table (DDT) of the OpenVMS Alpha SCSI disk class driver (SYS$DKDRIVER.EXE) may require changes to work correctly with the SCSI multipath feature. Manufacturers of such software can contact Compaq at vms_drivers@zko.dec.com for more information.
For more information about OpenVMS Alpha SCSI multipath features, refer
to the Guidelines for OpenVMS Cluster Configurations.
4.22.18 Gigabit Ethernet Switch Restriction in an OpenVMS Cluster System
V7.3
Attempts to add a Gigabit Ethernet node to an OpenVMS Cluster system over a Gigabit Ethernet switch will fail if the switch does not support autonegotiation. The DEGPA enables autonegotiation by default, but not all Gigabit Ethernet switches support autonegotiation. For example, the current Gigabit Ethernet switch made by Cabletron does not.
Furthermore, the messages that are displayed may be misleading. If the node is being added using CLUSTER_CONFIG.COM and the option to install a local page and swap disk is selected, the problem may look like a disk-serving problem. The node running CLUSTER_CONFIG.COM displays the message "waiting for node-name to boot," while the booting node displays "waiting to tune system." The list of available disks is never displayed because of a missing network path. The network path is missing because of the autonegotiation mismatch between the DEGPA and the switch.
To avoid this problem, disable autonegotiation on the new node's DEGPA.
4.23 OpenVMS Management Station
V7.3-1
Because of a DECthreads problem, users of the OpenVMS Management Station on OpenVMS Version 7.3 or 7.3-1 must upgrade to Version 3.2. The upgrade is recommended for all users of the OpenVMS Management Station, and it is mandatory on Version 7.3 and 7.3-1.
The OpenVMS Alpha Version 7.3-1 installation includes OpenVMS
Management Station Version 3.2, which is also available on the web.
4.24 PPPD Utility---Line Disconnect Problem
V7.3-1
If you have upgraded from Version 7.3 to Version 7.3-1 and you use the PPPD utility to disconnect a PPP connection, the following message is displayed:
PPPD> DISCONNECT TTA0:
%PPPD-E-PPPCONNECTERR, error connecting to PPP device
%SYSTEM-W-NOSUCHDEV, no such device available
%PPPD-F-ABORT, fatal error encountered; operation terminated
This problem will be corrected in a future remedial kit.
4.25 OpenVMS Registry
The release notes in this section pertain to the OpenVMS Registry.
4.25.1 Registry Services in a Mixed-Version Cluster
V7.3
Removing the data transfer size restrictions on the OpenVMS NT Registry required a change in the communication protocol used by the Registry. The change means that components of the Registry (the $REGISTRY system service and the Registry server) in OpenVMS Version 7.2-2 or higher are incompatible with their counterparts in OpenVMS Version 7.2-1 and Version 7.2-1H1 or earlier.
If you plan to run a cluster with mixed versions of OpenVMS, and you plan to use the $REGISTRY service or a product that uses it (such as Advanced Server or COM for OpenVMS), then you are restricted to running these products either on the Version 7.2-2 or higher nodes only, or on the Version 7.2-1 or lower nodes only, but not both. If you are upgrading Version 7.2-1 nodes to Version 7.2-2 or higher, follow the procedure outlined in Section 1.9.
If you need to run Registry services on both Version 7.2-2 or higher
and Version 7.2-1 or lower nodes in the same cluster, please contact
your Compaq Services representative.
4.25.2 Registry Data Transfer Size Restriction Eased
Previous versions of OpenVMS restricted the size of a data transfer between the $REGISTRY system service and the OpenVMS Registry server. These transfer restrictions, in turn, limited the maximum size of a single block of data that can be stored in or retrieved from the Registry database, limited the depth of a REG$CP SEARCH command, and limited the number of Advanced Server domain groups of which a user can be a member. These restrictions were eased in OpenVMS Version 7.3 but have not been eliminated entirely.
Previously, the limits were approximately 8K bytes on transmit (service to server) and approximately 4K bytes on receive. The current limit depends on the setting of the system parameter MAXBUF, which ranges from 4K to 64K, with a default of 8K.
MAXBUF is the maximum allowable size for any single buffered I/O
packet. You should be aware that by changing MAXBUF you also affect
other areas of the system that perform buffered I/O.
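For example, you can examine and adjust MAXBUF, a dynamic parameter, with SYSGEN. The value shown here is illustrative only:

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW MAXBUF
SYSGEN> SET MAXBUF 16384
SYSGEN> WRITE ACTIVE

To preserve such a change across future AUTOGEN runs, also record it in SYS$SYSTEM:MODPARAMS.DAT.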
4.25.3 Registry Master Failover in a Mixed-Version Cluster
V7.3-1
When you run the Registry server on more than one node in a cluster, only one of the servers is the Registry server master. The others are standby servers, in case the node running the current master is shut down.
Normally, you can fail over the role of master from one node to another in any of the following ways:
Beginning with OpenVMS V7.3-1, the Registry server uses a faster and more efficient method to determine if its relative priority has changed in a cluster.
All the Registry servers running in a cluster must use the same method for determining their relative priorities.
If you are running Registry servers in a mixed-version cluster, therefore, you cannot use the priority method to cause the Registry server master to failover from one node to another.
The Registry server master still fails over from one node to another when you request the current master to exit, or when you shut down the node on which the master is running.
In a mixed-version cluster, it is recommended that the servers on the new nodes all have higher priority than the servers on the old nodes. This ensures that a server on an old node never becomes master as long as a server is running on a new node.
For further information on Registry server failover in a cluster, refer
to the OpenVMS Registry System Management section of the COM, Registry, and Events for OpenVMS Developer's Guide.
4.26 Mixed-Version Cluster Restrictions for Registry Server and COM
V7.3-1
This section contains notes about the Registry and COM in mixed-version
clusters.
4.26.1 OpenVMS Registry Server in a Mixed-Version Cluster
V7.2-2
This note has been updated for Version 7.3-1.
For OpenVMS Version 7.2-2 or higher, the Registry components (the $REGISTRY services and the Registry server) were modified to use enhanced interprocess communication software. Consequently, certain versions of the Registry are incompatible with other versions. Specifically, the pre-Version 7.2-2 Registry components are incompatible with the enhanced Version 7.2-2, 7.3, and 7.3-1 Registry components. If you have an OpenVMS Cluster system with mixed versions of OpenVMS, ensure that all cluster members running or using the Registry components are running compatible versions of OpenVMS. (Note that mixed-version support in clusters is for migration purposes only.) You can run the Registry components and the products that use it on the following version combinations only:
The Registry server is used by cluster members if they are running:
If you are running the Registry components in an OpenVMS Cluster system running OpenVMS Version 7.2-1 or Version 7.2-1H1, and you want to upgrade one or more of these members to OpenVMS Version 7.2-2, 7.3, or 7.3-1, follow the instructions in Section 1.9.
If you need to run Registry services in a cluster with one or more
nodes running OpenVMS Alpha Version 7.2-2, 7.3, or 7.3-1 and one or
more nodes running OpenVMS Alpha Version 7.2-1 or 7.2-1H1, please
contact your Compaq Services representative.
4.26.2 COM for OpenVMS Restrictions in Some Mixed-Version Clusters (Alpha Only)
V7.3-1
Because of changes to the OpenVMS Registry protocol, you cannot run COM for OpenVMS software on OpenVMS Alpha Version 7.2-2 or higher systems and Version 7.2-1 or lower systems in the same cluster.
In a mixed-version OpenVMS Cluster system, if the Registry Server is on a Version 7.3-1, 7.3, or 7.2-2 node, then COM applications can run only on those nodes in the cluster that are running Version 7.3-1, 7.3, or 7.2-2. If the Registry Server is on a Version 7.2-1 or lower node in the cluster, then COM applications can run only on those nodes in the cluster that are running Version 7.2-1 or lower.
For more information about the OpenVMS Registry protocol change, see Section 4.25.
For information about installing and configuring COM for OpenVMS, refer
to the COM, Registry, and Events for OpenVMS Developer's Guide.
4.27 RMS Journaling
The following release notes pertain to RMS Journaling for OpenVMS.
4.27.1 Modified Journal File Creation
Prior to Version 7.2, recovery unit (RU) journals were created temporarily in the [SYSJNL] directory on the same volume as the file that was being journaled. The file name for the recovery unit journal had the form RMS$process_id (where process_id is the hexadecimal representation of the process ID) and a file type of RMS$JOURNAL.
The following changes have been introduced to RU journal file creation in OpenVMS Version 7.2:
These changes reduce the directory overhead associated with journal file creation and deletion.
The following example shows both the previous and current versions of journal file creation:
Previous versions: [SYSJNL]RMS$214003BC.RMS$JOURNAL;1
Current version: [SYSJNL.NODE1]CB300412.;1
If RMS does not find either the [SYSJNL] directory or the node-specific
directory, RMS creates them automatically.
4.27.2 Recovery Unit Journaling Incompatible with Kernel Threads
V7.3
Because DECdtm Services is not supported in a multiple kernel threads
environment and RMS recovery unit journaling relies on DECdtm Services,
RMS recovery unit journaling is not supported in a process with
multiple kernel threads enabled.
4.27.3 After-Image (AI) Journaling
V6.0
You can use after-image (AI) journaling to recover a data file that becomes unusable or inaccessible. AI recovery uses the AI journal file to roll forward a backup copy of the data file to produce a new copy of the data file at the point of failure.
In the case of either a process deletion or a system failure, an update can be written to the AI journal file but fail to reach the data file. If only AI journaling is in use, the data file and the journal are not automatically made consistent. If additional updates are then made to the data file and recorded in the AI journal, a subsequent roll-forward operation could produce an inconsistent data file.
If you use Recovery Unit (RU) journaling with AI journaling, the automatic transaction recovery restores consistency between the AI journal and the data file.
Under some circumstances, an application that uses only AI journaling can take proactive measures to guard against data inconsistencies after process deletions or system failures. For example, a manual roll forward of AI-journaled files ensures consistency after a system failure involving either an unshared AI application (single accessor) or a shared AI application executing on a standalone system.
However, in a shared AI application, there may be nothing to prevent
further operations from being executed against a data file that is out
of synchronization with the AI journal file after a process deletion or
system failure in a cluster. Under these circumstances, consistency
among the data files and the AI journal file can be provided by using a
combination of AI and RU journaling.
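As a sketch of the manual roll-forward step described above, recovery after a failure restores the backup copy of the data file and then applies the AI journal to it. The file specifications below are illustrative only, and the exact recovery qualifiers are described in the RMS Journaling manual:

$ BACKUP SAFE$DISK:[BACKUPS]DATA.DAT; DISK$USER:[APP]DATA.DAT;
$ RECOVER/FORWARD DISK$USER:[APP]DATA.DAT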
4.27.4 Remote Access of Recovery Unit Journaled Files in an OSI Environment
V6.1
OSI nodes that host recovery unit journaled files that are to be
accessed remotely from other nodes in the network must define SYS$NODE
to be a Phase IV-style node name. The node name specified by SYS$NODE
must be known to any remote node attempting to access the recovery unit
journaled files on the host node. It must also be sufficiently unique
for the remote node to use this node name to establish a DECnet
connection to the host node. This restriction applies only to recovery
unit journaled files accessed across the network in an OSI or mixed OSI
and non-OSI environment.
4.27.5 VFC Format Sequential Files
You cannot update variable with fixed-length control (VFC) sequential files
when using before-image or recovery unit journaling. The VFC sequential
file format is indicated by the symbolic value FAB$C_VFC in the
FAB$B_RFM field of the FAB.
4.28 Security---Changes to DIRECTORY Command Output
V7.3-1
In OpenVMS Version 7.1 and higher, if you execute the DCL command DIRECTORY/SECURITY or DIRECTORY/FULL for files that contain PATHWORKS access control entries (ACEs), the hexadecimal representation for each PATHWORKS ACE is no longer displayed. Instead, the total number of PATHWORKS ACEs encountered for each file is summarized in this message: "Suppressed n PATHWORKS ACE."
To display the suppressed PATHWORKS ACEs, use the DCL DIRECTORY command
with the /NOSUPPRESS qualifier, along with either the /FULL, /SECURITY,
or /ACL qualifier.
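For example, the following command displays the full security profile of a file, including any PATHWORKS ACEs that would otherwise be suppressed (the file name is illustrative only):

$ DIRECTORY/SECURITY/NOSUPPRESS PAYROLL.DAT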
4.29 System Parameter Changes
V7.3-1
The following sections list obsolete, modified, and new system
parameters.
4.29.1 Obsolete System Parameters
V7.3
Starting with OpenVMS Version 7.3, the following system parameters are obsolete:
Initially, the MAXBOBS0S1 and MAXBOBS2 parameters were intended to ensure that users could not adversely affect the system by creating huge buffer objects. However, as users began to use buffer objects more widely, managing the combination of these parameters proved to be too complex.
Users who want to create buffer objects must either hold the VMS$BUFFER_OBJECT_USER identifier or execute in executive or kernel mode. Therefore, these users are considered privileged applications, and the additional safeguard that these parameters provided is unnecessary.
To determine current usage of system memory resources, enter the following command:
$ SHOW MEMORY/BUFFER_OBJECT
4.29.2 Modified System Parameters
V7.3-1
Definitions of the following system parameters have been modified in OpenVMS Version 7.3-1:
Refer to online help for changes in the definitions of these parameters.
4.29.3 New System Parameters
V7.3-1
The following list contains new system parameters in OpenVMS Version 7.3-1:
Refer to online help for the definitions of these new parameters.
4.30 TCP/IP Services for OpenVMS Version 5.3---Mandatory Update
When running TCP/IP Services for OpenVMS Version 5.3 (included with this release), some customers may encounter floating-point errors in application code when the NFS Client is accessing a file system served by the OpenVMS NFS Server.
To prevent this problem, install the TCP/IP Services for OpenVMS Version 5.3 Mandatory Update (MUP) kit, which is available from your Compaq support representative.
Note that customers who are not using the NFS Server should not
encounter this problem.
4.31 Terminal Fallback Facility (TFF)
On OpenVMS Alpha systems, the Terminal Fallback Facility (TFF) includes a fallback driver (SYS$FBDRIVER.EXE), a shareable image (TFFSHR.EXE), a terminal fallback utility (TFU.EXE), and a fallback table library (TFF$MASTER.DAT).
TFFSHR has been removed from IMAGELIB because it is not a documented, user-callable interface. The image is still available in the SYS$LIBRARY: directory.
To start TFF, invoke the TFF startup command procedure located in SYS$MANAGER, as follows:
$ @SYS$MANAGER:TFF$SYSTARTUP.COM
To enable fallback or to change fallback characteristics, invoke the Terminal Fallback Utility (TFU), as follows:
$ RUN SYS$SYSTEM:TFU
TFU>
To enable default fallback to the terminal, enter the following DCL command:
$ SET TERMINAL/FALLBACK
OpenVMS Alpha TFF differs from OpenVMS VAX TFF in the following ways:
Table Name | Base | Description
---|---|---
BIG5_HANYU | BIG5 | BIG5 for CNS 11643 (SICGCC) terminal/printer
HANYU_BIG5 | CNS | CNS 11643 (SICGCC) for BIG5 terminal/printer
HANYU_TELEX | CNS | CNS 11643 for MITAC TELEX-CODE terminal
HANGUL_DS | KS | KS for DOOSAN 200 terminal
RT terminals are not supported by TFF.
For more information about the Terminal Fallback Facility, refer to the OpenVMS Terminal Fallback Utility Manual. You can access it on line from the OpenVMS Documentation CD-ROM (in the archived manuals directory).