Document revision date: 15 July 2002

OpenVMS Alpha Version 7.3--1 New Features and Documentation Overview



4.12 Kerberos for OpenVMS

Starting with OpenVMS Alpha Version 7.3--1, Kerberos for OpenVMS ships as part of the OpenVMS operating system. Previously, Kerberos shipped as a layered product.

Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications by using secret-key cryptography.

For more information, refer to the Kerberos Version 1.0 Security Client for OpenVMS Release Notes.

4.13 New LANCP Qualifiers for SET DEVICE and SHOW DEVICES

Table 4-1 lists the new qualifiers for the SET DEVICE command.

Table 4-1 SET DEVICE Command Qualifiers
Qualifier Meaning
/AUTONEGOTIATE
/NOAUTONEGOTIATE
Enables or disables the use of auto-negotiation to determine the link settings. You need to disable link auto-negotiation only when the device is connected to a switch or device that does not support auto-negotiation. /AUTONEGOTIATE is the default.
/CONTENDER
/NOCONTENDER
Specifies that the Token Ring is to participate in the Monitor Contention process when it joins the ring. The /NOCONTENDER qualifier directs the device not to challenge the current ring server.
/DEVICE_SPECIFIC=FUNCTION Allows some device-specific parameters to be adjusted. For the device-specific function commands, see the OpenVMS System Management Utilities Reference Manual: A--L.
/JUMBO
/NOJUMBO
Enables the use of jumbo frames on a LAN device. Only the Gigabit Ethernet NICs support jumbo frames. /NOJUMBO is the default.
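As a brief illustration, the new qualifiers might be applied from the LANCP utility as follows (the device name EWA0 is a placeholder only; substitute the name of your LAN device):

  $ RUN SYS$SYSTEM:LANCP
  LANCP> SET DEVICE EWA0/NOAUTONEGOTIATE   ! switch does not support auto-negotiation
  LANCP> SET DEVICE EWA0/JUMBO             ! Gigabit Ethernet devices only
  LANCP> EXIT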

Table 4-2 lists the new qualifiers for the SHOW DEVICES command.

Table 4-2 SHOW DEVICES Command Qualifiers
Qualifier Meaning
/INTERNAL_COUNTERS Displays internal counters. By default, it does not display zero counters. To see all counters, including zero, use the additional qualifier /ZERO. To see debug counters, use the additional qualifier /DEBUG.
/TRACE Displays LAN driver trace data.
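For example, the internal counters for a device might be examined as follows (again, EWA0 is a placeholder device name):

  $ RUN SYS$SYSTEM:LANCP
  LANCP> SHOW DEVICE EWA0/INTERNAL_COUNTERS        ! nonzero counters only
  LANCP> SHOW DEVICE EWA0/INTERNAL_COUNTERS/ZERO   ! include zero counters
  LANCP> SHOW DEVICE EWA0/INTERNAL_COUNTERS/DEBUG  ! include debug counters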

For more information, see the OpenVMS System Management Utilities Reference Manual: A--L.

4.14 LIBDECOMP.COM Enhancements

Several enhancements have been made to the Library Decompression utility (LIBDECOMP.COM):

For detailed information about using LIBDECOMP.COM and its parameters, refer to the OpenVMS System Manager's Manual.

4.15 ODS-5 Volume Structure

ODS-5 is an optional volume structure that provides support for longer file names with a greater range of legal characters. The legal character set has been extended. Now you can use all the characters from the ISO Latin-1 Multinational character set, except the asterisk (*) and the question mark (?), in file names. Some of the characters from the ISO Latin-1 character set require an escape character to be interpreted properly. For more information, refer to the OpenVMS User's Manual.
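For example, on an ODS-5 volume the circumflex (^) serves as the escape character: a space is entered as ^_, a literal circumflex as ^^, and certain extended characters as a circumflex followed by their two-digit hexadecimal code. The file names below are invented for illustration:

  $ CREATE MY^_NOTES.TXT          ! creates the file "MY NOTES.TXT"
  $ DIRECTORY MY^_NOTES.TXT
  $ DIRECTORY RESUM^E9.TXT        ! ^E9 is the Latin-1 character e-acute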

In previous versions of OpenVMS, ODS-5 volumes could not be used as system disks, and it was recommended that ODS-5 disks be used in homogeneous Alpha clusters only. These restrictions have been removed. OpenVMS Version 7.3--1 supports the use of ODS-5 volumes as system disks and in heterogeneous Alpha clusters.

The ODS-5 volume structure supports many new file system features, including hard links and support for case sensitivity. For detailed information about the new file system features, see Section 4.10. For more information about the system management of ODS-5 disks, refer to the OpenVMS System Manager's Manual, Volume 1: Essentials.

4.16 OpenVMS Cluster New Features

This section briefly describes the new OpenVMS Cluster features introduced in this release.

4.16.1 OpenVMS Cluster Performance Improvements

Many modules that are used to send messages or transfer data in an OpenVMS Cluster system have been optimized for speed of execution on the Alpha architecture. Several major modules have been completely rewritten, routine by routine, for performance, correctness, and clarity.

The resulting images are significantly smaller, perform far fewer memory accesses, and contain much more efficient code.

The benefits of these changes are:

Background

The cluster communication subsystem consists of several layers: the system applications (SYSAPs), the System Communications Services (SCS) layer, and the port layer (the port drivers for the communication adapters).

The major system applications used by OpenVMS are the disk class driver (DUDRIVER), the MSCP server (MSCP), and the cluster connection manager (part of the SYS$CLUSTER image).

SYS$CLUSTER is a shared system application encompassing many subfacilities, notably:

These subfacilities have many subfunctions and services themselves. For example, the distributed lock manager has services for sending $ENQ and $DEQ messages, remastering resource trees, deadlock detection, failover recovery, and others.

To send a message:

  1. A system application invokes common interfaces to SCS routines that handle messages or data transfer.
  2. The SCS layer calls through a common interface to access port-specific routines for the adapters used to reach the remote system or controller.
  3. The Port layer calls the specific device driver or adapter interface.

Incoming messages reverse this ordering. Within the connection manager, an additional layer to provide sequenced message and block transfer services is available to all of its subfacilities.

Revised Components

The following subsystems have been totally or incrementally rewritten for speed of execution on Alpha, resulting in performance improvement for all system applications in the cluster:

The following subsections of SYS$CLUSTER have also been totally rewritten to optimize performance:

4.16.2 Fibre Channel Driver Optimizations

For OpenVMS Alpha Version 7.3--1, the Fibre Channel driver has been optimized to reduce I/O lock hold time by 3--6 microseconds per I/O, resulting in significant I/O performance improvements. This optimization, coupled with the coalescing of the I/O completion interrupts, has reduced I/O lock times by as much as 50%, potentially doubling Fibre Channel throughput.

Unlike the coalescing of the I/O completion interrupts, the Fibre Channel driver optimizations cannot be backported to earlier versions of OpenVMS Alpha.

For more information about Fibre Channel configurations, refer to Guidelines for OpenVMS Cluster Configurations.

4.16.3 I/O Interrupt Coalescing for Fibre Channel Configurations

OpenVMS Alpha Version 7.3--1 contains a major enhancement to the processing of I/O completion interrupts in the host bus adapter. Instead of being delivered one at a time, the I/O completion interrupts are grouped together and then delivered as a group. This enhancement improves I/O performance in environments with high I/O workloads. Initial tests show a 25% reduction in IOLOCK8 hold time, which translates directly into a 25% increase in I/O throughput performance.

This feature will be backported to OpenVMS Alpha Version 7.2--2 and OpenVMS Alpha Version 7.3.

For more information about Fibre Channel configurations, refer to Guidelines for OpenVMS Cluster Configurations.

4.16.4 MSCP Served Devices: Increase from 512 to 1000

OpenVMS Alpha Version 7.3--1 contains a major enhancement to MSCP serving. The previous limit of 512 disks that could be MSCP served by any system in an OpenVMS Cluster system has been raised to 1000. This allows greater flexibility in configuring OpenVMS Cluster storage. An increase beyond 1000 will be considered for a future release.

Note that this increase affects Alpha systems only. There is no plan to increase the number of disks served by VAX nodes. A VAX node can still be a client of as many disks as are presented in the cluster, within memory resource limits.

For more information about MSCP serving, refer to OpenVMS Cluster Systems and Guidelines for OpenVMS Cluster Configurations.

4.16.5 Multipath Failover to an MSCP-Served Path in SCSI and Fibre Channel Configurations

In a multipath SCSI or Fibre Channel configuration, OpenVMS supports failover from one path to a device to another path to that device. Establishing multiple paths to a device has the following advantages:

Prior versions of OpenVMS Alpha supported failover between direct paths.

OpenVMS Alpha Version 7.3--1 introduces support for failover between direct-attached and MSCP-served paths to disks in the following OpenVMS Cluster configurations:

If all direct paths to a device are broken, the I/O automatically fails over to an MSCP-served path. When the direct paths are restored, the I/O is automatically rerouted from the MSCP-served paths to the direct-attached paths.

Use the MPDEV_REMOTE system parameter to enable this capability. For more information about multipath support, including failover to an MSCP-served path, refer to Guidelines for OpenVMS Cluster Configurations.
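For example, the capability might be enabled by setting MPDEV_REMOTE to 1 with the SYSGEN utility; consult the referenced manual for the parameter's exact behavior and whether a reboot is required:

  $ RUN SYS$SYSTEM:SYSGEN
  SYSGEN> USE CURRENT
  SYSGEN> SET MPDEV_REMOTE 1
  SYSGEN> WRITE CURRENT
  SYSGEN> EXIT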

4.16.6 Automatic Multipath Balancing for Fibre Channel and SCSI Devices

A major enhancement to multipath support in OpenVMS Alpha Version 7.3--1 for Fibre Channel and SCSI disk and tape devices is automatic path balancing. Path selection for each device is now biased toward the connected path that the fewest devices are using as their current path.

In addition to the introduction of automatic, multipath path balancing, the path selection algorithm for multipath failover has been modified to improve performance. For more information, see Section 6.7.8, Path Selection by OpenVMS, in Guidelines for OpenVMS Cluster Configurations.

4.16.7 Multipath Tape Support, Including Failover, for Fibre Channel Configurations

Tapes are supported in Fibre Channel configurations as of OpenVMS Alpha Version 7.3. You can attach SCSI tape devices to the Fibre Channel via a Fibre-to-SCSI bridge known as the Modular Data Router (MDR). For example, an Alpha system with four KGPSA adapters has four distinct paths to a tape drive on the Fibre Channel. Because the MDR is dual ported, allowing two paths into it, such a system leading to a dual-ported MDR actually has eight different paths to a tape drive.

OpenVMS Alpha Version 7.3 did not take advantage of multiple paths. It used only the first path detected during autoconfiguration. The remaining paths were never recognized or made available, even if the first path broke. This single-path model had two limitations:

The first limitation has a workaround that uses the selective storage presentation (SSP) feature of the MDR utility; the second limitation has no workaround at all.

OpenVMS Alpha Version 7.3--1 removes both limitations. All possible paths from an Alpha system to a Fibre Channel tape are configured and made available. You can specify a particular path with the DCL command SET DEVICE/SWITCH. In the event of a broken connection, automatic failover takes place.
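For example, the available paths to a Fibre Channel tape might be listed and one selected explicitly as follows (the device name and path identifier shown are invented placeholders):

  $ SHOW DEVICE/FULL $2$MGA0:      ! lists the paths to the tape device
  $ SET DEVICE $2$MGA0: /SWITCH /PATH=PGB0.5000-1FE1-0001-0A21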

Note

Multipath failover is supported between tape devices directly connected to the Fibre Channel that are members of a multipath set. If one member of the set fails, another member provides the tape device to the client.

However, multipath failover between direct and MSCP-served paths is not supported for tape devices.

For more information about tape support in Fibre Channel configurations, refer to Guidelines for OpenVMS Cluster Configurations.

