
OpenVMS Version 7.2
New Features Manual



3.13 LANCP Define Device and Set Device for Classical IP: New Qualifier (Alpha Only)

OpenVMS Version 7.2 provides two new qualifiers, /PVC and /NOPVC, for the LANCP Define Device and Set Device commands. Table 3-3 describes these qualifiers.

Table 3-3 Define Device and Set Device Qualifiers

Qualifier: /PVC=([vc-id],...)
Description: On Alpha systems, defines or sets the permanent virtual channel (PVC). This is an optional qualifier. An example of enabling PVCs is:
/PVC=(200,105)

Qualifier: /NOPVC=([vc-id],...)
Description: On Alpha systems, does not define or set the permanent virtual channel (PVC). This is an optional qualifier.
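
The following is a minimal sketch of using the new qualifier interactively in LANCP; the ATM device name HWA0 is assumed for illustration only:

$ MCR LANCP
LANCP> SET DEVICE HWA0/PVC=(200,105)
LANCP> DEFINE DEVICE HWA0/PVC=(200,105)
LANCP> EXIT

The Set Device command changes the running (volatile) configuration, while the Define Device command records the setting in the LAN permanent device database so that it persists across reboots.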

3.14 LANCP Classical IP (CLIP) Qualifier: New and Changed Keyword (Alpha Only)

OpenVMS Version 7.2 includes a new keyword and a changed keyword for the /CLIP qualifier of the LANCP Define Device and Set Device commands. The description of the type=server keyword has changed, and the new type=(server,client) keyword has been added. Both are shown in Table 3-4.

Table 3-4 Keyword for /CLIP Qualifier for Define Device and Set Device
Qualifier Description
/CLIP The keyword and subkeyword syntax for /CLIP has the following meanings:
  • type=server

    Starts up a CLIP server. Only one server for each LIS is allowed, and the server needs to be started first.

  • type=(server,client)

    Starts up a CLIP server and client.
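
The following sketch shows how these keywords might be used; the ATM device names HWA0 and HWB0 are assumed for illustration, and any other /CLIP keywords required at your site are omitted:

$ MCR LANCP
LANCP> SET DEVICE HWA0/CLIP=(TYPE=SERVER)
LANCP> SET DEVICE HWB0/CLIP=(TYPE=(SERVER,CLIENT))
LANCP> EXIT

The first command starts a CLIP server only; the second starts both a server and a client. Because only one server is allowed for each LIS, start the server before any clients that use it.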

3.15 Monitor Utility: TCP/IP Support Added

The Monitor utility has been enhanced to use either TCP/IP (if available) or DECnet as its transport. MONITOR tries to access TCP/IP first; if TCP/IP is not available, MONITOR uses DECnet.

To take advantage of this enhancement, you must uncomment the following line in SYS$STARTUP:SYSTARTUP_VMS.COM:


$ @SYS$STARTUP:VPM$STARTUP.COM 

See SYS$STARTUP:SYSTARTUP_VMS.TEMPLATE for examples.
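
For example, once VPM$STARTUP.COM has been run on the participating nodes, commands such as the following use TCP/IP when it is available and otherwise fall back to DECnet; the node names shown are hypothetical:

$ MONITOR CLUSTER
$ MONITOR MODES/NODE=(NODE21,NODE22)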

3.16 OpenVMS Cluster Systems

The following sections describe the new OpenVMS Cluster features.

3.16.1 New CIPCA Adapter Support

The CIPCA adapter is a PCI-to-CI storage host bus adapter. CIPCA enables users to connect their existing CI-based storage to high-performance PCI-based AlphaServer systems.

Since the release of OpenVMS Version 7.1, the CIPCA-BA adapter has been qualified on that version. The CIPCA-BA adapter requires two PCI slots; the earlier CIPCA-AA adapter requires one EISA slot and one PCI slot. The CIPCA-BA is intended for newer servers, which have a limited number of EISA slots.

The CIPCA adapter enables users to include the latest Alpha-based servers in an existing OpenVMS Cluster system, taking advantage of servers that offer the best price/performance while maintaining previous investments in storage subsystems. The CIPCA adapter and the HSJ50 storage controller provide PCI-to-CI connectivity and increase the performance of the CI with support for 4K packets.

The maximum number of CIPCA adapters that can be configured in a system depends on the system, the number of available EISA and PCI slots, the combination of CIPCA models selected, and other system options. For more information about CIPCA, see Guidelines for OpenVMS Cluster Configurations.

3.16.2 Clusterwide Logical Names

Clusterwide logical names are an extension to the existing logical name support in OpenVMS. They are available on both OpenVMS Alpha and OpenVMS VAX.

This section provides information about clusterwide logical names for system managers. For programming aspects of clusterwide logical names, see Section 4.2.

3.16.2.1 Overview

Clusterwide logical names extend the convenience and ease-of-use features of shareable logical names to OpenVMS Cluster systems. Existing applications can take advantage of clusterwide logical names without any changes to the application code. Only a minor modification to the logical name tables referenced by the application (directly or indirectly) is required.

New logical names created on OpenVMS Version 7.2 are local by default. Clusterwide is an attribute of a logical name table. In order for a new logical name to be clusterwide, it must be created in a clusterwide logical name table.
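
As a sketch of the system management side, the following DCL commands define a clusterwide logical name in the existing clusterwide system table and then create a new clusterwide table; the names APP$DATA, APP$TABLE, APP$WORK, and the devices and directories are illustrative only:

$ DEFINE/TABLE=LNM$SYSCLUSTER_TABLE APP$DATA DKA100:[APPDATA]
$ CREATE/NAME_TABLE/PARENT_TABLE=LNM$CLUSTER_TABLE APP$TABLE
$ DEFINE/TABLE=APP$TABLE APP$WORK DKA200:[WORK]

Names defined in LNM$SYSCLUSTER_TABLE, or in any table created with LNM$CLUSTER_TABLE as its parent, become visible on every node of the cluster.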

3.16.2.2 Features

Clusterwide logical names provide a number of important features.

For more information about clusterwide logical names, refer to the OpenVMS Cluster Systems manual.

3.16.3 Gigabit Ethernet as a Cluster Interconnect

Note

OpenVMS Cluster support for Gigabit Ethernet will be available shortly after the release of OpenVMS Version 7.2. This documentation is provided to help you plan for the introduction of Gigabit Ethernet in your OpenVMS Cluster configurations.

OpenVMS Alpha Version 7.2 supports Gigabit Ethernet as a cluster interconnect. The nodes in a Gigabit Ethernet OpenVMS Cluster system are connected to a Gigabit Ethernet switch, or, if there are only two nodes, they can be connected point-to-point so that no switch is needed, as shown in Figure 3-2.

Figure 3-2 Point-to-Point Gigabit Ethernet OpenVMS Cluster Configuration


Most Gigabit Ethernet switches can be configured with Gigabit Ethernet or a combination of Gigabit Ethernet and Fast Ethernet (100 Mbps). Each node can have a single connection to the switch or can be configured with multiple connections. For example, a node can be connected by Gigabit Ethernet and by Fast Ethernet.

Figure 3-3 shows a five-node cluster with two nodes connected by Gigabit Ethernet, one node connected by both Gigabit Ethernet and Fast Ethernet, and two nodes connected by Fast Ethernet. Note that the currently supported DIGITAL PCI-to-Gigabit Ethernet adapter is known as a DEGPA. The currently supported Fast Ethernet family of adapters is named DE50x-xx.

Figure 3-3 Switched Gigabit Ethernet OpenVMS Cluster Configuration


In a multipath configuration where a node is connected to the switch by two or more cables, such as the middle node shown in Figure 3-3, if one path fails, the remaining path can assume the load of the failed path.

3.16.3.1 System Support

Gigabit Ethernet is supported as a cluster interconnect on several AlphaServer models, as shown in Table 3-5.

Table 3-5 AlphaServer Support for Gigabit Ethernet Adapters

AlphaServer Model     Maximum Number of Adapters     Minimum Memory
AlphaServer GS140     4                              128 MB (1)
AlphaServer 8400      4                              128 MB (1)
AlphaServer 8200      4                              128 MB (1)
AlphaServer 4x00      4                              128 MB (1)
AlphaServer 1200      2                              128 MB (1)
AlphaServer 800       2                              128 MB (1)

(1) 256 MB is strongly recommended.

3.16.3.2 OpenVMS Cluster Functions Planned for Future Release

A limited number of cluster functions will not be supported in OpenVMS Version 7.1-2 or OpenVMS Version 7.2, as described in Table 3-6. Support for these cluster functions is planned for a future release.

Table 3-6 Cluster Functions Not Currently Supported

Function: Jumbo frames (frame sizes greater than 1518 bytes and less than 9018 bytes)
Comment: Jumbo frames are supported over Gigabit Ethernet at this time, but not for cluster communications. The frame size supported for cluster communications is the standard 1518-byte maximum Ethernet frame size.

Function: Optimum path selection
Comment: Because optimum path selection is not implemented in this release, you cannot rely on the cluster software to always select the optimal path.

Function: Satellite booting with the DEGPA as the boot device
Comment: Although the DEGPA cannot be used as the boot device, satellites can be booted over standard 10/100 Ethernet network adapters configured on a Gigabit Ethernet switch.

3.16.4 Intra-Cluster Communication (ICC) System Services

The new intra-cluster communications (ICC) system services provide an application programming interface (API) for applications that run in an OpenVMS Cluster system. Using these services, application developers can create connections between different processes on the same system or on different systems within a single OpenVMS Cluster system. For more information about the new ICC system services, refer to the OpenVMS System Services Reference Manual: GETQUI-Z and Section 4.11 in this manual.

3.16.5 Lock Manager Improvements

The lock manager synchronizes access to resources in an OpenVMS system. For OpenVMS Version 7.2, the lock manager software has been enhanced to improve the performance of applications that issue a large number of lock manager requests and to improve application scaling. These improvements apply to single systems and to OpenVMS Cluster systems.

For OpenVMS Alpha, the changes include a new location for most of the lock manager data structures. These data structures now reside in S2 space in a structure known as a Pool Zone. The SHOW MEMORY display has been modified to display attributes of the Pool Zone memory used by the lock manager. For more information, refer to the OpenVMS DCL Dictionary.
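
As a sketch, the following command displays dynamic memory (pool) information; on an Alpha system running Version 7.2, the Pool Zone attributes used by the lock manager are expected to appear in this display (an assumption based on the description above; see the OpenVMS DCL Dictionary for the exact output):

$ SHOW MEMORY/POOL/FULL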

3.16.6 MEMORY CHANNEL Enhancements (Alpha Only)

MEMORY CHANNEL is a high-performance cluster interconnect technology for PCI-based Alpha systems. It is suitable for applications that must move large amounts of data among nodes, such as high-performance databases.

MEMORY CHANNEL supports node-to-node cluster communication only. A second interconnect is required for network traffic or storage access.

When introduced in OpenVMS Alpha Version 7.1, MEMORY CHANNEL supported a maximum of 4 nodes in a 10-foot radial topology. OpenVMS Alpha Version 7.1-1H1 provided the following enhancements for MEMORY CHANNEL Version 1.5:

MEMORY CHANNEL Version 2.0, supported by OpenVMS Alpha Version 7.2, provides the following new capabilities:

You can configure a computer in an OpenVMS Cluster system with both a MEMORY CHANNEL Version 1.5 hub and a MEMORY CHANNEL Version 2.0 hub. However, the version number of the adapter and the cables must match the hub's version number for MEMORY CHANNEL to function properly.

In other words, you must use MEMORY CHANNEL Version 1.5 adapters with the MEMORY CHANNEL Version 1.5 hub and MEMORY CHANNEL Version 1.5 cables. Similarly, you must use MEMORY CHANNEL Version 2.0 adapters with the MEMORY CHANNEL Version 2.0 hub and MEMORY CHANNEL Version 2.0 cables.

For more information about MEMORY CHANNEL, refer to Guidelines for OpenVMS Cluster Configurations.

3.16.7 Multipath SCSI Configurations with Parallel SCSI or Fibre Channel

OpenVMS Alpha Version 7.2 introduces multipath support for parallel SCSI configurations. Shortly after the release of OpenVMS Version 7.2, multipath support will also be available for Fibre Channel configurations.

SCSI multipath support provides failover between the multiple paths that can exist between an OpenVMS system and a SCSI device, as shown in Figure 3-4. If the current path to a mounted disk fails, the system automatically fails over to an alternate path.

Figure 3-4 Multipath SCSI Configuration


Multipath support is provided for:

Note

Multipath support for failover between direct SCSI paths and MSCP served paths will be available soon after the release of OpenVMS Version 7.2.

Figure 3-5 shows a SCSI multipath configuration with multiple direct connections to the HSx controllers as well as MSCP served paths to the HSx controllers.

Figure 3-5 Direct SCSI and MSCP Served Paths


Multipath SCSI devices can be directly attached to Alpha systems and served to Alpha or VAX systems.

SCSI multipath failover for redundant paths to a storage device greatly improves data availability and, in some configurations, improves performance.
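
As a sketch of the system manager's view, the following commands display the paths to a multipath disk and manually switch the current path; the device name and path name are hypothetical, and the exact path-name syntax is described in Guidelines for OpenVMS Cluster Configurations:

$ SHOW DEVICE/FULL $1$DKA500:
$ SET DEVICE $1$DKA500:/SWITCH/PATH=PKB0.5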

For more information about SCSI multipath support, refer to Guidelines for OpenVMS Cluster Configurations.

3.16.8 SCSI OpenVMS Cluster System Supports Four Nodes

With the introduction of the SCSI hub DWZZH-05, four nodes can now be supported in a SCSI multihost OpenVMS Cluster system. In order to support four nodes, the hub's fair arbitration feature must be enabled. This hub is supported with either KZPSA or KZPBA-CB adapters.

This configuration is supported on the following versions of OpenVMS Alpha:

Prior to the introduction of the DWZZH-05 SCSI hub, a maximum of three nodes was supported in a SCSI multihost OpenVMS Cluster system.

3.16.9 Ultra SCSI Configuration Support

OpenVMS Alpha Version 7.1-1H1 introduced support for certain Ultra SCSI devices in Ultra SCSI mode in single-host configurations. Since the release of OpenVMS Alpha Version 7.1-1H1, additional Ultra SCSI devices have been qualified on OpenVMS, and support for multihost configurations has been added.

OpenVMS Version 7.2 includes Ultra SCSI support for both single-host and multihost configurations. A maximum of four hosts is supported in an Ultra SCSI multihost configuration when a DWZZH-05 hub is used with fair arbitration enabled.

Table 3-7 summarizes the Ultra SCSI support provided by OpenVMS, including support for several significant Ultra SCSI devices. For information about all Ultra SCSI devices supported by OpenVMS and about configuring OpenVMS Alpha Ultra SCSI clusters, see the documents described in Table 3-9.

Table 3-7 OpenVMS Alpha Ultra SCSI Support

Configuration/Adapter: Single-host configurations using the KZPBA-CA
Version: 7.1-1H1
Description: The KZPBA-CA is a single-ended adapter. The KZPAC Ultra SCSI host RAID controller is also supported in single-host configurations.

Configuration/Adapter: Single-host configurations using the KZPBA-CB
Version: 7.1-1H1
Description: The KZPBA-CB is a differential adapter. The HSZ70 is also supported in Ultra SCSI mode, using the KZPBA-CB.

Configuration/Adapter: Multihost configurations using the KZPBA-CB
Version: 7.1-1H1 with a remedial kit
Description: Up to four hosts can share the Ultra SCSI bus when a DWZZH-05 hub is used with fair arbitration enabled. The HSZ70 is also supported on the multihost bus.

Configuration/Adapter: Multihost configurations using the KZPBA-CB
Version: 7.2
Description: Up to four hosts can share the Ultra SCSI bus when a DWZZH-05 hub is used with fair arbitration enabled. The HSZ70 is also supported on the multihost bus.

Note the restrictions described in Table 3-8.

Table 3-8 OpenVMS Restrictions

Restriction: Firmware for the KZPBA-CB must be Version 5.53 or higher.
Comments: Earlier firmware versions do not provide multihost support.

Restriction: Console firmware must be updated with the Alpha Systems Firmware Update CD Version 5.1 or higher.
Comments: All console SCSI driver fixes are included on this CD. The CD also includes the latest version of the KZPBA-CB firmware (Version 5.53 or higher).

Table 3-9 provides pointers to additional documentation for Ultra SCSI devices and for configuring OpenVMS Alpha Ultra SCSI clusters.

Table 3-9 Documentation for Configuring OpenVMS Alpha Ultra SCSI Clusters

Topic: SCSI devices that support Ultra SCSI operations and how to configure them
Document: StorageWorks UltraSCSI Configuration Guidelines
Order Number: EK-ULTRA-CG

Topic: KZPBA-CB UltraSCSI storage adapter
Document: KZPBA-CB UltraSCSI Storage Adapter Module Release Notes
Order Number: AA-R5XWA-TE

Topic: Multihost SCSI bus operation in OpenVMS Cluster systems
Document: Guidelines for OpenVMS Cluster Configurations
Order Number: AA-Q28LB-TK

Topic: Systems and devices supported by OpenVMS Version 7.1-1H1
Document: OpenVMS Operating System for Alpha and VAX, Version 7.1-1H1 Software Product Description
Order Number: SPD 25.01.xx

Topic: Multihost SCSI support
Document: OpenVMS Cluster Software Software Product Description
Order Number: SPD 29.78.xx

Information about StorageWorks Ultra SCSI products is available and periodically updated on the World Wide Web at the following URL:


http://www.storage.digital.com 

OpenVMS software product descriptions are also available and periodically updated on the World Wide Web at the following URL:


http://www.openvms.digital.com 

You will find the software product descriptions under Publications, a choice on the home page.

3.16.10 Warranted and Migration Support

OpenVMS Alpha Version 7.2 and OpenVMS VAX Version 7.2 provide two levels of support, warranted and migration, for mixed-version and mixed-architecture OpenVMS Cluster systems.

Warranted support means that Compaq has fully qualified the two versions coexisting in an OpenVMS Cluster and will answer all problems identified by customers using these configurations.

Migration support is a superset of the Rolling Upgrade support provided in earlier releases of OpenVMS and is available for mixes that are not warranted. Migration support means that Compaq has qualified the versions for use together in configurations that are migrating in a staged fashion to a newer version of OpenVMS VAX or of OpenVMS Alpha. Problem reports submitted against these configurations will be answered by Compaq. However, in exceptional cases, Compaq may request that you move to a warranted configuration as part of answering the problem.

Migration support helps customers move to warranted OpenVMS Cluster version mixes with minimal impact on their cluster environments.

Table 3-10 shows the level of support provided for all possible version pairings.

Table 3-10 OpenVMS Cluster Warranted and Migration Support

                  Alpha V6.2-xxx    Alpha V7.1-xxx    Alpha V7.2
VAX V6.2-xxx      WARRANTED         Migration         Migration
VAX V7.1-xxx      Migration         WARRANTED         Migration
VAX V7.2          Migration         Migration         WARRANTED

For OpenVMS Version 6.2 nodes to participate in a cluster with systems running either Version 7.1 or Version 7.2, the cluster compatibility kit must be installed on each Version 6.2 node. In addition, if you use the Monitor utility in a mixed-version cluster, you must install a new remedial kit. For more information about these kits, refer to the OpenVMS Version 7.2 Release Notes.

Compaq does not support the use of Version 7.2 with Version 6.1 (or earlier versions) in an OpenVMS Cluster. In many cases, configurations that mix Version 7.2 with versions prior to Version 6.2 will operate successfully, but Compaq cannot commit to resolving problems experienced with such configurations.

Note

Nodes running OpenVMS VAX Version 5.5-2 or earlier versions, or OpenVMS Alpha Version 1.0 or 1.5, cannot participate in a cluster with one or more OpenVMS Version 7.2 nodes. For more information, refer to the OpenVMS Version 7.2 Release Notes.

