
OpenVMS Version 7.3
New Features and Documentation Overview



4.4 Dedicated CPU Lock Manager (Alpha)

The Dedicated CPU Lock Manager is a new feature that improves performance on large SMP systems that have heavy lock manager activity. The feature dedicates a CPU to performing lock manager operations.

A dedicated CPU provides the following advantages for overall system performance:

4.4.1 Implementing the Dedicated CPU Lock Manager

For the Dedicated CPU Lock Manager to be effective, systems must have a high CPU count and a high amount of MP_SYNCH due to the lock manager. Use the MONITOR utility and the MONITOR MODE command to see the amount of MP_SYNCH. If your system has more than five CPUs and if MP_SYNCH is higher than 200%, your system may be able to take advantage of the Dedicated CPU Lock Manager. You can also use the spinlock trace feature in the System Dump Analyzer (SDA) to help determine if the lock manager is contributing to the high amount of MP_SYNCH time.
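For example, the following minimal command samples the time spent in each processor mode so you can watch the MP_SYNCH figure (the five-second interval is only illustrative):


$ ! Display time spent in each processor mode; watch the MP_SYNCH entry
$ MONITOR MODES/INTERVAL=5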

The Dedicated CPU Lock Manager is implemented by the LCKMGR_SERVER process, which runs at priority 63. When the Dedicated CPU Lock Manager is turned on, this process runs in a compute-bound loop looking for lock manager work to perform. Because the process polls for work, it is always computable, and at priority 63 it never gives up the CPU, thus consuming an entire CPU.

If the Dedicated CPU Lock Manager is running when a program calls either the $ENQ or $DEQ system services, a lock manager request is placed on a work queue for the Dedicated CPU Lock Manager. While a process waits for a lock request to be processed, the process spins in kernel mode at IPL 2. After the dedicated CPU processes the request, the status for the system service is returned to the process.

The Dedicated CPU Lock Manager is dynamic and can be turned off if it provides no perceived benefit. When the Dedicated CPU Lock Manager is turned off, the LCKMGR_SERVER process is placed in a HIB (hibernate) state. Once started, the process cannot be deleted.

4.4.2 Enabling the Dedicated CPU Lock Manager

To use the Dedicated CPU Lock Manager, set the LCKMGR_MODE system parameter. Note the following about the LCKMGR_MODE system parameter:

Setting LCKMGR_MODE to a number greater than zero (0) triggers the creation of a detached process called LCKMGR_SERVER. The process is created, and it starts running if the number of active CPUs equals the number set by the LCKMGR_MODE system parameter.

In addition, if the number of active CPUs is ever reduced below the required threshold, either by a STOP/CPU command or by CPU reassignment in a Galaxy configuration, the Dedicated CPU Lock Manager automatically turns off within one second and the LCKMGR_SERVER process goes into a hibernate state. If enough CPUs are restarted, the LCKMGR_SERVER process resumes operations.
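As an illustrative sketch only, the Dedicated CPU Lock Manager can be enabled on the running system with SYSGEN (the value 6 is an example; choose a CPU-count threshold appropriate for your system as described above):


$ ! Turn the Dedicated CPU Lock Manager on for the running system
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET LCKMGR_MODE 6
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT

Adding an equivalent LCKMGR_MODE entry to MODPARAMS.DAT preserves the setting across reboots.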

4.4.3 Using the Dedicated CPU Lock Manager With Affinity

The LCKMGR_SERVER process uses the affinity mechanism to assign itself to the CPU with the lowest CPU ID other than the primary. You can change this by specifying another CPU ID with the LCKMGR_CPUID system parameter. The Dedicated CPU Lock Manager then attempts to use that CPU. If the CPU is not available, it reverts to the lowest-numbered CPU other than the primary.

The following shows how to dynamically change the CPU used by the LCKMGR_SERVER process:


$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET LCKMGR_CPUID 2
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT

This change applies only to the currently running system. After a reboot, the process reverts to the lowest-numbered CPU other than the primary. To change the CPU used by the LCKMGR_SERVER process permanently, set LCKMGR_CPUID in your MODPARAMS.DAT file.
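For example, a minimal MODPARAMS.DAT entry for the earlier example might look like the following (CPU ID 2 is illustrative; run AUTOGEN afterward so the new value takes effect at the next reboot):


! SYS$SYSTEM:MODPARAMS.DAT -- keep the LCKMGR_SERVER process on CPU 2
LCKMGR_CPUID = 2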

To verify the CPU dedicated to the lock manager, use the SHOW SYSTEM command, as follows:


$ SHOW SYSTEM/PROCESS=LCKMGR_SERVER
OpenVMS V7.3 on node JYGAL  24-OCT-2000 10:10:11.31  Uptime  3 20:16:56 
  Pid    Process Name    State  Pri      I/O       CPU       Page flts  Pages 
4CE0021C LCKMGR_SERVER   CUR  2  63        9   3 20:15:47.78        70     84

Note that the State field shows the process is currently running on CPU 2.

Compaq highly recommends that a process not be given hard affinity to the CPU used by the Dedicated CPU Lock Manager. With hard affinity, when such a process becomes computable, it cannot obtain any CPU time because the LCKMGR_SERVER process is running at the highest possible real-time priority of 63. However, once per second the LCKMGR_SERVER process checks whether any computable processes have affinity to the dedicated lock manager CPU. If so, the LCKMGR_SERVER switches to a different CPU for one second to allow the waiting process to run.

4.4.4 Using the Dedicated CPU Lock Manager with Fast Path Devices

OpenVMS Version 7.3 also introduces Fast Path for SCSI and Fibre Channel controllers, in addition to the existing support for CIPCA adapters. The Dedicated CPU Lock Manager supports running the LCKMGR_SERVER process and Fast Path devices on the same CPU; however, this may not produce optimal performance.

By default, the LCKMGR_SERVER process runs on the first available nonprimary CPU. Compaq recommends that the CPU used by the LCKMGR_SERVER process not have any Fast Path devices. This can be accomplished in either of the following ways:
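As one hypothetical sketch, assuming the SET DEVICE/PREFERRED_CPUS qualifier used for Fast Path port assignment and an illustrative Fibre Channel port name of FGA0, Fast Path work for that port could be steered away from the lock manager's CPU (CPU 2 in the earlier examples):


$ ! Hypothetical: prefer CPU 3 for Fast Path completion on port FGA0,
$ ! leaving CPU 2 to the LCKMGR_SERVER process
$ SET DEVICE FGA0 /PREFERRED_CPUS=3
$ SHOW DEVICE/FULL FGA0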

4.4.5 Using the Dedicated CPU Lock Manager on the AlphaServer GS Series Systems

The new AlphaServer GS Series Systems (GS80, GS160, and the GS320) have NUMA memory characteristics. When using the Dedicated CPU Lock Manager on one of these systems, the best performance is obtained by utilizing a CPU and memory from within a single Quad Building Block (QBB).

For OpenVMS Version 7.3, the Dedicated CPU Lock Manager cannot yet decide which QBB memory should be allocated from. However, you can preallocate lock manager memory from the low QBB by using the LOCKIDTBL system parameter. This system parameter specifies the initial size of the Lock ID Table and the initial amount of memory to preallocate for lock manager data structures.

To preallocate the proper amount of memory, set this system parameter to the sum of the number of locks and the number of resources on the system; the MONITOR LOCK command provides this information. For example, if MONITOR indicates that the system has 100,000 locks and 50,000 resources, setting LOCKIDTBL to at least 150,000 ensures that enough memory is initially allocated. Adding some additional headroom may also be beneficial, so setting LOCKIDTBL to 200,000 might be appropriate.
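For example, a MODPARAMS.DAT entry based on the numbers above might look like the following sketch:


! SYS$SYSTEM:MODPARAMS.DAT -- ~100,000 locks + ~50,000 resources, plus headroom
LOCKIDTBL = 200000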

If necessary, use the LCKMGR_CPUID system parameter to ensure that the LCKMGR_SERVER runs on a CPU in the low QBB.

4.5 OpenVMS Enterprise Directory for e-Business (Alpha)1

OpenVMS Enterprise Directory for e-Business is a massively scalable directory service, providing both X.500 and LDAPv3 services on OpenVMS Alpha with no separate license fee. OpenVMS Enterprise Directory for e-Business provides the following:

For more detailed information, refer to the Compaq OpenVMS e-Business Infrastructure CD-ROM package which is included in the OpenVMS Version 7.3 CD-ROM kit.

Note

1 On OpenVMS VAX, a similar service, but without LDAP support and with more limited performance, is still available as Compaq X.500 Directory Service Version 3.1.

4.6 Extended File Cache (Alpha)

The Extended File Cache (XFC) is a new virtual block data cache provided with OpenVMS Alpha Version 7.3 as a replacement for the Virtual I/O Cache.

Similar to the Virtual I/O Cache, the XFC is a clusterwide, file system data cache. Both file system data caches are compatible and coexist in an OpenVMS Cluster.

The XFC improves I/O performance with the following features that are not available with the Virtual I/O Cache:

For more information, refer to the chapter on Managing Data Caches in the OpenVMS System Manager's Manual, Volume 2: Tuning, Monitoring, and Complex Systems.

4.7 /ARB_SUPPORT Qualifier Added to INSTALL Utility (Alpha)

Beginning with OpenVMS Alpha Version 7.3, you can use the /ARB_SUPPORT qualifier with the ADD, CREATE, and REPLACE commands in the INSTALL utility. The /ARB_SUPPORT qualifier provides Access Rights Block (ARB) support to products that have not yet been updated to use the per-thread security Persona Security Block (PSB) data structure.
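For example, the qualifier might be applied when installing an image as in the following sketch (the image name is hypothetical, and FULL is shown only as an example keyword; see the INSTALL utility documentation for the supported keyword values):


$ INSTALL
INSTALL> ADD SYS$COMMON:[SYSEXE]LEGACY_APP.EXE /OPEN /ARB_SUPPORT=FULL
INSTALL> EXIT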

This new qualifier is included in the INSTALL utility documentation in the OpenVMS System Management Utilities Reference Manual.

4.8 MONITOR Utility New Features

The MONITOR utility has two new class names, RLOCK and TIMER, which you can use as follows:

These enhancements are discussed in more detail in the MONITOR section of the OpenVMS System Management Utilities Reference Manual and in the appendix that discusses MONITOR record formats in that manual.

Also in the MONITOR utility, the display screens of MONITOR CLUSTER, PROCESSES/TOPCPU, and SYSTEM now have new and higher scale values. Refer to the OpenVMS System Management Utilities Reference Manual: M-Z for more information.
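For example, the new RLOCK and TIMER classes are invoked like any other MONITOR class (the interval shown is illustrative):


$ ! Display the new RLOCK and TIMER class screens
$ MONITOR RLOCK,TIMER/INTERVAL=10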

4.9 OpenVMS Cluster Systems

The following OpenVMS Cluster features are discussed in this section:

4.9.1 Clusterwide Intrusion Detection

OpenVMS Version 7.3 includes clusterwide intrusion detection, which extends protection against attacks of all types throughout the cluster. Intrusion data and information from each system are integrated to protect the cluster as a whole. Member systems running versions of OpenVMS prior to Version 7.3 and member systems that disable this feature are protected individually and do not participate in the clusterwide sharing of intrusion information.

You can modify the SECURITY_POLICY system parameter on the member systems in your cluster to maintain either a local or a clusterwide intrusion database of unauthorized attempts and the state of any intrusion events.

If bit 7 in SECURITY_POLICY is cleared, all cluster members are made aware when a system is under attack or has any intrusion events recorded. Events recorded on one system can cause another system in the cluster to take restrictive action. (For example, the person attempting to log in is monitored more closely and limited to a certain number of login retries within a limited period of time. Once a person exceeds either the retry or time limitation, he or she cannot log in.) The default for bit 7 in SECURITY_POLICY is clear.
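For example, the following sketch displays the active value so you can confirm whether bit 7 (decimal 128) is set or clear; when changing the parameter, preserve any other policy bits already in use on your systems:


$ ! Inspect the active SECURITY_POLICY value; bit 7 (decimal 128) selects
$ ! a local rather than clusterwide intrusion database when set
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW SECURITY_POLICY
SYSGEN> EXIT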

For more information on the system services $DELETE_INTRUSION, $SCAN_INTRUSION, and $SHOW_INTRUSION, refer to the OpenVMS System Services Reference Manual.

For more information on the DCL commands DELETE/INTRUSION_RECORD and SHOW INTRUSION, refer to the OpenVMS DCL Dictionary.
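For example (the source string given to DELETE/INTRUSION_RECORD is hypothetical; use a source value reported by SHOW INTRUSION):


$ ! List the current intrusion records, then remove one by its source
$ SHOW INTRUSION
$ DELETE/INTRUSION_RECORD "TELNET_192.0.2.10"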

For more information on clusterwide intrusion detection, refer to the OpenVMS Guide to System Security.

4.9.2 Fast Path for SCSI and Fibre Channel (Alpha)

Fast Path for SCSI and Fibre Channel (FC) is a new feature with OpenVMS Version 7.3. This feature improves the performance of Symmetric Multi-Processing (SMP) machines that use certain SCSI ports or Fibre Channel.

In previous versions of OpenVMS, SCSI and FC I/O completion was processed solely by the primary CPU. When Fast Path is enabled, the I/O completion processing can occur on all the processors in the SMP system. This substantially increases the potential I/O throughput on an SMP system, and helps to prevent the primary CPU from becoming saturated.

See Section 4.12.2 for information about the SYSGEN parameter, FAST_PATH_PORTS, that has been introduced to control Fast Path for SCSI and FC.

4.9.3 Floppy Disks Served in an OpenVMS Cluster System (Alpha)

Until this release, floppy disks could not be MSCP served. Beginning with OpenVMS Version 7.3, floppy disks can be served in an OpenVMS Cluster system by the MSCP server.

For floppy disks to be served in an OpenVMS Cluster system, floppy disk names must conform to the naming conventions for port allocation class names. For more information about device naming with port allocation classes, refer to the OpenVMS Cluster Systems manual.
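For example, with a port allocation class of 2 on the serving node, a served floppy disk would be known clusterwide by a name of the form shown in the following sketch (the class value and unit are illustrative):


$ ! Illustrative device name: port allocation class 2, floppy unit DVA0
$ SHOW DEVICE $2$DVA0: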

OpenVMS VAX clients can access floppy disks served from OpenVMS Alpha Version 7.3 MSCP servers, but OpenVMS VAX systems cannot serve floppy disks. Client systems can be any version that supports port allocation classes.

4.9.4 New Fibre Channel Support (Alpha)

Support for new Fibre Channel hardware, larger configurations, Fibre Channel Fast Path, and larger I/O operations is included in OpenVMS Version 7.3. The benefits include:

The following new Fibre Channel hardware has been qualified on OpenVMS Version 7.2-1 and on OpenVMS Version 7.3:

OpenVMS now supports Fibre Channel fabrics. A Fibre Channel fabric is multiple Fibre Channel switches connected together. (A Fibre Channel fabric is also known as cascaded switches.)

Configurations that use Fibre Channel fabrics can be extremely large. Distances up to 100 kilometers are supported in a multisite OpenVMS Cluster system. OpenVMS supports the Fibre Channel SAN configurations described in the Compaq StorageWorks Heterogeneous Open SAN Design Reference Guide, available at the following Compaq web site:


http://www.compaq.com/storage 

Enabling Fast Path for Fibre Channel can substantially increase the I/O throughput on an SMP system. For more information about this new feature, see Section 4.9.2.

Prior to OpenVMS Alpha Version 7.3, I/O requests larger than 127 blocks were segmented by the Fibre Channel driver into multiple I/O requests. Segmented I/O operations generally have lower performance than one large I/O. In OpenVMS Version 7.3, I/O requests up to and including 256 blocks are done without segmenting.

For more information about Fibre Channel usage in OpenVMS Cluster configurations, refer to the Guidelines for OpenVMS Cluster Configurations.

4.9.4.1 New Fibre Channel Tape Support (Alpha)

Fibre Channel tape functionality refers to the support of SCSI tapes and SCSI tape libraries in an OpenVMS Cluster system with shared Fibre Channel storage. The SCSI tapes and libraries are connected to the Fibre Channel by a Fibre-to-SCSI bridge known as the Modular Data Router (MDR).

For configuration information, refer to the Guidelines for OpenVMS Cluster Configurations.

4.9.5 LANs as Cluster Interconnects

An OpenVMS Cluster system can use several LAN interconnects for node-to-node communication, including Ethernet, Fast Ethernet, Gigabit Ethernet, ATM, and FDDI.

PEDRIVER, the cluster port driver, provides cluster communications over LANs using the NISCA protocol. Originally designed for broadcast media, PEDRIVER has been redesigned to exploit all the advantages offered by switched LANs, including full duplex transmission and more complex network topologies.

Users of LANs for their node-to-node cluster communication will derive the following benefits from the redesigned PEDRIVER:

4.9.5.1 SCA Control Program

The SCA Control Program (SCACP) utility is designed to monitor and manage cluster communications. (SCA is the abbreviation of Systems Communications Architecture, which defines the communications mechanisms that enable nodes in an OpenVMS Cluster system to communicate.)

In OpenVMS Version 7.3, you can use SCACP to manage SCA use of LAN paths. In the future, SCACP might be used to monitor and manage SCA communications over other OpenVMS Cluster interconnects.

This utility is described in more detail in a new chapter in the OpenVMS System Management Utilities Reference Manual: M-Z.

4.9.5.2 New Error Message About Packet Loss

Prior to OpenVMS Version 7.3, an SCS virtual circuit closure was the first indication that a LAN path had become unusable. In OpenVMS Version 7.3, whenever the last usable LAN path is losing packets at an excessive rate, PEDRIVER displays the following console message:


%PEA0, Excessive packet losses on LAN Path from local-device-name - 
 _  to device-name on REMOTE NODE node-name

This message is displayed after PEDRIVER performs an excessively high rate of packet retransmissions on the LAN path consisting of the local device, the intervening network, and the device on the remote node. The message indicates that the LAN path has degraded and is approaching, or has reached, the point where reliable communication with the remote node is no longer possible. The virtual circuit to the remote node is likely to close if the losses continue. Furthermore, continued operation with high LAN packet losses can significantly reduce performance because of the communication delays caused by packet-loss detection timeouts and packet retransmission.

The corrective steps to take are:

  1. Check the local and remote LAN device error counts to see if a problem exists on the devices. Issue the following commands on each node:


    $ SHOW DEVICE local-device-name
    $ MC SCACP 
    SCACP> SHOW LAN device-name
    $ MC LANCP 
    LANCP> SHOW DEVICE device-name/COUNT 
    

  2. If device error counts on the local devices are within normal bounds, contact your network administrators to request that they diagnose the LAN path between the devices.
    If necessary, contact your Compaq support representative for assistance in diagnosing your LAN path problems.

For additional PEDRIVER troubleshooting information, see Appendix F of the OpenVMS Cluster Systems manual.

4.9.6 Warranted and Migration Support

Compaq provides two levels of support, warranted and migration, for mixed-version and mixed-architecture OpenVMS Cluster systems.

Warranted support means that Compaq has fully qualified the two versions coexisting in an OpenVMS Cluster and will answer all problems identified by customers using these configurations.

Migration support is a superset of the Rolling Upgrade support provided in earlier releases of OpenVMS and is available for mixes that are not warranted. Migration support means that Compaq has qualified the versions for use together in configurations that are migrating in a staged fashion to a newer version of OpenVMS VAX or of OpenVMS Alpha. Problem reports submitted against these configurations will be answered by Compaq. However, in exceptional cases, Compaq may request that you move to a warranted configuration as part of answering the problem.

Compaq supports only two versions of OpenVMS running in a cluster at the same time, regardless of architecture. Migration support helps customers move to warranted OpenVMS Cluster version mixes with minimal impact on their cluster environments.

Table 4-2 shows the level of support provided for all possible version pairings.

Table 4-2 OpenVMS Cluster Warranted and Migration Support

                            Alpha/VAX V7.3    Alpha V7.2-xxx/VAX V7.2    Alpha/VAX V7.1
  Alpha/VAX V7.3            WARRANTED         Migration                  Migration
  Alpha V7.2-xxx/VAX V7.2   Migration         WARRANTED                  Migration
  Alpha/VAX V7.1            Migration         Migration                  WARRANTED

In a mixed-version cluster that includes OpenVMS Version 7.3, you must install remedial kits on nodes running earlier versions of OpenVMS. Two new features in OpenVMS Version 7.3, XFC and Volume Shadowing minicopy, cannot be run on any node in a mixed-version cluster unless all nodes running earlier versions of OpenVMS have installed the required remedial kit or upgrade. Remedial kits are available now for XFC. An upgrade for systems running OpenVMS Version 7.2-xx that supports minicopy will be made available soon after the release of OpenVMS Version 7.3.

For a complete list of required remedial kits, refer to the OpenVMS Version 7.3 Release Notes.

