
OpenVMS Performance Management



12.2.1 How Does the Cache Work?

The virtual I/O cache can store data files and image files. For example, ODS-2 disk file data blocks are copied to the virtual I/O cache the first time they are accessed. Any subsequent read requests for the same data blocks are satisfied from the virtual I/O cache (hits), eliminating the physical disk I/O operations (misses) that would otherwise have occurred.

Depending on your system work load, you should see increased application throughput, increased interactive responsiveness, and reduced I/O load.

Note

Applications that initiate single read and write requests will not benefit from virtual I/O caching as the data is never reread from the cache. Applications that rely on implicit I/O delays might abort or yield unpredictable results.

Several policies govern how the cache manipulates data as follows:

12.2.2 Displaying Virtual I/O Cache Statistics

Use the SHOW MEMORY/CACHE/FULL DCL command to display statistics about the virtual I/O cache as shown in the following example:


$ SHOW MEMORY/CACHE/FULL
            System Memory Resources on 10-OCT-1994 18:36:12.79
Virtual I/O Cache
Total Size (pages)   (1)   2422    Read IO Count          (6)     9577
Free Pages           (2)   18      Read Hit Count         (7)     5651
Pages in Use         (3)   2404    Read Hit Rate          (8)     59%
Maximum Size (SPTEs) (4)   11432   Write IO Count         (9)     2743
Files Retained       (5)   99      IO Bypassing the Cache (10)     88
 

Note

This example shows the output for the SHOW MEMORY/CACHE/FULL command on a VAX system. The SHOW MEMORY/CACHE/FULL command displays slightly different fields on an Alpha system.
(1) Total Size Displays the total number of system memory pages currently controlled by the virtual I/O cache.
(2) Free Pages Displays the number of pages controlled by the virtual I/O cache that do not contain cached data.
(3) Pages in Use Displays the number of pages controlled by the virtual I/O cache that contain valid cached data.
(4) Maximum Size Displays the maximum size (in system page table entries) to which the cache can grow.
(5) Files Retained Displays the number of closed files whose file system control information is retained because valid data for those files still resides in the cache.
(6) Read I/O Count Displays the total number of read I/Os seen by the virtual I/O cache since the last system boot.
(7) Read Hit Count Displays the total number of read I/Os since the last system boot that required no physical I/O because the requested data was found in the cache.
(8) Read Hit Rate Displays the ratio of the read hit count to the read I/O count.
(9) Write I/O Count Displays the total number of write I/Os seen by the cache since the last system boot.
(10) I/O Bypassing Displays the number of I/Os that, for whatever reason, did not attempt to have the request satisfied or updated by the cache.

12.2.3 Enabling Virtual I/O Caching

By default, virtual I/O caching is enabled. Use the following system parameters to enable or disable caching. Change the value of the parameters in MODPARAMS.DAT as follows:
Parameter            Enabled   Disabled
VCC_FLAGS (Alpha)       1          0
VBN_CACHE_S (VAX)       1          0

Once you have updated MODPARAMS.DAT to change the value of the appropriate parameter, you must run AUTOGEN and reboot the node or nodes on which you have enabled or disabled caching. Caching is automatically enabled or disabled during system initialization. No further user action is required.
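
For example, a minimal sketch of disabling caching on an Alpha node, assuming MODPARAMS.DAT does not already contain a VCC_FLAGS entry (on a VAX node, substitute VBN_CACHE_S):

$ ! Append the parameter setting to MODPARAMS.DAT
$ OPEN/APPEND PARAMS SYS$SYSTEM:MODPARAMS.DAT
$ WRITE PARAMS "VCC_FLAGS = 0        ! Disable virtual I/O caching"
$ CLOSE PARAMS
$ ! Run AUTOGEN to propagate the change and reboot the node
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT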

12.2.4 Determining If Virtual I/O Caching Is Enabled

On a running system, the DCL command SHOW MEMORY/CACHE indicates whether virtual I/O caching is enabled; this is simpler than examining the system parameters with SYSGEN.

SYSGEN can be used to examine parameters before a system is booted. For example, you can check the system parameter VCC_FLAGS (on Alpha) or VBN_CACHE_S (on VAX) to see if virtual I/O caching is enabled by using SYSGEN as shown in the following Alpha example:


$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW VCC_FLAGS

A value of 0 indicates that caching is disabled; the value 1 indicates caching is enabled.

12.2.5 Memory Allocation and the Virtual I/O Cache

The memory allocated to caching is determined by the size of the free-page list. The size of the virtual I/O cache can grow if one of the following conditions is true:

The cache size is also limited by the following:

How is memory reclaimed from the cache? The swapper can reclaim memory allocated to the virtual I/O cache by using first-level trimming. In addition, a heuristic primitive shrinks the cache, returning memory in small increments.

12.2.6 Adjusting the Virtual I/O Cache Size

The size of the virtual I/O cache is controlled by the system parameter VCC_MAXSIZE. The amount of memory specified by this parameter is statically allocated at system initialization and remains owned by the virtual I/O cache.

To increase or decrease the size of the cache, modify VCC_MAXSIZE and reboot the system.
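
For example, a minimal sketch with a purely illustrative value: add a line such as the following to SYS$SYSTEM:MODPARAMS.DAT, then run AUTOGEN and reboot:

VCC_MAXSIZE = 6400        ! Illustrative value only; size the cache for your work load

$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT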

12.2.7 Virtual I/O Cache and OpenVMS Cluster Configurations

The cache works on all supported configurations from single-node systems to large mixed-interconnect OpenVMS Cluster systems. The virtual I/O cache is nodal, that is, the cache is local to each OpenVMS Cluster member. Any base system can support virtual I/O caching; an OpenVMS Cluster license is not required to use the caching feature.

Note

If any member of an OpenVMS Cluster does not have caching enabled, then no caching can occur on any node in the OpenVMS Cluster (including the nodes that have caching enabled). This condition remains in effect until the node or nodes that have caching disabled either enable caching or leave the cluster.

The lock manager controls cache coherency. The cache is flushed when a node leaves the OpenVMS Cluster. Files opened on two or more nodes with write access on one or more nodes are not cached.

12.3 Enlarge Hardware Capacity

If there seem to be few appropriate or productive ways to shift the demand away from the bottleneck point using available hardware, you may have to acquire additional hardware. Adding capacity can refer to either supplementing the hardware with another similar piece or replacing the item with one that is larger, faster, or both.

Try to avoid a few of the more common mistakes. It is easy to conclude that more disks of the same type will permit better load distribution, when the truth is that providing another controller for the disks you already have might bring much better results. Likewise, rather than acquiring more disks of the same type, the real solution might be replacing one or more existing disks with a disk that has a faster transfer rate. Another mistake to avoid is acquiring disks that immediately overburden the controller or bus you place them on.

To make the correct choice, you must know whether your problem is due to limitations in space and placement or to speed limitations. If you need speed improvement, be sure you know whether it is needed at the device or the controller. You must invest the effort to understand the I/O subsystem and the distribution of the I/O work load across it before you can expect to make the correct choices and configure them optimally. You should try to understand at all times just how close to capacity each part of your I/O subsystem is.

12.4 Improve RMS Caching

The Guide to OpenVMS File Applications is your primary reference for information on tuning RMS files and applications. RMS reduces the load on the I/O subsystems through buffering. Both the size of the buffers and the number of buffers are important in this reduction. In trying to determine reasonable values for buffer sizes and buffer counts, you should look for the optimal balance between minimal RMS I/O (using sufficiently large buffers) and minimal memory management I/O. Note that, if you define RMS buffers that are too large, you can more than fill the process's entire working set with these buffers, ultimately inducing more process paging.
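
For example, the process-level RMS defaults can be displayed and adjusted with DCL; the counts shown are illustrative, not recommendations:

$ SHOW RMS_DEFAULT
$ ! Larger multiblock and multibuffer defaults for sequential files (illustrative values)
$ SET RMS_DEFAULT/BLOCK_COUNT=32/BUFFER_COUNT=4/SEQUENTIAL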

12.5 Adjust File System Caches

The considerations for tuning disk file system caches are similar to those for tuning RMS buffers. Again, the issue is minimizing I/O. A disk file system maintains caches of various file system data structures, such as file headers and directories. For ODS-2 volumes (the default), these caches are allocated from paged pool when the volume is mounted; for an ODS-1 ACP, they are part of the ACP working set. File system operations that only read data from the volume (as opposed to those that write) can be satisfied without performing a disk read if the desired data items are already in the file system caches. It is important to seek an appropriate balance point that matches the work load.

To evaluate file system caching activity:

  1. Enter the MONITOR FILE_SYSTEM_CACHE command.
  2. Examine the data items displayed. (For detailed descriptions of these items, refer to the OpenVMS System Management Utilities Reference Manual.)
  3. Invoke SYSGEN and modify, if necessary, appropriate ACP system parameters.
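
For example, steps 1 and 3 might look like the following sketch; ACP_HDRCACHE is shown only as one representative parameter to examine:

$ MONITOR FILE_SYSTEM_CACHE
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW ACP_HDRCACHE
SYSGEN> EXIT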

Data items in the FILE_SYSTEM_CACHE display correspond to ACP parameters as follows:
FILE_SYSTEM_CACHE Item    ACP/XQP Parameters
Dir FCB                   ACP_SYSACC
                          ACP_DINDXCACHE
Dir Data                  ACP_DIRCACHE
File Hdr                  ACP_HDRCACHE
File ID                   ACP_FIDCACHE
Extent                    ACP_EXTCACHE
                          ACP_EXTLIMIT
Quota                     ACP_QUOCACHE
Bitmap                    ACP_MAPCACHE

When you change the ACP cache parameters, remember to reboot the system to make the changes effective.

12.6 Use Solid-State Disks

There are two types of solid-state disk:

With solid-state storage, seek time and latency do not affect performance, and throughput is limited only by the bandwidth of the data path rather than the speed of the device. Solid-state disks are capable of providing higher I/O performance than magnetic disks with device throughput of up to 1200 I/O requests per second and peak transfer rates of 2.5M bytes per second or higher.

The operating system can read from and write to a solid-state disk using standard disk I/O operations.

Two types of applications benefit from using solid-state disks:


Chapter 13
Compensating for CPU-Limited Behavior

This chapter describes corrective procedures for CPU resource limitations described in Chapters 5 and 9.

13.1 Improving CPU Responsiveness

Before taking action to correct CPU resource problems, do the following:

It is always good practice to review the methods for improving CPU responsiveness to see if there are ways to recover CPU power:

13.1.1 Equitable CPU Sharing

If you have concluded that a large compute queue is affecting the responsiveness of your CPU, try to determine whether the resource is being shared on an equitable basis. Ask yourself the following questions:

The operating system uses a round-robin scheduling technique for all nonreal-time processes at the same scheduling priority. However, there are 16 time-sharing priority levels, and as long as a higher level process is ready to use the CPU, none of the lower level processes will execute. A compute-bound process whose base priority is elevated above that of other processes can usurp the CPU. Conversely, the CPU will service processes with base priorities lower than the system default only when no other processes of default priority are ready for service.

Do not confuse inequitable sharing with the priority-boosting scheme of the operating system, which gives temporary priority boosts to processes encountering certain events, such as I/O completion. These boosts are temporary and they cannot cause inequities.

Detecting Inequitable CPU Sharing

You can detect inequitable sharing by using either of the following methods:

CPU Allocation and Processing Requirements

It can sometimes be difficult to judge whether processes are receiving appropriate amounts of CPU allocation because the allocation depends on their processing requirements.

If the MONITOR collection interval is too large to provide a sufficient level of detail, enter the command on the running system (live mode) during a representative period, using the default three-second collection interval.

If there is an inequity, try to obtain more information about the process and the image being run by entering the DCL command SHOW PROCESS/CONTINUOUS.
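
For example, a minimal sketch of both checks; BATCH_JOB is a hypothetical process name:

$ MONITOR PROCESSES/TOPCPU
$ SHOW PROCESS/CONTINUOUS BATCH_JOB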

13.1.2 Reduction of CPU Consumption by the System

Depending on the amount of service required by your system, operating system functions can consume anywhere from almost no CPU cycles to a significant amount. Any reductions you can make in services represent additional available CPU cycles. These can be used by processes in the COM state, thereby lowering the average size of the compute queue and making the CPU more responsive.

The information in this section will help you identify the system components that are using the CPU. You can then decide whether it is reasonable to reduce the involvement of those components.

Processor Modes

The principal body of information about system CPU activity is contained in the MONITOR MODES class. Its statistics represent rates of clock ticks (10-millisecond units) per second, but they can also be viewed as percentages of time spent by the CPU in each of the various processor modes.
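
For example, a live display of the mode statistics can be requested as follows; the /ALL qualifier shows current, average, minimum, and maximum values:

$ MONITOR MODES/ALL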

Note that interrupt time is really kernel mode time that cannot be charged to a particular process. Therefore, it is sometimes convenient to consider these two together.

The following table lists some of the activities that execute in each processor mode:
Mode Activity
Interrupt 1,2 Interrupts from peripheral devices such as disks, tapes, printers, and terminals. The majority of system scheduling code executes in interrupt state, because for most of the time spent executing that code, there is no current process.
MP Synchronization Time spent by a processor in a multiprocessor system waiting to acquire a spin lock.
Kernel 2 Most local system functions, including local lock requests, file system (XQP) requests, memory management, and most system services (including $QIO).
Executive RMS is the major consumer of executive mode time. Some optional products such as ACMS, DBMS, and Rdb also run in executive mode.
Supervisor The command language interpreters DCL and MCR.
User Most user-written code.
Idle Time during which all processes are in scheduling wait states and there are no interrupts to service.


1 In an OpenVMS Cluster configuration, services performed on behalf of a remote node execute in interrupt state because there is no local process to which the time can be charged. These include functions involving system communication services (SCS), such as remote lock requests and MSCP requests.
2 As a general rule, the combination of interrupt time and kernel mode time should be less than 40 percent of the total CPU time used.

Although MONITOR provides no breakdown of modes into component parts, you can make inferences about how the time is distributed within a mode by examining some of the other MONITOR classes in your summary report and through your knowledge of the work load.

Interrupt Time

In OpenVMS Cluster systems, interrupt time per node can be higher than in noncluster systems because of the remote services performed. However, if this time appears excessive, you should investigate the remote services and look for deviations from typical values. Enter the following commands:
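
A minimal sketch of the kind of commands involved; the DLOCK and SCS classes report distributed lock and system communication activity, respectively:

$ MONITOR DLOCK
$ MONITOR SCS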

Even though OpenVMS Cluster systems can be expected to consume marginally more CPU resources than noncluster systems because of this remote activity, there is no measurable loss in CPU performance when a system becomes a member of an OpenVMS Cluster. OpenVMS Cluster members communicate with one another through SCS, a very low overhead protocol. Furthermore, in a quiescent cluster with default system parameter settings, each system needs to communicate with every other system only once every five seconds.

Multiprocessing Synchronization Time

Multiprocessing (MP) synchronization time is a measure of the contention for spin locks in an MP system. A spin lock is a mechanism that guarantees the synchronization of processors in their manipulation of operating system databases. A certain amount of time in this mode is expected for MP systems. However, MP synchronization time above roughly 8% of total processing time usually indicates a moderate to high level of paging, I/O, or locking activity. You should evaluate the usage of those resources by examining the IO, DLOCK, PAGE, and DISK statistics.
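
For example, the relevant classes can be watched together in a single MONITOR request:

$ MONITOR IO, DLOCK, PAGE, DISK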

Kernel Mode Time

High kernel mode time (greater than 25%) can indicate several conditions warranting further investigation:


