OpenVMS Performance Management
Using RMS Global Buffering reduces the amount of memory required by
allowing processes to share caches. It can also reduce I/O if multiple
processes access data in similar areas.
11.26 Reduce Demand or Add Memory
At this point, when all tuning options have been exhausted, only two alternatives remain: reduce the demand for memory by modifying the workload, or add memory to the system.
The cost to add memory to a system has decreased significantly over time. This trend will likely continue.
For many modern systems, adding memory is the most cost-effective way
to address performance problems. For older systems, the cost of adding
memory may be significantly higher than for newer systems, but it may
still be less than the cost of a system manager performing many hours
of system analysis and tuning, and the additional time it may take to
achieve better performance. All relevant costs need to be taken into
account when deciding if working with the existing hardware will be
less expensive than adding hardware.
11.26.1 Reduce Demand
Section 1.4 describes a number of options (including workload
management) that you can explore to shift the demand on your system so
that it is reduced at peak times.
11.26.2 Add Memory
Adding memory is often the best solution to performance problems.
If you conclude you need to add memory, you must then determine how much to add. Add as much memory as you can afford. If you need to establish the amount more scientifically, try the following empirical technique:
The amount of memory required by the processes that are outswapped approximates the amount of memory your system would need to obtain the desired performance under load conditions.
Once you add memory to your system, be sure to invoke AUTOGEN so that new parameter values can be assigned on the basis of the increased physical memory size.
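For example, after the new memory is installed, you might invoke AUTOGEN through the reboot phase so that it recalculates parameters from the new physical memory size (FEEDBACK mode assumes the system has been collecting feedback data):

$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT FEEDBACK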
Chapter 12
Compensating for I/O-Limited Behavior

This chapter describes corrective procedures for the I/O resource limitations described in Chapters 5 and 8.
12.1 Improving Disk I/O Responsiveness
Even if no problem seems to exist currently, it is always good practice to check the following methods for improving disk I/O responsiveness to see whether you can use the available capacity more efficiently.
12.1.1 Equitable Disk I/O Sharing
If you identify certain disks as good candidates for improvement, check for excessive use of the disk resource by one or more processes. The best way to do this is to use the MONITOR playback feature to obtain a display of the top direct I/O users during each collection interval. The direct I/O operations reported by MONITOR include all user disk I/O as well as direct I/O to other device types. In many cases, disk I/O represents the vast majority of direct I/O activity on OpenVMS systems, so you can use this technique to obtain information on processes that might be generating excessive disk I/O activity.
Enter a MONITOR command similar to the following:
$ MONITOR /INPUT=SYS$MONITOR:file-spec /VIEWING_TIME=1 PROCESSES /TOPDIO
You may want to specify the /BEGINNING and /ENDING qualifiers to select
a time interval that covers the problem period.
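For example (the dates and times shown are illustrative):

$ MONITOR /INPUT=SYS$MONITOR:file-spec /VIEWING_TIME=1 -
_$ /BEGINNING=10-DEC-1998:09:00 /ENDING=10-DEC-1998:11:00 PROCESSES /TOPDIO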
12.1.1.1 Examining Top Direct I/O Processes
If it appears that one or two processes are consistently the top direct I/O users, you may want to obtain more information about which images they are running and which files they are using. Because this information is not recorded by MONITOR, it can be obtained in any of the following ways:
To run MONITOR in live mode, do the following:
12.1.2 Reduction of Disk I/O Consumed by the System
The system uses the disk I/O subsystem for three activities: paging,
swapping, and XQP operations. This kind of disk I/O is a good place to
start when setting out to trim disk I/O load. All three types of system
I/O can be reduced readily by offloading to memory. Swapping I/O is a
particularly data-transfer-intensive operation, while the other types
tend to be more seek-intensive.
12.1.2.1 Paging I/O Activity
Page Read I/O Rate, also known as the hard fault rate, is the rate of read I/O operations necessary to satisfy page faults. Since the system attempts to cluster several pages together whenever it performs a read, the number of pages actually read will be greater than the hard fault rate. The rate of pages read is given by the Page Read Rate.
Use the following equation to compute the average transfer size (in bytes) of a page read I/O operation:
average transfer size = (page read rate / page read I/O rate) * page size in bytes
The page size is 512 bytes on VAX systems; it is currently 8192 bytes on all Alpha systems, but this value is subject to change in future implementations of the Alpha architecture.
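For example, if MONITOR reported a Page Read Rate of 100 pages per second and a Page Read I/O Rate of 25 operations per second on an Alpha system, the average transfer size would be (100 / 25) * 8192 = 32768 bytes, or four pages per read I/O operation. (These rates are illustrative, not typical values.)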
Effects on the Secondary Page Cache
Most page faults are soft faults. Such faults require no disk I/O operation, because they are satisfied by mapping to a global page or to a page in the secondary page cache (the free-page list and modified-page list). An effectively functioning cache is important to overall system performance. As a guideline, the rate of hard faults (those requiring a disk I/O operation) should be less than 10% of the overall page fault rate, with the remaining 90% or more being soft faults. Even if the hard fault rate is less than 10%, you should try to reduce it further if it represents a significant fraction of the disk I/O load on any particular node or individual disk (see Section 7.2.1.2).
Note that the number of hard faults resulting from image activation can be reduced only by curtailing the number of image activations or by exercising LINKER options such as /NOSYSSHR (to reduce image activations) and reassignment of PSECT attributes (to increase the effectiveness of page fault clustering).
This guideline directs your attention to a potentially suboptimal configuration parameter that may affect the overall performance of your system. The nature of your system may make this objective unachievable or a change of the parameter ineffective. When you investigate the secondary page cache fault rate, you may find that the cache size is not the only limiting factor: manipulating the size of the cache may not affect system performance in any measurable way, either because of the nature of the workload or because bottlenecks exist elsewhere in the system. In that case, you may need to upgrade memory, the paging disk, or other hardware.
The Page Write I/O Rate represents the rate of disk I/O operations to write pages from the modified-page list to backing store (paging and section files). As with page read operations, page write operations are clustered. The rate of pages written is given by the Page Write Rate.
Use the following equation to compute the average transfer size (in bytes) of a page write I/O operation:
average transfer size = (page write rate / page write I/O rate) * page size in bytes
The frequency with which pages are written depends on the page modification behavior of the work load and on the size of the modified-page list. In general, a larger modified-page list must be written less often than a smaller one.
Obtaining Information About Paging Files
You can obtain information on each paging file, including the disks on
which they are located, with the SHOW MEMORY/FILES/FULL DCL command.
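For example:

$ SHOW MEMORY/FILES/FULL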
12.1.2.2 Swapping I/O Activity
Swapping I/O should be kept as low as possible. The Inswap Rate item of the I/O class lists the rate of inswap I/O operations. In typical cases, each inswap is accompanied by a corresponding outswap operation. Try to keep the inswap rate as low as possible, no greater than 1. This is not to say that swapping should always be eliminated: swapping, as implemented by the active memory reclamation policy, is desirable for forcing inactive processes out of memory.
Swap I/O operations are very large data transfers; they can cause
device and channel contention problems if they occur too frequently.
Enter the DCL command SHOW MEMORY/FILES/FULL to list the swapping files
in use. If you have disk I/O problems on the channels servicing the
swapping files, attempt to reduce the swap rate. (Refer to
Section 11.13 for information about converting to a system that rarely
swaps.)
12.1.2.3 File System (XQP) I/O Activity
To determine the rate of I/O operations issued by the XQP on a nodewide basis, do the following:
Examining Cache Hit and Miss Rates
Check the FILE_SYSTEM_CACHE class for the level of activity (Attempt Rate) and Hit Percentage for each of the seven caches maintained by the XQP. The categories represent types of data maintained by the XQP on all mounted disk volumes. When an attempt to retrieve an item from a cache misses, the item must be retrieved by issuing one or more disk I/O requests. It is therefore important to supply memory caches large enough to keep the hit percentages high and disk I/O operations low.
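For example, to view this display on a live system (the playback technique shown in Section 12.1.1 works here as well), enter:

$ MONITOR FILE_SYSTEM_CACHE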
Cache sizes are controlled by the ACP/XQP system parameters. Data items in the FILE_SYSTEM_CACHE display correspond to ACP/XQP parameters as follows:
| FILE_SYSTEM_CACHE Item | ACP/XQP Parameters |
| --- | --- |
| Dir FCB | ACP_SYSACC, ACP_DINDXCACHE |
| Dir Data | ACP_DIRCACHE |
| File Hdr | ACP_HDRCACHE |
| File ID | ACP_FIDCACHE |
| Extent | ACP_EXTCACHE, ACP_EXTLIMIT |
| Quota | ACP_QUOCACHE |
| Bitmap | ACP_MAPCACHE |
The values determined by AUTOGEN should be adequate. However, if hit percentages are low (less than 75%), you should increase the appropriate cache sizes (using AUTOGEN), particularly when the attempt rates are high.
If you decide to change the ACP/XQP cache parameters, remember to reboot the system to make the changes effective. For more information on these parameters, refer to the OpenVMS System Management Utilities Reference Manual.
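For example, to increase the size of the directory data cache, you might add a line such as the following to SYS$SYSTEM:MODPARAMS.DAT and then run AUTOGEN through the reboot phase (the value shown is illustrative only; size the cache to your workload):

ACP_DIRCACHE = 512        ! illustrative value, not a recommendation

$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT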
If your system is running with the default HIGHWATER_MARKING attribute enabled on one or more disk volumes, check the Erase Rate item of the FCP class. This item represents the rate of erase I/O requests issued by the XQP to support the high-water marking feature. If you did not intend to enable this security feature, see Section 2.2 for instructions on how to disable it on a per-volume basis.
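If you do decide to disable the feature, the command takes the following form (the device name is hypothetical):

$ SET VOLUME/NOHIGHWATER_MARKING DUA0: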
When a disk becomes seriously fragmented, it can cause additional XQP disk I/O operations and a consequent rise in the disk read and write rates. It is good performance management practice to check disks periodically for fragmentation and to restore contiguity before performance suffers. You can restore contiguity for badly fragmented files by using the Backup (BACKUP) and Convert (CONVERT) utilities, the COPY/CONTIGUOUS DCL command, or the DEC File Optimizer for OpenVMS, an optional software product.
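For example, you can create a new, contiguous version of a single badly fragmented file as follows (the file specification is hypothetical):

$ COPY/CONTIGUOUS DUA1:[DATA]BIGFILE.DAT DUA1:[DATA]BIGFILE.DAT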
To avoid excessive disk I/O, enable RMS local and global buffers at the file level. This allows processes to share data in file caches, which reduces both the total memory requirement and the I/O load for information already in memory.
Global buffering is enabled on a per-file basis with the SET FILE/GLOBAL_BUFFERS=n DCL command. You can also set systemwide RMS defaults with the SET RMS_DEFAULT command and check the current values with the SHOW RMS_DEFAULT command. For more information on these commands, refer to the OpenVMS DCL Dictionary. Background material on this topic is available in the Guide to OpenVMS File Applications.
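For example (the file name and buffer counts are illustrative):

$ SET FILE/GLOBAL_BUFFERS=100 SALES.DAT
$ SET RMS_DEFAULT/BUFFER_COUNT=8/DISK/SYSTEM

The first command enables 100 global buffers on a single file; the second sets a systemwide default local buffer count for disk files.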
Note that file buffering can also be controlled programmatically by applications (see the description of XAB$_MULTIBUFFER_COUNT in the OpenVMS Record Management Services Reference Manual). Therefore, your DCL command settings may be overridden.
12.1.3 Disk I/O Offloading
This section describes techniques for offloading disk I/O onto other resources, most notably memory.
12.1.4 Disk I/O Load Balancing
The objective of disk I/O load balancing is to minimize contention for the disks themselves and for the channels that serve them. You can accomplish that objective by moving files from one disk to another or by reconfiguring the assignment of disks to specific channels.
Contention causes increased response time and, ultimately, increased
blocking of the CPU. In many systems, contention (and therefore
response time) for some disks is relatively high, while for others,
response time is near the achievable values for disks with no
contention. By moving some of the activity on disks with high response
times to those with low response times, you will probably achieve
better overall response.
12.1.4.1 Moving Disks to Different Channels
Use the guidelines in Section 8.2 to identify disks with excessively high response times that are at least moderately busy, and attempt to characterize them as mainly seek intensive or data-transfer intensive. Then use the following techniques to balance the load, either by moving files from one disk to another or by moving an entire disk to a different physical channel:
Note: When you use array controllers (HSC, HSJ, HSZ, or other network or RAID controllers), balance the load across the channels on the controller as well. You can use the controller console to obtain information on the location of the disks.
12.1.4.2 Moving Files from One Disk to Another
To move files from one disk to another, you must know, in general, what
each disk is used for and, in particular, which files are ones for
which large transfers are issued. You can obtain a list of open files
on a disk volume by entering the SHOW DEVICE/FILES DCL command.
However, because the system does not maintain transfer-size
information, your knowledge of the applications running on your system
must be your guide.
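For example (the device name is hypothetical):

$ SHOW DEVICE/FILES DUA1: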
12.1.4.3 Load Balancing System Files
The following are suggestions for load balancing system files:
All the tuning solutions for performance problems based on I/O
limitations involve using memory to relieve the I/O subsystem. The five
most accessible mechanisms are the Virtual I/O cache, the ACP caches,
RMS buffering, file system caches, and RAM disks.
12.2 Use Virtual I/O Caching
The virtual I/O cache is a clusterwide, write-through, file-oriented disk cache that can reduce the number of disk I/O operations and increase performance. Its purpose is to increase system throughput by reducing file I/O response times with minimum overhead. The virtual I/O cache operates transparently to system management and application software, and it maintains system reliability while significantly improving virtual disk I/O read performance.
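You can observe how effectively the cache is performing by examining its statistics, including the read hit rate; for example:

$ SHOW MEMORY/CACHE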