The file SYS$STARTUP:UCX$NFS_SERVER_STARTUP.COM also defines a set of logical names that set the file system parameters. Table 14-3 documents these logical names.
Logical Name | Description |
---|---|
UCX$CFS_CACHE_LOW_LIMIT | Defines the minimum size of the free buffer list. Below this number, the file system starts to reclaim used buffers. Default: 4 buffers. The free buffer list needs at least four free buffers (not taken by cache). If the actual number of free buffers falls below UCX$CFS_CACHE_LOW_LIMIT, used buffers are freed and returned to the free list until the size of the free list reaches UCX$CFS_CACHE_HIGH_LIMIT. |
UCX$CFS_CACHE_HIGH_LIMIT | Defines the number of buffers the file system tries to keep in the free buffer list. Default: 8 buffers. See UCX$CFS_CACHE_LOW_LIMIT. In a busy server environment, setting this parameter higher is likely to improve performance. |
UCX$CFS_TRANSFERSIZE | Defines the size, in bytes, of the data transferred between server and client on READ and WRITE operations. Default: 8 Kbytes (8192 bytes). This value is used in most NFS server implementations. |
UCX$CFS_KEEP_ALLOC | Defines whether the KEEP_ALLOC option is turned on or off. Default: 0 (OFF). If the KEEP_ALLOC option is OFF, unused blocks at the end of a file are freed. If it is ON, unused blocks are kept allocated. |
UCX$CFS_SHOW_VERSION | Sets the SHOW_VERSION logical name on or off. If ON, the NFS server returns file names to the client with version numbers, even if there is only one version of the file. Default: 0 (OFF). |
UCX$CFS_MODUS_OPERANDI | Defines various operating modes. Use only under the advice of your DIGITAL support representative. |
UCX$CFS_FATAL_MESSAGES | Defines the terminal device to which important error messages are directed, in addition to the normal error messages that are sent to the operator's console. Default: _OPA0:. |
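For example, to raise the target size of the free buffer list on a busy server, you might add a definition like the following to SYS$STARTUP:UCX$SYSTARTUP.COM so that the setting is preserved across upgrades. The value 16 and the /SYSTEM/EXECUTIVE_MODE form are illustrative assumptions, not documented requirements; define the logical name before the NFS server starts so that it takes effect.

$ DEFINE/SYSTEM/EXECUTIVE_MODE UCX$CFS_CACHE_HIGH_LIMIT 16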
This section provides information to help you identify and resolve
problems and tune system performance.
14.13.1 Displaying Performance Information
The SHOW NFS_SERVER command displays information about the running NFS server that you can use to tune its performance. You can issue SHOW NFS_SERVER for a specific client or host if it is listed in the proxy database. The counter information can be especially useful in determining the load on your system.
In the following sample display, the numbers are keyed to the discussion that follows.
UCX> SHOW NFS_SERVER

Server: NFS$SERVER                    Loaded:  14-NOV-1995 15:35:01.73
Status: ACTIVE                        Running: 0 00:24:21.26

Memory allocated (1)   470260         RPC errors (2)
Message processing:                     Authentication          0
  Threads busy (3)          0           Others                  0
  Threads free             15         Mount data base: (4)
  Max. threads busy         5           Mounted file systems    1
Duplicate cache xid (5)     0           Current users           1
Duplicate active xid        0           Maximum mounted         1
Dropped                     0           Maximum users           1

Data exchange: (6)                    NFS operations: (7)
  Bytes sent         11839124           null        0   getattr    42
  Bytes rcvd         10900824           setattr    12   lookup    186
  Messages sent          2956           readlink    0   rename      0
  Messages rcvd          2956           read     1417   write    1284
  Max. message sent      8292           statfs      1   create      2
  Max. message rcvd      8328           remove      1   link        0
Open files: (8)                         symlink     0   mkdir       1
  Maximum opened            2           rmdir       1   readdir     7
  Closed per interval       0           Total NFS operations   2954
  Currently opened          0
Error messages (9)          0
The SHOW CFS command is useful for monitoring the distribution of the file system services and the consumption of system time by the various system services. See the DIGITAL TCP/IP Services for OpenVMS Management Command Reference manual for a detailed description of the SHOW CFS command.
Example 14-1 provides an example SHOW CFS display. The numbers in the example are keyed to the discussion that follows. (The NFS server must be running when you issue this command.)
Example 14-1 SHOW CFS
UCX> SHOW CFS

                      CFS SERVICES    13-MAR-1997 14:10:02.74

CFS Services(1)              VAX/OpenVMS System Services(2)
-----------------------      ------------------------------------------------
CLOSE           8
CREATE_FH       1            $ASSIGN     9    $QIO        0
CREDIR_FH       0            $DASSGN     8                      Access       7
FREEBUFF        0                                               Create       1
GETATTR        28            $DEQ      182                      Deaccess     8
LINK_FH         0            $ENQ      603
LOOKUP_FH      72                                               Read_attr  159
OPEN_FH         7            $EXPREG    16                      Write_attr  33
READ            1            $SETPRT     5
READBUFF        0                                               Lookup     108
READDIR_FH      2            $CLREF    169
READLINK_FH     0            $SETEF    169                      Extend       1
REMDIR_FH       0
REMOVE_FH       0            $DCLAST   232                      Delete       0
RENAME_FH       0            $CLRAST     9                      Enter        0
SETATTR         8            $SETAST   360                      Remove       0
STATFS          1
SYMLINK_FH      0            $GETDVI     7                      Read_V       9
WRITE          41                                               Write_V     57
OTHER           1            $CHKPRO    95
TOTAL         170
The SHOW CFS/SUMMARY command gives a good indication of current file system performance by displaying current and maximum values. See the DIGITAL TCP/IP Services for OpenVMS Management Command Reference manual for a description of the SHOW CFS/SUMMARY command.
Example 14-2 is an example of a SHOW CFS/SUMMARY display. The numbers in the example are keyed to the discussion that follows.
Example 14-2 SHOW CFS/SUMMARY Display
UCX> SHOW CFS /SUMMARY

             CFS Service status and performance    21-MAR-1997 10:36:05.82
     (1)                               (2)
Service          State Cur Max     Total    Cacheop   Cur  Hit  I/O  Inc   Status
---------------  ----- --- ---  ---------   -------  ---- ---- ---- ----   ------------
FP-Access              0   1        374     Read        0    0    0    0   Clusize   16
FP-Attributes          0   1       2246     Read-A      0    0    0    0   Limit    256
FP-Delete              0   0          0     Write       0    0    0    0   Inuse      0
FP-Dir                 0   1        989     Write-A     0    0    0    0   Busy       0
FP-Rename              0   0          0     Write-D     0                  Hitrate    0
FP Sub_total           0   2       3609       (3)
Buffered               0   0          0     Nameop    Cur  Hit   Status
Internal               0   1        127     ------   ---- ----   -------------
I/O                    0   1         11     Add         0        Tabsize    0
Lock                   0   3         73     Delete      0        Inuse      0
Logging                0   0          0     Lookup      0    0   Hitrate    0
Resource               0   0          0       (4)
RMS                    0   0          0     Fileop    Cur  Hit   Status
Service                0   1          1     ------   ---- ----   -------------
Timer                  0   1       1179     Find        0    0   Limit      0
Other                  0   0          0     Find-A      0        Inuse      0
Synch                  0   1        994     Find-C      0        Timeout    5
                                                                  Hitrate    0
   (5)
Services count         1   4       2863     ATCBs:4  TBABs:32  RDCBs:260  Pages:34 (6)
Value | Description |
---|---|
Limit | Maximum number of resource descriptors and control blocks (RDCBs) that can be in cached status at any moment |
Inuse | Number of RDCBs in use |
Timeout | Number of seconds that any cached file's attributes are considered valid |
Hit rate | Number of XQP QIOs (read attributes) that were saved because a valid RDCB was found (other overhead, such as $ENQ/$DEQ time, is also saved) |
Value | Description |
---|---|
ATCBs | Asynchronous thread control blocks (one per thread) |
TBABs | Thread backout attachment blocks (one per active virtual lock) |
RDCBs | Resource description and control blocks (one per node, one per file system, one per file, one per cache buffer) |
Hit rate | Number of XQP QIOs (read attributes) that were saved because a valid RDCB was found (other overhead, such as $ENQ/$DEQ time, is also saved) |
Because the NFS server does not maintain state about any of its clients, clients do not send explicit open and close file requests to the server. Instead, the server opens and closes the files for the clients. Any read or write request causes the server to open the file (if it does not already have the file open). The server caches the open files to create an internal state for each file within the NFS server environment.
The server uses the following guidelines to close the files:
A short interval may cause the server to close files more often,
thereby reducing performance. The default interval is 2 minutes. You
can control the time interval by setting a new value for the
UCX$NFS00000000_INACTIVITY logical name or with the /INACTIVITY_TIMER
qualifier of the SET NFS SERVER command.
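For illustration only, the interval might be lengthened by defining the logical name before the server starts. The value 5 (interpreted as minutes, by analogy with the 2-minute default described above) and the /SYSTEM/EXECUTIVE_MODE form are assumptions:

$ DEFINE/SYSTEM/EXECUTIVE_MODE UCX$NFS00000000_INACTIVITY 5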
14.13.4 Increasing the Number of Active Threads
The NFS server is an asynchronous, multithreaded process, which means that multiple NFS requests can be processed concurrently. Each NFS request is referred to as a thread. With increased server activity, client users may experience timeout conditions. Assuming the server host has the available resources (CPU, memory, and disk speed), you can improve server response by increasing the number of active threads. You do this by changing the value for the UCX$NFS00000000_THREADS logical name. Use the SHOW NFS_SERVER command to display the maximum number of active threads.
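For example, the thread count might be raised by defining the logical name before starting the server. The value 20 and the /SYSTEM/EXECUTIVE_MODE form shown here are illustrative assumptions; use SHOW NFS_SERVER afterward to confirm the new maximum.

$ DEFINE/SYSTEM/EXECUTIVE_MODE UCX$NFS00000000_THREADS 20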
If you increase the number of active threads, you should also consider increasing the timeout period on UNIX clients. You do this with the timeo or /TIMEOUT option to the MOUNT command.
If your clients still experience timeout conditions after you increase
the number of active threads and the timeout period on the client, you
may need higher-performing hardware.
14.13.5 Increasing the Size of the Host Table
During NFS initialization, the server builds a host table from the proxy database file. If the number of client hosts listed in the proxy database exceeds the number specified with the UCX$NFS00000000_HOST logical name, the excess client host names will be ignored. This action has the effect of disabling access to the server for those client host names that cannot be loaded into the server's cache.
Making the parameter value larger than needed causes NFS to allocate
excess virtual memory within the server.
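For example, if the proxy database lists more client hosts than the current host table holds, the table might be enlarged by defining the logical name before the server starts. The value 50 and the /SYSTEM/EXECUTIVE_MODE form are illustrative assumptions; keep the value close to the actual number of client hosts to avoid allocating excess virtual memory.

$ DEFINE/SYSTEM/EXECUTIVE_MODE UCX$NFS00000000_HOST 50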
14.13.6 Increasing the Size of the Transaction Cache
Because NFS uses UDP as an underlying protocol, delivery between the client and the server is not guaranteed. Because delivery is not guaranteed, the client often reissues an NFS request if it does not receive a response from the server within a certain time period. This may cause CREATE, DELETE, LINK, RENAME, SYMLINK, and SETATTR operations to successfully complete the first time but fail the second time with a false error.
Each server response to a client request is identified by the host name and a transaction identifier (XID). The server stores the responses it makes to client requests for directory and file access. This way, whenever a client request is recognized as a duplicate, the server checks the response cache to see whether a response has already been sent. If it finds a matching response, the response is resent; otherwise, the duplicate request is ignored.
If the request is not a duplicate, the message is dispatched and processed.
The size of the cache for these operations is limited by the value set with the UCX$NFS00000000_XID logical name. The default is 20, but you should increase this value if you notice any of the following situations on a client:
The NFS server supports access to XQP+, the eXtended QIO processor file system enhancements introduced in OpenVMS Version 6.1. The XQP+ features are described in detail in the OpenVMS Version 6.1 New Features Manual.
Performance improvements offered by XQP+ are especially beneficial for NFS servers that are handling a heavy load. By default, the XQP+ functionality is disabled; you must enable it by setting certain SYSGEN parameters.
The file SYS$STARTUP:UCX$NFS_SERVER_STARTUP.COM describes how to enable OpenVMS and UCX-level usage of XQP+ features. The file also describes how to define logical names in SYS$STARTUP:UCX$SYSTARTUP.COM so that the site-specific choices are preserved across NFS and OpenVMS upgrades.
The NFS server can take advantage of the following XQP+ features:
$ DEFINE/SYSTEM/EXECUTIVE_MODE UCX$CFS_ACP_ENABLE_THREADS 1
$ DEFINE/SYSTEM/EXEC UCX$CFS_ACP_ENABLE_DEFERRED_HEADER_WRITES 1
$ ANALYZE/DISK_STRUCTURE/REPAIR device
The NFS server is started by the auxiliary server using the NFS server account UCX$NFS, which includes the NFS server process quotas and resource limits.
You can increase the NFS server account quotas and limits to improve
the performance of the NFS server. Use the OpenVMS Authorize utility to
change the quotas and limits for the UCX$NFS account.
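For example, the server account's open file limit might be raised with the Authorize utility. The value 500 is purely illustrative, and the assumption here is that the NFS server must be restarted before the new quota is read:

$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY UCX$NFS/FILLM=500
UAF> EXIT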
14.13.9 Increasing UAF File Limits
A file limit (FILLM) quota is associated with each OpenVMS user account. The FILLM quota specifies the maximum number of files that can be accessed simultaneously. When the server accesses a file to process a read or write request, the total number of files accessed on behalf of that user cannot exceed the FILLM quota specified in the user's OpenVMS authorization information. To make this file limit transparent to the client, NFS closes the least recently accessed file and then accesses the new file. When this happens, the error log file contains an "exceeded quota" message.
Because the open files quota information is loaded into the NFS server upon startup, changes to a user's authorization information do not take effect until you restart the NFS server.
A FILLM quota is also associated with the NFS server account. If the FILLM quota is set too low, it degrades the server's performance. Set this quota so it is large enough to accommodate the total of all the individual NFS users' quotas.
Under normal conditions, the NFS request completes successfully after you raise the quota. The inactivity timer interval can reduce the required open file quota by closing old files automatically before contention for the quota occurs.