DIGITAL TCP/IP Services for OpenVMS
Management



You may have set up proxy mapping such that many incoming users map to one OpenVMS account, causing multiple users to share the same file quota. In such a case, ensure that the FILLM value for this account is large enough to provide satisfactory performance for all the users so associated.
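For example, the following sketch raises FILLM for a hypothetical shared account named NFS_USERS with the Authorize utility; the value shown is only an illustration, and the new limit applies the next time a process is created for the account:

$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY NFS_USERS/FILLM=500
UAF> EXIT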

The CHANNELCNT parameter sets the maximum number of channels that a process can use. Ensure that CHANNELCNT is set large enough to handle the total number of files accessed by all clients.
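You can display and change CHANNELCNT with the SYSGEN utility, as in the following sketch. The value 512 is arbitrary; on a production system you would normally add the parameter to MODPARAMS.DAT and run AUTOGEN instead, and because CHANNELCNT is not a dynamic parameter, the new value takes effect only after a reboot:

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW CHANNELCNT
SYSGEN> SET CHANNELCNT 512
SYSGEN> WRITE CURRENT
SYSGEN> EXIT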

14.13.10 OpenVMS SYSGEN Parameters that Impact Performance

The following OpenVMS SYSGEN parameters impact NFS server performance:


Chapter 15
NFS Client

The Network File System (NFS) client software enables client users to access file systems made available by an NFS server. These files and directories physically reside on the remote (server) host but appear to the client as if they were on the local system. For example, any files accessed by an OpenVMS client --- even a UNIX file --- appear to be OpenVMS files and have typical OpenVMS file names.

This chapter reviews key concepts and provides guidelines for configuring and administering the NFS client software on your OpenVMS system. For information about the NFS server, see Chapter 14.

15.1 Reviewing Key Concepts

Because the NFS software was originally developed on and used for UNIX machines, NFS implementations use UNIX-style file system conventions and characteristics. This means that the rules and conventions that apply to UNIX file types, file names, file ownership, and user identification also apply to NFS.

Because the DIGITAL TCP/IP Services NFS client runs on OpenVMS, the client must accommodate the differences between the two file systems, for example, by converting file names and mapping file ownership information. You must understand these differences to properly configure NFS and successfully mount file systems from an NFS server.

The following sections serve as a review only. If you are not familiar with these topics, see the DIGITAL TCP/IP Services for OpenVMS Concepts and Planning guide for a more detailed discussion of the NFS implementation available with DIGITAL TCP/IP Services for OpenVMS.

15.1.1 NFS Clients and Servers

NFS is a client/server environment that allows computers to share disk space and users to work with their files from multiple computers without copying them to the local system. Computers that make files available to remote users are NFS servers. Computers with local users accessing and creating remote files are NFS clients. A computer can be an NFS server, an NFS client, or both a server and a client.

Attaching a remote directory to the local file system is called mounting a directory. A directory cannot be mounted unless it is first exported by an NFS server. The NFS client identifies each file system by the name of its mount point on the server. The mount point is the name of the device or directory at the top of the file system hierarchy. An NFS device is always named DNFSn.

All files below the mount point are available to client users as if they resided on the local system. The NFS client requests file operations by contacting a remote NFS server. The server then performs the requested operation. The NFS client automatically converts all mounted directories and file structures, contents, and names to the format required by OpenVMS. For example, a UNIX file named

/usr/webster/.login

would appear to an OpenVMS client as

DNFS1:[USR.WEBSTER].LOGIN;1

For more information on how NFS converts file names, see Appendix F.

15.1.2 Storing File Attributes

The OpenVMS operating system supports multiple file types and record formats. In contrast, NFS and UNIX systems support only byte-stream files, which the OpenVMS client sees as sequential STREAM_LF files.

This means the client must use special record handling to store and access non-STREAM_LF files. The OpenVMS NFS client accomplishes this with attribute description files (ADFs) --- special companion files the client uses to hold the attribute information that would otherwise be lost in the translation to STREAM_LF format. For example, a SET FILE/NOBACKUP command causes the client to create an ADF, because NFS has no concept of this OpenVMS attribute.

The client stores ADFs, as far as available memory allows, in a file attribute cache. When you dismount a directory, the NFS client flushes this cache and can no longer reconstruct the original file attributes; instead, a record-format-recognition algorithm reads each file to determine its record attributes.

15.1.2.1 Using Default ADFs

The client provides default ADFs for files with the following extensions: .EXE, .HLB, .MLB, .OBJ, .OLB, .STB, and .TLB. (The client does not provide ADFs for files with the .TXT and .C extensions, because these are STREAM_LF.) The client maintains these ADFs on the server.

For example, SYS$SYSTEM:UCX$EXE.ADF is the default ADF for all .EXE type files. When you create .EXE files (or if they exist on the server), they are defined with the record attributes from the single default ADF file. The client refers only to the record attributes and file characteristics fields in the default ADF.

15.1.2.2 How the Client Uses ADFs

By default, the client uses ADFs if they exist on the server. The client updates existing ADFs or creates them as needed for new files. If you create a non-STREAM_LF OpenVMS file or a file with access control lists (ACLs) associated with it on the NFS server, the NFS client checks to see if a default ADF can be applied. If not, the client creates a companion ADF to hold the attributes.

The client hides these companion files from the user's view. If a user renames or deletes the original file, the client automatically renames or deletes the companion file. However, if a user renames or deletes a file on the server side, that user must also rename or delete the companion file; otherwise, the file attributes are lost.

You can modify this behavior with the /NOADF qualifier to the MOUNT command. The /NOADF qualifier tells the client to handle all files as STREAM_LF unless a default ADF matches. This mode is appropriate only for read-only file systems, because the client cannot adequately handle application-created files when /NOADF is in effect.
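For example, the following hypothetical command mounts a read-only documentation tree without companion ADFs (the host and path are illustrations only):

UCX> MOUNT DNFS3: /HOST="loon" /PATH="/usr/doc" /NOADF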

15.1.2.3 Creating Customized Default ADFs

You can create customized default ADFs for special applications. To do so:

  1. On the client, create a special application file that results in creating an ADF on the server. Suppose that application file is called TEST.GAF.
  2. On the server, check the listing for the newly created file. For example:
    > ls -a
    .
    ..
    .$ADF$test.gaf;1
    test.gaf

    Note the ADF (.$ADF$test.gaf;1) was created with the data file (TEST.GAF).
  3. On the server, copy the ADF file to a newly created default ADF file. For example:
    > cp .\$ADF\$test.gaf\;1 gaf.adf
    Note that the backslashes (\) are required to escape the "$" and ";" characters, which are not standard in UNIX file names.
  4. On the client, copy the new default ADF file to the SYS$SYSTEM directory. For example:
    $ COPY GAF.ADF SYS$COMMON:[SYSEXE]UCX$GAF.ADF 
    
  5. Dismount all the NFS volumes and mount them again. This starts another NFS ancillary control process (ACP) so the newly copied default ADF file can take effect.

15.1.3 How the NFS Client Authenticates Users

Both the NFS server and NFS client use the proxy database to authenticate users. The proxy database is a collection of entries used to register user identities. To access file systems on the remote server, local users must have valid accounts on the remote server system.

The proxy entries map each user's OpenVMS identity to a corresponding NFS identity on the server host. When a user initiates a file access request, NFS checks the proxy database before granting or denying access to the file.

The proxy database is an index file called UCX$PROXY.DAT. If you use the configuration procedure to configure NFS, this empty file is created for you. You populate this file by adding entries for each NFS user. See Section 15.2 for instructions on how to add entries to the proxy database.

15.1.4 How the Client Maps User Identities

Both OpenVMS and UNIX-based systems use identification codes as a general method of resource protection and access control. Just as OpenVMS employs user names and UICs for identification, UNIX identifies users with a user name and a user identification (UID) and group identification (GID) pair. Both UIDs and GIDs are simply numbers to identify a user on a system.

The proxy database contains entries for each user wishing to access files on a server host. Each entry contains the user's local OpenVMS account name, the UID/GID pair that identifies the user's account on the server system, and the name of the server host. This file is loaded into dynamic memory when the NFS client starts. Whenever you modify the UID/GID to UIC mapping, you must reload the NFS client software. (Proxy mapping always occurs even when operating in OpenVMS-to-OpenVMS mode.)

When a user on the client host requests file access, the client searches the proxy database for an entry that maps the requester's identity to a corresponding UID/GID pair. If the client finds a match, it sends the server a message that contains the following:

  • The UID/GID pair that corresponds to the requester
  • The requested file operation

If the remote server is running OpenVMS, the server scans its copy of the proxy database for an entry that corresponds to the requester's UID/GID. If it finds a valid entry, the server grants access according to the privileges set for that account.

Before sending a user request to the NFS server, the client checks access based on the UNIX-style protection mask. This check occurs on the client host and causes the client to grant or deny access.

The protections set for files located on DNFSn: devices might not map exactly to OpenVMS protections. Some remote file systems do not provide system-level file protections (on such systems, the superuser has all privileges for all files).

The only permission required by the UNIX file system for deleting a file is write access to the last directory in the path specification.

You can print a file that is located on a DNFSn: device. However, the print symbiont, which runs as user SYSTEM, opens the file only if it is world readable or there is an entry in the proxy database allowing read access to user SYSTEM.
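For example, a proxy entry such as the following sketch gives user SYSTEM an outgoing identity on server loon; the UID/GID pair shown is a placeholder and must correspond to an account that is valid on the server:

UCX> ADD PROXY SYSTEM /NFS=OUTGOING /UID=1120 /GID=22 /HOST="loon"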

15.1.4.1 Default Mapping

It is possible to give client users access to the server file systems even if they do not have corresponding account identities on the server system. You accomplish this by creating a proxy identity for the default user (usually the UNIX user "nobody," whose UID/GID pair is -2/-2).

If the outgoing UID does not map to an OpenVMS account on the server, the server may still grant access to the client user, but access is restricted. The server searches its version of the proxy database for an entry for the default user's UID/GID pair. (See Section 14.1.5 in Chapter 14 for more information about the default user.) If such an entry exists, the unmapped user is mapped to that account.
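For example, on an OpenVMS server you might register the default user with an entry like the following sketch, where NFS_DEFAULT is a hypothetical local account and -2/-2 is the conventional identity of the user "nobody":

UCX> ADD PROXY NFS_DEFAULT /NFS=INCOMING /UID=-2 /GID=-2 /HOST=*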

15.1.4.2 Providing Universal Access to World-Readable Files

To provide universal access to world-readable files, you can use the default UID instead of creating a proxy entry for every NFS client user.

DIGITAL strongly recommends that, for any other purposes, you provide a proxy with a unique UID for every client user. Otherwise, client users may see unpredictable and confusing results when they try to create files.

15.1.5 How the Client Grants File Access

Both OpenVMS and UNIX-based systems use a protection mask that defines categories assigned to a file and the type of access granted to each category. The NFS server file protection categories, like those found on UNIX systems, are user, group, and other, each of which can have read (r), write (w), or execute (x) access. The OpenVMS categories are SYSTEM, OWNER, GROUP, and WORLD. Each category can have up to four types of access: READ (R), WRITE (W), EXECUTE (E), and DELETE (D). The NFS client handles file protection mapping from server to client.

OpenVMS DELETE access does not translate directly to NFS. An NFS user can delete a file as long as he or she has write access to the parent directory; to see whether a file can be deleted, check the protections on the parent directory. This design is similar to OpenVMS, where the absence of write access to the parent directory prevents users from deleting files, even when the protections on the file itself appear to allow delete access.
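For example, to judge whether a file on a DNFS device can be deleted, you can display the security attributes of its parent directory; the device and directory names here are illustrations only:

$ DIRECTORY/SECURITY DNFS1:[USR]WEBSTER.DIR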

15.1.6 Guidelines for Working with DNFS Devices

The following list summarizes the guidelines and restrictions associated with DNFS devices.

15.1.7 How NFS Converts File Names

Because NFS uses UNIX-style syntax for file names, valid OpenVMS file names may be invalid on the NFS server and vice versa. The NFS software automatically converts file names to the format required by either the client or server. It is important to note that NFS always converts file names even when both the NFS client and the NFS server are OpenVMS hosts.

All name-mapping sequences on the OpenVMS client begin with the "$" escape character. Appendix F lists the rules that govern these conversions and provides a list of character sequences, server characters, and octal values used for NFS name conversion.

15.2 Registering Users in the Proxy Database

Users on your client host must have corresponding accounts on the NFS server host. After making sure client users have appropriate accounts, you must register them with the proxy database. The NFS client, the NFS server, and the PC-NFS daemon all use the proxy database.

If you use UCX$CONFIG to configure NFS, the index file UCX$PROXY.DAT is created for you. This file is empty until you populate it with proxy entries. If you do not use the configuration procedure, use the CREATE PROXY¹ command to create the empty database file. The file UCX$PROXY.DAT resides in the SYS$COMMON:[SYSEXE] directory by default. You can change the location of the proxy database by redefining the logical name UCX$PROXY.
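For example, the following sketch creates the empty database and then points the software at a copy in a different, hypothetical location:

UCX> CREATE PROXY
$ DEFINE/SYSTEM UCX$PROXY DKA100:[NFS]UCX$PROXY.DAT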

Use the following commands to manage the proxy database:

  • ADD PROXY
  • REMOVE PROXY
  • SHOW PROXY

Issue these commands at the UCX prompt. For example:

UCX> ADD PROXY username /NFS=type /UID=n /GID=n /HOST=host_name

Changes in the proxy database take effect only after you dismount all DNFSn: devices and remount them. An exception is DNFS0:, which is present whenever the NFS client driver is loaded; it cannot be mounted or dismounted.
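For example, to pick up a changed proxy entry for the file system mounted on DNFS1:, you might dismount and remount it (the host and path are illustrations):

UCX> DISMOUNT DNFS1:
UCX> MOUNT DNFS1: /HOST="loon" /PATH="/usr/users"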

Each entry in the proxy database has four fields as defined in Table 15-1.

Table 15-1 Required Fields for NFS Proxy Entries
Field Meaning
OpenVMS user name Name of the NFS user's OpenVMS account (no default)
Type Direction of NFS communication allowable to the user. Specify one of the following:
  • O (outgoing)---Used by the client to map the OpenVMS UIC of a local client user to the UID/GID pair.
  • N (incoming)---Used by the NFS server to map a UID/GID pair to the OpenVMS UIC.
  • ON (outgoing and incoming)---Used by both client and server.
  • D (dynamic)---Entry is loaded in the server's dynamic memory. When the NFS server starts, it creates a copy of the proxy database in dynamic memory. (If the account does not exist or the account is disabled, the entry for the account will be missing from dynamic memory.)
UID/GID pair Remote identity of the user. Required even if both client and server are OpenVMS hosts.
Remote host name Name of the remote host (no default), which is one of the following:
  • Remote client of the local NFS server
  • Remote server for the local NFS client
  • Both
  • Wildcard ( * ) for all hosts

To add a user name to the proxy database, take the following steps:

  1. For each NFS user, obtain the OpenVMS user name from the OpenVMS user authorization file (UAF). If a user name is not in the UAF, use the Authorize utility to add it.
  2. Obtain the UID/GID pair for each NFS user from the /etc/passwd file on the NFS server.
  3. Issue SHOW PROXY.
  4. Issue ADD PROXY for each NFS user you want to add to the proxy database. For example:
    UCX> ADD PROXY GANNET /NFS=(OUTGOING,INCOMING) /UID=1111 /GID=22 /HOST=CLIENT1 
    
  5. Re-issue SHOW PROXY to confirm the new information.

The following illustrates a portion of a proxy database file:

 
VMS User_name     Type      User_ID    Group_ID   Host_name 
 
GANNET            OND          1111          22   CLIENT1, client1 
GEESE             OND          1112          22   * 
GREBE             OND          1113          22   client1, client2 
GROUSE            OD           1114          23   client3 
GUILLEMOT         OD           1115          23   client3 
GULL              OD           1116          23   client4 
 


Note

¹ You may also create a proxy database file from a UNIX-formatted /etc/passwd file with the CONVERT/VMS PROXY command.


15.3 Mounting Files and Directories

Attaching remote files and directories exported by an NFS server is called mounting. The NFS client identifies each file system by the name of its mount point on the server. The client provides the following mount-related commands:

  • MOUNT
  • DISMOUNT
  • SHOW MOUNT

Issue these commands at either the UCX> prompt or the DCL prompt ($), for example:

UCX> MOUNT mount_point /HOST="host" /PATH="/path/name" 

or

$ UCX MOUNT mount_point /HOST="host" /PATH="/path/name" 


Note

By default, a mount is now considered a system mount and privileges are required unless the /SHARE qualifier is used. See Section 15.3.1 for information on user-level mounting.

When you issue a MOUNT command, the NFS client creates a new DNFS device and mounts the remote file system onto it. For example, the following command mounts, onto local device DNFS2:, the remote directory /usr/users/curlew, which physically resides on NFS server loon.

UCX> MOUNT DNFS2: /HOST="loon" /PATH="/usr/users/curlew" 

After you issue the command, a confirmation message such as the following is displayed:

%DNFS-S-MOUNTED, /usr/users/curlew mounted on DNFS2:[000000] 

If you specify DNFS0 in a mount command, the client selects the next available unit number for you, for example:

UCX> MOUNT DNFS0:
%DNFS-S-MOUNTED, /usr/curlew mounted on DNFS3:[000000] 

Qualifiers to the MOUNT command let you modify the way a traditional mount occurs. For example, you may specify background mounting, modify existing mounts, or hide subdirectories from view. See the following sections for more information:

  • Section 15.3.1, User-Level Mounting
  • Section 15.3.2, Automounting
  • Section 15.3.3, Background Mounting

See the DIGITAL TCP/IP Services for OpenVMS Management Command Reference manual for a complete list of mount options and command qualifiers.

15.3.1 User-Level Mounting

The NFS client supports shared mounting through the /SHARE qualifier to the MOUNT command. Any user can mount a file system with the /SHARE qualifier---SYSNAM or GRPNAM privileges are not required. The /SHARE qualifier places the logical name in the job logical name table and increments the volume mount count, regardless of the number of job mounts. When the job logs out, all job mounts are dismounted and the volume mount count is decremented accordingly.

The following example illustrates how to specify a shared mount.

 
UCX> MOUNT DNFS1: /HOST=BART /PATH="/DKA100/ENG" 
UCX> MOUNT DNFS1: /HOST=BART /PATH="/DKA100/ENG" /SHARE 

This mount request increments the mount count by one. You must specify the /SHARE qualifier with the same host name and path used in the initial mount so that the request is treated as a shared mount rather than a new mount.

With a shared mount, the mount requests increment the mount count by 1 under the following circumstances:

  • When you issue another UCX MOUNT/SHARE command with the same host name and path
  • When you issue a DCL MOUNT/SHARE command for the device

In this way, if the main process of the job logs out, the job mount is deallocated, and the volume mount count decrements by one (if zero, the device is dismounted). OpenVMS handles dismounting differently based on whether you use the UCX management command DISMOUNT or the DCL command DISMOUNT. These differences are explained below.

Consider the mount counts in the sample MOUNT/DISMOUNT sequence shown below.

  1. UCX> MOUNT DNFS1: /HOST=BART /PATH="/DKA0/ENG"
    Mount count: 1 system mount, not incremented
  2. UCX> MOUNT DNFS1:[A] /HOST=BART /PATH="/DKA0/ENG" /SHARE
    Mount count: 2 (incremented)
  3. $ MOUNT/SHARE DNFS1:
    Mount count: 3 (incremented)
  4. UCX> MOUNT DNFS1:[B] /HOST=MARGE /PATH="/DKA0/TEST"
    Mount count: 3 (system mount, not incremented)
  5. UCX> DISMOUNT DNFS1:[A]
    Mount count: 2
  6. $ DISMOUNT DNFS1:
    Mount count: 1 (removed mount in example 3, decremented)
  7. $ DISMOUNT DNFS1:
    Mount count: 0 (removed mount in example 4, decremented)

The original mount of BART's /DKA0/ENG on DNFS1:[A], along with its shared mount, is dismounted. The subsequent DISMOUNT commands remove the mounts created in steps 3 and 4, leaving nothing mounted.

15.3.2 Automounting

Automounting allows you to mount a remote file system on an as-needed basis: the client automatically and transparently mounts a remote server path as soon as a user accesses the path name. Automounting is convenient for file systems that are inactive for long periods. When a user invokes a command to access a remote file or directory, the automount daemon mounts the file system and keeps it mounted as long as the user needs it. You can specify an inactivity period (5 minutes is the default); when that amount of time elapses without the path being accessed, the software automatically dismounts it.

You specify automounting and an inactivity interval with the qualifier /AUTOMOUNT=INACTIVITY:OpenVMS_delta_time.

The inactivity interval is the maximum inactive period for the mount attempt. When this period expires, the NFS client dismounts the path name as described below.

In the following example, the client automounts directory /usr/webster, residing on host robin, onto the OpenVMS mount point DNFS67:. When a user references the path name, the client keeps the path mounted until an inactive period of 10 minutes elapses, after which it dismounts the file system. On subsequent references, the client remounts the file system.

UCX> MOUNT DNFS67: /HOST="robin" - 
_UCX> /PATH="/usr/webster" /AUTOMOUNT=INACTIVITY=00:10:00 

15.3.3 Background Mounting

Background mounting allows you to retry a file system mount that initially failed. For example, you may have set mount points in your system startup command file so they are mounted automatically every time your system reboots. In this scenario, if the server is unavailable (for example, because it is also rebooting), the mount requests fail. With the background option set, the client continues to try the mount after the initial failure, retrying up to 10 times at 30-second intervals by default, or for the number of retries and the interval you specify.
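The following sketch shows what such a mount might look like; the host, path, and /BACKGROUND values are illustrations, and you should confirm the exact qualifier syntax in the Management Command Reference:

UCX> MOUNT DNFS5: /HOST="loon" /PATH="/usr/eng" - 
_UCX> /BACKGROUND=(DELAY:00:00:30,RETRIES:10) 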

