Compaq TCP/IP Services for OpenVMS
Management



21.1.3 How the NFS Client Authenticates Users

Both the NFS server and NFS client use the proxy database to authenticate users. The proxy database is a collection of entries used to register user identities. To access file systems on the remote server, local users must have valid accounts on the remote server system.

The proxy entries map each user's OpenVMS identity to a corresponding NFS identity on the server host. When a user initiates a file access request, NFS checks the proxy database before granting or denying access to the file.

The proxy database is an index file called TCPIP$PROXY.DAT. If you use the configuration procedure to configure NFS, this empty file is created for you. You populate this file by adding entries for each NFS user. See Section 21.3 for instructions on how to add entries to the proxy database.

Note

The configuration procedure for the NFS server creates a nonprivileged account with the user name TCPIP$NOBODY. You may want to add a proxy record for the default user (-2/-2) that maps to the TCPIP$NOBODY account.
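
For example, assuming the TCPIP$NOBODY account created by the configuration procedure, a command like the following adds such an entry (the wildcard host specification shown here is one possible choice):


TCPIP> ADD PROXY TCPIP$NOBODY /NFS=INCOMING /UID=-2 /GID=-2 /HOST=*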

21.1.4 How the Client Maps User Identities

Both OpenVMS and UNIX-based systems use identification codes as a general method of resource protection and access control. Just as OpenVMS employs user names and UICs for identification, UNIX identifies each user with a user name and a pair of numbers: the user identifier (UID) and the group identifier (GID). Together, the UID and GID identify a user on a system.

The proxy database contains entries for each user wanting to access files on a server host. Each entry contains the user's local OpenVMS account name, the UID/GID pair that identifies the user's account on the server system, and the name of the server host. This file is loaded into dynamic memory when the NFS client starts. Whenever you modify the UID/GID to UIC mapping, you must restart the NFS client software by dismounting and remounting all the client devices. (Proxy mapping always occurs even when operating in OpenVMS to OpenVMS mode.)
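
For example, the following sequence (the device, host, and path names are illustrative) forces the client to reload the proxy mapping for one device:


TCPIP> DISMOUNT DNFS1:
TCPIP> MOUNT DNFS1: /HOST="loon" /PATH="/usr/users"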

The only permission required by the UNIX file system for deleting a file is write access to the last directory in the path specification.

You can print a file that is located on a DNFSn: device. However, the print symbiont, which runs as user SYSTEM, opens the file only if it is world readable or if there is an entry in the proxy database that allows read access to user SYSTEM.
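
For example, the following DCL command prints a file from a client device (the directory and file names are illustrative); it succeeds only if the file is world readable or user SYSTEM has a suitable proxy:


$ PRINT DNFS2:[DOCS]REPORT.TXT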

21.1.4.1 Default User

You can associate a client device with a default user by designating the user with the /UID and /GID qualifiers to the MOUNT command. If you do not specify a user with the /UID and /GID qualifiers, NFS uses the default user -2/-2. If the local user or the NFS client has no proxy for the host serving a DNFS device, all operations performed by that user on that device are seen as coming from the default user (-2/-2).
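
For example, the following command (the host, path, and UID/GID values are illustrative) designates 200/10 as the default user for the device:


TCPIP> MOUNT DNFS3: /HOST="loon" /PATH="/public" /UID=200 /GID=10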

To provide universal access to world-readable files, you can use the default UID instead of creating a proxy entry for every NFS client user.

Compaq strongly recommends that, for any other purposes, you provide a proxy with a unique UID for every client user. Otherwise, client users may see unpredictable and confusing results when they try to create files.

21.1.5 How the Client Maps UNIX Permissions to OpenVMS Protections

Both OpenVMS and UNIX-based systems use a protection mask that defines categories assigned to a file and the type of access granted to each category. The NFS server file protection categories, like those on UNIX systems, include user, group, and other, each having read (r), write (w), or execute (x) access. The OpenVMS categories are SYSTEM, OWNER, GROUP, and WORLD. Each category can have up to four types of access: read (R), write (W), execute (E), and delete (D). The NFS client handles file protection mapping from server to client.

OpenVMS delete access does not directly translate to a UNIX protection category. A UNIX user can delete a file as long as he or she has write access to the parent directory. The user can see whether or not he or she has permissions to delete a file by looking at the protections on the parent directory. This design corresponds to OpenVMS where the absence of write access to the parent directory prevents users from deleting files, even when protections on the file itself appear to allow delete access. For this reason, the NFS client always displays the protection mask of remote UNIX files as permitting delete access for all categories of users.

Because a UNIX file system does not have a SYSTEM protection mask (the superuser has all permissions for all files), the NFS client displays the SYSTEM protection mask as identical to the OWNER mask.
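
For example, applying these rules to a remote UNIX file with mode rwxr-x---, the client would display delete access in every category and a SYSTEM mask identical to the OWNER mask:


UNIX mode:        -rwxr-x---
OpenVMS display:  (SYSTEM:RWED, OWNER:RWED, GROUP:RED, WORLD:D)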

21.1.6 Guidelines for Working with DNFS Devices

The following list summarizes the guidelines and restrictions associated with DNFS devices:

21.1.7 How NFS Converts File Names

Because NFS uses UNIX style syntax for file names, valid OpenVMS file names may be invalid on the NFS server and vice versa. The NFS software automatically converts file names to the format required by either the client or the server. (NFS always converts file names even when both the NFS client and the NFS server are OpenVMS hosts.)

All name-mapping sequences on the OpenVMS client begin with the dollar sign ($) escape character. Appendix C lists the rules that govern these conversions and provides a list of character sequences, server characters, and octal values used for NFS name conversion.

21.2 NFS Client Startup and Shutdown

The NFS client can be shut down and started independently of TCP/IP Services. This is useful when you change parameters or logical names that require the service to be restarted.

The following files are provided:

  • SYS$STARTUP:TCPIP$NFS_CLIENT_STARTUP.COM, which starts the NFS client
  • SYS$STARTUP:TCPIP$NFS_CLIENT_SHUTDOWN.COM, which shuts down the NFS client

To preserve site-specific parameter settings and commands, create the following files. These files are not overwritten when you reinstall TCP/IP Services:

  • SYS$STARTUP:TCPIP$NFS_CLIENT_SYSTARTUP.COM
  • SYS$STARTUP:TCPIP$NFS_CLIENT_SYSHUTDOWN.COM
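
For example, to restart the NFS client after changing a parameter, you might run the shutdown and startup procedures in sequence:


$ @SYS$STARTUP:TCPIP$NFS_CLIENT_SHUTDOWN.COM
$ @SYS$STARTUP:TCPIP$NFS_CLIENT_STARTUP.COM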

21.3 Registering Users in the Proxy Database

Users on your client host must have corresponding accounts on the NFS server host. After making sure client users have appropriate accounts, you must register them with the proxy database. The NFS client, the NFS server, and the PC-NFS daemon all use the proxy database.

If you use TCPIP$CONFIG to configure NFS, the index file TCPIP$PROXY.DAT is created for you. This file is empty until you populate it with proxy entries. If you do not use the configuration procedure, use the CREATE PROXY command to create the empty database file. The file TCPIP$PROXY.DAT resides in the SYS$COMMON:[SYSEXE] directory by default. You can change the location of the proxy database by redefining the logical name TCPIP$PROXY. (You can also create a proxy database file from a UNIX-style /etc/passwd file by using the CONVERT/VMS PROXY command.)
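
For example, the following commands create an empty proxy database and then redefine the logical name to point at a copy in another location (the device and directory shown are illustrative):


TCPIP> CREATE PROXY
$ DEFINE /SYSTEM TCPIP$PROXY DKA100:[PROXYDIR]TCPIP$PROXY.DAT  ! illustrative location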

Use the following TCP/IP management commands to manage the proxy database:

  • ADD PROXY
  • REMOVE PROXY
  • SHOW PROXY

For example:


TCPIP> ADD PROXY username /NFS=type /UID=n /GID=n /HOST=host_name

Changes in the proxy database take effect only after you dismount all DNFSn: devices and remount them. An exception is DNFS0:, which is present whenever the NFS client driver is loaded; DNFS0: itself cannot be mounted or dismounted.

Each entry in the proxy database has the fields that are listed in Table 21-1.

Table 21-1 Required Fields for NFS Proxy Entries
Field Meaning
OpenVMS user name Name of the NFS user's OpenVMS account
Type Direction of NFS communication allowable to the user. Specify one of the following:
  • O (outgoing). Used by the NFS client.
  • N (incoming). Used by the NFS server.
  • ON (outgoing and incoming). Used by both client and server.
  • D (dynamic). Entry is loaded in the server's dynamic memory. When the NFS server starts, it creates a copy of the proxy database in dynamic memory. (If the account does not exist or the account is disabled, the entry for the account will be missing from dynamic memory.)
UID/GID pair Remote identity of the user. Required even if both client and server are OpenVMS hosts.
Remote host name Name of the remote host, which is one of the following:
  • Remote client of the local NFS server
  • Remote server for the local NFS client
  • Both
  • Wildcard (*) for all hosts

To add a user name to the proxy database, take the following steps:

  1. For each NFS user, obtain the OpenVMS user name from the OpenVMS user authorization file (UAF). If a user name is not in the UAF, use the OpenVMS Authorize utility to add it.
  2. Obtain the UID/GID pair for each NFS user from the /etc/passwd file on the NFS server.
  3. Enter SHOW PROXY.
  4. Enter ADD PROXY for each NFS user you want to add to the proxy database. For example:


    TCPIP> ADD PROXY GANNET /NFS=(OUTGOING,INCOMING) /UID=1111 /GID=22 /HOST=CLIENT1 
    

  5. Reenter SHOW PROXY to confirm the new information.

The following illustrates a portion of a proxy database file:


 
VMS User_name     Type      User_ID    Group_ID   Host_name 
 
GANNET            OND          1111          22   CLIENT1, client1 
GEESE             OND          1112          22   * 
GREBE             OND          1113          22   client1, client2 
GROUSE            OD           1114          23   client3 
GUILLEMOT         OD           1115          23   client3 
GULL              OD           1116          23   client4 
 

21.4 Mounting Files and Directories

Attaching remote files and directories exported by an NFS server is called mounting. The NFS client identifies each file system by the name of its mount point on the server. The client provides the following TCP/IP management commands:

  • MOUNT
  • DISMOUNT
  • SHOW MOUNT

For example:


TCPIP> MOUNT mount_point /HOST="host" /PATH="/path/name" 

Note

By default, a mount is considered a system mount and privileges are required unless the /SHARE qualifier is used. See Section 21.4.1 for information on user-level mounting.

When you issue a MOUNT command, the NFS client creates a new DNFS device and mounts the remote file system onto it. For example, the following command mounts, onto local device DNFS2:, the remote directory /usr/users/curlew, which physically resides on NFS server loon.


TCPIP> MOUNT DNFS2: /HOST="loon" /PATH="/usr/users/curlew" 

After entering the command, a confirmation message such as the following is displayed:


%DNFS-S-MOUNTED, /usr/users/curlew mounted on DNFS2:[000000] 

If you specify DNFS0 in a mount command, the client selects the next available unit number for you, for example:


TCPIP> MOUNT DNFS0: /HOST="loon" /PATH="/usr/curlew" 
%DNFS-S-MOUNTED, /usr/curlew mounted on DNFS3:[000000] 

Qualifiers to the MOUNT command let you modify the way a traditional mount occurs. For example, you may specify background mounting, modify existing mounts, or hide subdirectories from view. See the following sections for more information:

  • Section 21.4.1, User-Level Mounting
  • Section 21.4.2, Automounting
  • Section 21.4.3, Background Mounting
  • Section 21.4.4, Overmounting

See the Compaq TCP/IP Services for OpenVMS Management Command Reference manual for a complete list of MOUNT options and command qualifiers.

21.4.1 User-Level Mounting

The NFS client supports shared mounting by using the /SHARE qualifier with the MOUNT command. Any user can mount a file system using the /SHARE qualifier; SYSNAM or GRPNAM privileges are not required. The /SHARE qualifier places the logical name in the job logical name table and increments the volume mount count, regardless of the number of job mounts. When the job logs out, all job mounts are dismounted, and the volume mount count is decremented accordingly.

The following example illustrates how to specify a shared mount:


 
TCPIP> MOUNT DNFS1: /HOST=BART /PATH="/DKA100/ENG" 
TCPIP> MOUNT DNFS1: /HOST=BART /PATH="/DKA100/ENG" /SHARE 

This mount request increments the mount count by 1. You must specify the /SHARE qualifier with the same host name and path as used in the initial mount to ensure that the mount is seen as a shared mount instead of as a new mount request.

With a shared mount, the mount requests increment the mount count by 1 under the following circumstances:

  • You enter the TCP/IP management command MOUNT with the /SHARE qualifier, specifying the same host and path as the initial mount.
  • You enter the DCL command MOUNT/SHARE for the same DNFS device.

In this way, if the main process of the job logs out, the job mount is deallocated, and the volume mount count decrements by 1 (if zero, the device is dismounted). OpenVMS handles dismounting differently based on whether you use the TCP/IP management command DISMOUNT or the DCL command DISMOUNT. These differences are as follows:

  • The TCP/IP management command DISMOUNT dismounts the specified mount point, along with any shared mounts of the same host and path.
  • The DCL command DISMOUNT decrements the mount count by 1; the device is dismounted when the count reaches zero.

Consider the mount counts in the following sample MOUNT/DISMOUNT sequence:

  1. TCPIP> MOUNT DNFS1: /HOST=BART /PATH="/DKA0/ENG"
    Mount count: 1 (initial system mount)
  2. TCPIP> MOUNT DNFS1:[A] /HOST=BART /PATH="/DKA0/ENG" /SHARE
    Mount count: 2 (incremented)
  3. $ MOUNT/SHARE DNFS1:
    Mount count: 3 (incremented)
  4. TCPIP> MOUNT DNFS1:[B] /HOST=MARGE /PATH="/DKA0/TEST"
    Mount count: 3 (system mount, not incremented)
  5. TCPIP> DISMOUNT DNFS1:[A]
    Mount count: 2 (decremented)
  6. $ DISMOUNT DNFS1:
    Mount count: 1 (removed the mount from step 3; decremented)
  7. $ DISMOUNT DNFS1:
    Mount count: 0 (removed the mount from step 4; decremented)

In step 5, the original mount for BART "/ENG" on DNFS1:[A] is dismounted along with its shared mount. The subsequent DISMOUNT commands remove the mounts created in steps 3 and 4, leaving nothing mounted.

21.4.2 Automounting

Automounting allows you to mount a remote file system on an as-needed basis. This means that the client automatically and transparently mounts a remote server path as soon as the user accesses the path name.

Automounting is convenient for file systems that are inactive for long periods of time. When a user on a client system accesses a remote file or directory, the NFS client mounts the file system and keeps it mounted as long as it is in use. You can specify an inactivity period (5 minutes is the default); when that period elapses without the path being accessed, the software automatically dismounts it.

You specify automounting and an inactivity interval with the qualifier /AUTOMOUNT=INACTIVITY:OpenVMS_delta_time.

The inactivity interval is the maximum period the mount can remain unused. When this period expires, the NFS client dismounts the path name.

In the following example, the client automounts directory /usr/webster, residing on host robin, onto the OpenVMS mount point DNFS67:. When a user references the path name, the client keeps the path mounted until it reaches an inactive period of 10 minutes, after which it dismounts the file system. On subsequent references, the client remounts the file system:


TCPIP> MOUNT DNFS67: /HOST="robin" - 
_TCPIP> /PATH="/usr/webster" /AUTOMOUNT=INACTIVITY:00:10:00 

21.4.3 Background Mounting

Background mounting allows you to retry a file system mount that initially failed. For example, you may have set mount points in your system startup command file so they are automatically mounted every time your system reboots. In this scenario, if the server is unavailable (for example, because the server is also rebooting), the mount requests fail. With the background option set, the client continues to try the mount after the initial failure, retrying up to 10 times at 30-second intervals (the defaults) or for the number of retries and the interval you specify.

If you specify background mounting, you should also use the /RETRIES qualifier with a small nonzero number. This qualifier sets the number of times the transaction itself should be retried. Specify background mounting, along with the desired delay time and retry count parameters, with the qualifier /BACKGROUND=[DELAY:OpenVMS_delta_time,RETRY:n].

For example, the following command attempts to mount in background mode, on local device DNFS4:, the file system /flyer, which physically resides on host migration. If the mount fails, the NFS client waits 1 minute and then retries the connection up to 20 times:


TCPIP> MOUNT DNFS4: /HOST="migration" /PATH="/flyer" - 
_TCPIP> /BACKGROUND=(DELAY:00:01:00, RETRY:20) /RETRIES=4 

If you use the /BACKGROUND qualifier, Compaq strongly recommends that you also use the /RETRIES qualifier specifying a nonzero value. If you use the default value for /RETRIES (zero), the first mount attempt can never complete except by succeeding, and the process doing the mount will hang until the server becomes available.

21.4.4 Overmounting

Overmounting allows you to mount another path onto an existing mount point. Specify overmounting with the /FORCE qualifier. The client dismounts the original mount point and replaces it with a new one.

Mounting a higher or lower directory level in a previously used path is also an overmount. For example, an overmount occurs when you execute two MOUNT commands in the following order:


TCPIP> MOUNT DNFS123:[USERS.MNT] /HOST="robin" /PATH="/usr" 
 
%DNFS-S-MOUNTED, /usr mounted on _DNFS123:[USERS.MNT] 
 
TCPIP> MOUNT DNFS123:[USERS.MNT] /HOST="robin" /PATH="/usr/tern" /FORCE 
 
%DNFS-S-REMOUNTED, _DNFS123:[USERS.MNT] remounted as /usr/tern on ROBIN 

The second MOUNT command specifies a lower level in the server path. This constitutes another path name and qualifies for an overmount.

