After the NFS server is installed on your computer, you must configure the server to allow network file access.
This chapter reviews key NFS concepts and provides guidelines for configuring and managing the NFS server on your OpenVMS system. See Chapter 15 for information on managing the NFS client.
If your network includes PC clients, you may wish to configure the
PC-NFS daemon. Section 14.1.9 and Section 14.4 provide more information.
14.1 Reviewing Key Concepts
NFS software was originally developed on and used for UNIX machines. For this reason, NFS implementations use UNIX-style conventions and characteristics. The rules and conventions that apply to UNIX files, file types, file names, file ownership, and user identification also apply to NFS.
Because the DIGITAL TCP/IP Services for OpenVMS product runs on OpenVMS, the NFS software must accommodate the differences between UNIX and OpenVMS file systems, for example, by converting file names and mapping file ownership information. You must understand these differences to configure NFS properly on your system, to select the correct file system for the NFS client, and to ensure that your file systems are adequately protected while granting access to users on remote hosts.
The following sections serve as a review only. If you are not familiar
with NFS, see the DIGITAL TCP/IP Services for OpenVMS Concepts and
Planning manual or other introductory NFS documentation for more
information.
14.1.1 Clients and Servers
NFS is a client/server environment that allows computers to share disk space and users to work with their files from multiple computers without copying them to their local system. The NFS server can make any of its file systems available to the network by exporting the files and directories. Users on authorized client hosts access the files by mounting the exported files and directories. The NFS client systems accessing your server may be running UNIX, OpenVMS, or other operating systems.
The NFS client identifies each file system by the name of its
mount point on the server. The mount point is the name
of the device or directory at the top of the file system hierarchy that
you create on the server. An NFS device is always named DNFSn.
The NFS client makes file operation requests by contacting your NFS
server. The server then performs the requested operation.
14.1.2 NFS File Systems on OpenVMS
The OpenVMS system includes a hierarchy of devices, directories, and files stored on a Files-11 On-Disk Structure (ODS-2) formatted disk. OpenVMS and ODS-2 define a set of rules that govern files within the OpenVMS file system. These rules define the way that files are named and catalogued within directories.
If you are not familiar with OpenVMS file systems, refer to the OpenVMS System Manager's Manual to learn how to set up and initialize a Files-11 disk.
You can set up and export two different kinds of file systems: a traditional OpenVMS file system or a UNIX-style file system built on top of an OpenVMS file system. This UNIX-style file system is called a container file system.
Each file system is a multi-level directory hierarchy. On OpenVMS
systems, the top level of the directory structure is the master file
directory (MFD). The MFD is always named [000000] and contains all the
top-level directories and reserved system files. On UNIX systems, or
with a container file system, the top-level directory is called root.
14.1.2.1 Selecting a File System
As previously stated, you can set up and export an OpenVMS file system or a container file system. Which you use depends on your environment and on the needs of users on the NFS client hosts.
For example, you might use an OpenVMS file system if:
You might use a container file system if:
The NFS software lets you create a logical UNIX-style file system on your OpenVMS host that conforms to UNIX file system rules. This means that any UNIX application that accesses this file system continues to work as if it were accessing files on a UNIX host.
An OpenVMS server can support multiple container file systems. Creating a container file system is comparable to initializing a new disk with an OpenVMS volume structure because it provides the structure that enables users to create files. The file system parameters, directory structure, UNIX-style file names, and file attributes are catalogued in a data file called a container file.
The number of UNIX containers to create depends on how you want to manage your system.
In a container file system, each conventional UNIX file is stored as a separate data file. The container file also stores a representation of the UNIX-style directory hierarchy and, for each file name, a pointer to the data file. In addition to its UNIX-style name, each file in the container file system has a system-assigned valid Files-11 file name.
An OpenVMS directory exists for each UNIX directory stored in the container. All files catalogued in a UNIX directory are also catalogued in the corresponding OpenVMS directory; however, the UNIX directory hierarchy is not duplicated in the OpenVMS directory hierarchy.
Because each UNIX-style file is represented as an OpenVMS data file, OpenVMS utilities such as BACKUP can use standard access methods to access these files.
The container file system shared library (SYS$SHARE:UCX$CFS_SHR.EXE) is an OpenVMS shared library that is used by both the NFS server process and the management control program to process files within the container file system.
Important
You cannot use DCL commands to manipulate files in a container file system. Instead, use the commands described in Section 14.9.
The server uses the following database files to grant access to users on client hosts:
These database files are created by UCX$CONFIG or manually and can be shared by all OpenVMS Cluster hosts running DIGITAL TCP/IP Services for OpenVMS. If you use UCX$CONFIG, it queries you for your file protection preference. If you use the CREATE PROXY command, world access is automatically denied.
Section 14.5 describes how to create these database files on your
server.
14.1.4 How the Server Maps User Identities
Both OpenVMS and UNIX-based systems use identification codes as a general method of resource protection and access control. Just as OpenVMS employs user names and UICs for identification, UNIX identifies users with a user name and a user identification (UID) and group identification (GID) pair. Both UIDs and GIDs are simply numbers to identify a user on a system.
The proxy database contains entries for each user accessing a file system on your local server. Each entry contains the OpenVMS user name, the UID/GID pair that identifies the user's account on the client system, and the name of the client host. This file is loaded into dynamic memory when the server starts.
When a user on the OpenVMS client host requests access to a file, the client searches its proxy database¹ for an entry that maps the requester's identity to a corresponding UID/GID pair. If the client finds a match, it sends a message to the server that contains the requester's UID/GID pair.
The server searches its proxy database¹ for an entry that corresponds to the requester's UID/GID pair. If this incoming UID maps to an OpenVMS account, the server grants access to the file system according to the privileges set for that account. Consider the following example: the proxy entry maps a client user with UID=15/GID=15 to the OpenVMS account named ACCOUNT2. Any files owned by user ACCOUNT2 are also considered to be owned by the user with UID=15 and GID=15.

OpenVMS User_name    Type    User_ID    Group_ID    Host_name
ACCOUNT2             OND     15         15          *
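A proxy entry like the one shown above could be registered with the ADD PROXY command (described in Section 14.5.1); the account name here simply mirrors the example:

UCX> ADD PROXY ACCOUNT2 /UID=15 /GID=15 /HOST=*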
After the OpenVMS identity is resolved, the NFS server uses this
acquired identity for all data access, as described in Section 14.1.7.
14.1.5 Mapping the Default User
In a trusted environment, you may want the server to grant restricted access even if the incoming UID does not map to an OpenVMS account. This is done by adding a proxy entry for the default user. The NFS server defines the default user at startup with the logical names UCX$NFS00000000_UID and UCX$NFS00000000_GID. If the server finds a proxy entry for the default user (UCX normally uses the UNIX user "nobody", -2/-2), it grants access to OpenVMS files as the OpenVMS user associated with "nobody" in the proxy record.
To temporarily modify runtime values for the default user, use the /UID_DEFAULT and /GID_DEFAULT qualifiers to the SET NFS_SERVER command. To permanently modify these values, edit SYS$STARTUP:UCX$NFS_SERVER_STARTUP.COM and define new values for the UID and GID logical names. See Section 14.11 for instructions on modifying server logical names to change the default values.
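For example, the following commands illustrate both methods (the UID/GID values shown are the usual "nobody" defaults; substitute your own):

UCX> SET NFS_SERVER /UID_DEFAULT=-2 /GID_DEFAULT=-2

$ DEFINE /SYSTEM UCX$NFS00000000_UID "-2"
$ DEFINE /SYSTEM UCX$NFS00000000_GID "-2"

The SET NFS_SERVER command affects only the running server; the DEFINE commands, placed in the startup procedure, make the change permanent.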
Note
The configuration procedure for the NFS server creates a nonprivileged account with the user name UCX$NOBODY. You may wish to add a proxy record for the default user that maps the UCX$NOBODY account.
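For example, a proxy record mapping the default user to the UCX$NOBODY account might be added as follows (-2/-2 being the usual "nobody" defaults):

UCX> ADD PROXY UCX$NOBODY /UID=-2 /GID=-2 /HOST=*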
If you require tighter restrictions, you can disable the default user
mapping and set additional security controls by setting the bit mask
with the logical name UCX$NFS00000000_SECURITY. See Section 14.10 for
details and Table 14-2 for more information.
14.1.6 Mapping a Remote Superuser
When a remote UNIX client mounts a file system, the mount is often performed by the superuser. (In some UNIX implementations, only the superuser can perform a mount.)
A superuser (root) on a remote client does not automatically become a privileged user on the server. Instead, the superuser (UID=0) is mapped to the default user defined with the logical names UCX$NFS00000000_UID and UCX$NFS00000000_GID. (Remember, UCX uses the user "nobody" (-2/-2) by default.)
You may have remote clients that use the superuser to mount file
systems. If you want to grant normal root permissions, change the
values set with the UID and GID logical names to UID=0/GID=1 and add a
proxy record, mapping this pair to a corresponding OpenVMS account. The
ability of the remote superuser to mount and access files on the server
is controlled by the privileges you grant for this OpenVMS account.
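For example, the following commands grant normal root permissions (the account name ROOT_USER and host name "orchid" are illustrative only):

$ DEFINE /SYSTEM UCX$NFS00000000_UID "0"
$ DEFINE /SYSTEM UCX$NFS00000000_GID "1"

UCX> ADD PROXY ROOT_USER /UID=0 /GID=1 /HOST="orchid"

The privileges you grant to the ROOT_USER account then determine what the remote superuser can do on the server.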
14.1.7 How OpenVMS and the NFS Server Grant File Access
To properly protect your exported file systems, you must take care when granting account and system privileges for remote users. You must also understand how OpenVMS grants access to files.
As previously stated, the NFS server uses the proxy database to map the incoming user identity to an OpenVMS account. The server uses the account's UIC to evaluate the protection code, along with other security components, before granting or denying access to files.
When a user tries to access a protected file or directory, the OpenVMS system uses the following sequence to compare the security profile of the user against the security profile of the target file or directory.
For a more thorough discussion on access checking, refer to the
OpenVMS Guide to System Security.
14.1.8 Understanding the Client's Role in Granting Access
Before sending a user request to the NFS server, the client performs its own access checks. This check occurs on the client host and causes the client to grant or deny access to data. As a result, the client may deny access before the user's request is ever sent to the server host, even though the server would grant it. For example, if the client user maps to an OpenVMS account that is denied access by a file's protection mask, an ACL entry that would grant that account access locally may not grant access from an NFS client.
It is also possible for the server to reject an operation that was
otherwise allowed by the client. With the logical name
UCX$NFS00000000_SECURITY, you can use the ACL for additional access
control. See Section 14.10 for a complete description of the security
features set with this logical name. With the appropriate bit set, the
UCX startup procedure creates the UCX$NFS_REMOTE identifier. Using this
identifier in the ACLs, you can, for example, reject access to some (or
all) files available through NFS. (See Section 14.11 for more
information about logical names.)
14.1.9 Granting Access to PC-NFS Clients
DIGITAL TCP/IP Services for OpenVMS provides authentication services to PC-NFS clients by means of the PC-NFS daemon. As with any NFS client, users must have a valid account on the NFS server host and user identities must be registered in the proxy database.
Because PC operating systems do not identify users with UID/GID pairs, these pairs must be assigned to them. The PC-NFS daemon assigns UID/GID pairs based on information you supply in the proxy database.
The following describes this assignment sequence:
When it detects a request from a client host, the auxiliary server starts the NFS server by processing the startup command procedure located in SYS$STARTUP:UCX$NFS_SERVER_STARTUP.COM. This command procedure defines a set of logical names that provide default values for NFS characteristics.
To stop the NFS server, invoke the command procedure SYS$STARTUP:UCX$NFS_SHUTDOWN.COM. You can stop the NFS server even if clients still have file systems mounted on the server. If a client has a file system mounted with the hard option of the UNIX mount command and accesses the file system while the server is down, the client stalls while waiting for a response from the server.
Alternatively, if the client has a file system mounted using the soft option of the UNIX mount command, the client will receive an error message if it attempts to access a file.
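The two behaviors correspond to mount commands like the following on a typical UNIX client (the server name and paths are illustrative, and option syntax varies among UNIX implementations):

# mount -o hard server1:/usr/users /mnt/users
# mount -o soft server1:/usr/users /mnt/users

With the hard option, the client retries until the server responds; with the soft option, the client gives up and returns an error to the application.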
Because the NFS protocol is stateless, clients with file systems
mounted on the server do not need to remount when the server is
restarted. To ensure this uninterrupted service, you must be sure all
file systems are mapped before restarting the NFS server. The simplest
way to do this is to use the SET CONFIGURATION MAP command.
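For example, a mapping recorded in the permanent configuration might look like the following (the path and device names are illustrative; see the Management Command Reference manual for the exact syntax):

UCX> SET CONFIGURATION MAP "/usr" DSA2: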
14.3 Running the NFS Server on an OpenVMS Cluster System
If the NFS server resides on more than one host in an OpenVMS Cluster system, you can manage the proxy database and the export database as a homogeneous OpenVMS Cluster system (one proxy file on the OpenVMS Cluster system) or a heterogeneous OpenVMS Cluster system (a different proxy database on each host in the cluster). If the database files are to be shared by all hosts on the OpenVMS Cluster, be sure to set the file protection to allow WORLD read access and to deny WORLD write access.
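For example, a DCL command such as the following sets a protection that allows WORLD read access but denies WORLD write access on the shared proxy database (the file specification assumes the default location described in Section 14.5.1):

$ SET FILE /PROTECTION=(S:RWED,O:RWED,G:R,W:R) SYS$COMMON:[SYSEXE]UCX$PROXY.DAT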
The NFS server automatically responds to the requests it receives on any TCP/IP network interface. Therefore, if several of your OpenVMS Cluster hosts have internet cluster interfaces, the server can execute as a clusterwide application. Clients that mount file systems using the cluster internet host name can then be served by any of the NFS servers in the cluster. Also, because NFS uses cluster failover, if one of the servers is taken down, client requests are redirected to another host in the cluster.
To allow NFS clients to access the cluster, define a cluster alias and
a network interface name for each cluster member. See Section 1.3 for
more information.
14.4 Setting Up the PC-NFS Daemon
If you plan to export file systems to PC-NFS client hosts, you must configure the PC-NFS daemon using UCX$CONFIG. The daemon then starts automatically.
You can also use the following commands to manage the PC-NFS daemon:
Users on client hosts must have corresponding OpenVMS accounts on your NFS server host. These accounts can be unique for each user, or you can use the same OpenVMS account for multiple users². Choose UICs that reflect the way the users work. If you have users in the same work group who share files, assign those users UICs in the same group.
If you have multiple users accessing one OpenVMS account, be sure to set file limits to provide satisfactory performance for all users accessing this account. See Section 14.13.9 for more information.
After setting up appropriate accounts, you must register users in the
proxy database and set mount points in the export database.
14.5.1 Adding Proxy Entries
Each user accessing your local server must be registered in the proxy database. See Section 14.1.3 if you are not familiar with how the server uses this database to grant access to remote users. You should create the proxy database before the NFS server starts. If you are adding proxies, create the OpenVMS accounts before creating the proxy entries.
An empty proxy database file, UCX$PROXY.DAT, is created for you when you first use the configuration procedure to configure NFS. This file is empty until you populate it with proxy entries for each NFS user. If you do not use the configuration procedure to configure NFS, use the CREATE PROXY command to create the empty database file. The file UCX$PROXY.DAT resides in the SYS$COMMON:[SYSEXE] directory.
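If the configuration procedure was not used, the empty database can be created manually at the UCX prompt:

UCX> CREATE PROXY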
Use the ADD PROXY, REMOVE PROXY, and SHOW PROXY commands to maintain the proxy database. Issue these commands at the UCX prompt:
UCX> ADD PROXY user_name /UID=nn /GID=nn /HOST=host_name
For example, you can use the following command to register a user:
UCX> ADD PROXY SMITH /UID=53 /GID=45 /HOST="june"
You can specify a list of hosts for which the UID and GID are valid, for example:
UCX> ADD PROXY SMITH /UID=53 /GID=45 /HOST=("APRIL","MAY","JUNE")
or, you can specify that all hosts are valid using an asterisk (*), as shown in the following example:
UCX> ADD PROXY SMITH /UID=53 /GID=45 /HOST=*
If you use the configuration procedure to configure NFS, the export database is created for you if it does not already exist. This file is empty until you populate it with mount point entries. If you do not use the configuration procedure to configure NFS, use the CREATE EXPORT command to create the empty database file.
Use the ADD EXPORT, REMOVE EXPORT, and SHOW EXPORT commands to maintain the export database. Issue these commands at the UCX prompt:
UCX> ADD EXPORT "/path/name" /HOST=host_name
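For example, the following commands export a directory to a single host and to all hosts, paralleling the ADD PROXY examples in Section 14.5.1 (the path and host names are illustrative):

UCX> ADD EXPORT "/usr/users" /HOST="june"
UCX> ADD EXPORT "/usr/users" /HOST=*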
See the DIGITAL TCP/IP Services for OpenVMS Management Command Reference manual for more information about these commands and command qualifiers.
You may identify mount points by one or more of the following methods: