Part 5 describes how to configure, use, and manage the components that enable transparent network file sharing: NFS server, PC-NFS, and NFS client. It includes the following chapters:
The Network File System (NFS) server software lets you set up file systems on your OpenVMS host for export to users on remote NFS client hosts. These files and directories appear to the remote user to be on the remote host even though they physically reside on the local system.
After the NFS server is installed on your computer, you must configure the server to allow network file access.
This chapter reviews key NFS concepts and describes:
See Chapter 21 for information on managing the NFS client.
If your network includes PC clients, you may want to configure PC-NFS.
Section 20.1.9 and Section 20.4 provide more information.
20.1 Key Concepts
NFS software was originally developed on and used for UNIX machines. For this reason, NFS implementations use UNIX style conventions and characteristics. The rules and conventions that apply to UNIX files, file types, file names, file ownership, and user identification also apply to NFS.
Because the TCP/IP Services product runs on OpenVMS, the NFS software must accommodate the differences between UNIX and OpenVMS file systems, for example, by converting file names and mapping file ownership information. You must understand these differences to configure NFS properly on your system, to select the correct file system for the application, and to ensure that your file systems are adequately protected while granting access to users on remote hosts.
The following sections serve as a review only. If you are not familiar
with NFS, see the DIGITAL TCP/IP Services for OpenVMS Concepts and Planning manual for more information.
20.1.1 Clients and Servers
NFS is a client/server environment that allows computers to share disk space and allows users to work with their files from multiple computers without copying them to their local system. The NFS server can make any of its file systems available to the network by exporting the files and directories. Users on authorized client hosts access the files by mounting the exported files and directories. The NFS client systems accessing your server may be running UNIX, OpenVMS, or other operating systems.
The NFS client identifies each file system by the name of its
mount point on the server. The mount point is the name
of the device or directory at the top of the file system hierarchy that
you create on the server. An NFS device is always named DNFSn.
The NFS client makes file operation requests by contacting your NFS
server. The server then performs the requested operation.
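For example, a user on an OpenVMS client (see Chapter 21) might mount an exported file system with the TCPIP MOUNT command; the server name and path shown here are placeholders for your own configuration:

$ TCPIP MOUNT DNFS1: /HOST="bart" /PATH="/usr/users"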
20.1.2 NFS File Systems on OpenVMS
The OpenVMS system includes a hierarchy of devices, directories, and files stored on a Files-11 On-Disk Structure (ODS-2) formatted disk. OpenVMS and ODS-2 define a set of rules that govern files within the OpenVMS file system. These rules define the way that files are named and catalogued within directories.
If you are not familiar with OpenVMS file systems, refer to the OpenVMS System Manager's Manual: Essentials to learn how to set up and initialize a Files-11 disk.
You can set up and export two different kinds of file systems: a traditional OpenVMS file system or a UNIX style file system built on top of an OpenVMS file system. This UNIX style file system is called a container file system.
Each file system is a multilevel directory hierarchy: on OpenVMS
systems, the top level of the directory structure is the master file
directory (MFD). The MFD is always named [000000] and contains all the
top-level directories and reserved system files. On UNIX systems or
with a container file system, the top-level directory is called the
root.
20.1.2.1 Selecting a File System
You can set up and export either an OpenVMS file system or a container file system. Which one you choose depends on your environment and the user needs on the NFS client host.
You might use an OpenVMS file system if:
Select the OpenVMS file system if you need to share files between users on OpenVMS and users on NFS clients.
You might use a container file system if:
The NFS software lets you create a logical UNIX style file system on your OpenVMS host that conforms to UNIX file system rules. This means that any UNIX application that accesses this file system continues to work as if it were accessing files on a UNIX host.
An OpenVMS server can support multiple container file systems. Creating a container file system is comparable to initializing a new disk with an OpenVMS volume structure, because it provides the structure that enables users to create files. The file system parameters, directory structure, UNIX style file names, and file attributes are catalogued in a data file called a container file.
The number of UNIX containers you should create depends on how you want to manage your system.
In a container file system, each conventional UNIX file is stored as a separate data file. The container file also stores a representation of the UNIX style directory hierarchy and, for each file name, a pointer to the data file. In addition to its UNIX style name, each file in the container file system has a system-assigned valid Files-11 file name.
An OpenVMS directory exists for each UNIX directory stored in the container. All files catalogued in a UNIX directory are also catalogued in the corresponding OpenVMS directory; however, the UNIX directory hierarchy is not duplicated in the OpenVMS directory hierarchy.
Because each UNIX style file is represented as an OpenVMS data file, OpenVMS utilities such as BACKUP can use standard access methods to access these files.
Except for backing up and restoring files, you should not use DCL commands to manipulate files in a container file system. Instead, use the commands described in Section 20.10.
For more information about backing up and restoring files, see Section 20.7 and Section 20.10.7.
For information about setting up container file systems, see
Section 20.9.
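As a preview of the steps that Section 20.9 describes in detail, the following sketch creates a container file system, maps it to a UNIX style path, and exports it to a client. The device, directory, path, and host names are placeholders only, and the exact command arguments and qualifiers you need are given in Section 20.9:

$ TCPIP
TCPIP> CREATE CONTAINER DSA101:[GROUP_A]
TCPIP> MAP "/group_a" DSA101:[GROUP_A]
TCPIP> ADD EXPORT "/group_a" /HOST="client1"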
20.1.3 How the Server Grants Access to Users and Hosts
The server uses the following database files to grant access to users on client hosts:
These database files are usually created by TCPIP$CONFIG and can be shared by all OpenVMS Cluster nodes running TCP/IP Services. To control access to these database files, set the OpenVMS file protections accordingly. By default, World access is denied.
Section 20.6 describes how to create these database files on your
server.
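For example, after TCPIP$CONFIG has created the databases, you might register a directory for export to a particular client and then verify the entry; the directory path and host name below are placeholders:

$ TCPIP
TCPIP> ADD EXPORT "/DISK1/USERS" /HOST="client1"
TCPIP> SHOW EXPORT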
20.1.4 How the Server Maps User Identities
Both OpenVMS and UNIX based systems use identification codes as a general method of resource protection and access control. Just as OpenVMS employs user names and UICs for identification, UNIX identifies users with a user name, a user identifier (UID), and one or more group identifiers (GIDs). Both UIDs and UICs identify a user on a system.
The proxy database contains entries for each user who accesses a file system on your local server. Each entry contains the OpenVMS user name, the UID/GID pair that identifies the user's account on the client system, and the name of the client host. This file is loaded into dynamic memory when the server starts.
When a user on the OpenVMS client host requests access to a file, the client searches its proxy database for an entry that maps the requester's identity to a corresponding UID/GID pair. (Proxy lookup is performed only on OpenVMS clients; UNIX clients already know the user by a UID/GID pair.) If the client finds a match, it sends a message to the server that contains the following:
The server searches its proxy database for an entry that corresponds to the requester's UID/GID pair. If the UID maps to an OpenVMS account, the server grants access to the file system according to the privileges set for that account.
In the following example, the proxy entry maps a client user with UID=15/GID=15 to the OpenVMS account named ACCOUNT2. Any file owned by the OpenVMS user ACCOUNT2 is treated as also being owned by the client user with UID=15 and GID=15.
OpenVMS User_name     Type     User_ID    Group_ID    Host_name
ACCOUNT2              OND      15         15          *
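For example, you could create such an entry with the ADD PROXY command; the wildcard host specification makes the mapping apply to requests from any client host:

$ TCPIP
TCPIP> ADD PROXY ACCOUNT2 /UID=15 /GID=15 /HOST=*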
After the OpenVMS identity is resolved, the NFS server uses this
acquired identity for all data access, as described in Section 20.1.7.
20.1.5 Mapping the Default User
In a trusted environment, you may want the server to grant restricted access even if the incoming UID does not map to an OpenVMS account. This is accomplished by adding a proxy entry for the default user. The NFS server defines the default user at startup with the following attributes:
You can initialize these attributes using the SYSCONFIG command, which is defined by the SYS$MANAGER:TCPIP$DEFINE_COMMANDS.COM procedure. For example:
$ @SYS$MANAGER:TCPIP$DEFINE_COMMANDS
$ SYSCONFIG -r nfs_server noproxy_uid=-2 noproxy_gid=-2
If the server finds a proxy entry for the default user, it grants access to OpenVMS files as the OpenVMS user associated with "nobody" in the proxy record. TCP/IP Services normally uses the UNIX user "nobody" (-2/-2) as the default user.
To temporarily modify run-time values for the default user, use the /UID_DEFAULT and /GID_DEFAULT qualifiers to the SET NFS_SERVER command.
To permanently modify these values, edit the SYS$STARTUP:TCPIP$NFS_SYSTARTUP.COM file with the commands to define new values for the UID and GID logical names. See Section 20.12 for instructions on modifying SYSCONFIG variables to change the default values.
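For example, to change the run-time defaults to the UNIX user "nobody" (the values shown are those normally used by TCP/IP Services; they last only until the server restarts unless you also make the permanent change described above):

$ TCPIP SET NFS_SERVER /UID_DEFAULT=-2 /GID_DEFAULT=-2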
If you require tighter restrictions, you can disable the default user mapping and set additional security controls by setting the attribute noproxy_enabled. See Section 20.11 for more information.
The configuration procedure for the NFS client creates a nonprivileged account with the user name TCPIP$NOBODY. You may want to add a proxy record for the default user that maps to the TCPIP$NOBODY account.
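For example, the following sketch adds such a proxy record, mapping the default user (-2/-2) from any host to the nonprivileged TCPIP$NOBODY account:

$ TCPIP
TCPIP> ADD PROXY TCPIP$NOBODY /UID=-2 /GID=-2 /HOST=*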
20.1.6 Mapping Remote Superusers
When a remote UNIX client mounts a file system, the mount is often performed by the superuser. (In some UNIX implementations, it can be performed only by the superuser.)
A superuser (root) on a remote client does not automatically become a privileged user on the server. Instead, the superuser (UID=0) is mapped to the default user defined with the attributes noproxy_uid and noproxy_gid. (By default, user "nobody" (-2/-2) is used.)
You may have remote clients that use the superuser to mount file
systems. If you want to grant normal root permissions, add a proxy
record with UID=0/GID=1 and map this to an appropriate OpenVMS account.
The ability of the remote superuser to mount and access files on the
server is controlled by the privileges you grant for this OpenVMS
account.
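For example, the following sketch maps the remote superuser on one client to an OpenVMS account. The account and host names are placeholders; the privileges you have granted to that OpenVMS account determine what the superuser can do on the server:

$ TCPIP
TCPIP> ADD PROXY NFS_MOUNT /UID=0 /GID=1 /HOST="client1"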
20.1.7 How OpenVMS and the NFS Server Grant File Access
To protect your exported file systems, you must take care when granting account and system privileges for remote users. You must also understand how OpenVMS grants access to files.
The NFS server uses the proxy database to map the incoming user identity to an OpenVMS account. The server uses the account's UIC to evaluate the protection code, along with other security components, before granting or denying access to files.
When a user tries to access a protected file or directory, the OpenVMS system uses the following sequence to compare the security profile of the user against the security profile of the target file or directory.
For a more thorough discussion on access checking, refer to the
OpenVMS Guide to System Security.
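You can display the security profile that is evaluated (owner UIC, protection code, and any ACL) with the DCL command SHOW SECURITY; the file specification here is only an example:

$ SHOW SECURITY DISK1:[USERS.SMITH]REPORT.TXT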
20.1.8 Understanding the Client's Role in Granting Access
Before sending a user request to the NFS server, the client performs its own access checks. This check occurs on the client host and causes the client to grant or deny access to data. This means that even though the server may grant access, the client may deny access before the user's request is even sent to the server host. For example, even if an ACL entry on the server would grant access to the OpenVMS account that the client user maps to, the client's own check may still deny access, so the ACL does not open access from an NFS client the way it would for that OpenVMS account locally.
It is also possible for the server to reject an operation that was otherwise allowed by the client. With the attribute noproxy_enabled, you can use the ACL for additional access control. See Section 20.11 for a complete description of the security features set with this variable.
With this variable set, the TCP/IP Services startup procedure creates the
TCPIP$NFS_REMOTE identifier. For example, you can use this identifier
in the ACL to reject access to some (or all) files available through
NFS. (See Section 20.12 for more information about logical names.)
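For example, the following hypothetical ACE uses the TCPIP$NFS_REMOTE identifier to deny all access through NFS to one file, while local access remains subject to the normal protection code and ACL checks:

$ SET SECURITY /ACL=(IDENTIFIER=TCPIP$NFS_REMOTE,ACCESS=NONE) DISK1:[USERS.SMITH]PRIVATE.DAT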
20.1.9 Granting Access to PC-NFS Clients
TCP/IP Services provides authentication services to PC-NFS clients by means of the PC-NFS server. As with any NFS client, users must have a valid account on the NFS server host, and user identities must be registered in the proxy database.
Because PC operating systems do not identify users with UID/GID pairs, these pairs must be assigned to users. PC-NFS assigns UID/GID pairs based on information you supply in the proxy database.
The following describes this assignment sequence:
The NFS server can be shut down and started independently. This is useful when you change parameters or logical names that require the service to be restarted.
The following files are provided:
To preserve site-specific parameter settings and commands, create the following files. These files are not overwritten when you reinstall TCP/IP Services:
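For example, to restart only the NFS server after changing its parameters, you might run the component shutdown and startup procedures (this sketch assumes the standard per-component procedures supplied with TCP/IP Services):

$ @SYS$STARTUP:TCPIP$NFS_SHUTDOWN.COM
$ @SYS$STARTUP:TCPIP$NFS_STARTUP.COM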
If the NFS server resides on more than one host in an OpenVMS Cluster system, you can manage the proxy database and the export database as a homogeneous OpenVMS Cluster system (one proxy file on the OpenVMS Cluster system) or a heterogeneous OpenVMS Cluster system (a different proxy database on each host in the cluster).
The NFS server automatically responds to the requests it receives on any TCP/IP network interface. Therefore, if several OpenVMS Cluster nodes have Internet cluster interfaces, the server can execute as a clusterwide application. Clients that mount file systems using the cluster alias can then be served by any of the NFS servers in the cluster. Because NFS uses cluster failover, if one of the servers is taken down, client requests are redirected to another host in the cluster.
To allow NFS clients to access the cluster, define a cluster alias and a network interface name for each cluster member.
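For example, an OpenVMS client might then mount an exported file system through the cluster alias rather than through an individual member name (the alias and path shown are placeholders), so that any cluster member running the NFS server can service its requests:

$ TCPIP MOUNT DNFS2: /HOST="vmscluster" /PATH="/disk1/users"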