The MIME (Multipurpose Internet Mail Extensions) specification defines a set of additional headers that let users send mail messages composed of more than simple ASCII text. MIME is an enhancement to RFC 822.
For MIME mail to be decoded correctly, follow these guidelines:
$ DEFINE/SYSTEM TCPIP$SMTP_JACKET_LOCAL 1
If MIME mail does not decode, check the mail headers on the client system. If you see multiple blocks of headers and the MIME version header is not in the first block, confirm that you have followed these guidelines.
Part 5 describes how to configure, use, and manage the components that enable transparent network file sharing: NFS server, PC-NFS, and NFS client. Part 5 also provides an extensive review of key NFS concepts.
Chapter 16 describes how to set up the NFS server and make file systems available to users on NFS client hosts. This chapter also describes how to set up PC-NFS, how to troubleshoot server and file system problems, and describes the NFS characteristics that can affect system performance.
Chapter 17 describes how to set up the NFS client, which provides users with access to remote file systems.
The Network File System (NFS) server software lets you set up file systems on your local system for export to users on remote NFS client hosts. These files and directories, even though they physically reside on the local system, appear to the remote user to be on the remote host.
After the NFS server is installed on your computer, you must configure the server to allow network file access.
This chapter reviews key NFS concepts and provides guidelines for configuring and managing the NFS server on your OpenVMS system. See Chapter 17 for information on managing the NFS client.
If your network includes PC clients, you may want to configure the
PC-NFS daemon. Section 16.1.9 and Section 16.4 provide more information.
16.1 Reviewing Key Concepts
NFS software was originally developed on and used for UNIX machines. For this reason, NFS implementations use UNIX style conventions and characteristics. The rules and conventions that apply to UNIX files, file types, file names, file ownership, and user identification also apply to NFS.
Because the DIGITAL TCP/IP Services for OpenVMS product runs on OpenVMS, the NFS software must accommodate the differences between UNIX and OpenVMS file systems, for example, by converting file names and mapping file ownership information. You must understand these differences to configure NFS properly on your system, to select the correct file system for the NFS client, and to ensure that your file systems are adequately protected while granting access to users on remote hosts.
The following sections serve as a review only. If you are not familiar
with NFS, see the DIGITAL TCP/IP Services for OpenVMS Concepts and
Planning manual for more information.
16.1.1 Clients and Servers
NFS is a client-server environment that allows computers to share disk space and users to work with their files from multiple computers without copying them to their local system. The NFS server can make any of its file systems available to the network by exporting the files and directories. Users on authorized client hosts access the files by mounting the exported files and directories. The NFS client systems accessing your server may be running UNIX, OpenVMS, or other operating systems.
The NFS client identifies each file system by the name of its
mount point on the server. The mount point is the name
of the device or directory at the top of the file system hierarchy that
you create on the server. An NFS device is always named DNFSn.
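For example, a user on a UNIX client might make an exported file system available by referring to its mount point name on the server. The following is a sketch only; the server name bookie, the exported directory /usr, and the local mount directory are all hypothetical:
# mkdir /bookie_usr
# mount bookie:/usr /bookie_usr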
The NFS client makes file operation requests by contacting your NFS
server. The server then performs the requested operation.
16.1.2 NFS File Systems on OpenVMS
The OpenVMS system includes a hierarchy of devices, directories, and files stored on a Files-11 On-Disk Structure (ODS-2) formatted disk. OpenVMS and ODS-2 define a set of rules that govern files within the OpenVMS file system. These rules define the way that files are named and catalogued within directories.
If you are not familiar with OpenVMS file systems, refer to the OpenVMS System Manager's Manual to learn how to set up and initialize a Files-11 disk.
You can set up and export two different kinds of file systems: a traditional OpenVMS file system or a UNIX style file system built on top of an OpenVMS file system. This UNIX style file system is called a container file system.
Each file system is a multilevel directory hierarchy: on OpenVMS
systems, the top level of the directory structure is the master file
directory
(MFD). The MFD is always named [000000] and contains all the top-level
directories and reserved system files. On UNIX systems or with a
container file system, the top-level directory is called root.
16.1.2.1 Selecting a File System
As previously stated, you can set up and export an OpenVMS file system or a container file system. What you use depends on your environment and the user needs on the NFS client host.
For example, you might use an OpenVMS file system if the exported files are created and used primarily by OpenVMS users. You might use a container file system if the files are created and used primarily by users and applications on UNIX client hosts.
The NFS software lets you create a logical UNIX style file system on your OpenVMS host that conforms to UNIX file system rules. This means that any UNIX application that accesses this file system continues to work as if it were accessing files on a UNIX host.
An OpenVMS server can support multiple container file systems. Creating a container file system is comparable to initializing a new disk with an OpenVMS volume structure, because it provides the structure that enables users to create files. The file system parameters, directory structure, UNIX style file names, and file attributes are catalogued in a data file called a container file.
The number of UNIX containers to create depends on how you want to manage your system.
In a container file system, each conventional UNIX file is stored as a separate data file. The container file also stores a representation of the UNIX style directory hierarchy and, for each file name, a pointer to the data file. In addition to its UNIX style name, each file in the container file system has a system-assigned valid Files-11 file name.
An OpenVMS directory exists for each UNIX directory stored in the container. All files catalogued in a UNIX directory are also catalogued in the corresponding OpenVMS directory; however, the UNIX directory hierarchy is not duplicated in the OpenVMS directory hierarchy.
Because each UNIX style file is represented as an OpenVMS data file, OpenVMS utilities such as BACKUP can use standard access methods to access these files.
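Because these data files are ordinary Files-11 files, you can save them with standard utilities. The following is a sketch only; the disk name DSA101:, the top-level directory [USERS], and the tape device and save-set names are hypothetical:
$ BACKUP DSA101:[USERS...]*.*;* MKA500:USERS.BCK /SAVE_SET    ! save the container directory tree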
The container file system shared library (SYS$SHARE:TCPIP$CFS_SHR.EXE) is an OpenVMS shared library that is used by both the NFS server process and the management control program to process files within the container file system.
You cannot use DCL commands to manipulate files in a container file system. Instead, use the commands described in Section 16.9.
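For example, a container file system's UNIX style directory tree can be listed and modified from the management prompt. The commands below are a sketch; the container path /container/users and the file and directory names are hypothetical, and Section 16.9 describes the full command set:
$ TCPIP
TCPIP> DIRECTORY "/container/users"
TCPIP> CREATE DIRECTORY "/container/users/projects"
TCPIP> REMOVE FILE "/container/users/old_notes"
TCPIP> EXIT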
16.1.3 How the Server Grants Access to Users and Hosts
The server uses two database files to grant access to users on client hosts: the proxy database, which maps remote users to OpenVMS accounts, and the export database, which lists the file systems that the server makes available to remote hosts.
These database files are created by TCPIP$CONFIG or manually and can be shared by all OpenVMS Cluster hosts running DIGITAL TCP/IP Services for OpenVMS. If you use TCPIP$CONFIG, it asks for your file protection preference. If you use the CREATE PROXY command, world access is automatically denied.
Section 16.5 describes how to create these database files on your
server.
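As a sketch of manual creation, the databases can be created and populated from the management prompt; TCPIP$CONFIG normally creates them for you, and the user name, UID/GID values, host name, and export path shown here are hypothetical:
$ TCPIP
TCPIP> CREATE PROXY
TCPIP> ADD PROXY SMITH /NFS=(INCOMING,OUTGOING) /UID=210 /GID=5 /HOST="orbit"
TCPIP> CREATE EXPORT
TCPIP> ADD EXPORT "/usr" /HOST="orbit"
TCPIP> EXIT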
16.1.4 How the Server Maps User Identities
Both OpenVMS and UNIX based systems use identification codes as a general method of resource protection and access control. Just as OpenVMS employs user names and UICs for identification, UNIX identifies users with a user name and a user identification (UID) and group identification (GID) pair. UIDs and GIDs are numbers that identify a user, and the user's group, to the system.
The proxy database contains entries for each user accessing a file system on your local server. Each entry contains the OpenVMS user name, the UID/GID pair that identifies the user's account on the client system, and the name of the client host. This file is loaded into dynamic memory when the server starts.
When a user on the OpenVMS client host requests access to a file, the client searches its proxy database [1] for an entry that maps the requester's identity to a corresponding UID/GID pair. If the client finds a match, it sends the request to the server along with the requester's UID/GID pair.
The server searches its proxy database [1] for an entry that corresponds to the requester's UID/GID pair. If the incoming UID maps to an OpenVMS account, the server grants access to the file system according to the privileges set for that account. In the following example, the proxy entry maps a client user with UID=15 and GID=15 to the OpenVMS account named ACCOUNT2. Any files owned by user ACCOUNT2 are also considered to be owned by the user with UID=15 and GID=15.
OpenVMS User_name     Type     User_ID    Group_ID   Host_name
ACCOUNT2              OND           15          15   *
After the OpenVMS identity is resolved, the NFS server uses this
acquired identity for all data access, as described in Section 16.1.7.
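A proxy entry like the one in the example can be added with an ADD PROXY command. This is a sketch only; the wildcard host specification and the /NFS qualifier combination shown here are assumptions:
TCPIP> ADD PROXY ACCOUNT2 /NFS=(INCOMING,OUTGOING) /UID=15 /GID=15 /HOST=*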
16.1.5 Mapping the Default User
In a trusted environment, you may want the server to grant restricted access even if the incoming UID does not map to an OpenVMS account. This is done by adding a proxy entry for the default user. The NFS server defines the default user at startup with the logical names TCPIP$NFS00000000_UID and TCPIP$NFS00000000_GID. If the server finds a proxy entry for the default user (the product normally uses the UNIX user "nobody", -2/-2), it grants access to OpenVMS files as the OpenVMS user associated with "nobody" in the proxy record.
To temporarily modify run-time values for the default user, use the /UID_DEFAULT and /GID_DEFAULT qualifiers to the SET NFS_SERVER command. To permanently modify these values, edit SYS$STARTUP:TCPIP$SYSTARTUP.COM and define new values for the UID and GID logical names. See Section 16.11 for instructions on modifying server logical names to change the default values.
The configuration procedure for the NFS server creates a nonprivileged account with the user name TCPIP$NOBODY. You may want to add a proxy record for the default user that maps to the TCPIP$NOBODY account.
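For example, based on the qualifiers and logical names described above, you might adjust the run-time default user and add a proxy record for the TCPIP$NOBODY account. This is a sketch only, and the /NFS qualifier usage and wildcard host are assumptions:
TCPIP> SET NFS_SERVER /UID_DEFAULT=-2 /GID_DEFAULT=-2
TCPIP> ADD PROXY TCPIP$NOBODY /NFS=INCOMING /UID=-2 /GID=-2 /HOST=*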
If you require tighter restrictions, you can disable the default user
mapping and set additional security controls by setting the bit mask
with the logical name TCPIP$NFS00000000_SECURITY. See Section 16.10 for
details and Table 16-2 for more information.
16.1.6 Mapping a Remote Superuser
When a remote UNIX client mounts a file system, the mount is often performed by the superuser. (In some UNIX implementations, only the superuser can perform a mount.)
A superuser (root) on a remote client does not automatically become a privileged user on the server. Instead, the superuser (UID=0) is mapped to the default user defined with the logical names TCPIP$NFS00000000_UID and TCPIP$NFS00000000_GID. (Remember, user "nobody" (-2/-2) is used by default.)
You may have remote clients that use the superuser to mount file
systems. If you want to grant normal root permissions, change the
values set with the UID and GID logical names to UID=0/GID=1 and add a
proxy record, mapping this pair to a corresponding OpenVMS account.
The ability of the remote superuser to mount and access files on the
server is controlled by the privileges you grant for this OpenVMS
account.
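For example, to grant normal root permissions as described above, you might change the startup values and add a matching proxy record. This is a sketch only; the OpenVMS account name NFS_ROOT and the host name are hypothetical:
$ DEFINE /SYSTEM TCPIP$NFS00000000_UID "0"     ! map incoming UID=0 ...
$ DEFINE /SYSTEM TCPIP$NFS00000000_GID "1"     ! ... and GID=1
$ TCPIP ADD PROXY NFS_ROOT /NFS=INCOMING /UID=0 /GID=1 /HOST="orbit"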
16.1.7 How OpenVMS and the NFS Server Grant File Access
To properly protect your exported file systems, you must take care when granting account and system privileges for remote users. You must also understand how OpenVMS grants access to files.
As previously stated, the NFS server uses the proxy database to map the incoming user identity to an OpenVMS account. The server uses the account's UIC to evaluate the protection code, along with other security components, before granting or denying access to files.
When a user tries to access a protected file or directory, the OpenVMS system compares the security profile of the user against the security profile of the target file or directory, evaluating the file's ACL (if one exists) first, then the UIC-based protection code, and finally any overriding privileges held by the account (such as BYPASS, GRPPRV, READALL, or SYSPRV).
For a more thorough discussion on access checking, refer to the
OpenVMS Guide to System Security.
16.1.8 Understanding the Client's Role in Granting Access
Before sending a user request to the NFS server, the client performs its own access checks. This check occurs on the client host and determines whether the client grants or denies access to data. This means that even though the server may grant access, the client may deny access before the user's request is even sent to the server host. For example, if the client user maps to an OpenVMS account that is granted access to a file only through an ACL entry and not through the protection mask, the client may deny the request, even though that account could access the file locally.
It is also possible for the server to reject an operation that was
otherwise allowed by the client. With the logical name
TCPIP$NFS00000000_SECURITY, you can use the ACL for additional access
control. See Section 16.10 for a complete description of the security
features set with this logical name. With the appropriate bit set, the
TCP/IP Services startup procedure creates the TCPIP$NFS_REMOTE
identifier. Using this identifier in the ACLs, you can, for example,
reject access to some (or all) files available through NFS. (See
Section 16.11 for more information about logical names.)
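For example, assuming the security bit that creates the TCPIP$NFS_REMOTE identifier is set, an ACE such as the following rejects all NFS client access to a file while leaving local access governed by the rest of its security profile (the file name is hypothetical):
$ SET SECURITY /ACL=(IDENTIFIER=TCPIP$NFS_REMOTE,ACCESS=NONE) WORK1:[PROJECTS]PLAN.TXT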
16.1.9 Granting Access to PC-NFS Clients
DIGITAL TCP/IP Services for OpenVMS provides authentication services to PC-NFS clients by means of the PC-NFS daemon. As with any NFS client, users must have a valid account on the NFS server host and user identities must be registered in the proxy database.
Because PC operating systems do not identify users with UID/GID pairs, these pairs must be assigned to them. The PC-NFS daemon assigns UID/GID pairs based on information you supply in the proxy database.
When a PC user requests authentication, the PC-NFS daemon verifies the user name and password against the user's OpenVMS account and then returns the UID/GID pair registered for that user in the proxy database.
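For example, a PC user might be registered with a proxy entry such as the following; the user name, UID and GID values, and PC host name are hypothetical:
TCPIP> ADD PROXY JONES /NFS=INCOMING /UID=1234 /GID=100 /HOST="pcclnt1"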
[1] Proxy lookup is performed only on OpenVMS servers or clients. UNIX servers and clients already know the user by its UID/GID pair.
16.2 Starting and Stopping the NFS Server
When it detects a request from a client host, the auxiliary server starts the NFS server. The command procedure SYS$STARTUP:TCPIP$NFS_SERVER_RUN.COM enables the server for automatic startup and also defines a set of logical names that provide default values for NFS characteristics.
To stop the NFS server, invoke the command procedure SYS$STARTUP:TCPIP$NFS_SHUTDOWN.COM. You can stop the NFS server even if clients still have file systems mounted on the server. If a client has a file system mounted with the hard option of the UNIX mount command and it accesses the file system while the server is down, the client stalls while waiting for a response from the server.
Alternatively, if the client has a file system mounted using the soft option of the UNIX mount command, the client will receive an error message if it attempts to access a file.
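For example, you stop the server on the OpenVMS side with the shutdown procedure, and on a UNIX client the soft option makes a stopped server produce an error rather than a stall; the host and path names in this sketch are hypothetical:
$ @SYS$STARTUP:TCPIP$NFS_SHUTDOWN.COM          ! on the OpenVMS server
# mount -o soft bookie:/usr /bookie_usr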
Because the NFS protocol is stateless, clients with file systems
mounted on the server do not need to remount when the server is
restarted. To ensure this uninterrupted service, you must be sure all
file systems are mapped before restarting the NFS server. The simplest
way to do this is to use the SET CONFIGURATION MAP command.
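For example, a file system can be mapped dynamically and then recorded in the permanent configuration so the mapping is recreated at startup. This is a sketch; the file system name /usr and the device name DSA101: are hypothetical:
$ TCPIP
TCPIP> MAP "/usr" DSA101:
TCPIP> SET CONFIGURATION MAP "/usr" DSA101:
TCPIP> EXIT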
16.3 Running the NFS Server on an OpenVMS Cluster System
If the NFS server resides on more than one host in an OpenVMS Cluster system, you can manage the proxy database and the export database as a homogeneous OpenVMS Cluster system (one proxy file on the OpenVMS Cluster system) or a heterogeneous OpenVMS Cluster system (a different proxy database on each host in the cluster). If the database files are to be shared by all hosts on the OpenVMS Cluster, be sure to set the file protection to allow WORLD read access and to deny WORLD write access.
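For example, assuming the shared databases reside in SYS$COMMON:[SYSEXE] under the names TCPIP$PROXY.DAT and TCPIP$EXPORT.DAT (an assumption about your installation), the following sketch grants WORLD read access while denying WORLD write access:
$ SET SECURITY /PROTECTION=(S:RWED,O:RWED,G:R,W:R) SYS$COMMON:[SYSEXE]TCPIP$PROXY.DAT
$ SET SECURITY /PROTECTION=(S:RWED,O:RWED,G:R,W:R) SYS$COMMON:[SYSEXE]TCPIP$EXPORT.DAT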
The NFS server automatically responds to the requests it receives on any TCP/IP network interface. Therefore, if several of your OpenVMS Cluster hosts have internet cluster interfaces, the server can execute as a clusterwide application. Clients that mount file systems using the cluster internet host name can then be served by any of the NFS servers in the cluster. Because NFS uses cluster failover, if one of the servers is taken down, client requests are redirected to another host in the cluster.
To allow NFS clients to access the cluster, define a cluster alias and a network interface name for each cluster member. See Section 1.4 for more information.