DIGITAL TCP/IP Services for OpenVMS
Management



14.6 Backing Up a File System

You can back up NFS-mounted files using standard OpenVMS backup procedures. For more information, see the OpenVMS documentation.

If you back up an OpenVMS file system or a container file system while remote users are accessing the files, the resulting save set may contain files that are in an inconsistent state. For a container file system, there is the additional danger that the container file itself may be in an inconsistent state.

Furthermore, the OpenVMS BACKUP utility does not issue warning messages when it backs up files that are open by the NFS server, even when the /IGNORE=INTERLOCK qualifier is not specified.

The safest way to back up is to schedule the backup for a time when users will not be accessing the files. Then either unmap the file systems to be backed up or simply shut down the NFS server.
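For example, an off-hours backup might unmap the file system, run BACKUP, and then remap it. This is a sketch only; the mapping name /vmsdisk (from Section 14.7) and the tape device and save-set name are assumptions for illustration:

    $ UCX
    UCX> UNMAP "/vmsdisk"            ! make the file system unavailable to NFS clients
    UCX> EXIT
    $ BACKUP /IMAGE DSA301: MKA500:VMSDISK.BCK /SAVE_SET
    $ UCX
    UCX> MAP "/vmsdisk" DSA301:      ! remap after the backup completes
    UCX> EXIT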

14.7 Setting Up and Exporting an OpenVMS File System

The following example describes how to set up an OpenVMS file system on the OpenVMS server, and how to make the file system available to Joe Brown, a user on UNIX client ultra.

Joe Brown has an OpenVMS user name of BROWN and a UNIX user name of joe.

  1. Log in to a UNIX node to find the UID/GID for the UNIX user joe, by entering the following command:
    % grep joe /etc/passwd 
     joe: (encrypted password) :27:58: ... 
    

    The fields :27:58 of the password entry for joe are the UID and GID. In this example, joe has UID=27 and GID=58.
  2. Log in to the OpenVMS server.
    The OpenVMS files exist on DSA301:[BROWN.TEST]. Joe wants to mount the files in the subdirectory TEST.
  3. Enter the following commands:
    $ UCX 
    UCX> ADD PROXY BROWN /UID=27 /GID=58 /HOST=ultra 
    UCX> MAP "/vmsdisk" DSA301: 
    UCX> ADD EXPORT "/vmsdisk/brown/test" /HOST=ultra 
    

    If you want to make the mapping permanent, issue a SET CONFIGURATION MAP command.

If users need to create files with case-sensitive names or names containing characters that do not conform to the OpenVMS syntax, you can enable a name-conversion feature that gives users more file naming flexibility without creating a container file system. Use the /OPTIONS=NAME_CONVERSION qualifier to the command ADD EXPORT to enable this option.

With the NAME_CONVERSION option set, users can create files and directories in an OpenVMS file system using names that do not conform to OpenVMS file-naming rules.
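For example, to export the directory from the preceding example with name conversion enabled:

    UCX> ADD EXPORT "/vmsdisk/brown/test" /HOST=ultra /OPTIONS=NAME_CONVERSION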


Note

If any client hosts had the file system mounted before the name conversion was enabled, they must dismount and remount for this feature to take effect.

See the DIGITAL TCP/IP Services for OpenVMS Management Command Reference manual for more options to the ADD EXPORT command.

14.8 Setting Up and Exporting a UNIX-Style File System

A UNIX-style file system is called a container file system. When creating a UNIX-style file system, you must name an owner by means of the /USER_NAME qualifier to the CREATE CONTAINER command. If the container file system is for the use of just one remote user, that user can be the owner. If it is for the use of several users, the owner should be a user whose UIC is mapped to UID=0/GID=1 (UNIX user root). In either case, the name set with this qualifier must already be registered in the proxy database. This user also becomes the owner of the internal root directory of the container.

To create a UNIX-style file system on the NFS server, follow these steps:

  1. Add a proxy entry for the owner of the container file system.
    UCX> ADD PROXY SYSTEM /UID=0 /GID=1 /HOST=*
  2. Create an empty container on an OpenVMS volume, assign an owner, and set permissions.
    UCX> CREATE CONTAINER DSA101:[TEST] /USER_NAME=SYSTEM /ROOT_MODE=741 /HOST="june"
    This example creates a UNIX-style file system named TEST on device DSA101:. The user with a UID of 0 is assigned as owner. The /ROOT_MODE=741 qualifier assigns permissions to the root directory as follows: read, write, and execute for the owner (7); read for the group (4); and execute for the world (1).
  3. Map the OpenVMS volume on which the container file has been created.
    UCX> MAP "/test_dsk" DSA101:
    Note that it is important to map the underlying volume before mapping the container file system; the mapping makes the volume available to the NFS server and to the UCX management utility. A volume can be used both as an OpenVMS-style file system and as a UNIX-style file system. If the disk is already in use as an OpenVMS-style file system, it may already be mapped; in that case, you can skip this step.
  4. Map the container file system to make it available to NFS client hosts. This mapping gives the file system its UNIX-style name and UNIX-style attributes. For example,
    UCX> MAP "/test" DSA101:[TEST]
    To make the mappings permanent, also use the SET CONFIGURATION MAP command.
  5. If you do not already have proxies for the users, create them now. For example,
    UCX> ADD PROXY USER1 /UID=234 /GID=14 /HOST=*
  6. In the root directory, create a top-level directory for each remote user. Be sure to specify directory ownership and set file permissions as needed for your environment. For example,
    UCX> CREATE DIRECTORY "/test/user1" /USER_NAME=USER1 /MODE=741 /HOST="june"
  7. Export the root directory or the user top-level directories in the UNIX container.
    UCX> ADD EXPORT "/test" /HOST=* 
    

    or
    UCX> ADD EXPORT "/test/user1" /HOST="june" 
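    To make the mappings permanent across restarts, record them in the configuration database as noted in step 4. A sketch, using the same names as above (see the Command Reference for the exact syntax):

    UCX> SET CONFIGURATION MAP "/test_dsk" DSA101:
    UCX> SET CONFIGURATION MAP "/test" DSA101:[TEST]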
     
    

    14.9 Maintaining a UNIX-style (Container) File System

    This section reviews the commands you use to maintain and examine a container file system.

    For complete command descriptions, see the DIGITAL TCP/IP Services for OpenVMS Management Command Reference manual.

    14.9.1 Displaying Directory Listings

    Use the DIRECTORY command to display the contents of a directory. For example,

    UCX> DIRECTORY "/path/name"
    

    Here, /path/name is a valid UNIX directory specification that begins with a slash (/) and is enclosed by quotes.

    The DIRECTORY command also has two qualifiers: /FULL, which displays detailed information about each file, including its inode number (shown as File ID), and /VMS, which displays the OpenVMS file name that corresponds to each UNIX name.

    14.9.2 Copying Files into a UNIX-Style File System

    You cannot use the DCL COPY command to create files in a UNIX-style file system because the UNIX directory structure is fully contained in the corresponding container file. Instead, you must use the IMPORT command to copy a file from an OpenVMS directory into a UNIX-style file system. Likewise, use the EXPORT command to copy a file from a UNIX-style file system into an OpenVMS directory.

    If the OpenVMS data file does not have the STREAM_LF record format, it is automatically converted to STREAM_LF during the copy. Use the /NOCONVERT qualifier to prevent the conversion.
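    As a sketch (the file names are hypothetical, and the qualifiers shown are illustrative; see the Command Reference for the exact IMPORT and EXPORT syntax), copying a file into the container and back out might look like this:

    UCX> IMPORT WORK1$:[DATA]REPORT.TXT "/test/user1/report.txt" /HOST="june" /USER_NAME=USER1
    UCX> EXPORT "/test/user1/report.txt" WORK1$:[DATA]REPORT.TXT /NOCONVERT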

    14.9.3 Removing Links to a File

    A link is a directory entry referring to a file. A file can have several links to it. A link (hard link) to a file is indistinguishable from the original directory entry. Any changes to the file are independent of the link used to reference the file. A file cannot be deleted (removed) until the link count is zero.

    Users can create multiple links to a file. A user sometimes creates a link to a file so that the file appears in more than one directory.

    All links to a file are of equal value. If a file has two links and one link is removed, the file is still accessible through the remaining link. When the last existing link is removed (the link count is zero), the file is no longer accessible and is deleted.

    Remove links to a file with the REMOVE FILE command. For example, to remove the link to a file named letter located at /usr/smith, issue the following command:

    UCX> REMOVE FILE "/usr/smith/letter"
    

    14.9.4 Removing Links to a Directory

    Like UNIX files, UNIX directories have links to them. An empty directory is deleted when the last link to the directory is removed.

    Remove links to a UNIX directory with the REMOVE DIRECTORY command. For example, to remove the directory smith at /usr, issue the following command:

    UCX> REMOVE DIRECTORY "/usr/smith"
    

    14.9.5 Deleting a UNIX-Style File System

    You can delete a container file system with all its directories and files by issuing the DELETE CONTAINER command. For example, to delete the UNIX container created on WORK1$:[GROUP_A], issue the following command:

    UCX> DELETE CONTAINER WORK1$:[GROUP_A]
    

    Use the UNMAP command to unmap the UNIX-style file system before you delete it.
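    For example, assuming the container file system is mapped as /group_a:

    UCX> UNMAP "/group_a"
    UCX> DELETE CONTAINER WORK1$:[GROUP_A]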

    14.9.6 Verifying the Integrity of a UNIX-Style File System

    You can use the ANALYZE CONTAINER command to check the integrity of your container file system. This command is similar in function to the DCL ANALYZE/DISK_STRUCTURE command.

    For example, to verify the integrity of a UNIX-style file system located in WORK1$:[GROUP_A], issue the following:

    UCX> MAP "/group_a" WORK1$:
     
    UCX> ANALYZE CONTAINER WORK1$:[GROUP_A]
    

    File system access to the container file is suspended while the container is being analyzed.


    Note

    The underlying OpenVMS file system must be mapped before you use the ANALYZE CONTAINER command.

    You may want to analyze your container file system periodically, or whenever you suspect a problem with its integrity.

    Table 14-1 lists the important file components of a UNIX-style file system that are normally verified by the ANALYZE CONTAINER command.

    Table 14-1 UNIX-Style File System Components Analyzed

    UNIX Item    OpenVMS Conceptual Equivalent   Description
    Super block  Home block      Contains the basic information on the internal structuring of the container file.
    Inode        File header     Each file or directory has an inode that contains information describing the file. The inode is the central definition of the file.
    Directory    Directory       Contains the file names and directory hierarchy information. File name entries contain links to the inode information.
    Bitmap       BITMAP.SYS      Contains the container file's internal allocation information. Only one bitmap exists in the container file.

    For a complete description of the ANALYZE CONTAINER command and its qualifiers, see Appendix C and the DIGITAL TCP/IP Services for OpenVMS Management Command Reference manual.

    14.9.7 Restoring an Entire Container File System

    For a typical image restore, simply follow normal OpenVMS procedures.

    For a non-image restore, an additional step is required after the restore. The Files-11 File IDs are recorded in the container file. These must be updated by the UCX ANALYZE CONTAINER /REPAIR command.

    This extra step is also required for an image restore if the save set is being restored with the /NOINITIALIZE qualifier to a volume with a different label or if it is being restored to a bound volume set that has a member that was added since the time of the image backup.
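    For example, after a non-image restore of the container file system on DKA100:[DAISY] (the device and directory used in the examples that follow), the repair step is:

    $ UCX
    UCX> ANALYZE CONTAINER /REPAIR DKA100:[DAISY]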

    14.9.8 Restoring Parts of a Container File System


    Important

    DIGITAL strongly recommends that before you attempt any of the recovery procedures described below, you read and completely understand: (1) the procedure you will attempt, (2) each of the procedures described before it, and (3) the OpenVMS documentation for the BACKUP command and qualifiers you will use to produce the intended result.

    Sometimes you may need to recover parts of a container file system without losing updates that have been applied to other parts since the backup save set was made. Depending on whether you need to recover files, directories, the container file itself, or some combination, this can be a very awkward and labor-intensive procedure. It can also be error-prone.

    14.9.8.1 Using a Temporary Copy of the Container File System

    If the container file itself is intact and you have enough free disk space, it may be simplest to restore the entire container file system to a temporary location and then transfer the needed files to the primary copy of the container file system using the UCX EXPORT and IMPORT commands. It is also possible to use a remote client if you prefer to allow the users to transfer their own files.

    To do this, you can either restore the container file system to a separate disk or to a temporary directory on the same disk. If you use a temporary directory on the same disk, you will need to rename the container file according to the directory name. For example, if the original container file system is in DKA100:[DAISY],

    $ DIRECTORY [DAISY]*.CONTAINER 
     
    DAISY.CONTAINER;1 
     
    Total of 1 file. 
    $ UCX SHOW MAP 
    /daisy                                 _DKA100:[DAISY]    
    $ 
    

    If you restore the backup save set to DKA100:[TEMP],

    $ RENAME DKA100:[TEMP]DAISY.CONTAINER TEMP.CONTAINER 
    %RENAME-I-RENAMED, DKA100:[TEMP]DAISY.CONTAINER;1 renamed to 
    DKA100:[TEMP]TEMP.CONTAINER;1 
    $ UCX 
    UCX> ANALYZE CONTAINER /REPAIR DKA100:[TEMP] 
     . 
     . 
     . 
    UCX> MAP "/j2" DKA100:[TEMP] 
    

    If you use a remote client to move the files, issue:

    UCX> ADD EXPORT "/j2" /HOST="robin" 
    UCX> EXIT 
    

    14.9.8.2 Recovering a Corrupted File That Still Exists

    Each of the files in a container file system has two names: the UNIX-style name that is recorded in the container file and an OpenVMS name, which you can see using ordinary OpenVMS commands. The OpenVMS names are formed from an eight-digit hexadecimal number prepended to $BFS.;1 or $BFS.DIR;1, depending on whether the file is a directory file. The save set has the files recorded by their OpenVMS names and directory paths, and you need to find out what that name and directory are for the particular file you need to recover.

    If the file still exists you can do this by issuing the UCX DIRECTORY /VMS command. For example,

    $ UCX DIRECTORY /VMS "/daisy/user1" 
     
    Directory:  /daisy/user1 
     
     . 
     
     VMS file: _DKA100:[DAISY]00003510$BFS.DIR;1 
     
     .. 
     VMS file: _DKA100:[DAISY]00012201$BFS.DIR;1 
     
     file.txt 
     VMS file: _DKA100:[DAISY.00003510$BFS]00005503$BFS.;1 
     
    $ 
    

    In this example, to recover /daisy/user1/file.txt, you need to select [DAISY.00003510$BFS]00005503$BFS.;1 when you restore.

    Note that the hierarchical structure of the directory tree is not reflected in the OpenVMS names of the directory files. All of the OpenVMS directory names are cataloged directly in the container directory, in this example [DAISY].

    14.9.8.3 Recovering a Deleted File

    If the file has been deleted as an OpenVMS file, but its name and attributes are still intact in the container file, the UCX DIRECTORY /VMS command will not show you its former OpenVMS name. You need to compute it.

    For each user file and directory file the container file has the analog of a UNIX inode. In a UNIX file system, each file has an inode in which the file's attributes are stored. Each inode has a unique inode number. The same is true in a container file system.

    The inode number is displayed as a decimal number called "File ID:" when you issue a UCX DIRECTORY /FULL command. For example,

    $ UCX DIRECTORY /FULL "/daisy/user1/file.txt" 
     
    Directory:  /daisy/user1 
     
    file.txt 
    VMS file: *** no such file 
    Size                                File ID:        21763 
     Blocks:            3               Owner 
     Bytes:          1103                UID:            5107 
    Created:  23-JUN-1997 10:46:09.10    GID:              15 
    Revised:  23-JUN-1997 10:46:08.96   Mode:             755  Type: File 
    Accessed: 23-JUN-1997 10:46:08.54   Links:              1 
     
    $ 
    

    To get the OpenVMS name, convert the inode number to hexadecimal. Use enough leading zeros to make eight hexadecimal digits and append $BFS.;1 for a non-directory file or $BFS.DIR;1 for a directory file. For example, inode number 21763 becomes 00005503$BFS.;1.

    You now need to find the OpenVMS name of the immediate parent directory of the file. If the OpenVMS parent directory is still intact, the UCX DIRECTORY /VMS or UCX DIRECTORY /FULL command will tell you its OpenVMS name. If the parent directory is not intact, you need to restore the directory by computing the name from the inode number. For example,

    $ UCX DIR /FULL "/daisy" 
     
    Directory:  /daisy 
     . 
     . 
     . 
    user1 
    VMS file: _DKA100:[DAISY]00003510$BFS.DIR;1 
    Size                                File ID:        13584 
      Blocks:            4               Owner 
      Bytes:          1915                UID:            5107 
    Created:  16-JUN-1997 15:46:25.54    GID:              15 
    Revised:  23-JUN-1997 10:46:08.57   Mode:             741  Type: Directory 
    Accessed: 23-JUN-1997 10:46:08.57   Links:              2 
    $ 
    

    In this example, if "VMS file:" does not show the OpenVMS file name, you must compute it from the inode number, in this case 13584. You need to select [DAISY.00003510$BFS]00005503$BFS.;1 when you restore.

    As an alternative method of finding inode numbers, you may find it easier to mount the container file from a UNIX client and use the ls -i and/or ls -R commands. You still need to compute the OpenVMS file specification from the inode numbers of each file to be recovered and its immediate parent directory.

    14.9.8.4 Recovering a File That Has Been Completely Deleted from the Container

    There is no supported method of computing the OpenVMS name of a file by restoring just the container file from the save set. You need to restore the entire container file system to a temporary location as described in Section 14.9.8.1.

    14.9.8.5 Recovering a Container File Only

    This section applies when the container file has been deleted, or is so badly corrupted that UCX ANALYZE CONTAINER /REPAIR does not help, but the other OpenVMS file and directory parts of the container file system are still intact.

    1. Restore the container file only from the most recent backup save set.
    2. UNMAP the container file system.
    3. Run UCX ANALYZE CONTAINER /REPAIR. This resets the inode attributes such as file size and time stamps according to the OpenVMS attributes of any files that were updated since the backup save set was made. It also resets the OpenVMS File ID of the container file.
    4. MAP the container file system. If the NFS server is not shut down, MAP it with an alternate name that does not appear in the export database to prevent remote clients from using the container file system.
    5. Files that were created after the save set was made have no inodes or directory links in the container file. You should be able to identify all these by the OpenVMS command DIRECTORY /SINCE. Make a note of what OpenVMS directory each is catalogued in, because this may be helpful later. Then rename these files into a temporary directory outside the container file system. Note that if any of these files are directories, renaming the directories effectively gets any files in those directories out of the container file system and into the temporary directory.
    6. Use the UCX IMPORT and UCX CREATE DIRECTORY commands to transfer the files back into the container file system. For each file and directory you need to determine its UNIX-style pathname, its owner, and possibly its mode.
      • You can determine the parent directory from the original OpenVMS parent directory and a UCX DIRECTORY /VMS listing. If any of the files to be recovered from the temporary directory are directory files, re-create those directory files first.
      • You can determine the UID and GID from the proxy record for the OpenVMS owner.
      • You can determine the mode from the OpenVMS protection mask for OWNER, GROUP, and WORLD.
      • There is no sure way to recover the original UNIX-style names of the files. You might be able to use the file contents to make a reasonable guess; for some files you may simply have to assign new names.
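      As a sketch of step 5 above (the cutoff date, holding directory, and file name are assumptions for illustration):

      $ DIRECTORY /SINCE=12-JUN-1997 DKA100:[DAISY...]*.*;*
      $ CREATE /DIRECTORY DKA100:[HOLDING]
      $ RENAME DKA100:[DAISY]00007F02$BFS.;1 DKA100:[HOLDING]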

      14.10 Setting Up NFS Security Features

      The NFS server and the OpenVMS operating system provide many levels of security controls you can use to protect your file systems. Section 14.1.3, Section 14.1.4, and Section 14.1.7 reviewed how the server uses the proxy and export databases to restrict client access as well as how to use OpenVMS account privileges and protection schemes to control access to files and directories.

      The NFS server provides additional security options that you enable by modifying the logical name UCX$NFS00000000_SECURITY in the NFS server startup file UCX$NFS_STARTUP.COM. This file is located in the directory SYS$SYSDEVICE:[UCX$NFS].

      The server reads this logical name when it is started and applies the following security features:

      • Setting bit 0 (value = 1)
        Grants the identifier UCX$NFS_REMOTE to each client, creating a class of users who can access files using NFS. If you need to control NFS access to selected objects, set ACLs on objects to deny access to users holding the UCX$NFS_REMOTE identifier.
      • Setting bit 1 (value = 2)
        Disables user-level mount requests. If this bit is set, only root (UID=0) is able to mount file systems on the server.
      • Setting bit 2 (value = 4)
        Stops the server from accepting client requests from nonprivileged internet ports. If all of your NFS client implementations use privileged ports, you can use this feature to prevent a user from masquerading as an NFS client implementation.
      • Setting bit 4 (value = 16)
        Disables verification of user access restrictions recorded in the SYSUAF file.
        The OpenVMS Authorize utility allows you to restrict a user's network access to certain hours of the day with the /NETWORK and /ACCESS qualifiers. When you add a proxy record to the volatile proxy database, the server reads the network access information from the SYSUAF file. However, the NFS server does not automatically pick up later changes to the SYSUAF file, so you must make sure that the information in the NFS volatile database reflects the information in the SYSUAF file.
        If access is denied for a certain time of day, the following message is written to the NFS error log file, and the NFS client receives an NFS AUTH_BADCRED error in the reply message, which is translated into a "permission denied" message during the mount operation.
        UCX$-W-NFS_ACCNOA,  Access to the OpenVMS account is denied 

        Because this verification requires at least three system service calls and additional computation for each NFS message, enabling it reduces NFS performance. If the account has no restrictions on network access, no verification is performed; if verification is disabled, no additional processing occurs.
        By default, bit 4 is set to 0, which enables verification. To disable verification, set bit 4 to 1.
      • Setting bit 5 (value = 32)
        By default, unmapped users are allowed limited access to file systems through the default user account. Setting bit 5 changes this behavior: the server denies unmapped users any access at all, which provides an additional level of protection. See Section 14.1.5 for more information about granting access through the default user account.
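      For example, to combine bit 0 (value 1) and bit 2 (value 4), you would set the logical name to 5. A sketch only; the actual line belongs in the startup file as described above, and the /SYSTEM qualifier is an assumption:

      $ DEFINE /SYSTEM UCX$NFS00000000_SECURITY 5    ! bits 0 and 2: 1 + 4 = 5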

      14.11 Modifying Server Characteristics

      The file SYS$STARTUP:UCX$NFS_SERVER_STARTUP.COM defines a set of logical names that set characteristics of the NFS server. These characteristics include:

      • Error message logging
      • Size of the host table
      • File inactivity timer
      • Number of threads
      • Number of cached transactions
      • Value of the default UID
      • Value of the default GID
      • Security options
      • Server time
      • Server cache parameters
      • File block allocation policy

      You can modify the NFS server by changing the values of these logical names, either permanently (follow the instructions provided in the command file) or temporarily with the SET NFS_SERVER command. If you modify the startup file, restart the server to make the changes take effect.

      Note that modifying server characteristics will affect server performance. Be sure to understand the impact (review Section 14.13) before making any changes.
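      For example, to raise the file inactivity timer temporarily, without editing the startup file:

      UCX> SET NFS_SERVER /INACTIVITY_TIMER=00:05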

      Table 14-2 describes the NFS server logical names.

      Table 14-2 Modifying Server Characteristics
      UCX Logical Name Description
      UCX$NFS00000000_ERROR Enables or disables error message logging
      UCX$NFS00000000_OPCOM Enables or disables error logging to the operator console (OPCOM). Setting the value to zero (0) disables this option.
      UCX$NFS00000000_HOSTS Specifies the maximum number of client hosts that can be defined in the server's host table. This parameter should be large enough to allow for the definition of all the hosts present in the proxy database. For this purpose, a wildcard host counts as one host.

      Making the parameter value larger than needed causes the server to allocate more virtual memory than it requires.

      UCX$NFS00000000_UID
      UCX$NFS00000000_GID
      Defines the default user. The default values for these logical names are -2/-2 (the UNIX account "nobody").

      You can change the values for these logical names. File access is determined by the privileges assigned to the OpenVMS account that maps to the default user in the proxy database.

      You can also set parameters dynamically by supplying the /UID_DEFAULT and /GID_DEFAULT qualifiers to the SET NFS_SERVER command.

      UCX$NFS00000000_INACTIVITY Specifies, in minutes and seconds, how long a file may remain inactive (with no file access requests) before the server closes it.

      The server keeps an activity timestamp for each opened file to help manage the open file cache. You can also modify this value with the /INACTIVITY_TIMER qualifier of the SET NFS_SERVER command.

      The default setting for this value is 02:00, or 2 minutes. Making the interval too short causes the NFS server to close files more often, which reduces performance.

      UCX$NFS00000000_SECURITY Gives a bit-mask value. Each set bit adds a different security feature to the NFS operation as follows:
      Bit Description
      0 Grants the UCX$NFS_REMOTE identifier to each client. This lets you use ACLs to restrict access to users holding this security identifier.
      1 Disables user-level mount requests, allowing only superuser (UID=0/GID=1) mount access to the file system.
      2 Only privileged ports on the client host can send messages to the NFS server. (Privileged ports are ports 0 through 1023.)
      4 Allows you to use information in the SYSUAF file to restrict network access.
      5 Prevents unmapped users from gaining access through the default user account.
      UCX$NFS00000000_THREADS Defines the maximum number of threads that can be active at the same time.

      The performance of the server is directly related to this value. The recommended value for an average load is a thread maximum of 20.

      If you increase this value, you should also increase the XID cache value. You may need to increase the /PAGE_FILE parameter in the NFS startup file as well.

      UCX$NFS00000000_XID Defines the size of the transaction cache, specified as a number of 8-Kbyte buffers. Default value: 256 buffers (256 x 8 Kbytes).

      In a busy server environment, increasing the size of the cache improves server performance. See Section 14.13.6 for more information.

      Depending on the frequency of file operations, the size of the cache is critical. If you increase this parameter, you might also need to increase the /PAGE_FILE parameter in the NFS startup file.

