Slave servers do not receive complete zones from primary servers.
Instead, as queries arrive, they request data from the forwarder
servers and accumulate the answers over time.
3.7.5 Forwarder Servers
Forwarder servers process requests that slave servers cannot resolve locally. A forwarder server can be any BIND server with Internet access; thus, it can be a primary, a secondary, or a caching-only server. The configuration file on each slave server lists the systems that the slave can use as forwarder servers.
Forwarder servers have full access to the Internet and can obtain information about servers that is not yet in local caches. Because a forwarder server receives requests from several slave servers, it builds a larger local cache than any single slave server. All hosts in the domain therefore have more information available locally, and the site sends fewer queries to root servers on outside networks.
A slave and forwarder server configuration is useful because each server compensates for the other: the slave's cache answers when the designated forwarder server cannot be reached, and the forwarder answers when the slave's cache cannot. For example, if the forwarder server is heavily used, the slave server can satisfy common requests from its own cache, while the forwarder server handles all queries that require Internet access.
Figure 3-3 shows the relationship among root servers, master servers, slave servers, forwarder servers, and clients.
Figure 3-3 Relationship of Master/Forwarder Server and Slave Servers
You can run the BIND service on a local network that does not have
Internet access. In this configuration, the servers resolve local
queries only. Any request that depends on Internet access goes
unresolved.
3.8 BIND Server Files
Files residing on BIND server systems contain the database of information needed to resolve BIND queries. On UCX systems, these files usually reside in the SYS$SPECIFIC:[UCX$BIND] directory.
You can use UCX management commands or a text editor to create and populate the files. The file entries, called resource records, contain the information that servers need to resolve queries. The following sections describe these database files and their required file entry formats:
3.8.1 Master Zone File
A primary server maintains the master zone file. This file describes
all the hosts in the local zone. On DIGITAL UNIX BIND servers, this
file is the named.hosts file. On DIGITAL TCP/IP Services for
OpenVMS BIND servers, you create this file either by using a text
editor and the UCX CONVERT/BIND command or by copying the file from
another server. Create one master zone file for each zone over which
the server has authority.
Note
In BIND server files, fully qualified domain names end with a dot (.), which represents the root domain. The BIND service will not append any other domain labels to a fully qualified domain name.
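For example, in a zone file fragment (the names and address below are illustrative placeholders, not entries from this manual):

```
; With a trailing dot, the name is complete as written:
host1.example.com.    IN  A   10.1.2.3

; Without the trailing dot, the zone's origin is appended,
; producing host1.example.com.example.com. - probably not intended.
host1.example.com     IN  A   10.1.2.3
```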
Example 3-1 shows an example of a master zone file for a root server.
Example 3-1 Master Zone File
.            IN  SOA  SRI-NIC.ARPA. HOSTMASTER.SRI-NIC.ARPA. (
                      870611    ;serial
                      1800      ;refresh every 30 min
                      300       ;retry every 5 min
                      604800    ;expire after a week
                      86400 )   ;minimum of a day
             NS   A.ISI.EDU.
             NS   C.ISI.EDU.
             NS   SRI-NIC.ARPA.

MIL.  86400  NS   SRI-NIC.ARPA.
      86400  NS   A.ISI.EDU.

EDU.  86400  NS   SRI-NIC.ARPA.
      86400  NS   C.ISI.EDU.

SRI-NIC.ARPA.   A      26.0.0.73
                A      10.0.0.51
                MX     0 SRI-NIC.ARPA.
                HINFO  DEC-2060 TOPS20

ACC.ARPA.       A      26.6.0.65
                HINFO  PDP-11/70 UNIX
                MX     10 ACC.ARPA.

USC-ISIC.ARPA.  CNAME  C.ISI.EDU.

73.0.0.26.IN-ADDR.ARPA.   PTR  SRI-NIC.ARPA.
65.0.6.26.IN-ADDR.ARPA.   PTR  ACC.ARPA.
51.0.0.10.IN-ADDR.ARPA.   PTR  SRI-NIC.ARPA.
52.0.0.10.IN-ADDR.ARPA.   PTR  C.ISI.EDU.
3.8.2 Boot File
The boot file provides the server type (primary, secondary, or
caching-only), the zones over which the server has authority, and the
location of the database files. On DIGITAL UNIX BIND servers, the boot
file is called named.boot. On UCX hosts, boot information resides in
the UCX$CONFIGURATION.DAT database.
3.8.3 Loopback Interface File
The loopback interface file defines the name of the local loopback
interface, known as localhost. The resource record for this
file defines localhost with a network address of 127.0.0.1.
The DIGITAL TCP/IP Services for OpenVMS configuration procedure creates
this file, and generally calls it NAMED.LOCAL.
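A typical loopback interface file looks like the following sketch; the serial number and timer values are illustrative, not values mandated by this manual:

```
; NAMED.LOCAL - reverse zone for the loopback network
@  IN  SOA  localhost. root.localhost. (
            1         ; serial
            3600      ; refresh
            300       ; retry
            3600000   ; expire
            3600 )    ; minimum
   IN  NS   localhost.
1  IN  PTR  localhost.
```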
3.8.4 Hints File
The hints file contains information about locating the authoritative
name servers for top-level domains. You obtain this information from
the InterNIC.
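A hints file contains NS and A records for the root servers, in the style of the following sketch; the names and addresses here are illustrative, and the authoritative list comes from the InterNIC:

```
;  Hints (root cache) file sketch
.                    99999999  IN  NS  A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET.  99999999  IN  A   198.41.0.4
```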
3.8.5 Reverse Translation File
Reverse translations (address-to-host name) require servers for each
reverse zone. Information needed for reverse translation is contained
in the reverse translation file. On a DIGITAL UNIX BIND server, the
file is called hosts.rev. This file specifies the resource
records for the IN-ADDR.ARPA domain. On UCX hosts, create and populate
the file by using a text editor and the UCX CONVERT/ULTRIX_BIND
command, or copy the file from another server.
3.9 BIND Clients
BIND client software, called the BIND resolver, uses the BIND service to resolve host names, addresses, and network service information. BIND clients make queries, but do not resolve them. Instead, BIND servers resolve the client requests. On UCX hosts, one resolver is active for each process.
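As a generic illustration of what a resolver does (using a modern POSIX-style API in Python, not the UCX resolver itself), the client hands a name to the resolver and receives addresses in return; "localhost" is used so the sketch works without Internet access:

```python
import socket

# The resolver turns a host name into one or more addresses.
# The client only asks; a name service (or local host table) answers.
results = socket.getaddrinfo("localhost", None)
addresses = {info[4][0] for info in results}
print(addresses)  # typically includes 127.0.0.1 and/or ::1
```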
If you use the UNIX Hesiod service to propagate user information to a group of connected workstations with correctly configured zones, the UCX BIND server can use this functionality. Refer to your UNIX Hesiod documentation for information about Hesiod databases and queries.
The Network File System (NFS) is a facility for sharing files in a heterogeneous environment of machines, operating systems, and networks. NFS allows users to access files distributed across a network in such a way that remote files appear as if they reside on the local host. Developed by Sun Microsystems, Inc., NFS has become a standard for the exchange of data between machines running different operating systems.
The NFS protocol also achieves portability across different machines, operating systems, network architectures, and transport protocols through the use of Remote Procedure Calls (RPCs) and External Data Representation (XDR), two network programming constructs that insulate NFS from machine and transport differences. For more information about RPCs and XDR, see the DIGITAL TCP/IP Services for OpenVMS ONC RPC Programming manual.
Using NFS is simple. Configuring and implementing NFS, however, are more complex. A summary of NFS concepts and considerations is included in this chapter, but you should refer to the DIGITAL TCP/IP Services for OpenVMS Management guide for detailed configuration and implementation information.
Specific topics covered in this chapter include:
NFS was originally designed for UNIX machines, so it follows UNIX conventions for files, file types, file names, file ownership, user information, and so forth. NFS in an OpenVMS environment must accommodate the differences between UNIX and OpenVMS in such a way that when an OpenVMS user accesses a file from a UNIX machine, the file looks like an OpenVMS file. Conversely, when a UNIX user accesses a file from an OpenVMS machine, it looks like a UNIX file.
In a local environment, file systems reside on physical disks directly connected to the computer. NFS provides a distributed environment where the users on one system can access files that physically reside on disks attached to another networked system. These files are called remote file systems.
Remote files are made accessible to local users through the process called mounting. After a file system or the entire disk is mounted, users access files through the operating system's services. A mount operation makes a remote file system, or a subtree within it, part of the local file system.
Some general characteristics of NFS include the following:
Table 4-1 defines basic NFS terms.
The NFS protocol specification is defined in several RFCs:
The protocol provides for stateless operations where:
Stateless servers provide robustness when there are client, server, or
network failures. If a client fails, an NFS server need not take action
to continue normal operation. If a server or the network fails, NFS
clients continue to attempt completing NFS calls until the server or
network is fixed. This robustness can be important in a complex network
of heterogeneous systems that is not under the control of a single
network manager.
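The retry behavior described above can be sketched as a generic idempotent retry loop. This is an illustration of the principle, not the NFS implementation:

```python
import time

def call_with_retry(operation, attempts=5, delay=0.01):
    """Retry an idempotent request until the server responds.

    Because each request carries everything needed to complete it,
    simply repeating it after a failure is safe.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except ConnectionError as err:
            last_error = err      # server or network down: try again
            time.sleep(delay)
    raise last_error

# Simulate a server that fails twice, then answers.
state = {"calls": 0}
def flaky_read():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("server unreachable")
    return b"file data"

print(call_with_retry(flaky_read))  # b'file data'
```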
4.3 NFS Client and Server Software
NFS consists of the NFS server and the NFS client software. The NFS server is implemented as a daemon, waiting for requests from clients. The NFS server does not retain the state of the NFS client.
The NFS server daemon is a multithreaded facility; it processes multiple NFS client calls in parallel. It is also an event-driven, asynchronous process. Each NFS client call contains all the information necessary to complete the request.
The NFS client software implements a state mechanism that maintains all the information required for processing client requests. Each client operation is idempotent: it can be issued more than once and carries all the information necessary to complete the request. This model presumes no file open and close requests, because those would require the server to save the state of the object and to write data to disk before returning the reply message to the user.
When mounting a remote file system, an NFS client sends a message that
makes the remote file system part of the local tree. The local host
then redirects operations that access files on the remote file system
to the NFS client software, and the NFS client and NFS server exchange
messages.
4.4 Related Databases
NFS servers and NFS clients use the proxy database to give users access to remote file systems. The NFS server also uses the export database, which lists the names of the file systems that the server makes available to clients.
All OpenVMS nodes running the NFS server software can share the proxy and export databases. Table 4-2 shows the databases used by the NFS server and NFS client.
Entity | Database | File Name | Logical Name |
---|---|---|---|
Server | Export database | UCX$EXPORT.DAT | UCX$EXPORT |
Server | Proxy database | UCX$PROXY.DAT | UCX$PROXY |
Client | Proxy database | UCX$PROXY.DAT | UCX$PROXY |
The PC-NFS daemon (PC-NFSd) provides authentication and printing services for PCs.
The PC-NFS daemon provides the following functions required for printing:
NFS accommodates numerous key differences between UNIX and OpenVMS to make user interaction between the two operating systems appear transparent. These differences are discussed in the remainder of this chapter and include:
Table 4-3 lists differences between the OpenVMS and UNIX directory hierarchies.
UNIX | OpenVMS |
---|---|
May reside on multiple volumes. | Resides on one volume, with one root above all directories on the volume. |
Devices are not included in file specifications. | Devices are included in file specifications. |
Figure 4-1 shows a UNIX directory hierarchy similar to the OpenVMS hierarchy. The UNIX hierarchy appears as one tree that can be located on more than one device.
Figure 4-1 UNIX Directory Hierarchy
An OpenVMS file specification is limited to eight directory levels and has the following format:
device:[directory.subdirectory]filename.type;version
The following delimiters separate the file specification components:
A UNIX file specification, called a path name, has the following format:
/directory/directory/filename
The slash (/) is the only delimiter that the UNIX file specification format uses. The first slash in a UNIX file specification represents the root directory. Subsequent slashes separate each component in the file specification (the directories from the other directories and the file name). In theory, there is no limit to the number of directory levels in a UNIX file specification, whereas an OpenVMS file specification is limited to eight directory levels.
Absolute and Relative File Specifications
OpenVMS and UNIX both have two types of file specifications or path names: absolute path name and relative path name.
On UNIX systems, absolute path names specify the entire directory path leading to the file, beginning with the root directory, which is represented by an initial slash. The root directory is the first directory in the file system; all other files and directories trace their ancestry back to it. Relative path names start from the current default directory of the process and omit the current directory's name from the path.
For example, using Figure 4-1, a UNIX absolute path name would be /usr/jones/accounting/calc whereas the relative path name for the file calc in the current directory /usr/jones is accounting/calc.
In an OpenVMS system, the absolute path name is DRA0:[JONES.WORK]CALC.PAS and the relative path name for the file calc.pas in the current directory JONES is [.WORK]CALC.PAS.
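The distinction can be demonstrated with the UNIX path names from the example above (a Python sketch; the posixpath module applies UNIX path rules regardless of the host system):

```python
import posixpath

absolute = "/usr/jones/accounting/calc"
relative = "accounting/calc"
cwd = "/usr/jones"   # the current default directory

print(posixpath.isabs(absolute))   # True
print(posixpath.isabs(relative))   # False

# Resolving the relative path against the current directory
# reproduces the absolute path:
print(posixpath.join(cwd, relative))  # /usr/jones/accounting/calc
```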
A UNIX path name can have a maximum of 1024 characters; an OpenVMS file specification can have a maximum of 255 characters.
A complete OpenVMS file specification includes the file name, the file type, and an optional version number, from left to right in that order. The file name and file type can each have up to 39 characters and are separated by a period (for example: FILE_NAME.TXT;1). The valid characters in a file name or file type are A-Z, 0-9, underscore (_), hyphen (-), and dollar sign ($). Version numbers (following a semicolon) are decimal numbers from 1 to 32767; they differentiate versions of the same file.
A UNIX file specification generally contains up to 1024 characters, with each element of the path name containing up to 255 characters. Some versions of the UNIX operating system limit the size of one element to 14 characters or have other limits that you can change if you recompile the kernel.
In theory, you can use any ASCII character in a UNIX path name except for the slash (/) and null characters. For example, a file name of report.from.january_24 is valid. However, you should avoid using some characters (such as the pipe (|) character) because these characters can have special meaning to the UNIX shell.
The OpenVMS file system is not case sensitive. However, the UNIX operating system treats upper and lowercase characters as different characters.
For example, on a UNIX system the following file names represent three different files; on an OpenVMS system they represent one file:

   notes.txt
   NOTES.TXT
   Notes.txt
File types are important in OpenVMS file name identification. The file type usually describes the kind of data in the file. For example, a text file typically has a file type of .TXT. Directories all have file types of .DIR;1.
Although UNIX systems do not use file types, UNIX does use certain naming conventions that resemble OpenVMS file types. For example, file names ending in .txt are text files. UNIX directories do not have special file types.
Every OpenVMS file has a version number. When a file is created, the system assigns it a version number of 1. Subsequently, when a file is edited or additional versions of that file are created, the version number automatically increases by 1. Therefore, many versions of a file with the same file name can exist in the same directory.
The UNIX file system does not support automatic creation of multiple
versions. In most cases, if you edit a UNIX file, the system saves only
the most recently edited copy.
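The OpenVMS version-numbering rule can be sketched as follows; this is an illustrative model, not the file system's actual code:

```python
def next_version(existing_versions):
    """New files get version 1; each later edit gets highest + 1."""
    return max(existing_versions, default=0) + 1

versions = []
for _ in range(3):                # create a file, then edit it twice
    versions.append(next_version(versions))
print(versions)  # [1, 2, 3]
```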
4.6.3 Linking Files
A link is a directory entry that refers to a file or a directory. On UNIX systems, files cannot exist without links and a file can have multiple links. On an OpenVMS system, files can exist without any links.
There are two kinds of links: hard links and symbolic links.
A hard link to a file is indistinguishable from the original link established when the file was created. These additional links allow users to share the same file under different path names. A hard link cannot span file systems.
On UNIX systems, any changes to the file are independent of the link used to refer to the file. The UNIX system maintains a count of the number of links to each file. If removing a link results in the link count becoming zero, the file is deleted. A file cannot be deleted except by removing all of its links.
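The link-count behavior can be demonstrated with POSIX calls from Python (an illustration for a UNIX-like system, not OpenVMS):

```python
import os
import tempfile

# Hard links share one file; the file persists until its
# last link is removed.
d = tempfile.mkdtemp()
original = os.path.join(d, "report")
alias = os.path.join(d, "report.link")

with open(original, "w") as f:
    f.write("quarterly figures\n")

os.link(original, alias)              # a second hard link, same file
links_after_ln = os.stat(original).st_nlink
print(links_after_ln)                 # 2

os.remove(original)                   # remove one link...
with open(alias) as f:                # ...data still reachable
    content = f.read()
print(content, end="")

os.remove(alias)                      # last link removed: file deleted
os.rmdir(d)
```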