You can run the BIND service on a local network that does not have
Internet access. In this configuration, the servers resolve local
queries only. Any request that depends on Internet access goes
unresolved.
3.7.7 Zone Transfers
Zone transfers are the process by which slave servers obtain their zone data. When a slave server starts up, and periodically thereafter, it checks whether its data is up to date by polling a master server to see if the master's zone database serial number is greater than its own. If so, the slave performs a zone transfer over the network.
An essential point in this polling environment is that whenever a
change is made to a master server's zone database file, the zone's
serial number must be incremented for the change to propagate to other
servers. If the serial number does not change, slave servers do not
know that they should perform a zone transfer.
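The serial-number check that drives this polling can be sketched in Python. BIND compares serials using sequence-space arithmetic (RFC 1982) so that the counter can wrap around; the helper names below are illustrative, not part of BIND itself:

```python
# Sketch of the serial comparison a slave performs (RFC 1982
# sequence-space arithmetic, so serials can wrap at 2**32).
SERIAL_BITS = 32
HALF = 2 ** (SERIAL_BITS - 1)

def serial_gt(s1: int, s2: int) -> bool:
    """True if serial s1 is 'greater than' s2 in sequence space."""
    return s1 != s2 and ((s1 - s2) % 2 ** SERIAL_BITS) < HALF

def needs_transfer(master_serial: int, slave_serial: int) -> bool:
    """A slave transfers the zone only when the master's serial is newer."""
    return serial_gt(master_serial, slave_serial)
```

Because the comparison is modular, a master whose serial wraps past 2**32 - 1 back toward zero is still seen as newer by its slaves.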
3.7.7.1 Zone Change Notification
In addition to slave servers polling to determine the necessity of a zone transfer, BIND 8 provides a mechanism for a primary master server to notify slaves of changes to a zone's database.
When a master server determines that a change has been made to a database, it will send a NOTIFY message to all the slave servers for the zone. The slave servers respond with a NOTIFY response to stop any further NOTIFY messages from the master and then query the master server for the SOA record of the zone. When the query is answered, slaves check the serial number in the SOA record and if the serial number has changed, the slaves transfer the zone.
This interrupt-style notification, combined with polling, provides a good
balance between the slow propagation of data due to long refresh times
and periods of inconsistent data between authoritative servers when zone
data is updated.
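A master's NOTIFY behavior is controlled in its BIND 8 configuration file. The fragment below, reusing a zone from Example 3-1, is a sketch: `notify yes` is the BIND 8 default, and the address in `also-notify` is hypothetical.

```
zone "FRED.PARROT.BIRD.COM" in {
    type master;
    file "FRED_PARROT_BIRD_COM.DB";
    notify yes;                    // send NOTIFY messages on zone changes (the default)
    also-notify { 198.33.4.10; };  // hypothetical extra slave to notify
};
```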
3.7.7.2 Dynamic Update
DNS Dynamic Update, a BIND 8 feature, provides zone changes in real time, that is, dynamically, without having to change a database file and then signal the master server to reload the zone data. Most often these changes come from other network applications, like DHCP servers, that automatically assign an IP address to a host and then want to register the host name and IP address with BIND.
Dynamic Update provides:
Dynamic updates are remembered over system reboots or restart of the
BIND server. Whenever the BIND server starts up, it looks for and reads
the file where it logged updates (typically domain.db_log) and merges the updates
into its cache of zone data. If you define the logical
TCPIP$BIND_SERVER_MERGE_DYNAMIC_UPDATES, the dynamic updates are also
automatically written to the master zone database file that gets
reloaded at each startup of the BIND server.
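A dynamic update is typically sent with a tool such as nsupdate, which ships with BIND 8. The following is a sketch of an interactive session, with the host name and address borrowed from Example 3-2; exact prompts and syntax vary between BIND versions:

```
$ nsupdate
> update add birdy.ucx.ern.sea.com. 86400 IN A 9.20.208.47
>                                   ; a blank line sends the queued update
```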
3.8 BIND Server Configuration File
BIND reads information from an ASCII file called TCPIP$BIND.CONF. (On UNIX systems, the BIND 8 configuration file is named.conf.) This configuration file consists of statements that specify:
Example 3-1 shows an example of a BIND 8 configuration file.
Example 3-1 BIND 8 Configuration File

//-------------------------------------------------------------------
//
// Copyright (c) Digital Equipment Corporation, 1998
//
// TCPIP$BIND.CONF - BIND server configuration file
//
// IMPORTANT
//
// This file has been generated by the TCP/IP Services for OpenVMS
// TCPIP CONVERT /CONFIGURATION BIND command.
//
// File: SYS$SPECIFIC:[TCPIP$BIND]TCPIP$BIND.CONF
// Date: 27-Jan-1999 14:40:02
//
// See the DIGITAL TCP/IP Services for OpenVMS Management guide for
// instructions on editing and using this file.
//
//____________________________________________________________________

options {
    directory "SYS$SPECIFIC:[TCPIP$BIND]";
};

zone "FRED.PARROT.BIRD.COM" in {
    type master;
    file "FRED_PARROT_BIRD_COM.DB";
};

zone "0.0.127.IN-ADDR.ARPA" in {
    type master;
    file "127_0_0.DB";
};

zone "LOCALHOST" in {
    type master;
    file "LOCALHOST.DB";
};

zone "4.33.198.IN-ADDR.ARPA" in {
    type master;
    file "4_33_198_IN-ADDR_ARPA.DB";
};

zone "." in {
    type hint;
    file "ROOT.HINT";
};
Files residing on BIND server systems contain the database of information needed to resolve BIND queries. The following sections describe the four database files used by the server:
Detailed information on how to create and name these files is discussed
in the DIGITAL TCP/IP Services for OpenVMS Management guide.
3.9.1 Master Zone File
A primary master server maintains the master zone file. This file contains:
There is one master zone file for each zone for which the server has authority.
Example 3-2 shows a typical master zone file.
Example 3-2 Master Zone File

$ORIGIN ucx.ern.sea.com.
@        IN SOA  owl.ucx.ern.sea.com. pmaster.owl.ern.sea.com. (
                         23        ; Serial
                         600       ; Refresh
                         300       ; Retry
                         172800    ; Expire
                         43200 )   ; Minimum
;
         IN NS   owl.ucx.ern.sea.com.
         IN NS   condor.ucx.ern.sea.com.
;
thrush   IN A    9.20.208.53
condor   IN A    9.20.208.10
birdy    IN A    9.20.208.47
         IN MX   10  birdy.ucx.ern.sea.com.
         IN MX   100 inet-gw-1.pa.emu.com.
         IN MX   100 mts-gw.pa.emu.com.
         IN MX   200 crl.emu.com.
         IN MX   300 nester.emu.com.
seagull  IN A    9.20.208.30
         IN MX   10  seagull.ucx.ern.sea.com.
         IN MX   100 inet-gw-1.pa.emu.com.
         IN MX   100 mts-gw.pa.emu.com.
         IN MX   200 crl.emu.com.
         IN MX   300 nester.emu.com.
owl      IN A    9.20.208.72
         IN MX   10  owl.ucx.ern.sea.com.
         IN MX   100 inet-gw-1.pa.emu.com.
         IN MX   100 mts-gw.pa.emu.com.
         IN MX   200 crl.emu.com.
         IN MX   300 nester.emu.com.
peacock  IN A    9.20.208.73
         IN MX   10  pultdown.ucx.ern.sea.com.
         IN MX   100 inet-gw-1.pa.emu.com.
         IN MX   100 mts-gw.pa.emu.com.
         IN MX   200 crl.emu.com.
         IN MX   300 nester.emu.com.
redwing  IN A    9.20.208.79
         IN MX   10  redwing.ucx.ern.sea.com.
         IN MX   100 inet-gw-1.pa.emu.com.
         IN MX   100 mts-gw.pa.emu.com.
         IN MX   200 crl.emu.com.
         IN MX   300 nester.emu.com.
robin    IN A    9.20.208.47
         IN A    9.20.208.30
         IN A    9.20.208.72
For every host with an A record in the master zone file, there needs to be a way to map an IP address back to a host name. This is accomplished by using a zone file for a special domain called the IN-ADDR.ARPA domain.
The zone file for this domain contains PTR records that specify the reverse translations (address-to-host name) required for the zone. There is an IN-ADDR.ARPA zone file for each network represented in the master zone file, including the loopback interface.
Example 3-3 shows the contents of a typical reverse domain file.
Example 3-3 Reverse Domain File

$ORIGIN 208.20.9.in-addr.arpa.
@        IN SOA  owl.ucx.ern.sea.com. pmaster.owl.ucx.ern.sea.com. (
                         1         ; Serial
                         600       ; Refresh
                         300       ; Retry
                         172800    ; Expire
                         43200 )   ; Minimum
;
         IN NS   owl.ucx.ern.sea.com.
         IN NS   condor.ucx.ern.sea.com.
;
53       IN PTR  thrush.ucx.ern.sea.com.
10       IN PTR  condor.ucx.ern.sea.com.
47       IN PTR  birdy.ucx.ern.sea.com.
30       IN PTR  seagull.ucx.ern.sea.com.
72       IN PTR  owl.ucx.ern.sea.com.
73       IN PTR  peacock.ucx.ern.sea.com.
79       IN PTR  redwing.ucx.ern.sea.com.
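The reverse-mapping convention is mechanical: the octets of the address are reversed and suffixed with IN-ADDR.ARPA to form the owner name of the PTR record. A minimal Python sketch (the helper name is illustrative):

```python
# Sketch: derive the IN-ADDR.ARPA owner name for an IPv4 address,
# as used for the PTR records in Example 3-3.
import ipaddress

def ptr_owner_name(addr: str) -> str:
    """Return the IN-ADDR.ARPA owner name for an IPv4 address."""
    return ipaddress.ip_address(addr).reverse_pointer

# thrush's address from Example 3-2:
print(ptr_owner_name("9.20.208.53"))   # 53.208.20.9.in-addr.arpa
```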
The loopback interface files define the zone of the local loopback interface, known as LOCALHOST. There is a master zone file and a reverse domain file for LOCALHOST. The resource records in these files define LOCALHOST with the network address 127.0.0.1. The DIGITAL TCP/IP Services for OpenVMS configuration procedure creates these two files and names them LOCALHOST.DB and 127_0_0.DB.
Example 3-4 shows the contents of the master zone file for the loopback interface.
Example 3-4 Loopback Interface Zone File

;
; BIND data file for local loopback interface (forward translation).
;
; Provided for Digital TCP/IP Services for OpenVMS.
;
$ORIGIN localhost.
@        1D IN SOA  @ root (
                         42        ; Serial
                         3H        ; Refresh
                         15M       ; Retry
                         1W        ; Expire
                         1D )      ; Minimum
;
         1D IN NS   @
         1D IN A    127.0.0.1
Example 3-5 shows the contents of the reverse domain file for the loopback interface.
Example 3-5 Loopback Reverse Domain File

;
; BIND data file for local loopback interface (reverse translation).
;
; Provided for Digital TCP/IP Services for OpenVMS.
;
$ORIGIN 0.0.127.in-addr.arpa.
@        1D IN SOA  localhost. root.localhost. (
                         42        ; Serial
                         3H        ; Refresh
                         15M       ; Retry
                         1W        ; Expire
                         1D )      ; Minimum
;
         1D IN NS   localhost.
1        1D IN PTR  localhost.
The hints file contains information about the authoritative name servers for top-level domains. You can obtain this information from the InterNIC; however, the TCP/IP Services TCPIP$CONFIG procedure creates this file for you during configuration.
Example 3-6 shows the contents of a typical hints file.
Example 3-6 Hints File

; Data file for initial cache data for root domain servers.
;
; Provided for DIGITAL TCP/IP Services for OpenVMS.
;
; <<>> DiG 8.1 <<>> @192.5.5.241
; (1 server found)
;; res options: init recurs defnam dnsrch
;; got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10
;; flags: qr aa rd; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 13
;; QUERY SECTION:
;;      ., type = NS, class = IN
;
;; ANSWER SECTION:
.                     6D IN NS      H.ROOT-SERVERS.NET.
.                     6D IN NS      B.ROOT-SERVERS.NET.
.                     6D IN NS      C.ROOT-SERVERS.NET.
.                     6D IN NS      D.ROOT-SERVERS.NET.
.                     6D IN NS      E.ROOT-SERVERS.NET.
.                     6D IN NS      I.ROOT-SERVERS.NET.
.                     6D IN NS      F.ROOT-SERVERS.NET.
.                     6D IN NS      G.ROOT-SERVERS.NET.
.                     6D IN NS      J.ROOT-SERVERS.NET.
.                     6D IN NS      K.ROOT-SERVERS.NET.
.                     6D IN NS      L.ROOT-SERVERS.NET.
.                     6D IN NS      M.ROOT-SERVERS.NET.
.                     6D IN NS      A.ROOT-SERVERS.NET.
;
;; ADDITIONAL SECTION:
H.ROOT-SERVERS.NET.   5w6d16h IN A  128.63.2.53
B.ROOT-SERVERS.NET.   5w6d16h IN A  128.9.0.107
C.ROOT-SERVERS.NET.   5w6d16h IN A  192.33.4.12
D.ROOT-SERVERS.NET.   5w6d16h IN A  128.8.10.90
E.ROOT-SERVERS.NET.   5w6d16h IN A  192.203.230.10
I.ROOT-SERVERS.NET.   5w6d16h IN A  192.36.148.17
F.ROOT-SERVERS.NET.   5w6d16h IN A  192.5.5.241
G.ROOT-SERVERS.NET.   5w6d16h IN A  192.112.36.4
J.ROOT-SERVERS.NET.   5w6d16h IN A  198.41.0.10
K.ROOT-SERVERS.NET.   5w6d16h IN A  193.0.14.129
L.ROOT-SERVERS.NET.   5w6d16h IN A  198.32.64.12
M.ROOT-SERVERS.NET.   5w6d16h IN A  202.12.27.33
A.ROOT-SERVERS.NET.   5w6d16h IN A  198.41.0.4
;
;; Total query time: 608 msec
;; FROM: ucx.ern.sea.com to SERVER: 192.5.5.241
;; WHEN: Mon May 18 15:26:19 1998
;; MSG SIZE  sent: 17  rcvd: 436
The BIND resolver is a set of routines that is linked into each network application needing DNS name resolution services. The resolver formulates one or more queries based on the resolver's configuration and information supplied by network applications and sends the queries to a server to obtain an answer.
You can configure the following resolver features:
The DIGITAL TCP/IP Services for OpenVMS Management guide contains information on how to configure the
resolver.
3.10.1 Default Domain
The default domain is the domain in which the client host resides. When
resolving a query with just the host name supplied, the resolver
appends the default domain to the host name and then processes the
query. This is a convenience for the user. It saves typing a fully
qualified domain name.
3.10.2 Search List
The search list is another convenience for the user. The default
search list is derived from the default domain and is applied when the
user enters a domain name that is not fully qualified.
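The behavior described in these two sections can be sketched as a candidate-name generator. This is a simplification (real resolvers also apply heuristics about when to try the name as typed first), and the search list shown is hypothetical:

```python
# Sketch: expand a name that is not fully qualified using a search
# list derived from the default domain.
def candidate_names(name: str, search_list: list[str]) -> list[str]:
    if name.endswith("."):               # already fully qualified
        return [name]
    # Try each search-list suffix, then the name as typed.
    return [f"{name}.{suffix}." for suffix in search_list] + [name + "."]

# Assuming a client in the ucx.ern.sea.com domain from Example 3-2:
search = ["ucx.ern.sea.com", "ern.sea.com", "sea.com"]
print(candidate_names("owl", search))
```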
3.10.3 Name Servers
You can configure the resolver to query any name server including the local host, and you can specify a maximum of three name servers. The resolver queries each name server in the order listed until it receives an answer or times out.
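The server-selection loop can be sketched as follows; the `query` callable is a stand-in for an actual DNS exchange and is assumed to return None on timeout:

```python
# Sketch: query up to three configured name servers in order until
# one answers or all time out.
from typing import Callable, Optional

MAX_SERVERS = 3

def resolve(servers: list[str], query: Callable[[str], Optional[str]]) -> Optional[str]:
    for server in servers[:MAX_SERVERS]:
        answer = query(server)           # None means the server timed out
        if answer is not None:
            return answer
    return None                          # every configured server timed out
```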
The Network File System (NFS) is a facility for sharing files in a heterogeneous environment of hardware platforms, operating systems, and networks. NFS allows users to access files distributed across a network in such a way that remote files appear as if they reside on the local host. NFS has become a standard for the exchange of data between machines running different operating systems.
Another way the NFS protocol achieves portability between different machines, operating systems, network architectures, and transport protocols is through the use of Remote Procedure Calls (RPCs) and the External Data Representation (XDR), two network programming constructs that handle reliability issues. For more information about RPCs and XDR, see the DIGITAL TCP/IP Services for OpenVMS ONC RPC Programming manual.
Using NFS is simple. Configuring and implementing NFS, however, are more complex. A summary of NFS concepts and considerations is included in this chapter, but you should refer to the DIGITAL TCP/IP Services for OpenVMS Management guide for detailed configuration and implementation information.
Specific topics covered in this chapter include:
NFS was originally designed for UNIX systems, so it follows UNIX conventions for files, file types, file names, file ownership, user information, and so forth. NFS in an OpenVMS environment must accommodate the differences between UNIX and OpenVMS in such a way that when an OpenVMS user accesses a file from a UNIX system, the file looks like an OpenVMS file. Conversely, when a UNIX user accesses a file from an OpenVMS system, it looks like a UNIX file.
In a local environment, file systems reside on physical disks directly connected to the system. NFS provides a distributed environment where the users on one system can access files that physically reside on disks attached to another networked system. These files are called remote file systems.
Remote files are made accessible to local users through the process called mounting. After a file system or the entire disk is mounted, users access files through the operating system's services. A mount operation makes a remote file system, or a subtree within it, part of the local file system.
Some general characteristics of NFS include the following:
Table 4-1 defines basic NFS terms.
Term | Definition |
---|---|
File system | Top-level directory, its lower-level directories, and all the files in those directories. The top-level directory is called the master file directory (MFD) in OpenVMS file systems and the root in UNIX style file systems. |
Container file system | File system on an OpenVMS host that has a UNIX style directory structure and UNIX style file attributes. Created by the DIGITAL TCP/IP Services for OpenVMS software. |
UNIX style file system | Same as container file system. |
OpenVMS file system | File system with an OpenVMS directory structure and OpenVMS file attributes. |
UNIX file system | File system on a UNIX host with a UNIX directory structure and UNIX file attributes. |
Proxy | Record in the proxy database that gives a remote user access to local file systems or a local user access to remote file systems. |
Disk | Physical device, or volume, on which a file system resides. |
Mapping | Process that makes local OpenVMS disks and container file systems accessible to NFS client users, thus identifying them as "NFS file systems." |
Exporting | Adding a file system name to the export database. This allows an NFS client to mount a mapped local file system. |
Mounting | Process that makes physically remote file systems, directories, or individual files available to NFS clients. |
Mount point | Directory location of the mounted file system. |
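On OpenVMS, the mapping, exporting, and mounting steps defined in Table 4-1 correspond to management commands along the lines of the following sketch. The device, path, and host names are hypothetical, and the exact commands and qualifiers are documented in the DIGITAL TCP/IP Services for OpenVMS Management guide:

```
$ TCPIP
TCPIP> MAP "/work" WORK$DISK:                  ! identify a local disk as an NFS file system
TCPIP> ADD EXPORT "/work" /HOST=*              ! let NFS clients mount the mapped file system
TCPIP> MOUNT DNFS1: /HOST="owl" /PATH="/work"  ! on a client: make the remote file system local
```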