DECdfs for OpenVMS Management Guide




Chapter 1
Introduction to DECdfs

DECdfs for OpenVMS client and server software allows users on a local OpenVMS or OpenVMS Cluster system to access files on a remote OpenVMS or OpenVMS Cluster system in a DECnet Phase IV or DECnet Phase V network. DECdfs enables the remote disk to function similarly to a local disk. A client user can set his or her default device and directory to the DECdfs device, edit files there, run programs, or copy files to his or her local device and directory. DECdfs includes the following advantages:

The DECdfs file access protocol works in any DECnet environment but performs especially well over high-speed LAN lines. This speed makes for easy access: users can access files on a remote server as quickly as on a local device.

Figure 1-1 illustrates the client-server relationship between two systems running DECdfs. The figure shows the DCL TYPE command at the client system, which displays a file that resides on a disk at the server. Note that the command does not include a node name or access-control information, as would be necessary in an ordinary network operation.

Figure 1-1 DECdfs Client-Server Relationship


DECdfs can play a special role in an environment where many users have systems with limited disk space. If you off-load files that require significant disk space to a single server, you free resources at each client. If you also move files that require frequent backup operations to the server, you lessen the time and cost of multiple backups.

OpenVMS Cluster environments allow multiple systems to share files. Like clusters, DECdfs provides file sharing, but in contrast to clusters, it allows client systems to be autonomous.

When selecting the applications to use with DECdfs, note the features and restrictions listed in Table 1-1.

Table 1-1 DECdfs Features and Restrictions
DECdfs allows users to...                                  But does not allow them to...
Create and manipulate directories                        | Perform logical and physical I/O
Share a file for concurrent reading with other users     | Share a file when a user is writing to the file
Use all file QIO function codes                          | Use the shared-write option; DECdfs converts the shared-write option to the exclusive-write option
Use all Record Management Services (RMS) features or the QIO interface | Install files as known images on the client if the files reside on a disk at the server
Run applications at the client that use mapped sections on the server | Use system page files or swap files on the server

The following scenarios illustrate the usefulness of DECdfs:

The remainder of this chapter describes the basic components of a DECdfs environment and the interface for managing DECdfs.

1.1 Components of a DECdfs Environment

As described in the introduction to this chapter, DECdfs creates a client-server relationship among network nodes. A node can be a client, a server, or both. A client-and-server node can share one of its disks with other nodes while accessing files on another node's disks. This manual refers separately to the client and server functions at such nodes. When the manual refers to a client or a server, it refers to functions that can exist on the same node unless the reference is specifically to a remote client or remote server.

The following sections describe the components of a DECdfs environment.

1.1.1 Digital Distributed Name Service

Like DECdfs, the Digital Distributed Name Service (DECdns) implements a client-server relationship between the user of resources and the provider of resources. As a user of DECdns, each DECdfs node is a DECdns client. The DECdns system that provides information about the location of files is a DECdns server.

DECdns provides a network-wide registry of shareable resources. When available, it maintains a record of the location of the file resources that DECdfs servers make available and that DECdfs clients use. DECdns is available on the following systems:

DECdns is not available on OpenVMS Alpha systems unless the system is running DECnet Phase V. If a node does not provide DECdns software, DECdfs cannot access the DECdns registry of available resources. In this case, DECdfs requires users on nodes without DECdns to specify the node name where the resource is located. Using DECdfs on nodes without DECdns is described in Section 2.3.2.2.

If DECdns is available, whenever a DECdfs server makes available a group of files, it notifies the DECdns server. DECdns records the global (networkwide) name of the files and address information needed to communicate with the DECdfs server. To use the DECdfs server, a DECdfs client queries the DECdns server for the DECdfs server's address information. The client then passes the node address to DECnet for setting up a network connection (link) between the DECdfs client node and the DECdfs server node. After the client receives the server address information from DECdns, it communicates directly with the server. Figure 1-2 illustrates how DECdfs interacts with DECdns.

If a DECdfs client wants to access a resource on a DECdfs server but the client cannot access the DECdns server that has registered the resource, the client must specify the node name of the DECdfs server it wants to use. If a DECdfs server cannot access the DECdns server, all DECdfs clients that want to use the DECdfs server's resource must specify that server's node name.

Figure 1-2 Interaction Between DECdfs and the DECdns Server


DECdns registers information about network resources in a namespace, which is the registry of network names managed by DECdns. Certain DFS$CONTROL commands listed in Section 1.4 allow you to add and remove information from the DECdns namespace. If you need to move a DECdfs disk from one server to another, you can simply remove and reregister the DECdns information. The users at a client never need to know that the location of the files has changed. Users on nodes that are not running DECdns, however, do need to know of a resource relocation, because these users explicitly specify the name of the server where the resource resides.
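For example, to move the files behind an access point from one server to another, you remove the registration at the old server and add it at the new one. The following sketch uses the illustrative access point name HELP and device directory DUA0:[000000]; see Chapter 4 for the exact syntax of the ADD and REMOVE commands:


DFS> REMOVE ACCESS_POINT HELP             ! entered at the old server
DFS> ADD ACCESS_POINT HELP DUA0:[000000]  ! entered at the new server


Because clients locate the access point through DECdns, clients that use DECdns can continue to mount HELP without change after the move.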

Consult with the DECdns manager (a person responsible for managing DECdns) at your site before setting up DECdfs on your system. The DECdns manager needs to know how you plan to set up DECdfs, and you need information on how the DECdns manager has set up DECdns. For example, some DFS$CONTROL commands require that you specify names that conform to the DECdns naming conventions in your network. A DECdns manager can create a single-directory namespace or a hierarchical namespace. It is important to know which type of namespace your network is using so that you can use the DFS$CONTROL commands correctly. The DECdns manager must also inform DECdfs users of any changes to access points that would prevent access.

1.1.2 Access Points

An access point represents the file resources that a server makes available to clients. It allows the server to name the available resources and allows the client to find the resources. An access point refers to a specific directory (usually the master file directory) on a specific device.

At a server, the DECdfs manager decides what directory on what device to make available to DECdfs client users. The access point gives access to that directory and all subdirectories. The master file directory is the default choice. The DECdfs manager gives the access point a name. Using the DFS$CONTROL command ADD ACCESS_POINT, the manager then registers the access point name with the local DECdfs server database (see Section 1.1.3.1) and with DECdns. If DECdns is not available, the access point is recorded only in the server database. See Section 2.3.2.2 for information on adding access points on systems without DECdns.
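For example, to make the master file directory of device DUA0: available under the access point name HELP (the names used later in Figure 1-3), the DECdfs manager at the server might enter a command like the following sketch. The access point name must conform to the DECdns naming conventions in your network; see Chapter 4 for the full command syntax:


DFS> ADD ACCESS_POINT HELP DUA0:[000000]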

At a client, the DECdfs manager enters the DFS$CONTROL command MOUNT to make that access point available on the system. DFS$CONTROL provides a DCL-like interface, and mounting the access point is much like mounting an actual device. The MOUNT command functions as follows:

When a user on a client system first mounts an access point, DECdfs queries DECdns to find out which node serves that access point. Systems not running DECdns must use the /NODE qualifier to specify the DECdfs server name. If a DECdfs connection to the server does not already exist, the client sets one up. If a DECnet logical link does not already exist, the client also requests DECnet to provide one.
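For example, a client manager might mount an access point and assign a local logical name to the resulting client device with commands like the following sketch. The names HELP, HELP_LIBRARY, and EIDER are illustrative; see Chapter 4 for the exact MOUNT syntax and qualifiers:


DFS> MOUNT HELP HELP_LIBRARY
DFS> MOUNT HELP /NODE=EIDER HELP_LIBRARY  ! on a system without DECdns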

The DECdfs Communication Entity creates one connection for all communication between a server and a particular client. This single connection provides DECdfs service to any number of users at the client. The users can mount any number of access points on the server and open any number of files.

End users on the client can then use the mounted client device as if it were a local device. If you assign a logical name to the client device, access to the files can be simple. End users enter standard DCL file commands requesting directories of, or access to, files on the client device. DECdfs intervenes and interacts with DECnet software to redirect these requests across the network to the server of the actual device.

Figure 1-3 illustrates the correlation between a client device and an access point named HELP.

Figure 1-3 Correlation Between a Client Device and an Access Point


In Figure 1-3, a correlation exists between client device DFSC1001: on node CARDNL and the access point that refers to DUA0:[000000] on node EIDER. A user at CARDNL can access files subordinate to DUA0:[000000] by specifying the device DFSC1001: with the file specification in DCL commands. If the DFSC1001: client device has the logical name HELP_LIBRARY, as in the previous example, the user can specify HELP_LIBRARY in the file specification.
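A user at CARDNL might then enter ordinary DCL commands against the client device, as in the following sketch; the file name shown is hypothetical:


$ DIRECTORY DFSC1001:[000000]
$ TYPE HELP_LIBRARY:EXAMPLE.TXT  ! hypothetical file name


DECdfs redirects each request to node EIDER, so the commands behave as if DUA0:[000000] were a local directory.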

1.1.3 DECdfs Management Components

DECdfs is composed of three cooperating management components: the server, the client, and the Communication Entity. Each component has a name, attributes that describe it, and an interface that supports management operations. Sections 1.1.3.1, 1.1.3.2, and 1.1.3.3 describe the function of each component.

1.1.3.1 The Server

The server implements the server function on a node, as follows:

The server also contains the server database, which is the local registry of resources. Information in the server database usually matches the information in the DECdns namespace, if DECdns is available. Occasionally, however, the server database and the DECdns information may differ (see Section 2.3.6).

The server is implemented by the following files:

The installation procedure places these files respectively in the SYS$SYSTEM and SYS$LOADABLE_IMAGES directories.

To manage a DECdfs server, you must prepare the system for the expected amount of use. You can display and set various parameters of the server to improve performance and control use. Chapter 2 describes the management tasks for servers.

1.1.3.2 The Client

The client implements the client function on a node. The client resides in the DFSC device driver and is represented by each DFSC device. The client functions as follows:

The client is implemented by the SYS$LOADABLE_IMAGES:DFSCDRIVER.EXE file, which is the DECdfs client device driver.

Chapter 3 describes the management tasks for a client.

1.1.3.3 The Communication Entity

The Communication Entity passes information between the server or client and the network software. The Communication Entity is automatically part of any DECdfs installation, whether the node is a client only or both a client and a server. Without the Communication Entity, the client and server would not be able to communicate across the network.

Client requests to servers and server responses to clients pass through the Communication Entity to acquire network services. The Communication Entity functions as follows:

Figure 1-4 illustrates the flow of information between client and server as it passes through the Communication Entity and the DECnet software.

Figure 1-4 Server-Client Information Flow


A DECdfs connection represents a relationship between a DECdfs server and client through the Communication Entity.

The client initiates a connection to a server. The client then uses that connection for all communication to all access points on that server. There can be two DECdfs connections between any two nodes only if each node has both a client and a server and if each node's client is accessing the other node's server. To display the connections on your system, enter the following DFS$CONTROL command:


DFS> SHOW COMMUNICATION/CURRENT_CONNECTIONS

A DECdfs connection may or may not have an active DECnet link at any given time. When a client first mounts an access point, the Communication Entity requests a logical link from DECnet. All communication between the client and server passes through that link. When the Communication Entity finds that the link was not used during a specified timeout period, it disconnects the link, giving resources back to the server. However, DECdfs stores the server's network address information and maintains the DECdfs connection. The Communication Entity provides links for that connection as needed and times them out as appropriate, until the client device is dismounted.

Occasionally, DECdfs creates a connection between a client and server in another way. The DFS$CONTROL command SHOW ACCESS_POINT/FULL displays access point names (from DECdns) and status (from server databases). Entering this command causes DECdfs to create a connection between your node and each node from which you display server database information. Some commands, such as ADD ACCESS_POINT and REMOVE ACCESS_POINT, create DECdfs connections to the local server. You might see such connections when you display the connections on your system.

The following files implement the Communication Entity:

The installation procedure places these files respectively in the SYS$SYSTEM and SYS$LOADABLE_IMAGES directories.

1.2 Client-Server Consistency Issues

This section contains information on system times and logical names on the client and server. See Section 2.2.2.3 for information about consistency of user names on the client and server.

1.2.1 System Times on the Client and Server

DECdfs does not coordinate the system times on the client and server. Some applications, such as the DEC Code Management System (CMS), need consistent timestamps to work correctly; therefore, you must coordinate the system times of clients and servers manually.

Note

If the client and server are in the same time zone, a network time synchronization service can help eliminate problems caused by inconsistent system times. If the client and the server are not in the same time zone, you should set the time on both to Greenwich Mean Time, to avoid time problems caused by geographical differences.

DECdfs treats timestamps differently within files and in file headers. Timestamps recorded in records within files are based on the client system time. Timestamps recorded in file headers (and displayed with the DIRECTORY/DATE or DIRECTORY/FULL command) are usually based on the server system time. Exceptions do exist. For example, if you use the COPY command without specifying an output file name, the command sets the output file's creation date equal to the input file's creation date. The file's timestamp is whatever system time the input file originally had.

1.2.2 Logical Names on the Client and Server

Logical names are valid only on the local system. The client system does not have information about logical names defined on the server. If a user needs to use a logical name to access files on the server, you need to define that logical name on the client system. For example, if you use DECdfs to access a CMS library that users specify with a logical name, define the logical name on the client system to represent the library.
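For example, if server users reach a CMS library through a logical name, you might define the same logical name on the client to point at the DECdfs client device. The logical name and directory in the following sketch are hypothetical; DFSC1001: is the client device from Figure 1-3:


$ DEFINE/SYSTEM CMS_LIBRARY DFSC1001:[PROJECT.CMSLIB]  ! hypothetical names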

1.3 Comparison of Standard OpenVMS File Access and DECdfs File Access

DECdfs divides standard OpenVMS file access functions between two cooperating nodes.

Figure 1-5 illustrates the standard OpenVMS file access functions and shows where DECdfs divides the file access functions between the client and server. In the figure, an application requests access to a file by issuing either a Record Management Services (RMS) call or a QIO request directly to the file system. The Files-11 Extended QIO Processor (XQP) opens and accesses the file through the disk driver.

Figure 1-5 Standard OpenVMS File Access Functions


Note

The direction of the arrows in Figure 1-5 indicates the flow of control (not the flow of information, which exists in both directions).

Figure 1-6 shows the same file access functions as those in Figure 1-5 and illustrates how DECdfs divides the functions between the client and server nodes. The application and RMS remain on the client. To reach the disk driver and the Files-11 XQP, however, the application's request passes through the DECdfs drivers and over the network.

Figure 1-6 DECdfs File Access Functions


Figure 1-6 shows the following activity at the client:

Figure 1-6 shows the following activity at the server:

1.4 The DFS$CONTROL Commands

DFS$CONTROL is a set of commands that you use to manage DECdfs. The commands allow you to set up, monitor, tune, and customize your DECdfs environment, particularly on a server. However, most parameters of the DECdfs software have default values that provide a satisfactory balance between economical use of resources and good performance. Generally, you need to use DFS$CONTROL only to start the DECdfs client, server, and Communication Entity and to add access points on a server or mount them on a client.

Table 1-2 lists the DFS$CONTROL commands.

Table 1-2 DFS$CONTROL Commands
Command Description
Entered at Either a Server or Client
EXIT Terminates the DFS$CONTROL session.
HELP Displays information on DFS$CONTROL commands.
SET COMMUNICATION Sets parameters for the DECdfs Communication Entity.
SHOW ACCESS_POINT Displays the names of access points stored by DECdns.
SHOW COMMUNICATION Displays information about the DECdfs Communication Entity.
SHOW VERSIONS Displays version information for DECdfs components.
SHUTDOWN COMMUNICATION Stops DECdfs communication after completing file operations in progress.
SNAPSHOT COMMUNICATION Records the current communication counters in DFS$CONTROL memory or in a specified file.
START COMMUNICATION Starts the Communication Entity.
STOP COMMUNICATION Stops DECdfs communication immediately.
Entered at a Server Only
ADD ACCESS_POINT Makes an access point available by registering it in the server database and with DECdns.
REMOVE ACCESS_POINT Removes an access point name from the server database and from DECdns.
SET SERVER Sets parameters for the DECdfs server.
SHOW SERVER Displays information about the DECdfs server.
SNAPSHOT SERVER Records the current server counters in DFS$CONTROL memory or in a specified file.
START SERVER Starts the DECdfs server.
STOP SERVER Stops the DECdfs server.
Entered at a Client Only
DISMOUNT Makes a DECdfs client device (and therefore an access point) unavailable to users.
MOUNT Mounts an access point as a DECdfs client device.
SHOW CLIENT Displays information about a DECdfs client device.
SNAPSHOT CLIENT Records the current client counters in DFS$CONTROL memory or in a specified file.

See Chapter 4 for a dictionary of DFS$CONTROL commands.

1.4.1 Using DFS$CONTROL Commands in DECdfs Command Files

For routine operation, DECdfs command files execute the basic DFS$CONTROL commands that set up your DECdfs environment. The DECdfs for OpenVMS Installation Guide describes the startup and configuration files that you edit and execute. These files are located in the SYS$MANAGER and SYS$STARTUP directories. Table 1-3 summarizes these files.

Table 1-3 DECdfs System Files
File Comment
DFS$STARTUP.COM Do not edit this file, but note that it executes DFS$CONFIG and DFS$SYSTARTUP, both of which you may edit.
DFS$CONFIG.COM This file contains the SET commands that set parameters for the DECdfs server and Communication Entity. DFS$STARTUP executes this file before it starts the DECdfs processes. The commands have default values, so edit this file only if you want to change the parameter values.
DFS$SYSTARTUP.COM This file contains commands that add access points at a server and mount access points at a client. On a server, keep this file up to date to add the access points each time DECdfs starts up. On a client, use this command file to mount access points for systemwide use.
OpenVMS VAX Version 5.5-2
SYSTARTUP_V5.COM Edit the system SYSTARTUP_V5 file so that it executes the SYS$STARTUP:DFS$STARTUP command file. DECnet startup must complete before DECdfs startup begins. SYSTARTUP_V5.COM is in the SYS$MANAGER directory.
OpenVMS VAX Version 6.n
OpenVMS Alpha Version 6.n
OpenVMS Alpha Version 7.n
SYSTARTUP_VMS.COM Edit the system SYSTARTUP_VMS.COM file so that it executes the SYS$STARTUP:DFS$STARTUP command file. DECnet startup must complete before DECdfs startup begins. SYSTARTUP_VMS.COM is in the SYS$MANAGER directory.
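For example, the lines added to the site-specific startup file (SYSTARTUP_VMS.COM or SYSTARTUP_V5.COM) might look like the following sketch. The placement is what matters: DECnet startup must complete before the DFS$STARTUP invocation:


$! DECnet startup must already have completed at this point
$ @SYS$STARTUP:DFS$STARTUP.COM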

1.4.2 Using DFS$CONTROL Commands Interactively

Some DFS$CONTROL commands, such as the SHOW commands, are appropriate only for interactive use. You can enter DFS$CONTROL commands at the OpenVMS system prompt in one of the following ways:

You can use other commands either interactively or by executing the DFS$CONFIG or DFS$SYSTARTUP command files. If you choose to interactively enter a command that one of these files usually executes, edit the file to reflect any new values that you have set. This ensures that, for DFS$CONFIG, the next startup uses the most recent value or, for DFS$SYSTARTUP, your system adds or mounts all access points.

1.4.3 Getting Help with DECdfs

The DFS$CONTROL HELP command displays a list of topics on which you can obtain information. Entering HELP and a command name displays information on the specified command.

Brief DECdfs help is also available at the DCL prompt:


$ HELP DFS

