Use the SYSMAN command CONFIGURATION SET TIME to set the time across the cluster. This command issues warnings if the time on all nodes cannot be set within certain limits. Refer to the OpenVMS System Manager's Manual for information about the SET TIME command.
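For example, a minimal sequence for setting the time clusterwide from a single node might look like the following (the time value is illustrative):

$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> CONFIGURATION SET TIME 14:30:00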
Chapter 6
Cluster Storage Devices
One of the most important features of OpenVMS Cluster systems is the ability to provide access to devices and files across multiple systems.
In a traditional computing environment, a single system is directly attached to its storage subsystems. Even though the system may be networked with other systems, when the system is shut down, no other system on the network has access to its disks or any other devices attached to the system.
In an OpenVMS Cluster system, disks and tapes can be made accessible to one or more members. Thus, if one computer shuts down, the remaining computers still have access to the devices.
6.1 Data File Sharing
Cluster-accessible devices play a key role in OpenVMS Clusters because, when you place data files or applications on a cluster-accessible device, computers can share a single copy of each common file. Data sharing is possible between VAX computers, between Alpha computers, and between VAX and Alpha computers.
In addition, multiple systems (VAX and Alpha) can write to a shared disk file simultaneously. It is this ability that allows multiple systems in an OpenVMS Cluster to share a single system disk; multiple systems can boot from the same system disk and share operating system files and utilities to save disk space and simplify system management.
Note: Tapes do not allow multiple systems to access a tape file simultaneously.
6.1.1 Access Methods
Depending on your business needs, you may want to restrict access to a particular device to users on the computer that is directly connected (local) to the device. Alternatively, you may decide to set up a disk or tape as a served device so that any user on any OpenVMS Cluster computer can allocate and use it.
Table 6-1 describes the various access methods.
Method | Device Access | Comments | Illustrated in |
---|---|---|---|
Local | Restricted to the computer that is directly connected to the device. | Can be set up to be served to other systems. | Figure 6-3 |
Dual ported | Using either of two physical ports, each of which can be connected to separate controllers. A dual-ported disk can survive the failure of a single controller by failing over to the other controller. | As long as one of the controllers is available, the device is accessible by all systems in the cluster. | Figure 6-1 |
Shared | Through a shared interconnect to multiple systems. | Can be set up to be served to systems that are not on the shared interconnect. | Figure 6-2 |
Served | Through a computer that has the MSCP or TMSCP server software loaded. | MSCP and TMSCP serving are discussed in Section 6.3. | Figures 6-2 and 6-3 |
Dual pathed | Possible through more than one path. | If one path fails, the device is accessed over the other path. Requires the use of allocation classes (described in Section 6.2.1) to provide a unique, path-independent name. | Figure 6-2 |
Note: The path to an individual disk may appear to be local from some nodes and served from others.
When a storage subsystem is connected directly to a single system, the availability of the subsystem depends on the availability of that system. To increase the availability of these configurations, OpenVMS Cluster systems support dual porting, dual pathing, and MSCP and TMSCP serving.
Figure 6-1 shows a dual-ported configuration, in which the disks have independent connections to two separate computers. As long as one of the computers is available, the disk is accessible by the other systems in the cluster.
Figure 6-1 Dual-Ported Disks
Note: Disks can also be shadowed using Volume Shadowing for OpenVMS. The automatic recovery from system failure provided by dual porting and shadowing is transparent to users and does not require any operator intervention.
Figure 6-2 shows a dual-pathed DSSI and Ethernet configuration. The disk devices, accessible through the shared DSSI interconnect, are MSCP served to the client nodes on the LAN.
Rule: A dual-pathed DSA disk cannot be used as a system disk for a directly connected CPU.

Because a device can be on line to only one controller at a time, only one of the server nodes can use its local connection to the device. The second server node accesses the device through the MSCP server (or the TMSCP server). If the computer that is currently serving the device fails, the other computer detects the failure and fails the device over to its local connection. The device thereby remains available to the cluster.
Dual-pathed disks or tapes can be failed over between two computers that serve the devices to the cluster, provided that:

- Both computers use the same nonzero allocation class, so that the device has the same path-independent name on each system (see Section 6.2.1).
- Both computers are running the MSCP server for disks, the TMSCP server for tapes, or both.
Caution: Failure to observe these requirements can endanger data integrity.
You can set up HSC or HSJ storage devices to be dual ported between two storage subsystems, as shown in Figure 6-3.
Figure 6-3 Configuration with Cluster-Accessible Devices
By design, HSC and HSJ disks and tapes are directly accessible by all OpenVMS Cluster nodes that are connected to the same star coupler. Therefore, if the devices are dual ported, they are automatically dual pathed. Computers connected by CI can access a dual-ported HSC or HSJ device by way of a path through either subsystem connected to the device. If one subsystem fails, access fails over to the other subsystem.
Note: To control the path that is taken during failover, you can specify a preferred path to force access to disks over a specific path. Section 6.1.3 describes the preferred-path capability.
6.1.3 Specifying a Preferred Path
The operating system supports specifying a preferred path for DSA disks, including RA series disks and disks that are accessed through the MSCP server. (This function is not available for tapes.) If a preferred path is specified for a disk, the MSCP disk class drivers use that path for the first attempt to locate the disk and bring it on line with a MOUNT command, and for failover of an already mounted disk.
In addition, you can initiate failover of a mounted disk to force the disk to the preferred path or to use load-balancing information for disks accessed by MSCP servers.
You can specify the preferred path by using the SET PREFERRED_PATH DCL command or by using the $QIO function (IO$_SETPRFPATH), with the P1 parameter containing the address of a counted ASCII string (.ASCIC). This string is the node name of the HSC or HSJ, or of the OpenVMS system that is to be the preferred path.
Rule: The node name must match an existing node running the MSCP server that is known to the local node.
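For example, to make an HSC subsystem the preferred path for disk $1$DUA17 and force an immediate failover to it, a command like the following might be used (the node name HSC005 is illustrative, and the /HOST and /FORCE qualifier forms are assumptions; refer to the DCL Dictionary entry for the exact syntax):

$ SET PREFERRED_PATH $1$DUA17: /HOST=HSC005 /FORCE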
Reference: For more information about the use of the SET PREFERRED_PATH DCL command, refer to the OpenVMS DCL Dictionary: N--Z. For more information about the use of the IO$_SETPRFPATH function, refer to the OpenVMS I/O User's Reference Manual.
6.2 Naming OpenVMS Cluster Storage Devices
In the OpenVMS operating system, a device name takes the form of ddcu, where:

- dd represents the predefined code for the device type (for example, DU for MSCP disks or MU for TMSCP tapes)
- c represents the controller designation (a letter)
- u represents the unit number
For CI or DSSI devices, the controller designation is always the letter A, and the unit number is selected by the system manager.
For SCSI devices, the controller letter is assigned by OpenVMS, based on the system configuration. The unit number is determined by the SCSI bus ID and the logical unit number (LUN) of the device.
Because device names must be unique in an OpenVMS Cluster, and because every cluster member must use the same name for the same device, OpenVMS adds a prefix to the device name, as follows:

- If the device is attached to a single computer, the prefix is the node name:
node$ddcu
- If the device is accessible through multiple computers, the prefix is an allocation class:
$allocation-class$ddcu
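For example (the node name VEGA is illustrative), a disk DUA8 attached only to node VEGA is named VEGA$DUA8. If the same disk is dual pathed between two nodes that both use allocation class 1, it is named $1$DUA8 throughout the cluster, regardless of which path serves it.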
6.2.1 Allocation Classes
The purpose of allocation classes is to provide unique and unchanging device names. The device name is used by the OpenVMS Cluster distributed lock manager in conjunction with OpenVMS facilities (such as RMS and the XQP) to uniquely identify shared devices, files, and data.
Allocation classes are required in OpenVMS Cluster configurations where storage devices are accessible through multiple paths. Without the use of allocation classes, device names that relied on node names would change as access paths to the devices change.
Prior to OpenVMS Version 7.1, only one type of allocation class existed, and it was node based; it was called, simply, the allocation class. OpenVMS Version 7.1 introduced a second type, the port allocation class, which is specific to a single interconnect and is assigned to all devices attached to that interconnect. Port allocation classes were originally designed for naming SCSI devices; their use has since been expanded to additional device types: floppy disks, PCI RAID controller disks, and IDE disks.
The use of port allocation classes is optional. They are designed to solve the device-naming and configuration conflicts that can occur in certain configurations, as described in Section 6.2.3.
To differentiate between the node-based allocation class and the new port allocation class, the term node allocation class is now assigned to the earlier type.
All nodes that have direct access to the same multipathed device must use the same nonzero value for the node allocation class. Multipathed MSCP controllers also have an allocation class parameter, which is set to match that of the connected nodes. (If the allocation class does not match, the devices attached to the nodes cannot be served.)
6.2.2 Specifying Node Allocation Classes
A node allocation class can be assigned to computers, HSC or HSJ controllers, and DSSI ISEs. The node allocation class is a numeric value from 1 to 255 that is assigned by the system manager.
The default node allocation class value is 0. A node allocation class value of 0 is appropriate only when serving a local, single-pathed disk. If a node allocation class of 0 is assigned, served devices are named using the node-name$device-name syntax, that is, the device name prefix reverts to the node name.
System managers provide node allocation classes separately for disks and tapes; the node allocation class for disks and the node allocation class for tapes can be different.
The node allocation class names are constructed as follows:

$disk-allocation-class$device-name
$tape-allocation-class$device-name
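For example, with a disk allocation class of 1 and a tape allocation class of 1, a disk and a tape are named $1$DUA17 and $1$MUA12, the device names used in Figure 6-4.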
Caution: Failure to set node allocation class values and device unit numbers correctly can endanger data integrity and cause locking conflicts that suspend normal cluster operations.

Figure 6-4 shows an example of how cluster device names are specified in a CI configuration.
Figure 6-4 Disk and Tape Dual Pathed Between HSC Controllers
In this configuration, both HSC controllers are assigned node allocation class 1, so the devices they serve are named with the $1$ prefix. If one controller with node allocation class 1 is not available, users can gain access to a device specified by that node allocation class through the other controller.
Figure 6-6 builds on Figure 6-4 by including satellite nodes that access devices $1$DUA17 and $1$MUA12 through the JUPITR and NEPTUN computers. In this configuration, the computers JUPITR and NEPTUN require node allocation classes so that the satellite nodes are able to use consistent device names regardless of the access path to the devices.
Note: System management is usually simplified by using the same node allocation class value for all servers, HSC and HSJ subsystems, and DSSI ISEs; you can arbitrarily choose a number between 1 and 255. Note, however, that to change a node allocation class value, you must shut down and reboot the entire cluster (described in Section 8.6). If you use a common node allocation class for computers and controllers, ensure that all devices have unique unit numbers.
6.2.2.1 Assigning Node Allocation Class Values on Computers
There are two ways to assign a node allocation class: by using CLUSTER_CONFIG.COM or CLUSTER_CONFIG_LAN.COM, as described in Section 8.4, or by using AUTOGEN, by setting the ALLOCLASS system parameter (and, for tapes, the TAPE_ALLOCLASS parameter) in MODPARAMS.DAT and then running AUTOGEN.
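For example, a minimal AUTOGEN-based sketch might look like the following (the allocation class value 1 is illustrative); remember that the new value takes effect only after the entire cluster is shut down and rebooted, as described in Section 8.6:

! In SYS$SYSTEM:MODPARAMS.DAT on the node being changed:
ALLOCLASS = 1          ! Node allocation class for disks
TAPE_ALLOCLASS = 1     ! Node allocation class for tapes

$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT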
6.2.2.2 Assigning Node Allocation Class Values on HSC Subsystems
Assign or change node allocation class values on HSC subsystems while the cluster is shut down. To assign a node allocation class to disks for an HSC subsystem, specify the value using the HSC console command in the following format:
SET ALLOCATE DISK allocation-class-value
To assign a node allocation class for tapes, enter a SET ALLOCATE TAPE command in the following format:
SET ALLOCATE TAPE tape-allocation-class-value
For example, to change the value of a node allocation class for disks to 1, set the HSC internal door switch to the Enable position and enter a command sequence like the following at the appropriate HSC consoles:
[Ctrl/C]
HSC> RUN SETSHO
SETSHO> SET ALLOCATE DISK 1
SETSHO> EXIT
SETSHO-Q Rebooting HSC; Y to continue, Ctrl/Y to abort:? Y
Restore the HSC internal door-switch setting.
Reference: For complete information about the HSC console commands, refer to the HSC hardware documentation.
6.2.2.3 Assigning Node Allocation Class Values on HSJ Subsystems
To assign a node allocation class value for disks for an HSJ subsystem, enter a SET CONTROLLER MSCP_ALLOCATION_CLASS command in the following format:
SET CONTROLLER MSCP_ALLOCATION_CLASS = allocation-class-value

To assign a node allocation class value for tapes, enter a SET CONTROLLER TMSCP_ALLOCATION_CLASS command in the following format:

SET CONTROLLER TMSCP_ALLOCATION_CLASS = allocation-class-value
For example, to assign or change the node allocation class value for disks to 254 on an HSJ subsystem, use the following command at the HSJ console prompt (PTMAN>):
PTMAN> SET CONTROLLER MSCP_ALLOCATION_CLASS = 254
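Similarly, to assign the tape value on the same subsystem, the TMSCP form of the command might be used (the value is illustrative):

PTMAN> SET CONTROLLER TMSCP_ALLOCATION_CLASS = 254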
6.2.2.4 Assigning Node Allocation Class Values on HSD Subsystems
To assign or change allocation class values on any HSD subsystem, use the following commands:
$ MC SYSMAN IO CONNECT FYA0:/NOADAP/DRIVER=SYS$FYDRIVER
$ SET HOST/DUP/SERVER=MSCP$DUP/TASK=DIRECT node-name
$ SET HOST/DUP/SERVER=MSCP$DUP/TASK=PARAMS node-name
PARAMS> SET FORCEUNI 0
PARAMS> SET ALLCLASS 143
PARAMS> SET UNITNUM 900
PARAMS> WRITE
Changes require controller initialization, ok? [Y/(N)] Y
PARAMS> EXIT
$
6.2.2.5 Assigning Node Allocation Class Values on DSSI ISEs
To assign or change node allocation class values on any DSSI ISE, the command you use differs depending on the operating system.
For example, to change the allocation class value to 1 for a DSSI ISE TRACER, follow the steps in Table 6-2.
Table 6-2 Changing the Allocation Class Value on a DSSI ISE

Step 1. Log into the SYSTEM account on the computer connected to the hardware device TRACER and load its driver as follows (the same driver-loading command shown in the HSD example):

$ MC SYSMAN IO CONNECT FYA0:/NOADAP/DRIVER=SYS$FYDRIVER

Step 2. At the DCL prompt, enter the SHOW DEVICE FY command to verify the presence and status of the FY device, as follows:

$ SHOW DEVICE FY

Step 3. At the DCL prompt, enter the following command sequence to set the allocation class value to 1:

$ SET HOST/DUP/SERVER=MSCP$DUP/TASK=PARAMS node-name
PARAMS> SET ALLCLASS 1
PARAMS> WRITE
Changes require controller initialization, ok? [Y/(N)] Y
PARAMS> EXIT
$

Step 4. Reboot the entire cluster in order for the new value to take effect.