Document revision date: 19 July 1999
To create a clusterwide logical name, you must have write (W) access to the table in which the logical name is to be entered, or SYSNAM privilege if you are creating clusterwide logical names only in LNM$SYSCLUSTER. Unless you specify an access mode (user, supervisor, and so on), the access mode of the logical name you create defaults to the access mode from which the name was created. If you created the name with a DCL command, the access mode defaults to supervisor mode. If you created the name with a program, the access mode typically defaults to user mode.
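For example, to create a clusterwide name in executive mode rather than the default supervisor mode, specify the access mode explicitly on the DEFINE command. The following sketch is illustrative only; the logical name and equivalence string are not from this manual:

$ DEFINE/TABLE=LNM$SYSCLUSTER/EXECUTIVE_MODE CLUSTER_TOOLS $1$DKA500:[TOOLS]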
When you create a clusterwide logical name, you must include the name of a clusterwide logical name table in the definition of the logical name. You can create clusterwide logical names by using DCL commands or with the $CRELNM system service.
The following example shows how to create a clusterwide logical name in the default clusterwide logical name table, LNM$CLUSTER_TABLE, using the DEFINE command:
$ DEFINE/TABLE=LNM$CLUSTER_TABLE logical-name equivalence-string
To create clusterwide logical names that will reside in a clusterwide logical name table you created, you define the new clusterwide logical name with the DEFINE command, specifying your new clusterwide table's name with the /TABLE qualifier, as shown in the following example:
$ DEFINE/TABLE=new-clusterwide-logical-name-table logical-name -
_$ equivalence-string
If you attempt to create a new clusterwide logical name with the same access mode and identical equivalence names and attributes as an existing clusterwide logical name, the existing name is not deleted, and no messages are sent to remote nodes. This behavior differs from similar attempts for other types of logical names, which delete the existing name and create the new one. For clusterwide logical names, this difference is a performance enhancement. The condition value SS$_NORMAL is returned: the service completed successfully, but the new logical name was not created.
In general, system managers edit the SYLOGICALS.COM command procedure to define site-specific logical names that take effect at system startup. However, this is not the appropriate command procedure for defining clusterwide logical names. Instead, define them in the SYSTARTUP_VMS.COM command procedure.
Clusterwide logical names belong in SYSTARTUP_VMS.COM rather than SYLOGICALS.COM because SYSTARTUP_VMS.COM runs at a much later stage of the booting process. When a node boots and joins the cluster, its CLUSTER_SERVER process receives the current clusterwide logical name database from another node and creates identical tables and names. Until these definitions are complete, other clusterwide logical name creations are stalled. Therefore, creating or testing clusterwide definitions in SYLOGICALS.COM could slow system startup. OpenVMS ensures that the clusterwide database has been initialized before SYSTARTUP_VMS.COM is executed.
You can test the state of the new $GETSYI item, CWLOGICALS, to determine whether the clusterwide logical name database has been initialized. For example:
$ STATE = F$GETSYI("CWLOGICALS")
If F$GETSYI returns a value of TRUE, the clusterwide logical name database has been initialized. Note that on a node booted standalone or with minimum startup (MIN), the value of this item is always FALSE.
For clusterwide definitions in a common SYSTARTUP_VMS.COM, Compaq recommends that you use a conditional definition, such as the following:
$ IF F$TRNLNM("CLUSTER_APPS") .EQS. "" THEN -
_$ DEFINE/TABLE=LNM$SYSCLUSTER/EXEC CLUSTER_APPS -
_$ $1$DKA500:[COMMON_APPS]
A conditional definition avoids surprises. For example, suppose a system manager temporarily redefines a name that is also defined in SYSTARTUP_VMS.COM, without editing SYSTARTUP_VMS.COM because the new definition is temporary. If a new node then joins the cluster, it initially receives the temporary definition. However, when the new node executes SYSTARTUP_VMS.COM, an unconditional DEFINE would cause all the nodes in the cluster, including itself, to revert to the original value.
F$GETSYI("CWLOGICALS") always returns a value of FALSE on a nonclustered system. Procedures that are designed to run in both clustered and nonclustered environments should first determine whether the system is a cluster member and, if so, whether the clusterwide logical name database has been initialized.
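A procedure intended for both environments might structure those checks as follows. This is a sketch only; the label and the logical name definition are illustrative, not from this manual:

$ IF .NOT. F$GETSYI("CLUSTER_MEMBER") THEN GOTO LOCAL_DEFS
$ IF .NOT. F$GETSYI("CWLOGICALS") THEN GOTO LOCAL_DEFS
$ ! Clusterwide database is initialized; safe to create clusterwide names
$ DEFINE/TABLE=LNM$SYSCLUSTER CLUSTER_APPS $1$DKA500:[COMMON_APPS]
$ LOCAL_DEFS:
$ ! Node-local definitions can be made here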
When using clusterwide logical names, observe the following guidelines:
5.4.8 Using Clusterwide Logical Names in Applications
The $TRNLNM system service and the $GETSYI system service provide
attributes that are specific to clusterwide logical names. This section
describes those attributes. It also describes the use of $CRELNT as it
pertains to creating a clusterwide table. For more information about
using logical names in applications, refer to the OpenVMS Programming Concepts Manual.
5.4.8.1 Clusterwide Attributes for $TRNLNM System Service
Two clusterwide attributes are available in the $TRNLNM system service:
LNM$V_CLUSTERWIDE is an output attribute, returned in the item list when you request the LNM$_ATTRIBUTES item for a logical name that is clusterwide.
LNM$M_INTERLOCKED is an attr argument bit that can be set to ensure that any clusterwide logical name modifications in progress are completed before the name is translated. LNM$M_INTERLOCKED is not set by default. If your application requires translation using the most recent definition of a clusterwide logical name, use this attribute to ensure that the translation is stalled until all pending modifications have been made.
On a single system, when one process modifies the shareable part of the logical name database, the change is visible immediately to other processes on that node. Moreover, while the modification is in progress, no other process can translate or modify shareable logical names.
In contrast, when one process modifies the clusterwide logical name database, the change is visible immediately on that node, but it takes a short time for the change to be propagated to other nodes. By default, translations of clusterwide logical names are not stalled. Therefore, it is possible for processes on different nodes to translate a logical name and get different equivalence names when modifications are in progress.
The use of LNM$M_INTERLOCKED guarantees that your application will
receive the most recent definition of a clusterwide logical name.
5.4.8.2 Clusterwide Attribute for $GETSYI System Service
The clusterwide attribute, SYI$_CWLOGICALS, has been added to the
$GETSYI system service. When you specify SYI$_CWLOGICALS, $GETSYI
returns the value 1 if the clusterwide logical name database has been
initialized on the CPU, or the value 0 if it has not been initialized.
Because this number is a Boolean value (1 or 0), the buffer length
field in the item descriptor should specify 1 (byte). On a nonclustered
system, the value of SYI$_CWLOGICALS is always 0.
5.4.8.3 Creating Clusterwide Tables with the $CRELNT System Service
When creating a clusterwide table, the $CRELNT requestor must supply a
table name. OpenVMS does not supply a default name for clusterwide
tables because the use of default names enables a process without the
SYSPRV privilege to create a shareable table.
5.5 Coordinating Startup Command Procedures
Immediately after a computer boots, it runs the site-independent command procedure SYS$SYSTEM:STARTUP.COM to start up the system and control the sequence of startup events. The STARTUP.COM procedure calls a number of other startup command procedures that perform cluster-specific and node-specific tasks.
The following sections describe how, by setting up appropriate cluster-specific startup command procedures and other system files, you can prepare the OpenVMS Cluster operating environment on the first installed computer before adding other computers to the cluster.
Reference: See also the OpenVMS System Manager's Manual for more
information about startup command procedures.
5.5.1 OpenVMS Startup Procedures
Several startup command procedures are distributed as part of the OpenVMS operating system. The SYS$SYSTEM:STARTUP.COM command procedure executes immediately after OpenVMS is booted and invokes the site-specific startup command procedures described in the following table.
Procedure Name | Invoked by | Function
---|---|---
SYS$MANAGER:SYPAGSWPFILES.COM | SYS$SYSTEM:STARTUP.COM | A file to which you add commands to install page and swap files (other than the primary page and swap files, which are installed automatically).
SYS$MANAGER:SYCONFIG.COM | SYS$SYSTEM:STARTUP.COM | Connects special devices and loads device I/O drivers.
SYS$MANAGER:SYSECURITY.COM | SYS$SYSTEM:STARTUP.COM | Defines the location of the security audit and archive files before it starts the security audit server.
SYS$MANAGER:SYLOGICALS.COM | SYS$SYSTEM:STARTUP.COM | Creates systemwide logical names and defines system components as executive-mode logical names. (Clusterwide logical names should be defined in SYSTARTUP_VMS.COM.) Cluster common disks can be mounted at the end of this procedure.
SYS$MANAGER:SYSTARTUP_VMS.COM | SYS$SYSTEM:STARTUP.COM | Performs many site-specific startup and login functions.
The directory SYS$COMMON:[SYSMGR] contains a template file for each
command procedure that you can edit. Use the command procedure
templates (in SYS$COMMON:[SYSMGR]*.TEMPLATE) as examples for
customization of your system's startup and login characteristics.
5.5.2 Building Startup Procedures
The first step in preparing an OpenVMS Cluster shared environment is to build a SYSTARTUP_VMS command procedure. Each computer executes the procedure at startup time to define the operating environment.
Prepare the SYSTARTUP_VMS.COM procedure as follows:
Step | Action
---|---
1 | In each computer's SYS$SPECIFIC:[SYSMGR] directory, edit the SYSTARTUP_VMS.TEMPLATE file to set up a SYSTARTUP_VMS.COM procedure that performs the startup tasks specific to that computer.
2 | Build a common command procedure that includes the startup commands that you want to be common to all computers. Note: You might choose to build these commands into individual command procedures that are invoked from the common procedure. For example, the MSCPMOUNT.COM file in the SYS$EXAMPLES directory is a sample common command procedure that contains commands typically used to mount cluster disks. The example includes comments explaining each phase of the procedure.
3 | Place the common procedure in the SYS$COMMON:[SYSMGR] directory on a common system disk or other cluster-accessible disk. Important: The common procedure is usually located in the SYS$COMMON:[SYSMGR] directory on a common system disk but can reside on any disk, provided that the disk is cluster accessible and is mounted when the procedure is invoked. If you create a copy of the common procedure for each computer, you must remember to update each copy whenever you make changes.
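For example, a common procedure might mount cluster disks with the MOUNT/CLUSTER command, which makes a volume available on every node. This is a sketch; the device name and volume label are illustrative:

$ MOUNT/SYSTEM/CLUSTER $1$DKA500: WORK_DISK WORK_DISK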
5.5.3 Combining Existing Procedures
To build startup procedures for an OpenVMS Cluster system in which existing computers are to be combined, compare both the computer-specific SYSTARTUP_VMS and the common startup command procedures on each computer and make any adjustments required. For example, you can compare the procedures from each computer and include commands that define the same logical names in your common SYSTARTUP_VMS command procedure.
After you have chosen which commands to make common, you can build the
common procedures on one of the OpenVMS Cluster computers.
5.5.4 Using Multiple Startup Procedures
To define a multiple-environment cluster, you set up computer-specific versions of one or more system files. For example, if you want to give users larger working set quotas on URANUS, you would create a computer-specific version of SYSUAF.DAT and place that file in URANUS's SYS$SPECIFIC:[SYSEXE] directory. That directory can be located in URANUS's root on a common system disk or on an individual system disk that you have set up on URANUS.
Follow these steps to build SYSTARTUP and SYLOGIN command files for a multiple-environment OpenVMS Cluster:
Step | Action |
---|---|
1 | Include in SYSTARTUP_VMS.COM elements that you want to remain unique to a computer, such as commands to define computer-specific logical names and symbols. |
2 | Place these files in the SYS$SPECIFIC root on each computer. |
Example: Consider a three-member cluster consisting of
computers JUPITR, SATURN, and PLUTO. The timesharing environments on
JUPITR and SATURN are the same. However, PLUTO runs applications for a
specific user group. In this cluster, you would create a common
SYSTARTUP_VMS command procedure for JUPITR and SATURN that defines
identical environments on these computers. But the command procedure
for PLUTO would be different; it would include commands to define
PLUTO's special application environment.
5.6 Providing OpenVMS Cluster System Security
The OpenVMS security subsystem ensures that all authorization
information and object security profiles are consistent across all
nodes in the cluster. The OpenVMS VAX and OpenVMS Alpha operating
systems do not support multiple security domains because the operating
system cannot enforce a level of separation needed to support different
security domains on separate cluster members.
5.6.1 Security Checks
In an OpenVMS Cluster system, individual nodes use a common set of authorizations to mediate access control that, in effect, ensures that a security check results in the same answer from any node in the cluster. The following list outlines how the OpenVMS operating system provides a basic level of protection:
The OpenVMS operating system uses the same protection strategy for files and queues, and extends it to all other cluster-visible objects, such as devices, volumes, and lock resource domains.
Actions of the cluster manager in setting up an OpenVMS Cluster system can affect the security operations of the system. You can facilitate OpenVMS Cluster security management using the suggestions discussed in the following sections.
The easiest way to ensure a single security domain is to maintain a single copy of each of the following files on one or more disks that are accessible from anywhere in the OpenVMS Cluster system. When a cluster is configured with multiple system disks, you can use system logical names (as shown in Section 5.9) to ensure that only a single copy of each file exists.
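For example, definitions along the following lines (typically placed in SYLOGICALS.COM) can redirect the authorization files to a single shared copy. The device and directory shown are illustrative assumptions, not from this manual:

$ DEFINE/SYSTEM/EXEC SYSUAF $1$DKA500:[VMS$COMMON.SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST $1$DKA500:[VMS$COMMON.SYSEXE]RIGHTSLIST.DAT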
The OpenVMS security domain is controlled by the data in the following files:
Note: Using shared files is not the only way of
achieving a single security domain. You may need to use multiple copies
of one or more of these files on different nodes in a cluster. For
example, on Alpha nodes you may choose to deploy system-specific user
authorization files (SYSUAFs) to allow for different memory management
working-set quotas among different nodes. Such configurations are fully
supported as long as the security information available to each node in
the cluster is identical.
5.7 Files Relevant to OpenVMS Cluster Security
Table 5-3 describes the security-relevant portions of the files that must be common across all cluster members to ensure that a single security domain exists.
Note: The following table describes the designations used for the files in Table 5-3.
Table Keyword | Meaning |
---|---|
Required | The file contains some data that must be kept common across all cluster members to ensure that a single security environment exists. |
Recommended | The file contains data that should be kept common at the discretion of the site security administrator or system manager. Nonetheless, Digital recommends that you synchronize the recommended files. |
File Name | Contains
---|---
VMS$AUDIT_SERVER.DAT [recommended] | Information related to security auditing, including the list of enabled security-auditing events and the destination of the system security audit journal file. When more than one copy of this file exists, all copies should be updated after any SET AUDIT command. OpenVMS Cluster system managers should ensure that the name assigned to the security audit journal file resolves to the same location on all nodes. Rule: If you need to relocate the audit journal file somewhere other than the system disk (or if you have multiple system disks), redirect the audit journal uniformly across all nodes in the cluster. Use the command SET AUDIT/JOURNAL=SECURITY/DESTINATION=file-name, specifying a file name that resolves to the same file throughout the cluster. Changes are automatically made in the audit server database, SYS$MANAGER:VMS$AUDIT_SERVER.DAT. This database also identifies which events are enabled and how to monitor the audit system's use of resources, and restores audit system settings each time the system is rebooted. Caution: Failure to synchronize multiple copies of this file properly may result in partitioned auditing domains. Reference: For more information, see the OpenVMS Guide to System Security.
NETOBJECT.DAT [required] | The DECnet object database. Among the information contained in this file is the list of known DECnet server accounts and passwords. When more than one copy of this file exists, all copies must be updated after every use of the NCP commands SET OBJECT or DEFINE OBJECT. Caution: Failure to synchronize multiple copies of this file properly may result in unexplained network login failures and unauthorized network access. For instructions on maintaining a single copy, refer to Section 5.9.1. Reference: Refer to the DECnet--Plus documentation for equivalent NCL command information.
NETPROXY.DAT and NET$PROXY.DAT [required] | The network proxy database. It is maintained by the OpenVMS Authorize utility. When more than one copy of this file exists, all copies must be updated after any UAF proxy command. Note: The NET$PROXY.DAT and NETPROXY.DAT files are equivalent; NET$PROXY.DAT is for DECnet--Plus implementations and NETPROXY.DAT is for DECnet for OpenVMS implementations. Caution: Failure to synchronize multiple copies of this file properly may result in unexplained network login failures and unauthorized network access. For instructions on maintaining a single copy, refer to Section 5.9.1. Reference: Appendix B discusses how to consolidate several NETPROXY.DAT and RIGHTSLIST.DAT files.
QMAN$MASTER.DAT [required] | The master queue manager database. This file contains the security information for all shared batch and print queues. Rule: If two or more nodes are to participate in a shared queuing system, a single copy of this file must be maintained on a shared disk. For instructions on maintaining a single copy, refer to Section 5.9.1.
RIGHTSLIST.DAT [required] | The rights identifier database. It is maintained by the OpenVMS Authorize utility and by various rights identifier system services. When more than one copy of this file exists, all copies must be updated after any change to any identifier or holder records. Caution: Failure to synchronize multiple copies of this file properly may result in unauthorized system access and unauthorized access to protected objects. For instructions on maintaining a single copy, refer to Section 5.9.1. Reference: Appendix B discusses how to consolidate several NETPROXY.DAT and RIGHTSLIST.DAT files.
SYSALF.DAT [required] | The system autologin facility (ALF) database. It is maintained by the OpenVMS SYSMAN utility. When more than one copy of this file exists, all copies must be updated after any SYSMAN ALF command. Note: This file may not exist in all configurations. Caution: Failure to synchronize multiple copies of this file properly may result in unexplained login failures and unauthorized system access. For instructions on maintaining a single copy, refer to Section 5.9.1.
SYSUAF.DAT [required] | The system user authorization file. It is maintained by the OpenVMS Authorize utility and is modifiable via the $SETUAI system service. When more than one copy of this file exists, you must ensure that the SYSUAF fields and their associated $SETUAI item codes are synchronized for each user record. Caution: Failure to synchronize multiple copies of the SYSUAF files properly may result in unexplained login failures and unauthorized system access. For instructions on maintaining a single copy, refer to Section 5.9.1. Reference: Appendix B discusses creation and management of the various elements of an OpenVMS Cluster common SYSUAF.DAT authorization database.
SYSUAFALT.DAT [required] | The system alternate user authorization file. This file serves as a backup to SYSUAF.DAT and is enabled via the SYSUAFALT system parameter. When more than one copy of this file exists, all copies must be updated after any change to any authorization records in this file. Note: This file may not exist in all configurations. Caution: Failure to synchronize multiple copies of this file properly may result in unexplained login failures and unauthorized system access.
VMS$OBJECTS.DAT [required] | On VAX systems, this file is located in SYS$COMMON:[SYSEXE] and contains the clusterwide object database. Among the information contained in this file are the security profiles for all clusterwide objects, which include disks, tapes, and resource domains. When more than one copy of this file exists, all copies must be updated after any change to the security profile of a clusterwide object or after new clusterwide objects are created. OpenVMS Cluster system managers should ensure that the security object database is present on each node in the OpenVMS Cluster by specifying a file name that resolves to the same file throughout the cluster, not to a file that is unique to each node. The database is updated whenever object characteristics are modified, and the information is distributed so that all nodes participating in the cluster share a common view of the objects. The security database is created and maintained by the audit server process. Rule: If you relocate the database, be sure the logical name VMS$OBJECTS resolves to the same file for all nodes in a common-environment cluster. To reestablish the logical name after each system boot, define the logical in SYSECURITY.COM. Caution: Failure to synchronize multiple copies of this file properly may result in unauthorized access to protected objects.
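For example, SYSECURITY.COM might reestablish the VMS$OBJECTS logical name at each boot with a definition along these lines. The device and directory are illustrative assumptions, not from this manual:

$ DEFINE/SYSTEM/EXEC VMS$OBJECTS $1$DKA500:[VMS$COMMON.SYSEXE]VMS$OBJECTS.DAT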
VMS$PASSWORD_HISTORY.DATA [recommended] | The system password history database. It is maintained by the system password change facility. When more than one copy of this file exists, all copies should be updated after any password change. Caution: Failure to synchronize multiple copies of this file properly may result in a violation of the system password policy.
VMSMAIL_PROFILE.DATA [recommended] | The system mail database. This file is maintained by the OpenVMS Mail utility and contains mail profiles for all system users. Among the information contained in this file is the list of all mail forwarding addresses in use on the system. When more than one copy of this file exists, all copies should be updated after any changes to mail forwarding. Caution: Failure to synchronize multiple copies of this file properly may result in unauthorized disclosure of information.
VMS$PASSWORD_DICTIONARY.DATA [recommended] | The system password dictionary, a list of English-language words and phrases that are not legal for use as account passwords. When more than one copy of this file exists, all copies should be updated after any site-specific additions. Caution: Failure to synchronize multiple copies of this file properly may result in a violation of the system password policy.
VMS$PASSWORD_POLICY.EXE [recommended] | Any site-specific password filters. It is created and installed by the site security administrator or system manager. When more than one copy of this file exists, all copies should be identical. Note: System managers can create this file as an image to enforce their local password policy. This is an architecture-specific image file that cannot be shared between VAX and Alpha computers. Caution: Failure to synchronize multiple copies of this file properly may result in a violation of the system password policy.