Document revision date: 19 July 1999
SMISERVER is the detached process that executes SYSMAN commands on remote nodes. The SMISERVER process must be running on a remote node before SYSMAN commands can execute on that node.
Any node that is part of an OpenVMS Cluster system normally starts the SMISERVER process in the system startup procedure SYS$SYSTEM:STARTUP.COM. (The system parameter CLUSTER on the node must have a value of 1 or more.)
To start the SMISERVER process on a workstation that is not part of an OpenVMS Cluster system, include the following command line in the site-specific startup command procedure SYSTARTUP_VMS.COM:
$ @SYS$SYSTEM:STARTUP SMISERVER
For more information about SYSTARTUP_VMS.COM, see Section 5.2.7.
You can also enter this command interactively to restart the SMISERVER
process without rebooting the system.
2.3.3 Understanding a SYSMAN Management Environment
When you use SYSMAN, you must define the management environment you will be working in. The management environment is the node or nodes on which subsequent commands will execute.
By default, the management environment is the local node (the node from which you execute SYSMAN). To execute commands on one or more other nodes, you can redefine the management environment to be any of the following: one or more individual nodes, a group of nodes defined by a logical name, or an OpenVMS Cluster system.
Refer to Figure 2-2 during the following discussion of management environments.
Figure 2-2 Sample SYSMAN Management Environment
You can use NODE21 as the management environment, or you can define the environment to be any node, group of nodes, or cluster shown in Figure 2-2.
If you execute SYSMAN from NODE21, then NODE21 is the local node; it is
the management environment when SYSMAN starts. All other nodes are
remote nodes.
2.3.4 Defining the SYSMAN Management Environment
To define the management environment, use the SYSMAN command SET ENVIRONMENT. Whenever you redefine an environment, SYSMAN displays the new context. You can always verify the current environment with the SHOW ENVIRONMENT command.
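For example, a session on NODE21 in Figure 2-2 might begin as follows (the user name shown is illustrative):

```
SYSMAN> SHOW ENVIRONMENT
%SYSMAN-I-ENV, current command environment:
        Individual nodes: NODE21
        Username ALEXIS will be used on nonlocal nodes
```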
When you are not working on your local node or within your own cluster, your environment is a nonlocal environment. SYSMAN makes this distinction for security reasons; when you are defining a nonlocal environment, such as a different cluster, SYSMAN prompts for a password. SYSMAN also prompts for a password when you attempt to manage a system under a different user name. You can change your user name by using the /USERNAME qualifier with SET ENVIRONMENT.
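For example, to manage NODE24 in Figure 2-2 under the SYSTEM account, you might enter the following; SYSMAN prompts for the password of the target account (the node and user names are illustrative):

```
SYSMAN> SET ENVIRONMENT/NODE=NODE24/USERNAME=SYSTEM
Remote Password:
%SYSMAN-I-ENV, current command environment:
        Individual nodes: NODE24
        At least one node is not in local cluster
        Username SYSTEM will be used on nonlocal nodes
```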
A SYSMAN environment remains in effect until you change it or exit from
SYSMAN.
2.3.4.1 Defining Another Node as the Environment
You can define a management environment to be any node available through DECnet. To define one or more nodes to be your management environment, use the SET ENVIRONMENT/NODE command.
For the following examples, refer to Figure 2-2; assume you are logged in to NODE21.
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/NODE=NODE22
%SYSMAN-I-ENV, current command environment:
        Individual nodes: NODE22
        Username ALEXIS will be used on nonlocal nodes
SYSMAN> SET ENVIRONMENT/NODE=(NODE23,NODE24,NODE25)
Remote Password:
%SYSMAN-I-ENV, current command environment:
        Individual nodes: NODE23,NODE24,NODE25
        At least one node is not in local cluster
        Username ALEXIS will be used on nonlocal nodes
SYSMAN> SET ENVIRONMENT/CLUSTER/NODE=NODE24
Remote Password:
%SYSMAN-I-ENV, current command environment:
        Clusterwide on remote cluster NODE24
        Username ALEXIS will be used on nonlocal nodes
SYSMAN> DO SHOW TIME
%SYSMAN-I-OUTPUT, command execution on node NODE24
  13-AUG-1998 13:07:54
%SYSMAN-I-OUTPUT, command execution on node NODE25
  13-AUG-1998 13:10:28
If you want to organize the nodes in your cluster according to specific categories (for example, all CI-based nodes or all nodes with C installed), you can define logical names to use with the SET ENVIRONMENT/NODE command, as follows:
$ CREATE/NAME_TABLE/PARENT=LNM$SYSTEM_DIRECTORY SYSMAN$NODE_TABLE
$ DEFINE CI_NODES NODE21,NODE22,NODE23/TABLE=SYSMAN$NODE_TABLE
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/NODE=(CI_NODES)
Remote Password:
%SYSMAN-I-ENV, current command environment:
        Individual nodes: NODE21,NODE22,NODE23
        At least one node is not in the local cluster.
        Username SYSTEM will be used on nonlocal nodes.
You can also define logical names for VAX and Alpha nodes in a dual-architecture OpenVMS Cluster system, as explained in Section 20.6.
The following example demonstrates how you can define multiple logical names to organize several management environments:
$ CREATE/NAME_TABLE/PARENT=LNM$SYSTEM_DIRECTORY SYSMAN$NODE_TABLE
$ DEFINE CI_NODES SYS2,SYS8/TABLE=SYSMAN$NODE_TABLE
$ DEFINE C NODE21,NODE22,NODE23/TABLE=SYSMAN$NODE_TABLE
$ DEFINE PASCAL NODE23,NODE18,CI_NODES/TABLE=SYSMAN$NODE_TABLE
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/NODE=(C,PASCAL)
Remote Password:
%SYSMAN-I-ENV, current command environment:
        Individual nodes: NODE21,NODE22,NODE23,NODE18,SYS2,SYS8
        At least one node is not in the local cluster.
        Username SYSTEM will be used on nonlocal nodes.
To define your management environment to be an OpenVMS Cluster system, use the SET ENVIRONMENT/CLUSTER command.
In SYSMAN, OpenVMS Cluster environments can be one of two types:
OpenVMS Cluster Environment | Definition |
---|---|
Local | Cluster from which you are using SYSMAN |
Nonlocal | Any cluster other than the one from which you are executing SYSMAN |
To expand the management environment in Figure 2-2 from NODE21 to Cluster 1, enter the following command from NODE21:
SYSMAN> SET ENVIRONMENT/CLUSTER
%SYSMAN-I-ENV, current command environment:
        Clusterwide on local cluster
        Username ALEXIS will be used on nonlocal nodes
In the OpenVMS Cluster environment shown in Figure 2-2, SYSMAN executes commands on all nodes in Cluster 1, namely NODE21, NODE22, and NODE23.
To manage a nonlocal cluster with SYSMAN, use the /NODE qualifier to identify the cluster. If an OpenVMS Cluster alias is defined, you can specify the alias rather than a node name with the /NODE qualifier.
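For example, if Cluster 2 in Figure 2-2 had the cluster alias CLUSTER2 (an assumed name, for illustration only), you might enter:

```
SYSMAN> SET ENVIRONMENT/NODE=CLUSTER2
Remote Password:
```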
If you use the /CLUSTER and /NODE qualifiers together, the environment becomes the OpenVMS Cluster system where the given node is a member. For example, to perform management tasks on Cluster 2 in Figure 2-2, enter SET ENVIRONMENT with the /CLUSTER qualifier and name one node within Cluster 2 using the /NODE qualifier:
SYSMAN> SET ENVIRONMENT/CLUSTER/NODE=NODE24
Remote Password:
%SYSMAN-I-ENV, current command environment:
        Clusterwide on remote node NODE24
        Username ALEXIS will be used on nonlocal nodes
For information about using SYSMAN to manage an OpenVMS Cluster system
that contains both Alpha and VAX nodes, see Section 20.6.
2.3.5 Understanding Your SYSMAN Profile
When you use SYSMAN across OpenVMS Cluster systems, SYSMAN establishes a profile that contains your rights, privileges, and defaults, and verifies that you are an authorized user. If you encounter privilege problems when using SYSMAN, it helps to know how SYSMAN determines your profile.
SYSMAN looks for three possible scenarios when determining your profile:
The profile does not include symbolic names, logical names, preset terminal characteristics, or key definitions established through a login command procedure. The only environment that has the attributes defined in a login command procedure is the local node from which you are executing SYSMAN.
2.3.6 Adjusting Your SYSMAN Profile
Use the SYSMAN command SET PROFILE to change your SYSMAN management profile. The qualifiers /PRIVILEGES, /DEFAULT, and /VERIFY enable you to change the following attributes of the SMISERVER process:
Attribute | Qualifier | For More Information |
---|---|---|
Current privileges | /PRIVILEGES | Section 2.3.6.1 |
Default device and directory | /DEFAULT | Section 2.3.6.2 |
DCL verification of DO commands | /VERIFY | Section 2.3.7 |
This profile is in effect until you change it with SET PROFILE, reset the environment (which may change your profile automatically), or exit from SYSMAN.
The SET PROFILE command temporarily changes the attributes of your current local process. However, when you exit from SYSMAN, all attributes are restored to the values that were current when SYSMAN was invoked.
2.3.6.1 Changing Your Current Privileges
The SYSMAN command SET PROFILE/PRIVILEGES temporarily changes your current privileges in an environment.
Frequently, system management commands require special privileges. You might need to add privileges before you execute certain commands in an environment. System managers usually have the same privileges on all nodes; if you do not have the required privileges on a node, SYSMAN cannot execute the command and returns an error message.
The following example makes SYSPRV one of your current privileges:
SYSMAN> SET PROFILE/PRIVILEGES=SYSPRV
SYSMAN> SHOW PROFILE
%SYSMAN-I-DEFDIR, Default directory on node NODE21 -- WORK1:[MAEW]
%SYSMAN-I-DEFPRIV, Process privileges on node NODE21 --
        TMPMBX OPER NETMBX SYSPRV
2.3.6.2 Changing Your Default Device and Directory
Use the SET PROFILE/DEFAULT command to reset the default device and directory specification for your process and for all server processes in the environment.
Most often, the default device and directory specified in your UAF record is a first-level directory in which you create and maintain files and subdirectories. SYSMAN uses this default device and directory name when resolving file specifications. It also assigns the default device and directory name to any files that you create during a session.
In some cases, you might need to change the default device and directory in your SYSMAN profile. For example, you might have a directory containing command procedures as well as some system management utilities that require the default directory to be SYS$SYSTEM.
The following example sets the default device and directory to DMA1:[SMITH.COM]:
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET PROFILE/DEFAULT=DMA1:[SMITH.COM]
SYSMAN can execute DCL commands with the DO command. By default, DCL verification is turned off in SYSMAN. Use the SET PROFILE/VERIFY command to turn on DCL verification, which displays DCL command lines and data lines as they execute.
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET PROFILE/VERIFY
The SYSMAN command DO executes a DCL command or DCL command procedure on all nodes in the current environment. In an OpenVMS Cluster environment, or in any environment with multiple nodes, you enter a set of commands once, and SYSMAN executes them sequentially on every node in the environment. SYSMAN displays the name of each node as it executes the commands; if a command fails on a node, SYSMAN displays an error message.
If a node does not respond within a given timeout period, SYSMAN displays a message before proceeding to the next node in the environment. You can specify a timeout period with the SET TIMEOUT command.
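For example, to allow each node 30 seconds to respond (the value shown is arbitrary; choose one appropriate for your configuration):

```
SYSMAN> SET TIMEOUT 00:00:30
```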
Each DO command executes as an independent subprocess, so no process context is retained between DO commands. For this reason, you must express all DCL commands in a single command string, and you cannot run a procedure that requires input.
In an OpenVMS Cluster environment, SYSMAN executes DO commands sequentially on all nodes in the cluster. After a command completes or times out on one node, SYSMAN sends it to the next node in the environment. Any node that is unable to execute a command returns an error message.
For more information about using the DO command to manage an OpenVMS Cluster system, see Section 20.6. You can also refer to the OpenVMS System Management Utilities Reference Manual for a complete description of the SYSMAN command DO.
In the following example, entered from the local node, SYSMAN runs the INSTALL utility to make a file known on all nodes in the cluster:
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> SET PROFILE/PRIVILEGE=CMKRNL
SYSMAN> DO INSTALL ADD/OPEN/SHARED WORK4:[CENTRAL]STATSHR
   .
   .
   .
%SYSMAN-I-OUTPUT, Command execution on node NODE21
%SYSMAN-I-OUTPUT, Command execution on node NODE22
The SYSMAN execute procedure (@) command executes SYSMAN command procedures on each node in the environment.
The following example creates and executes a SYSMAN command procedure to display the current date and system time for each OpenVMS Cluster node:
$ CREATE TIME.COM
SET ENVIRONMENT/CLUSTER
CONFIGURATION SHOW TIME
[Ctrl/Z]
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> @TIME
%SYSMAN-I-ENV, current command environment:
        Clusterwide on local cluster
        Username SYSTEM will be used on nonlocal nodes
System time on node NODE21:  19-JUN-1998 13:32:19.45
System time on node NODE22:  19-JUN-1998 13:32:27.79
System time on node NODE23:  19-JUN-1998 13:32:58.66
SYSMAN>
You can create an initialization file that is used each time you invoke SYSMAN. In the initialization file, you can perform tasks such as defining keys and setting up your environment.
The default file specification for the SYSMAN initialization file is SYS$LOGIN:SYSMANINI.INI. If you want your SYSMAN initialization file to have a different file specification, you must define the logical name SYSMANINI to point to the location of the file. The following is a sample initialization file that defines several keys:
$ TYPE SYSMANINI.INI
DEFINE/KEY/TERMINATE KP0 "SET ENVIRONMENT/CLUSTER/NODE=(NODE21,NODE22)"
DEFINE/KEY/TERMINATE KP1 "CONFIGURATION SHOW TIME"
DEFINE/KEY/TERMINATE KP2 "SHOW PROFILE"
   .
   .
   .
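If you keep the initialization file somewhere other than SYS$LOGIN, you might define the logical name before invoking SYSMAN (the device and directory shown are placeholders):

```
$ DEFINE SYSMANINI WORK1:[TOOLS]SYSMANINI.INI
$ RUN SYS$SYSTEM:SYSMAN
```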
The operator communication manager (OPCOM) is a tool for communicating with users and operators on the system. OPCOM allows you to perform the following functions:
Function | For More Information |
---|---|
To broadcast messages to users who are logged in | Section 2.4.3 |
To control the use of OPA0: as an operator terminal | Section 2.4.4 |
To designate terminals as operator terminals, enabling them to display messages broadcast by OPCOM | Section 2.4.5 |
To record messages broadcast by OPCOM in a log file | Section 18.6.3 |
To send requests to an operator | Section 2.4.6 |
To reply to operator requests | Section 2.4.7 |
Figure 2-3 illustrates the function of OPCOM.
Figure 2-3 Operator Communication Manager (OPCOM)
OPCOM uses the following components:
Component | Description | For More Information |
---|---|---|
OPCOM process | The system process that manages OPCOM operations. Unless you disable it, the OPCOM process starts automatically at system startup time. | Section 2.4.2 |
Operator terminals | Terminals designated to display messages broadcast by OPCOM. Usually, the console terminal (with the device name OPA0:) is the operator terminal. However, you can designate any user terminal as an operator terminal. | Section 2.4.5 |
Operator log file | A file that records messages broadcast by OPCOM. The file is named SYS$MANAGER:OPERATOR.LOG. | Section 18.6.1 |
OPCOM messages | Messages broadcast by OPCOM. These messages are displayed on operator terminals and written to the operator log file. The messages might be general messages sent by you, user requests, operator replies, or system events. | Section 18.6.2 |
REPLY and REQUEST commands | DCL commands that allow you to use and control OPCOM. | Section 2.4.3, Section 2.4.6, and Section 2.4.7 |
OPCOM uses the following defaults:
OPCOM has the following requirements: