A queued task element within a queue can have two states: HOLD and NOHOLD. Each of these states can be further qualified to yield a total of four different states/substates:
ACMS task queue files can be backed up without the QTI process being stopped and without terminating programs that call the ACMS$QUEUE_TASK and ACMS$DEQUEUE_TASK services. To perform an online backup of your task queue files:
For programs that call the ACMS$QUEUE_TASK and ACMS$DEQUEUE_TASK services to continue running while a backup of a queue file is taking place, the programs must check for the ACMS$_QUEENQSUS and ACMS$_QUEDEQSUS return statuses. See Compaq ACMS for OpenVMS Writing Applications for details.
5.6 Summary of ACMSQUEMGR Commands and Qualifiers
The ACMSQUEMGR commands allow you to create and manage ACMS queues. Table 5-1 lists the ACMSQUEMGR commands and qualifiers and provides a brief description of each command. See Chapter 20 for a complete description of the ACMSQUEMGR commands and qualifiers.
Table 5-1 ACMSQUEMGR Commands and Qualifiers

Commands and Qualifiers | Description
---|---
CREATE QUEUE /DEQUEUE=keyword /ENQUEUE=keyword /FILE_SPECIFICATION=file-spec /MAX_WORKSPACES_SIZE=n | Creates a queue for queued task elements.
DELETE ELEMENT /[NO]CONFIRM /EXCLUDE=keyword /SELECT=keyword | Deletes one or more queued task elements.
DELETE QUEUE /[NO]PURGE | Deletes a queue.
EXIT | Exits the ACMSQUEMGR Utility.
HELP /[NO]PROMPT | Provides information about ACMSQUEMGR commands.
MODIFY QUEUE /FILE_SPECIFICATION=file-spec /MAX_WORKSPACES_SIZE=n | Modifies the static characteristics of a queue.
SET ELEMENT /[NO]CONFIRM /EXCLUDE=keyword /PRIORITY=n /SELECT=keyword /STATE=[NO]HOLD | Sets the state and/or priority of one or more queued task elements.
SET QUEUE /DEQUEUE=keyword /ENQUEUE=keyword | Dynamically sets the queue state.
SHOW ELEMENT /BRIEF /EXCLUDE=keyword /FULL /OUTPUT[=file-spec] /SELECT=keyword /TOTAL_ONLY | Displays one or more queued task elements in a queue.
SHOW QUEUE /OUTPUT[=file-spec] | Displays the characteristics of a queue.
This chapter describes how to set up applications with
distributed forms processing in a transaction
processing (TP) system. (Applications with distributed forms processing
are sometimes called distributed applications.)
6.1 What Is Distributed Forms Processing?
An ACMS application consists of forms processing and database processing. In a distributed ACMS TP system, one or more nodes, called the back end, handle the database processing and computation, while the forms processing is offloaded onto another node or set of nodes called the front end. The front end is sometimes referred to as the submitter node or nodes, and the back end is sometimes referred to as the application node or nodes.
This distribution of tasks over more than one node in a distributed system improves the speed and reliability of ACMS transactions by allowing a configuration in which more powerful machines are dedicated to database processing and smaller machines handle the forms processing. You can configure each node in the distributed system for the processing of specific tasks.
Reliability of a system can be enhanced by installing applications on more than one node of a system and using search lists so that, if a node fails, users are switched to a second node where the same application is running.
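For example, assuming an application named PAYROLL is installed on two application nodes (the node and application names here are illustrative), a search-list logical name defined on the submitter node can name both nodes; if the first node fails, task selections can be directed to the second:

```
$ ! Hypothetical nodes ALPHA1 and ALPHA2 both run the PAYROLL application
$ DEFINE/SYSTEM PAYROLL ALPHA1::PAYROLL, ALPHA2::PAYROLL
```

Menu definitions on the submitter node can then refer to the logical name PAYROLL rather than to a single node.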
Figure 6-1 shows an example of a typical distributed ACMS system.
Figure 6-1 Distributed Forms Processing
On the front end, or submitter node, terminal users use menus to select tasks for applications running on the application node. The applications, in turn, interact with resource managers such as Rdb, DBMS, and RMS. Resource managers are the tools that manipulate your data. A distributed system can be established in an OpenVMS Cluster, a local-area network, or a wide-area network. ACMS uses DECnet to communicate between the front end and the back end of a distributed system.
The front-end system is often called the submitter node because it is the node on which tasks are selected. The back-end system is often called the application node because it is the node where the application executes and where all the actual processing of an application takes place.
Because many applications can run at a single time on one distributed
system, it is important that application specifications in menu
definitions on the submitter node point to the correct applications on
the application node. Section 6.3 describes how you define
application specifications for a distributed TP system. The following
section describes what you must do to enable your system for
distributed forms processing.
6.2 Preparing Your System for Distributed Forms Processing
Once you have designed your distributed system to the extent of
deciding which nodes are to be used as the front end and which nodes
are to be used as the back end, you can configure each node in your
system for distributed forms processing. This section describes actions
the system manager takes to enable processing of applications with
distributed forms. This includes actions that must be taken in all
environments and some actions that are specific to an OpenVMS Cluster
environment, submitter nodes, or application nodes.
6.2.1 Common Setup Tasks for Distributed Forms Processing
The following procedures must be performed on both submitter and application nodes to set up your system for distributed forms processing:
```
   .
   .
   .
NODE_NAME="MYNODE"
   .
   .
   .
```
In an OpenVMS Cluster environment, you must ensure that the ACMS parameter file, ACMSPAR.ACM, is stored in the SYS$SPECIFIC directory before invoking the ACMSPARAM.COM command procedure. Because each node has a different DECnet node name, each node must have its own node-specific ACMSPAR.ACM file. Check this on each node of the cluster.
```
$ RUN SYS$SYSTEM:ACMSGEN
ACMSGEN> SET NODE_NAME DOVE
ACMSGEN> WRITE CURRENT
```
Note that WRITE CURRENT was used in the preceding example. This ensures that a new ACMSPAR.ACM file is not created where one already exists. It also provides a check that the file exists on each node in an OpenVMS Cluster.
Finally, note that regardless of which method you use to set the node
name parameter, you must stop and restart the ACMS system for the
change to take effect.
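For example, assuming a suitably privileged account, the system can be stopped and restarted with the ACMS operator commands:

```
$ ACMS/STOP SYSTEM
$ ACMS/START SYSTEM
```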
6.2.2 Actions Required on Submitter Nodes
After you have completed the common setup steps in Section 6.2.1, take the following additional actions to enable distributed processing on a submitter, or front-end, node:
6.2.3 Actions Required on Application Nodes
Make sure that each step in Section 6.2.1 has been completed. Then take these additional steps to enable distributed processing by authorizing remote access to ACMS on the application node.
The system manager on an application node can authorize remote task submitter nodes using these methods:
These methods are described in the following paragraphs.
When ACMS uses a proxy account or a default submitter user name for a
remote task submitter, tasks executed by the remote submitter are
executed as if they were selected by a local task submitter using the
same account. If a proxy or default submitter user name does not exist
for a remote task submitter, ACMS rejects the remote task selection.
6.2.3.1 Assigning Individual Proxy Accounts
The ACMS proxy enables system managers to give ACMS users on remote nodes access to ACMS applications on application nodes without granting access to other files and OpenVMS resources on the application node.
For existing ACMS sites, simply adding a new user proxy to the ACMS proxy file does not make a system more secure. To deny users on remote nodes access to other files and OpenVMS resources, you must also remove the user's proxy from the OpenVMS proxy file.
When a user on a remote node first attempts to select a task on the application node, the following occurs:
To decide which type of proxy is appropriate for a user on a remote node, first determine the type of access to the system that the user needs and what level of security is required. Then, decide whether to create an OpenVMS proxy, an ACMS proxy, or both.
Before making changes in the security level of users on remote nodes, consider the needs of the following types of users:
Table 6-1 identifies the types of proxies these users require.
Type of User | OpenVMS Proxy | ACMS Proxy
---|---|---
OpenVMS Only User | + |
ACMS Only User | | +
OpenVMS and ACMS User | + | +
An OpenVMS proxy allows users on remote nodes to select ACMS tasks and to access other files and OpenVMS resources on the application node.
An ACMS proxy allows users to select ACMS tasks remotely. Unlike an
OpenVMS proxy, an ACMS proxy does not grant users on remote nodes
access to any other files or OpenVMS resources on the application node,
except through an ACMS task.
6.2.3.1.3 Setting Up the ACMS Proxy File
Use the ACMS User Definition Utility (UDU) to create and maintain the ACMS proxy file, ACMSPROXY.DAT. This file contains the mapping of <remote-node>::<remote-user> to <local-user>.
You also use UDU to add, remove, and display the proxy specifications in the ACMS proxy file. The UDU interface, including the command syntax and the use of wildcards, is similar to the proxy command interface in the OpenVMS Authorize Utility.
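As a sketch of that interface, the following session adds and displays a proxy. The node and user names are illustrative, and the example assumes the utility image is SYS$SYSTEM:ACMSUDU; check the UDU documentation for the exact command syntax on your version.

```
$ RUN SYS$SYSTEM:ACMSUDU
UDU> ADD/PROXY COMET::JONES JONES   ! hypothetical remote node::user mapped to local user
UDU> SHOW/PROXY *::*                ! display all proxy entries
UDU> EXIT
```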
By default, UDU and run-time ACMS look for the ACMS proxy file, ACMSPROXY.DAT, in two different places:
UDU looks for the default ACMS proxy file, ACMSPROXY.DAT, in the current directory. To specify another location for the file, define the logical name ACMSPROXY; you can define this logical name in any logical name table in your process directory table and in any access mode.
The SYS$SYSTEM:ACMSPROXY.DAT file location is the run-time default specification of the proxy file. You can define the system-level executive-mode logical name ACMSPROXY to specify an alternate file location, which the ACMS run-time system uses. For example, issue the following command to direct the run-time system to the proxy file FOO.BAR in the SYS$TEST directory:
```
$ DEFINE/SYSTEM/EXECUTIVE ACMSPROXY SYS$TEST:FOO.BAR
```
If the ACC encounters any problems the first time it opens the ACMS proxy file, an error message is written to the SWL log file.
If you want the proxy file to be in SYS$SYSTEM and accessible to all nodes in the cluster, you must specify a SYS$COMMON directory, not SYS$SPECIFIC. In order for ACMS to search the file for remote proxies, the ACC process must be able to read the ACMS proxy file.
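For example, to make a single proxy file visible clusterwide, you could define the executive-mode logical name against a SYS$COMMON location on each node (the directory shown is illustrative):

```
$ ! Point all nodes at one cluster-common proxy file
$ DEFINE/SYSTEM/EXECUTIVE ACMSPROXY SYS$COMMON:[SYSEXE]ACMSPROXY.DAT
```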
To allow ACC access to the ACMS proxy file, perform one of the following actions:
To implement the ACMS proxy, perform the following steps:
Performing these operations increases security and also ensures that the use of the ACMS proxy mechanism does not degrade ACMS performance. When ACC needs to search only one proxy file for a proxy, ACMS performance is the same as in Version 3.2. (However, even when ACMS needs to search both the ACMS and OpenVMS proxy files, there is only a slight negative impact on the performance of ACMS.)
The format you choose for creating ACMS proxies has security implications. The following formats are listed from most secure to least secure. The ACC process follows this order when it searches the ACMS proxy file for a match with a remote user proxy. If the ACC finds no match in the search list, it searches the OpenVMS proxy file in the same order for a remote user proxy.
The application node ACC checks for a remote task submitter's proxy. In addition, the application node ACC requests that the submitter node ACC validate the task submitter based on a security token and submitter ID that the submitter node ACC assigned to the user. When the task submitter first enters ACMS, the submitter node ACC verifies the following items to make sure that the user is authorized: