Compaq ACMS for OpenVMS
Managing Applications



6.2.3.1.5 Creating OpenVMS Proxies

The OpenVMS proxy file can contain more than one <local-user> for each <remote-node>::<remote-user> combination. The ACMS proxy file, on the other hand, can contain only one <local-user> for each combination. Therefore, for an OpenVMS proxy, you must specify a default <local-user>, so that when ACMS searches the OpenVMS proxy file it can find a single <local-user> to associate with <remote-node>::<remote-user> for the remote user proxy.

For example, to allow user JONES on the submitter node COWBOY to select tasks under the user name PERS, create an OpenVMS proxy account as follows:


UAF> ADD/PROXY COWBOY::JONES PERS/DEFAULT

To allow remote users access to files and other OpenVMS resources, issue the NCP command SET KNOWN PROXIES ALL when starting DECnet. If you do not execute this command on a node, remote users cannot access files and other OpenVMS resources on that node. However, ACMS opens and reads the OpenVMS proxy file and the ACMS proxy file whether or not you enable OpenVMS proxy access to files and OpenVMS resources with NCP.
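
For example, assuming the default NCP utility location, you can enable proxy access interactively as shown in the following sketch (your DECnet startup procedure may already contain an equivalent command):


$ RUN SYS$SYSTEM:NCP
NCP> SET KNOWN PROXIES ALL
NCP> EXIT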

6.2.3.2 Assigning a Default Submitter User Name Account

The recommended way to set the default submitter user name is by editing the ACMVARINI.DAT file in SYS$MANAGER. Include a line that specifies the default submitter user name.


 
. 
. 
. 
USERNAME_DEFAULT = "ACMS_USER" 
. 
. 
. 
 

Invoke the ACMSPARAM.COM procedure in SYS$MANAGER to apply the change to the ACMS parameter file. Note that in an OpenVMS Cluster environment, you must ensure that the ACMS parameter file, ACMSPAR.ACM, is stored in the SYS$SPECIFIC:[SYSEXE] directory before invoking the ACMSPARAM.COM command procedure. The value you assign to the USERNAME_DEFAULT parameter is used as the submitter user name if ACMS cannot find a specific proxy user name for a remote submitter.
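
For example, after editing ACMVARINI.DAT, you can apply the change with a command like the following:


$ @SYS$MANAGER:ACMSPARAM.COM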

You can also define a default submitter user name using the ACMSGEN Utility. For example:


$ RUN SYS$SYSTEM:ACMSGEN
ACMSGEN> SET USERNAME_DEFAULT ACMS_USER
ACMSGEN> WRITE CURRENT

The SET command defines ACMS_USER as the new default submitter user name; the WRITE CURRENT command then updates the current setting with this value. The user name you set as the USERNAME_DEFAULT must be authorized with the OpenVMS Authorize Utility, and must exist in the ACMS User Definition File.
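
Before exiting ACMSGEN, you can verify the new value; the following is a sketch that assumes the utility's interactive SHOW and EXIT commands:


ACMSGEN> SHOW USERNAME_DEFAULT
ACMSGEN> EXIT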

6.2.3.3 Assigning Proxy Accounts in an OpenVMS Cluster Environment

In an OpenVMS Cluster environment, where the submitter and application nodes are in the same cluster, you can create a wildcard proxy account for all users on a submitter node.

For example, to allow all users on the submitter node, COWBOY, to select tasks on an application node, you create a proxy account for users with the OpenVMS Authorize Utility:


UAF> ADD/PROXY COWBOY::* */DEFAULT

Note that this technique can also be used outside an OpenVMS Cluster environment, but the security implications should be well understood. Specifically, the system manager on the application node must understand that this method allows any user on the submitter node access to the application node.

6.2.4 File Protection for Application and Form Files

You must also ensure that application and form files have the correct file protection assigned.

When a terminal user on the submitter node selects a task in an application on the application node, ACMS automatically copies the application database and form files used by the application from the application node to the submitter node. For ACMS to distribute these files automatically, they must be accessible to the ACMS Central Controller (ACC) and the Command Process (CP) on the submitter node; otherwise, remote task selections fail. You can ensure that these files are accessible on the application node by giving them WORLD:RE protection. See Section 6.4 for more information about how ACMS performs this automatic distribution.
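
For example, you can give WORLD:RE protection to an application database and its form file with commands like the following (the file names shown are illustrative only):


$ SET PROTECTION=(WORLD:RE) ACMS$DIRECTORY:PAYROLL.ADB
$ SET PROTECTION=(WORLD:RE) DISK1$:[FORMS]PAYROLL.FORM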

Alternatively, if you do not wish to change the protection of the application databases and form files to WORLD:RE, you can create proxy accounts on the application node for the ACC and CP user names on the submitter node. The following example shows how to allow the ACC and CP processes on the submitter node, COWBOY, to access their respective files on an application node. The example assumes that on both the submitter and the application nodes, the ACC and CP processes run under the ACMS$ACC and ACMS$CP user names.


UAF> ADD/PROXY COWBOY::ACMS$ACC ACMS$ACC/DEFAULT
UAF> ADD/PROXY COWBOY::ACMS$CP ACMS$CP/DEFAULT

Note

If a proxy account is created on the application node for the ACC or CP processes on the submitter node, ensure that the account on the application node is authorized for network access and that it is not defined with the /DISUSER qualifier. Failure to do this prevents a cache operation from taking place and causes the task selection to fail.
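
For example, assuming the ACMS$ACC user name, you can check and correct the account on the application node with the OpenVMS Authorize Utility (a sketch only):


UAF> SHOW ACMS$ACC
UAF> MODIFY ACMS$ACC/NETWORK/FLAGS=NODISUSER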

ACMS uses the DECnet-VAX File Access Listener (FAL) object to copy files from one node to another. FAL is a DIGITAL-supplied object that is defined by default in the DECnet-VAX configuration database. The FAL object must exist and be enabled for ACMS to distribute the needed files from one node to another.

To confirm that the FAL object is present, use the Network Control Program (NCP) SHOW command. Run SYS$SYSTEM:NCP to obtain the NCP> prompt. At the NCP> prompt, type SHOW; at the subsequent prompts, type OBJECT and FAL, as shown in the following example:


$ RUN SYS$SYSTEM:NCP
NCP> SHOW
(ACTIVE, ADJACENT, AREA, CIRCUIT, EXECUTOR, 
KNOWN, LINE, LOGGING, LOOP, MODULE, NODE, OBJECT):OBJECT
Object name (8 characters): FAL
 
 
 
Object Volatile Summary as of 21-JUL-1989 14:57:57 
 
   Object   Number  File/PID              User Id          Password 
 
  FAL           17  FAL.EXE 
NCP> EXIT
$ 

You must also ensure that files needed for DECforms escape routines have the correct file protection assigned.

Place DECforms escape routines in a restricted directory, because they contain user-written code that runs in a privileged process. ACMS does not distribute DECforms escape routines automatically, so the system manager on the submitter node must copy these files manually from the application node. Make sure the files are accessible to the submitter node. See Section 6.6 for a description of managing DECforms escape routines.
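
For example, assuming an escape routine image on application node DOVE, the system manager on the submitter node might copy it with a command like the following (the directory and file names are illustrative only):


$ COPY DOVE::DISK1$:[ESCAPE]PAYROLL_ESCAPE.EXE DISK2$:[ESCAPE]*.*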

6.3 Defining Application Specifications

In a distributed TP system, it is important that the application specifications in menu definitions point to the node on which the application is running.

You can "hardcode" an application file name and node name in the menu definition, or use a logical name to reference the application. Logical names allow you to change the application to which a menu definition or definitions refers by simply redefining a logical name. This avoids the need to redefine and rebuild a menu definition. If an application is installed on more than one node in the system, you can define a logical name to translate to a search list specifying all locations for the application. The following sections describe how to define logical names and search lists.

6.3.1 Using Logical Names for Applications

If task submitters on the submitter node access several applications that are always stored together on another node, define a logical name for that node. For example, if the applications ACCOUNTING, PERSONNEL, and BUDGET are all installed on node DOVE, place the following logical name assignment in the system startup file:


$ DEFINE/SYSTEM APPL_NODE DOVE:: 

The menu definitions can refer to APPL_NODE::ACCOUNTING, APPL_NODE::PERSONNEL, and APPL_NODE::BUDGET. Then, if you need to move these applications to another node, you can simply redefine APPL_NODE to be the new node name without changing the menu definitions.
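
For example, if the applications later move to another node, you can redefine the logical name without touching the menu definitions (RAVEN is used here only as an illustrative node name):


$ DEFINE/SYSTEM APPL_NODE RAVEN::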

Example 6-1 shows how a menu definition refers to the node DOVE after it has been defined with the logical name APPL_NODE as explained in the previous paragraphs.

Example 6-1 Using a Logical Name for a Node

ENTRIES ARE: 
. 
. 
. 
 
NEW_EMPLOYEE: TASK IS NEW_EMPL_TASK IN "APPL_NODE::PERSONNEL"; 
. 
. 
. 
END ENTRIES; 

If you want to control specific applications individually rather than as a group, define a logical name for each application. For example, if the application PAYROLL is installed on node DOVE, place this command in the system startup file:


$ DEFINE/SYSTEM PAYROLL DOVE::PAYROLL 

The menu definition simply refers to PAYROLL, as in Example 6-2.

Example 6-2 Using a Logical Name for an Application

ENTRIES ARE: 
. 
. 
. 
 
NEW_EMPLOYEE: TASK IS NEW_EMPL_TASK IN "PAYROLL"; 
. 
. 
. 
END ENTRIES; 

When an ACC on a submitter node asks an application node for an application, the ACC process on the application node attempts to translate the application name locally. If a logical name exists for the application name, the translation is used to access the application on the application node.

Any name changes that occur on the application node do not affect the file name on the submitter node. For example, if a logical name on submitter node ARK defines PAYROLL as DOVE::PAYROLL, node ARK asks application node DOVE for the application PAYROLL. On node DOVE, ACMS tries to translate the PAYROLL logical name again. If it translates to NEW_PAYROLL, for instance, the NEW_PAYROLL application is accessed on node DOVE. When files relating to that application are copied to node ARK, however, they are stored as PAYROLL.

The copying of application database (.ADB) and form files from a remote node to the local node where users select tasks is referred to as caching. Caching increases system performance by avoiding network access to the application node for application database and form files. Section 6.4 explains this process more fully.

If an application specification or a translation of a logical application specification does not contain a node name, ACMS looks for the application on the submitter's node. If you do use node names, they cannot contain access control strings. For example, ALPHA"ADAMS JOHNQUINCY"::PAYROLL is not a valid application specification because it includes the access control string, "ADAMS JOHNQUINCY".

ACMS follows OpenVMS conventions when translating logical names. For information about how OpenVMS translates logical names, refer to the OpenVMS DCL Dictionary. See Compaq ACMS for OpenVMS ADU Reference Manual for more information about ACMS application specifications.

6.3.2 Search Lists and Primitive Failover

If an application moves between nodes, or if you want to provide a backup node for an application in case a node fails, use a search list to specify all locations for the application. For example, if the application PAYROLL is installed on both node DOVE and node RAVEN, define the logical name PAYROLL as the following search list:


$ DEFINE/SYSTEM PAYROLL DOVE::PAYROLL,RAVEN::PAYROLL

The menu definition then specifies PAYROLL as the application name. ACMS searches for the application on each node listed until it locates an available application, or until it reaches the end of the search list. You can define a search list for an application name, a node name, or both.

For example, if the applications ACCOUNTING, PERSONNEL, and BUDGET are all installed on nodes DOVE, ARK, and BRANCH, you can define the logical name APPL_NODE to be the following search list:


$ DEFINE/SYSTEM APPL_NODE DOVE::, ARK::, BRANCH::

Menu definitions then specify APPL_NODE::ACCOUNTING. When APPL_NODE::ACCOUNTING is translated, the first available application in the search list is used. Therefore, DOVE::ACCOUNTING is used if it is available. If it is not available, ACMS continues to process the search list from left to right until an available application is located.

Search lists provide a primitive form of failover. When an application goes down, an application node goes down, or the link between the submitter node and the application node is severed for some reason, ACMS cancels all active tasks on the submitter node. When a user attempts to select another task in the same application, ACMS reprocesses the search list for an available application.

Once an available node or application is found, all subsequent task selections are redirected to the available application, and terminal users are once again able to select tasks. No group or user workspace context in the failed application is saved. Users receive an error message when this type of failover situation occurs.

If you want to stop an application and cause users to select tasks in an application executing on another node, you can force failover to another application by using the ACMS/REPROCESS APPLICATION command. This command is described in the next section.

6.3.3 Redirecting Users to Other Applications at Run Time

ACMS translates logical names for applications only when a task is first selected in an application. Once ACMS on a submitter node locates an application node, all task selections from the submitter node go to that application node until that application is either stopped or the link between the submitter node and the application node is broken.

However, you can cause ACMS to direct task selections to another application in a search list (or return users to an application that previously failed), by using the ACMS/REPROCESS APPLICATION command. The ACMS/REPROCESS APPLICATION command causes ACMS to retranslate the search list for a logical name. For example:


$ ACMS/REPROCESS APPLICATION PAYROLL
Reprocess Specification for Application PAYROLL (Y/[N]):

This command causes ACMS to retranslate the logical name PAYROLL the next time a task submitter selects a task in the application.

The ACMS/REPROCESS APPLICATION command is especially useful for system managers who need to install an updated version of an application, but do not want to cause application downtime during this procedure. Simply redirect user task selections to an application executing on another node by using the ACMS/REPROCESS APPLICATION command; install the updated version; and then issue another ACMS/REPROCESS APPLICATION command to redirect user task selections back to the original node, which is now executing the updated application.

The ACMS/REPROCESS APPLICATION command can also be used to switch between versions of an application where the submitters and the application are running on the same system, as illustrated in the following example.

A site has two copies of an application, PAYROLL_A and PAYROLL_B, with the logical name PAYROLL defined as PAYROLL_A, PAYROLL_B. Note that no node names are used. The assumption is that the system is running and the system manager wants to make a new version of the payroll application available without interrupting the company's flow of work.

The system manager copies the new application database file (the .ADB file) to ACMS$DIRECTORY and calls it PAYROLL_B. The PAYROLL logical name is then redefined with the order reversed: PAYROLL_B, PAYROLL_A. The ACMS/REPROCESS APPLICATION PAYROLL command is then issued, causing all subsequent task selections to go to the new PAYROLL_B application. When there are no more submitters using PAYROLL_A, that application can be stopped. To switch users back to PAYROLL_A after it has been updated, reverse the order of the logical name definition again and repeat the process.
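
A sketch of this sequence in DCL, assuming both application database files reside in ACMS$DIRECTORY, might look like this:


$ DEFINE/SYSTEM PAYROLL PAYROLL_B,PAYROLL_A
$ ACMS/REPROCESS APPLICATION PAYROLL
Reprocess Specification for Application PAYROLL (Y/[N]): Y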

In a distributed ACMS environment, the process is similar. If PAYROLL is defined as the search list DOVE::PAYROLL, ARK::PAYROLL, RAVEN::PAYROLL, and you want to install an updated version of the PAYROLL application on node DOVE (the application currently being used), redefine the search list to point to ARK::PAYROLL, RAVEN::PAYROLL and execute the ACMS/REPROCESS APPLICATION command. Terminal users make new task selections in ARK::PAYROLL. After you install the updated application on node DOVE, redefine the search list to point to DOVE::PAYROLL, ARK::PAYROLL, RAVEN::PAYROLL and issue another ACMS/REPROCESS APPLICATION command. Users then make new task selections in the updated application.
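
The corresponding commands might look like the following sketch:


$ DEFINE/SYSTEM PAYROLL ARK::PAYROLL,RAVEN::PAYROLL
$ ACMS/REPROCESS APPLICATION PAYROLL
$ ! ... install the updated application on node DOVE, then restore the original order:
$ DEFINE/SYSTEM PAYROLL DOVE::PAYROLL,ARK::PAYROLL,RAVEN::PAYROLL
$ ACMS/REPROCESS APPLICATION PAYROLL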

The ACMS/REPROCESS APPLICATION command can also be used to cause submitters to "failover" to an application that previously failed but has now been made available. For example, if PAYROLL is defined as the search list: DOVE::PAYROLL, ARK::PAYROLL, RAVEN::PAYROLL, and node DOVE goes down, ACMS automatically retranslates the search list for PAYROLL, and terminal users make new task selections in ARK::PAYROLL. When node DOVE again becomes available, issue the ACMS/REPROCESS APPLICATION command to redirect subsequent user task selections back to the DOVE::PAYROLL application.

6.4 Distributed Operations ACMS Performs Automatically

When an ACMS/START APPLICATION command is issued, the application execution controller (EXC) checks that all form files referenced by task groups in the application definition exist. The EXC uses the default directory and logical names to locate the form files.

Once all the form files are found, EXC stores the full file specification for each form file. When EXC makes a DECforms I/O request to the agent process, it passes the full file specification to the agent process.

In ACMS, all DECforms requests run in the agent process, whether the task is submitted locally or remotely. Therefore, the agent process must have access to the form files. If a task is selected from a remote agent, the form files must be distributed to the remote node.

All form files used by an application must be on the application node, because the application execution controller (EXC) checks that all files are present when the application starts. The form files must also be accessible to the submitter node, because code running in the task-submitting agent on the submitter node issues the DECforms requests. Escape routine files, however, are not checked when the application starts.

Note

ACMS uses the translation of the SYS$NODE system logical name from an application node to form the directory specification when caching DECforms form files and TDMS request library files. DECnet defines this logical name when it starts. Do not change the definition of this system-defined logical name. If the logical name is changed, the ACMS caching routines do not function correctly and place files in directories with incorrect names. ACMS then cannot open the necessary form or request library files and, as a result, cancels tasks.
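
You can display the current definition of SYS$NODE as follows (the node name shown is only an example):


$ SHOW LOGICAL SYS$NODE
   "SYS$NODE" = "DOVE::" (LNM$SYSTEM_TABLE)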

6.4.1 Automatic File Distribution

When a user selects a task in an application, the ACMS run-time system checks to see if the application and form files are accessible on shared disks available to both the submitter and the application nodes. If the files are not located on shared disks and are not already resident on the submitter node, ACMS automatically copies the application and forms files used by the application from the application node to the submitter node. ACMS does not automatically distribute DECforms escape routines. You must distribute these files manually. See Section 6.5.1 and Section 6.6 for information on how to manually copy DECforms escape routines and where to store them.

ACMS makes these copies automatically to improve system performance by avoiding access to the form and application files over the network.

For ACMS to perform this automatic distribution of application and form files, the ACC and the CP must have access to these files. On the application node, you can create proxies for the remote ACC and CP so that the ACC and CP on the submitter node can access the files. Alternatively, assigning the files WORLD:RE protection ensures that they are accessible.

Using proxies allows the submitter node's ACC or CP to access the files on the application node as if the access were made by the application node's ACC or CP. In this case, the files do not need WORLD:RE protection. If proxies are not used, the submitter node's ACC and CP access the files as nonprivileged users on the application node, which is why the files must then have WORLD:RE protection. With this no-proxy method, the application node must have a default DECnet account that has no privileges but allows remote nodes to access files that have WORLD:RE protection on the local node. For ACMS caching to work without proxies, this DECnet default account must exist.

6.4.2 ACMS Systemwide Cache Directory

When ACMS automatically distributes files to the submitter node, it places them in a special directory hierarchy that ACMS creates for this purpose. This directory structure is known as the ACMS systemwide cache directory. By default, ACMS creates the systemwide cache directory structure on the submitter node by using the translated directory specification of ACMS$DIRECTORY with ACMS$CACHE appended as a rooted directory. For example, if ACMS$DIRECTORY translates to DRA1:[ACMSDIR], the systemwide cache directory on submitter node DOVE is DOVE::DRA1:[ACMSDIR.ACMS$CACHE].

To create and reference files in the systemwide cache directory hierarchy, ACMS uses three logical names, which it defines as rooted directories.

For example, consider a system where the ACMS$DIRECTORY logical name is defined as SYS$SYSDEVICE:[ACMSDIR]. If the physical system disk device name is $1$DUA0:, ACMS defines ACMS$ADB_CACHE as $1$DUA0:[ACMSDIR.ACMS$CACHE.]. Note that a physical device name must be used when defining a rooted directory logical name.
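
On such a system, you might see the following (the device, directory, and logical name table shown are illustrative):


$ SHOW LOGICAL ACMS$ADB_CACHE
   "ACMS$ADB_CACHE" = "$1$DUA0:[ACMSDIR.ACMS$CACHE.]" (LNM$SYSTEM_TABLE)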

ACMS creates subdirectories (on the submitter node) for each node from which it copies application databases. Each node needs its own subdirectory in case two nodes have different applications with the same name. ACMS does not create subdirectories if the databases are accessed remotely. The location of these subdirectories is controlled by the cache logical names described in the preceding paragraphs. The subdirectories store the .ADB and form files for the applications accessed on each remote node.

For example, assume that application node COMET is running the PAYROLL and STOCK applications, and application node STAR is running another application that is also named PAYROLL.

Once a terminal user on a submitter node has selected tasks in each of these applications, the ACMS$CACHE directory on the submitter node contains a subdirectory for each remote application node.

The subdirectory for node COMET contains PAYROLL.ADB, PAYROLL.DIR, STOCK.ADB, and STOCK.DIR; the subdirectory for node STAR contains PAYROLL.ADB and PAYROLL.DIR. In this way, the ACMS system can distinguish among applications with the same name running on different nodes.

Note also that each subdirectory contains an .ADB file and a .DIR subdirectory for each application. Each .DIR subdirectory contains the application's request libraries (.RLB) and/or forms files (.FORM and .EXE).

In most cases, the file name and type of the request libraries and forms files are the same as the file name and type on the remote node. However, if a remote application uses two files with the same file name and type that reside on different disks or in different directories, the file type of one of them is modified to include the index of the task group within the application and the index of the request library or forms file within the task group.

You can obtain this information from the ADU DUMP APPLICATION and DUMP GROUP commands. The file type is modified to file-type_g_r, where file-type is the original file type, "g" is the index of the task group (leading zeros suppressed) as recorded in the .ADB, and "r" is the index of the request library or forms file (leading zeros suppressed) as recorded in the .TDB.

When two file specifications have the same file name and type, the file whose full file specification comes second in alphabetical order has its file type modified. For example, suppose the task group contains this definition:


 
    FORMS ARE 
        form_a IN "DISK1$:[USER1]FORM_X.FORM" WITH NAME F1, 
        form_b IN "DISK2$:[USER2]FORM_X.FORM" WITH NAME F2; 
 

The first file retains its original file name and type. The second file is stored as FORM_X.FORM_1_2; that is, its file type is modified. The same rule applies to request libraries (.RLB) and forms image files (.EXE). However, do not use the same file name for forms image files (.EXE), because of restrictions of the OpenVMS image activator. For details about forms image files, see Compaq ACMS for OpenVMS Writing Applications.

When files are updated on the application node, ACMS automatically copies the new version of each file to the submitter node and deletes the old version the next time a user on a submitter node selects a task in an application containing updated files.

Note that ACMS does not make the copy at the time the change is made on the application node. The copy is made only when a user on the submitter node selects a task in the updated application. When updating application and form files, ACMS maintains a single copy of each file. It deletes old versions as soon as it makes an updated copy. Once the files are updated, the copies remain on each node until they are superseded by new versions or until they are explicitly deleted.

