This chapter describes how to configure and start an RTR environment.
Recovery journals, router load balancing and callout servers are also
discussed.
2.1 Introduction
Before RTR applications can run, RTR must be started and the application's facility must be defined on each node of the application's environment. This is done by issuing the START RTR and CREATE FACILITY commands on each participating node. There are several ways to accomplish this:

- Log in to each node and enter the commands interactively at the RTR prompt.
- Enter the commands once, on one node, and direct them to the other nodes with the SET ENVIRONMENT command.
- Place the commands in each node's startup script or in a command procedure.

The first two methods are more suited to a development or test environment, while the last method is more suited to a production environment.
The remaining sections contain examples of the commands used to start
and configure RTR. Section 7.2 gives syntax details of the RTR
commands.
2.2 Setting Up---An Example
The following example assumes that RTR is started on the eight-node system shown in Figure 2-1.
In this figure, the application client processes run on the nodes FE1, FE2 and FE3. The servers run on BE1, BE2 and BE3. Nodes TR1 and TR2 are routers and have no application processes running on them. This diagram shows all possible connections. The frontend connects to only one router at a time.
Example 2-1 shows the START RTR commands that must be issued on each node to start this configuration. Commands are issued first on the frontend nodes (FE1, FE2, and FE3), then on the routers (TR1 and TR2), and finally on the backends (BE1, BE2, and BE3).
Example 2-1 Local Configuration of Each Node

On each frontend node (shown for FE1):

% rtr
RTR> start rtr
RTR> create facility funds_transfer/frontend=FE1 -
_RTR> /router=(TR1, TR2)

On each router node (shown for TR1):

% rtr
RTR> start rtr
RTR> create facility funds_transfer/frontend=(FE1, FE2, FE3) -
_RTR> /router=TR1 -
_RTR> /backend=(BE1, BE2, BE3)

On each backend node (shown for BE1):

% rtr
RTR> start rtr
RTR> create facility funds_transfer/router=(TR1, TR2) -
_RTR> /backend=BE1
The commands shown in Example 2-1 could also be included in each node's startup script or put in a command procedure used to start the application.
Nodes only need to know about the nodes in the neighboring layers of the configuration, thus FE1 does not need to know about BE1. Superfluous node names are ignored. This allows you to issue the same CREATE FACILITY command on every node to simplify the maintenance of startup command procedures.
Example 2-2 illustrates how to use RTR remote commands to start the same configuration. The SET ENVIRONMENT command is used to send subsequent commands to a number of RTR nodes.
Example 2-2 Remote Setup from One Node

% rtr
RTR> set environment/node= -
_RTR> (FE1, FE2, FE3, TR1, TR2, BE1, BE2, BE3)
RTR> start rtr
RTR> create facility funds_transfer/frontend=(FE1, FE2, FE3) -
_RTR> /router=(TR1, TR2) -
_RTR> /backend=(BE1, BE2, BE3)
You can enter the commands shown in Example 2-2 on any node in the configuration. However, you must have an account with the necessary privileges on the other nodes.
Use the SHOW RTR command to find out if RTR has been started on a particular node.
Use the SHOW FACILITY and SHOW LINK commands to find out which
facilities (if any) have been created and how they are configured. The
full syntax of these commands is given in Chapter 7.
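For example, a quick status check on any node might look like the following sketch (output formats vary by RTR version):

```
% rtr
RTR> show rtr
RTR> show facility
RTR> show link
```

If RTR has not been started on the node, SHOW RTR reports accordingly.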
2.3 Creating a Recovery Journal
RTR writes data to journal files to be able to recover (that is, replay) partially executed transactions after a backend node failure.
To improve performance, the journal may be striped across several disks. Specify the location and size of the journal using the CREATE JOURNAL command.
The CREATE JOURNAL command must be issued on each node where an application server will run, that is, on each backend node and on any router nodes where router callout servers will run. It must be issued after installing RTR and before creating any facilities. It may be issued again later to alter the size or location of the journal to improve performance. Use the MODIFY JOURNAL command to adjust journal sizes.
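As an illustrative sketch only (the disk names and the block-size value are hypothetical, and the exact qualifiers and argument form vary by RTR version and platform; see the CREATE JOURNAL command reference), a journal striped across two disks might be created with:

```
% rtr
RTR> create journal/maximum_blocks=5000 -
_RTR> disk1, disk2
```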
The CREATE JOURNAL/SUPERSEDE command deletes the contents of any existing journal files. If transaction recovery is required, do not issue this command after a failure. Do not make backup copies of journal files without first making the original journal file read-only; otherwise RTR considers the copies spurious, because it sees journal files that it did not create, and issues a %RTR-F-SPUJOUFIL error message. Move any duplicate copies of journal files to a location other than the rtrjnl directory so that RTR sees only the journal file it created, and track such copies in the log file. If a journal is spurious for a reason other than improper copying, you can make a backup copy and then destroy the transactions it contains with the CREATE JOURNAL/SUPERSEDE command.
Using TRIM FACILITY
Use the TRIM FACILITY command to change RTR facility membership.
The RTR facility defines the nodes used by an application, and the roles (frontend, router, backend) they may assume. You do not need to change facility definitions in the event of node or link failures.
In Figure 2-1, assume that the FE3 node is being removed from the funds_transfer facility. Since FE3 is a frontend for this facility, only the routers (TR1 and TR2) need be reconfigured.
Example 2-3 shows the commands necessary to achieve this reconfiguration.
Example 2-3 Reconfiguration

% rtr
RTR> set environment/node=FE3                 (1)
RTR> delete facility funds_transfer           (2)
RTR> stop rtr/node=FE3                        (3)
RTR> set environment/node=(TR1, TR2)          (4)
RTR> trim facility funds_transfer/node=FE3    (5)
Using EXTEND FACILITY
In the example in Figure 2-1, assume that a new router node TR3 and a new frontend FE4 are being added to the facility funds_transfer. The extended configuration is shown in Figure 2-2. This diagram shows the possible frontend-to-router connections. The frontend connects to only one router at a time.
Figure 2-2 Extend Configuration
All backend nodes must be informed when router configurations are changed. Because TR3 will be a router for the FE3 and FE4 frontends, these nodes must also be informed of its presence. Likewise, TR3 must be informed about FE3 and FE4.
Example 2-4 shows the EXTEND FACILITY command used for this reconfiguration.
Example 2-4 Reconfiguration Using EXTEND FACILITY

% rtr
RTR> start rtr/node=(TR3, FE4)
RTR> set environment/node= -                  (1)
_RTR> (FE3, TR1, TR2, BE1, BE2, BE3, TR3, FE4)
RTR> extend facility funds_transfer -         (2)
_RTR> /router=TR3/frontend=(FE3, FE4) -
_RTR> /backend=(BE1, BE2, BE3)
RTR> extend facility funds_transfer -         (3)
_RTR> /router=TR1/frontend=FE4
Callout servers are applications that receive a copy of every transaction passing through the node where the callout server is running.
Like any other server, callout servers have the ability to abort any transaction that they participate in. Callout servers are typically used to provide an additional security service in the network; transactions can be inspected by the callout server and aborted if they fail to meet any user-defined criteria. Callout servers can run on router or backend nodes, or both.
Callout servers require a journal on the node where the server runs. A backend callout server needs no extra work, because backends already require journals. If the callout server runs on a router node, however, a journal must be created on that router node. A journal is also required if RTR is used in a nested transaction, that is, when RTR acts as a resource manager for a foreign transaction manager. This could occur, for example, with XA controlling an RTR transaction.
Assume that callout servers are to run on the router nodes (TR1 and TR2) in the configuration shown in Figure 2-1. Example 2-5 shows the commands needed to set up callout servers on the routers.
Example 2-5 Configuration of Callout Servers

% rtr
RTR> set environment/node= -
_RTR> (FE1, FE2, FE3, TR1, TR2, BE1, BE2, BE3)
RTR> start rtr
RTR> create facility funds_transfer/frontend=(FE1, FE2, FE3) -
_RTR> /router=(TR1, TR2) -
_RTR> /backend=(BE1, BE2, BE3) -
_RTR> /call_out=router
To avoid problems with quorum resolution, design your configuration with an odd number of routers. This ensures that quorum can be achieved.
To improve failover, place your routers on separate nodes from your backends. This way, failure of one node does not take out both the router and the backend.
If your application requires frontend failover when a router fails, frontends must be on separate nodes from the routers, but frontends and routers must be in the same facility. For frontend failover, a frontend must be in a facility with multiple routers.
To configure a node used only for quorum resolution, define the node as a router, or as a router and frontend. On this node, define all backends in the facility, but no other frontends.

With a widely dispersed set of nodes (such as nodes distributed across an entire country), use local routers to deal with local frontends. This can be more efficient than having many dispersed frontends connect to a small number of distant routers.
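As a sketch under the guidelines above (the node name QR1 is hypothetical), a quorum-only router node for the funds_transfer facility could be defined with:

```
% rtr
RTR> start rtr
RTR> create facility funds_transfer/router=QR1 -
_RTR> /backend=(BE1, BE2, BE3)
```

No frontends appear in this definition, so QR1 contributes only to quorum resolution.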
In some configurations, it may be more effective to place routers near
backends.
2.7 Router Load Balancing
Router load balancing, or intelligent reconnection of frontends to routers, allows a frontend to select the router that has the least loading; the /BALANCE qualifier of the CREATE FACILITY and SET FACILITY commands controls this behavior, letting frontends determine their router connection. The RTR Version 2 implementation of load balancing treated all routers as equal, which could cause reconnection timeouts with geographically distant routers.
When used with CREATE FACILITY, the /BALANCE qualifier specifies that load balancing is enabled for frontend-to-router connections across the facility.
Use SET FACILITY/[NO]BALANCE to switch load balancing off and on.
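For example, assuming the funds_transfer facility from Section 2.2 has already been created, load balancing could be switched on and off with:

```
% rtr
RTR> set facility funds_transfer/balance
RTR> set facility funds_transfer/nobalance
```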
The default behavior (/NOBALANCE) connects a frontend to the preferred router. Preferred routers are selected in the order specified in the /ROUTER=(tr1,tr2,tr3,...) qualifier used with the CREATE FACILITY command. If the /ALL_ROLES qualifier is also used, the nodes specified by that qualifier have a lower priority than the nodes specified by the /ROUTER qualifier. Automatic failback ensures that the frontend reconnects to the first router in the order when it becomes available. Manual balancing can be attained by specifying different router orders across the frontends.
When the /BALANCE qualifier is used, the list of routers specified in the router list is randomized, making the preferred router a random selection within the list. Randomness assures that there will be a balance of load in a configuration with a large number of frontends. Automatic failback will maintain the load distribution on the routers. Failback is controlled at a limited rate so as not to overload configurations with a small number of routers.
Assume the following command is issued from a frontend:
RTR> CREATE FACILITY test/FRONTEND=Z/ROUTER=(A,B,C)
The frontend attempts to select a router based on the priority list A, B, C, with A being the preferred router. If the /BALANCE qualifier is added to this command, the preferred router is instead selected at random from the three nodes. This random list exists for the duration of the facility; after the facility is stopped, a new random list is made when the facility is created again. The exception is a router that does not have quorum (sufficient access to backend systems): such a router accepts no connections from frontend systems until it has again achieved quorum. The /BALANCE qualifier is valid only for frontend systems.
Consider the following points when using load balancing:
The commands to set or show load balancing are:
Adding concurrent processes (concurrency) usually increases performance. Concurrency permits multiple server channels to be connected to an instance of a partition.
Concurrency should be added during the testing phase, before an application goes into production.
Consider the following factors when adding concurrency:
RTR supports two levels of rights or privileges:
In general, rtroper or RTR$OPERATOR is required to issue any command that affects the running of the system, and rtrinfo or RTR$INFO is required for using monitor and display commands.
Setting RTR Privileges on UNIX Systems
On UNIX machines, RTR privileges are determined by the user ID and group membership. For RTR users and operators, create the group rtroper and add RTR operators and users as appropriate.
The root user has all privileges needed to run RTR. Users in the group rtroper also have all privileges with respect to RTR, but may not have sufficient privilege to access resources used by RTR, such as shared memory or access to RTR files.
The rtrinfo group is currently used only to allow applications to call rtr_request_info(); create the group rtrinfo for such users. Users who do not fall into the above categories, but are members of the rtrinfo group, can use only the RTR commands that display information (SHOW, MONITOR, CALL RTR_REQUEST_INFO, and so on).
Depending on your UNIX system, see the addgroup, groupadd, or mkgroup commands or the System Administration documentation for details on how to add new groups to your system.
If the groups rtroper and rtrinfo are not defined, all users automatically belong to them. This means that there is no system management required for systems that do not need privilege checking.
Setting RTR Privileges on OpenVMS Systems
Use the AUTHORIZE utility to create the Rights Identifiers RTR$OPERATOR and RTR$INFO if they do not already exist on your system, and assign them to users as appropriate. The RTR System Manager must have the RTR$OPERATOR identifier or the OPER privilege.
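A sketch of the AUTHORIZE steps (the username SMITH is hypothetical):

```
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> ADD/IDENTIFIER RTR$OPERATOR
UAF> ADD/IDENTIFIER RTR$INFO
UAF> GRANT/IDENTIFIER RTR$OPERATOR SMITH
UAF> EXIT
```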
Setting RTR Privileges on Windows NT Systems
The RTR System Manager requires Administrator privileges to obtain RtrOperator rights.
2.10 RTR ACP Virtual Memory Sizing
The basic memory requirement of an unconfigured RTR ACP is approximately 5.8 Mbytes. Additional memory may be required depending on the operating system environment being used by the RTR ACP process.
Compaq strongly recommends that you allocate as much virtual memory as
possible. While there is no penalty for allocating more virtual memory
than is used, applications may fail if too little memory is allocated.
2.10.1 OpenVMS Virtual Memory Sizing
On OpenVMS the following allowances for additional virtual memory should be made:
For each | Add an additional |
---|---|
Link | 202 Kbytes |
Facility | 13 Kbytes plus 80 bytes for each link in the facility |
Client or server application process | 190 Kbytes for the first channel |
Additional application channel | 1350 bytes |
You must also prepare for the number of active transactions in the system. Unless the client applications are programmed to initiate multiple concurrent transactions, this number will not exceed the total number of client channels in the system. This should be verified with the application provider.
It is also necessary to determine the size of the transaction messages in use:
The total of all the contributions listed provides an estimate of the virtual memory requirements of the RTR ACP. Apply a generous additional safety factor to this total; it is better to grant the RTR ACP resource limits exceeding its real requirements than to risk loss of service in a production environment as a result of insufficient resource allocation. Divide the total by the virtual memory page size to obtain the requirement in pages, and set process memory and page file quotas to accommodate at least this much memory.
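The arithmetic above can be sketched as a small helper. The function and the 25% safety factor are illustrative assumptions, not part of RTR; the per-item figures are the OpenVMS values from the table above.

```python
# Sketch of the OpenVMS RTR ACP virtual-memory estimate described above.
# The 25% safety factor is an illustrative assumption, not an RTR-documented value.

KB = 1024
MB = 1024 * KB

def estimate_acp_memory(links, facilities, processes, extra_channels,
                        safety_factor=1.25):
    """Return an estimated RTR ACP virtual-memory requirement in bytes."""
    total = 5.8 * MB                               # unconfigured ACP baseline
    total += links * 202 * KB                      # per link
    total += facilities * (13 * KB + 80 * links)   # per facility, plus 80 B per link
    total += processes * 190 * KB                  # first channel of each application process
    total += extra_channels * 1350                 # each additional application channel
    return int(total * safety_factor)

# Example: 8 links, 1 facility, 4 application processes, 12 extra channels.
print(estimate_acp_memory(8, 1, 4, 12) // MB, "MB (approx.)")  # prints: 10 MB (approx.)
```

Remember to add the expected transaction-message volume on top of this estimate, as described above.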
Process quotas are controlled by qualifiers to the START RTR command. START RTR accepts the /LINKS and /PROCESSES qualifiers, which specify the expected number of links and application processes in the configuration. The values supplied are used to calculate reasonable and safe minimum values for the following RTR ACP process quotas:
Both the /LINKS and /PROCESSES qualifiers have high default values.
The default value for /LINKS is now 512. This value is high but is chosen to protect RTR routers against a failover where the number of frontends is large and the number of surviving routers is small. The maximum value for /LINKS is 1200, which is unchanged from earlier versions of RTR on OpenVMS.
The default value for /PROCESSES is 64. This value is large for frontend and router nodes but is sized for backends hosting applications. Backends with complex applications may have to set this value higher. The maximum value for /PROCESSES is the OpenVMS allowed maximum. Warning messages are generated if the requested (or default) memory quotas conflict with the system-wide WSMAX parameter, or if the calculated or specified page file quota is greater than the remaining free page file space.
The default values for /LINKS and /PROCESSES require a large page file. RTR issues a warning if insufficient free space remains in the page file to accommodate RTR, so choose values appropriate for your configuration.
The /LINKS and /PROCESSES qualifiers do not take into account memory requirements for transactions. If an application passes a large amount of data from client to server or vice-versa, this should be included in the sizing calculations. For further information on the START RTR qualifiers, see the START RTR command in the Command Reference section.
Once the requirements have been determined, start RTR with either the /PGFLQUOTA qualifier or the /LINKS and /PROCESSES qualifiers set accordingly, to ensure that the appropriate virtual memory quotas are in effect.
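As a sketch, for a backend expected to carry about 40 links and 20 application processes (the values are illustrative only; size them for your own configuration):

```
% rtr
RTR> start rtr/links=40/processes=20
```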
The OpenVMS AUTHORIZE utility does not play a role in the determination of RTR ACP quotas. RTR uses AUTHORIZE quotas for the command line interface and communication server, COMSERV. Virtual memory sizing for the RTR ACP is determined through the qualifiers of the START RTR command.