Generally, RTR Version 3 makes more demands on system memory than RTR Version 2, which can reduce performance. Adding memory may improve performance.
Table 2-2 lists the OpenVMS requirements for space on the system disk. These sizes are approximate; actual size may vary depending on system environment, configuration, and software options. For additional details, see the Reliable Transaction Router for OpenVMS Software Product Description.
Requirement | RTR Version 2 | RTR Version 3 |
---|---|---|
Disk space (installation) | 40,000 blocks (20MB) | 50,000 blocks (25MB) |
Disk space (permanent) | 24,000 blocks (12MB) | 36,000 blocks (18MB) |
To restore the RTR Version 2 environment if RTR Version 3 does not work with your applications as expected, use the following procedure:
$ RTR STOP RTR
$ RTR DISCONNECT SERVER
RTR Version 3 introduces certain process and other architectural
changes. The following sections highlight these changes.
3.1 RTR Daemon Process
In RTR Version 3, a new RTR daemon process (called RTRD)
is used by the RTRACP process to build TCP/IP connections for internode
links. The RTR daemon process is present only on systems with IP
networking installed
and with IP enabled as an RTR transport (see Chapter 4, Network Issues, for
information on setting your network transport).
3.2 Command Server Process
The command server process name is RTRCSV_<username>.
In RTR Version 2, a command server was started for every user login invocation to RTR to enter operator commands. With RTR Version 3 there is one command server per node for each user logged in through a common user name.
Command server timeouts are the same in RTR Version 3 as in RTR Version 2.
In RTR Version 3, LIBRTR supersedes RTRSHR. The library module LIBRTR contains most of the RTR code; in RTR Version 2, RTRSHR contained only the RTR code specific to the application context. In RTR Version 3, all RTR Version 2 binaries have been superseded by the two executables LIBRTR.EXE and RTR.EXE. Table 3-1 shows the executables of RTR Version 2 and Version 3.
RTR Version 2 | RTR Version 3 |
---|---|
RTRSHR | LIBRTR |
RTR | RTR |
RTRCOMSERV | Now part of LIBRTR. |
RTRACP | Now part of LIBRTR. |
RTRRTL | No longer applies.
3.4 The ACP Process
The RTR Application Control Process (ACP) handles application control,
and has the process name RTRACP. This is unchanged from RTR Version 2.
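For example, one way to confirm which RTR processes (RTRACP, RTRD, and any command servers) are present on a node is to capture the system process list and search it; this is only a sketch, and the output file name is arbitrary:
$ SHOW SYSTEM/OUTPUT=RTR_PROCS.LIS              ! capture the process list to a file
$ SEARCH RTR_PROCS.LIS "RTRACP","RTRD","RTRCSV_" ! look for the RTR process names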
3.5 Interprocess Communication
In RTR Version 2, global sections (cache) were used for interprocess communication. In RTR Version 3, interprocess communication is handled with mailboxes. Each RTR process, including any application process, has three mailboxes to communicate with the RTRACP process:
With RTR Version 2, the SHOW RTR/PARAMS command showed the following:
The /PARAMS qualifier is obsolete in RTR Version 3, and the parameters
it showed no longer apply. In RTR Version 3, these parameters are
handled with OpenVMS mailboxes, which you can check using OpenVMS
procedures. See the OpenVMS System Manager's Manual:
Essentials for more information.
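For example, a minimal check from DCL (mailbox devices on OpenVMS appear as MBAnnnn: units; exactly which units RTR is using varies by system):
$ SHOW DEVICE MB    ! list mailbox devices, including those created for RTR interprocess communication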
3.7 Counters
In RTR Version 2, shared memory in global sections was directly
accessible using the RTR command server. In RTR Version 3, process
counters are still kept in shared memory, but they are accessed
from the command server through RTRACP. Thus, accessing these and other
counters involves communicating with RTRACP. Other counters are
contained within the address space of the ACP.
3.8 Quorum Issues
Network partitioning in RTR Version 3 is based on a router and backend count, whereas in RTR Version 2 it was based on quorum. However, quorum is still used in RTR Version 3; state names and some quorum-related displays have changed.
Additionally, the quorum-related condition of a node in a minority
network partition is handled more gracefully in
RTR Version 3. In RTR Version 2, a shadowed node in a minority network
partition would just lose quorum; in RTR Version 3, the MONITOR QUORUM
command states that the node is "in minority," providing more
information. The algorithms used to determine quorum have also changed
significantly to allow a more stable traffic pattern.
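For example, to observe the quorum state of nodes in a facility, you can run the monitor picture named above, shown here invoked from DCL (it can also be entered at the RTR> prompt):
$ RTR MONITOR QUORUM    ! displays quorum state, including nodes that are "in minority"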
3.9 Server-Process Partition States
As in RTR Version 2, there are three server-process partition states:
With RTR Version 3, a server process that is initially the primary in a standby or shadow environment returns to the primary role after recovery from a network loss, provided the servers have not been restarted and both servers are accessible. (With RTR Version 2, there was no way to specify which node would become the primary after network recovery, so the location of the primary server after a network outage was unpredictable.)
With RTR Version 3.2, RTR provides commands such as SET PARTITION/PRIMARY that the operator can use to specify a process partition state.
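For example, a sketch of forcing a partition to the primary role with RTR Version 3.2 (the partition name PART1 is hypothetical; see the Reliable Transaction Router System Manager's Manual for the full command syntax and qualifier list):
$ RTR SET PARTITION PART1 /PRIMARY    ! make the server for partition PART1 the primary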
With RTR Version 3, two network transports are available: DECnet and TCP/IP.
At least one transport is required. If a destination supports both transports, RTR Version 3 can use either.
Any node can run either protocol, but the appropriate transport software must be running on that node. For example, for a node to use the DECnet protocol, the node must be running DECnet software. (For specific software network version numbers, see the RTR Version 3 OpenVMS Software Product Description.)
A link can fail over to either transport within RTR. Sufficient
redundancy in the RTR configuration provides greater flexibility to
change transports for a given link when necessary.
4.1 DECnet Support
With RTR Version 2, the only transport was DECnet Phase IV; DECnet Phase V was supported, but without longnames. With RTR Version 3, both DECnet Phase IV and DECnet-Plus (DECnet/OSI or DECnet Phase V) are supported, including support for longnames and long addresses.
4.2 TCP/IP Support
DECnet-Plus and TCP/IP provide multihoming capability: a multihomed IP node can have more than one IP address. RTR does name lookups and name-to-address translations, as appropriate, using a name server. To use multihomed and TCP/IP addresses, Compaq recommends that you have a local name server that provides the names and addresses for all RTR nodes. The local name server should be available and responsive.
Name servers for all nodes used by RTR should contain the node names and addresses of all RTR nodes. Local RTR name databases must be consistent.
Include all possible addresses of nodes used by RTR, even those addresses not actually used by RTR. For example, a node may have two addresses, but RTR uses only one. Include both addresses in the local name database.
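As an illustration only, with Compaq TCP/IP Services for OpenVMS you could add a multihomed RTR node to the local host database as follows (the node name and addresses are invented for this example):
$ TCPIP SET HOST BRONZE /ADDRESS=(10.1.1.5,10.2.1.5)    ! record both addresses, even if RTR uses only one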
During installation, the system manager can specify either transport as the default by defining the logical name RTR_PREF_PROT with the value RTR_DNA_FIRST or RTR_TCP_FIRST. For example, in the RTR$STARTUP.COM file (found in SYS$STARTUP), the following line specifies DECnet as the default transport:
$ DEFINE/SYSTEM RTR_PREF_PROT RTR_DNA_FIRST
To set the default transport to TCP/IP, remove (comment out) this definition from RTR$STARTUP.COM and restart RTR. For the change to take immediate effect, you must undefine the old logical name before restarting RTR.
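For example, a sketch of making the change take immediate effect, assuming RTR is restarted with the startup procedure in SYS$STARTUP:
$ RTR STOP RTR                     ! stop RTR before changing the transport preference
$ DEASSIGN/SYSTEM RTR_PREF_PROT    ! undefine the old logical name
$ @SYS$STARTUP:RTR$STARTUP.COM     ! restart RTR with the edited startup file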
You can also change the RTR_PREF_PROT definition in RTR$STARTUP.COM to the following:
$ DEFINE/SYSTEM RTR_PREF_PROT RTR_TCP_FIRST
When creating a facility using TCP/IP as the default, you can specify dna.nodename to override TCP/IP and use DECnet for a specific link. Similarly, when using DECnet as the default, you can specify tcp.nodename to use TCP/IP for a specific link. If the wrong transport has been assigned to a link, trim all facilities that use the link to remove the affected nodes (use the TRIM FACILITY command), then add the nodes back into each facility, specifying the correct transport.
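For example, a facility created with TCP/IP as the default transport can force DECnet for one backend link by prefixing that node name; the facility and node names here are hypothetical:
$ RTR
RTR> CREATE FACILITY FUNDS /FRONTEND=FE1 /ROUTER=RTR1 /BACKEND=dna.BE1
RTR> EXIT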
To run the DECnet protocol exclusively, use the following definition for the RTR preferred protocol logical name:
$ DEFINE/SYSTEM RTR_PREF_PROT RTR_DNA_ONLY
For examples of this command syntax, see the section on Network
Transports in the Reliable Transaction Router System Manager's
Manual.
4.3.1 Supported Products
Network products supported are listed in the RTR Version 3 Software Product Description.
A number of changes that affect system management have been introduced
with RTR Version 3. The following sections describe these changes.
5.1 OpenVMS Quotas
RTR Version 2 used OpenVMS quota values specified on the RTR START command or calculated defaults. Because RTR Version 3 uses dynamic allocation (with the exception of the number of partitions, which is statically defined), RTR does not calculate the required quotas, but depends on the system manager to configure quotas adequately. The maximum number of partitions is now set at 500. (See the RTR Release Notes for further information on partitions.)
For example, with RTR Version 2 you were required to explicitly specify the number of links or the number of facilities if defaults were too low. You no longer need to specify each RTR parameter value manually. Additionally, because RTR Version 3 uses mailboxes, you use the appropriate OpenVMS quotas to establish sufficient resources to support RTR Version 3 interprocess communication.
In RTR Version 3, all these parameters are governed by OpenVMS quotas.
To establish these for RTR Version 3, DIGITAL recommends that you
record the actual quotas used by RTR Version 2 on each node and add 50
percent to these values for RTR Version 3. See Table 2-1 for some
specifics.
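For example, a minimal sketch of raising process quotas with AUTHORIZE for the account that runs RTR and its applications (the account name and values are illustrative, not recommendations; derive your values from the RTR Version 2 figures plus 50 percent):
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY RTR_SERVER /BYTLM=200000 /ASTLM=600 /TQELM=200
UAF> EXIT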
5.2 Startup
There is a new RTR$STARTUP.COM file in SYS$STARTUP.
It contains several changes including specifying RTR file locations,
and choice of transport (protocol).
5.3 Creating Facilities
You create facilities the same way in RTR Version 3 as in RTR Version 2.
5.3.1 Naming Nodes
With the addition of TCP/IP and DECnet-Plus (DECnet/OSI) to RTR Version
3, you can now use longnames for node names.
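For example, a fully qualified TCP/IP node name can now be used directly when creating a facility; the facility and node names below are hypothetical:
RTR> CREATE FACILITY FUNDS /ALL_ROLES=bronze.site.example.com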
5.3.2 Modifying Facility Configurations
To modify facilities, you use the same procedures in RTR Version 3 as in RTR Version 2. One facility command has been changed:
RTR Version 2 | RTR Version 3 |
---|---|
SET FACILITY/BROADCAST=MINIMUM=n | SET FACILITY/BROADCAST_MINIMUM_RATE=n |
All supported operating systems can interoperate in the RTR environment, as described in Table 5-1.
RTR Version 3 nodes interoperate with... | Description |
---|---|
Other RTR Version 2 nodes | In RTR Version 3, RTR uses data marshalling (examination of the byte format of messages) and can handle data of more than one byte format, making the appropriate translation as required. However, an application running with RTR may not adequately handle the different byte formats used on different hardware architectures. RTR Version 3 lets you run both RTR Version 2 and RTR Version 3 nodes in the same environment, but because the RTR Version 2 API does not have the data marshalling capability, an RTR Version 2 application must deal with the different data formats. |
Other RTR Version 3 nodes | RTR Version 3 is fully compatible with other nodes running RTR Version 3. See the RTR Version 3 Release Notes for specifics on known requirements and restrictions. |
Several screens that provide dynamic information on transactions and
system state have changed for RTR Version 3, as described in the
following sections.
5.5.1 RTR Version 2 Screens
Table 5-2 lists the RTR Version 2 screens that are no longer available in RTR Version 3. In general, information in these monitor pictures is no longer applicable. For example, there is no longer a need to examine cache, because RTR Version 3 deals with memory management using OpenVMS mailboxes.
bequorum | cache | chmdata |
chmmsg | declare | delayproc |
dtinfo | failure | memory |
msgacpsys | packets | process |
toptps | trquorum |
Table 5-3 lists the monitor screens that are new to RTR Version 3.
Picture | Description |
---|---|
accfail | Shows most recent links on which a connection attempt was declined. |
acp2app | Shows RTRACP-to-application message counts. |
app2acp | Shows application-to-RTRACP message counts. |
broadcast | Shows information about RTR user events by process. |
connect | Renamed to netstat. |
connects | Shows connection status summary. |
dtx | Shows distributed transaction calls status. |
dtxrec | Shows distributed transaction recovery status and journal states. |
event | Shows event routing data by facility. |
frontend | Shows frontend status and counts by node and facility, including frontend state, current router, reject status, retry count, and quorum rejects. |
group | Shows server and transaction concurrency by partition. |
ipc | Shows interprocess communication counts for messages and connects. |
jcalls | Displays counts of successful (success), failed (fail), and total journal calls for local and remote journals. |
netstat | Shows link counters relating to connection management, with link state and architecture of remote nodes. |
rdm | Shows memory used by each RTR subsystem. |
rejects | Displays the last rtr_mt_rejected message received by each running process. |
rejhist | Displays the last ten rtr_mt_rejected messages received by each running process. |
response | Displays the elapsed time that a transaction has been active on the opened channels of a process. |
rfb | Displays router failback operations, both a summary and detail by facility. |
routing | Displays statistics of transaction and broadcast traffic by facility. |
rscbe | Displays the most recent calls history for the RSC subsystem on a backend node. |
system | Displays a high-level summary of critical monitor pictures. |
tpslo | Shows transaction commits by process. |
trans | Displays transactions by frontend, facility, and user for each frontend, router, and backend. |
V2calls | Shows RTR Version 2 system service calls. |
XA | Displays status of XA interface activities. |
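For example, any of the pictures in Table 5-3 can be displayed with the MONITOR command, either from DCL or at the RTR> prompt:
$ RTR MONITOR ipc       ! interprocess communication counts
$ RTR MONITOR jcalls    ! journal call counts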