The RTR environment has two parts:
You manage your RTR environment from a management station, which can be on a node running RTR or on some other node. You can manage your RTR environment either from a network browser on your management station, or from the command line using the RTR CLI. When you manage from a browser, the processes communicate using the HTTP protocol.
The RTR system management environment contains four processes:
The RTR Control Process, RTRACP, is the master program. It resides on every node where RTR has been installed and is running. RTRACP performs the following functions:
RTRACP handles interprocess communication and network traffic, and serves as the main repository of runtime information. ACP processes operate across all RTR roles and execute certain commands both locally and on remote nodes. These commands include:
RTR CLI is the Command Line Interface that:
Commands executed directly by the CLI include:
RTR COMSERV is the Command Server Process that:
The Command Server Process executes commands both locally and across nodes. Commands that can be executed at the RTR COMSERV include:
The RTR system management environment is illustrated in Figure 5-1.
Figure 5-1 RTR System Management Environment
RTR Monitor pictures let you view the status and activities of RTR and
your applications. A monitor picture is dynamic; its data is updated
periodically. RTR SHOW commands also let you view status, but as
snapshots, giving you a view at one moment in time. A full list of RTR
Monitor pictures is available in the "RTR Monitoring" chapter of the
RTR System Manager's Manual and in the help file under RTR_Monitoring.
Many RTR Monitor pictures are also available through the RTR browser
interface.
Transaction Management
The RTR transaction is the heart of an RTR application, and transaction state characterizes the current condition of a transaction. As a transaction passes from one state to another, it undergoes a state transition. Transaction states are maintained in memory, and some are stored in the RTR journal for use in recovery.
RTR uses three transaction states to track transaction status:
Transaction runtime state describes how a transaction progresses from the point of view of RTR roles (FE, TR, BE). A transaction, for example, can be in one state as seen from the frontend, and in another as seen from the router.
Transaction journal state describes how a transaction progresses from the point of view of the RTR journal. This state, which is managed by the backend and not visible to frontends or routers, is used by RTR to replay a transaction during recovery after a failure.
Transaction server state, also managed by the backend, describes how a transaction progresses from the point of view of the server. RTR uses this state to determine if a server is available to process a new transaction, or if a server has voted on a particular transaction.
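The three state views above are independent: one transaction carries a runtime state, a journal state, and a server state at the same time, and the backend advances the journal and server views on its own. The following Python sketch illustrates that separation; the state names are hypothetical placeholders, not RTR's actual state names (those are listed in the RTR System Manager's Manual).

```python
from enum import Enum, auto

# Hypothetical state names, for illustration only.
class RuntimeState(Enum):   # how FE, TR, and BE see the transaction
    ACTIVE = auto()
    VOTING = auto()
    DONE = auto()

class JournalState(Enum):   # the journal's view, backend only
    NOT_JOURNALED = auto()
    JOURNALED = auto()

class ServerState(Enum):    # the server's view, backend only
    PROCESSING = auto()
    VOTED = auto()

class Transaction:
    """One transaction tracked through three independent state views."""
    def __init__(self, txid):
        self.txid = txid
        self.runtime = RuntimeState.ACTIVE
        self.journal = JournalState.NOT_JOURNALED
        self.server = ServerState.PROCESSING

    def vote(self):
        # When the server votes, the backend records the vote and
        # journals the transaction for possible recovery replay;
        # each view advances independently.
        self.journal = JournalState.JOURNALED
        self.server = ServerState.VOTED
        self.runtime = RuntimeState.VOTING

tx = Transaction("tx-1")
tx.vote()
print(tx.runtime.name, tx.journal.name, tx.server.name)
# VOTING JOURNALED VOTED
```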
The RTR SHOW TRANSACTION command shows transaction status, and the RTR
SET TRANSACTION command can be used, under certain well-constrained
circumstances, to change the state of a live transaction. For more
details on use of SHOW and SET commands, see the RTR System
Manager's Manual.
Partition Management
Partitions are subdivisions of a routing key range of values used with a partitioned data model and RTR data-content routing. Partitions exist for each range of values in the routing key for which a server is available to process transactions. Redundant instances of a partition can be started in a distributed network; RTR automatically manages their state and the flow of transactions to them. Partitions and their characteristics can be defined by the system manager or operator, as well as within application programs.
RTR management functions enable the operator to manage many partition-based attributes and functions, including:
The operator can selectively inspect transactions, modify states, or remove transactions from the journal or the running RTR system. This allows for greater operational control and enhanced management of a system where RTR is running.
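The essence of data-content routing is that each partition serves one range of routing-key values, and RTR delivers a transaction to the partition whose range contains its key. A minimal Python sketch of that idea, using alphabetic key ranges like those in the glossary (A to E, F to K); the partition names and ranges here are hypothetical:

```python
# Each partition serves one key range: (name, low bound, high bound).
PARTITIONS = [
    ("PART_A_E", "A", "E"),
    ("PART_F_K", "F", "K"),
    ("PART_L_Z", "L", "Z"),
]

def route(key):
    """Return the partition whose range contains the key's first letter."""
    first = key[0].upper()
    for name, low, high in PARTITIONS:
        if low <= first <= high:
            return name
    raise LookupError(f"no partition serves key {key!r}")

print(route("Garcia"))  # PART_F_K
print(route("Baker"))   # PART_A_E
```

In a real RTR configuration the key segments are defined by the system manager or within the application, and RTR performs this routing transparently, including failover to redundant partition instances.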
For more details on managing partitions and their use in applications, see the RTR System Manager's Manual chapter "Partition Management."
When all RTR and application components are running, the RTR runtime environment contains:
Figure 5-2 shows these components and their placement on frontend, router, and backend nodes. The frontend, router, and backend can be on the same or different nodes. If these are all on the same node, there is only one RTRACP process.
Figure 5-2 RTR Runtime Environment
This concludes the material on RTR concepts and capabilities that all users and implementors should know. For more information, proceed as follows:
If you are: | Read these documents:
---|---
a system manager, system administrator, or software installer | 1. RTR Release Notes 2. RTR Installation Guide 3. RTR Migration Guide (if upgrading from RTR V2 to V3) 4. RTR System Manager's Manual
an applications or system management developer, programmer, or software engineer | 1. RTR Application Design Guide 2. RTR C++ Foundation Classes 3. RTR C Application Programmer's Reference Manual
A few additional terms are defined in the Glossary to the Reliable
Transaction Router Application Design Guide.
ACID: Transaction properties supported by RTR:
atomicity, consistency, isolation, durability.
ACP: The RTR Application Control Process.
API: Application Programming Interface.
applet: A small application designed for running on a
browser.
application: User-written software that uses RTR.
application classes: The C++ API classes used for
implementing client and server applications.
backend: BE, the physical node in an RTR facility
where the server application runs.
bank: An establishment for the custody of money, which
it pays out on a customer's request.
branch: A subdivision of a bank; perhaps in another
town.
broadcast: A nontransactional message.
callout server: A server process used for
transactional authentication.
channel: A logical port opened by an application with
an identifier to exchange messages with RTR.
client: A client is always a client application, one that initiates and demarcates a piece of work. In the context of RTR, a client must run on a node defined to have the frontend role. Clients typically deal with presentation services, handling forms input, screens, and so on. A browser, perhaps running an applet, could connect to a web application that acts as an RTR client, sending data to a server through RTR.
In other contexts, a client can be a physical system, but in the
context of RTR and in this document, such a system is always called a
frontend or a node.
client classes: C++ foundation classes used for
implementing client applications.
commit process: The transactional process by which a
transaction is prepared, accepted, committed, and hardened in the
database.
commit sequence number (CSN): A sequence number
assigned to an RTR commit group, established by the vote window, the
time interval during which transaction response is returned from the
backend to the router. All transactions in the commit group have the
same CSN and lock the database.
common classes: C++ foundation classes that can be
used in both client and server applications.
concurrent server: A server process identical to other
server processes running on the same node.
CPU: Central processing unit.
data marshalling: The capability of using systems of
different architectures (big endian, little endian) within one
application.
data object: See RTRData object.
deadlock: Deadly embrace, a situation that occurs when
two transactions or parts of transactions conflict with each other,
which could violate the consistency ACID property when committing them
to the database.
disk shadowing: A process by which identical data are
written to multiple disks to increase data availability in the event of
a disk failure. Used in a cluster environment to replicate entire disks
or disk volumes. See also transactional shadowing.
dispatch: A method in the C++ API RTRData class which,
when called, interprets the contents of the RTRData object and calls an
appropriate handler to process the data. The handler chosen to process
the data is the handler registered with the transaction controller.
This method is used with the event-driven receive model.
DTC: Microsoft Distributed Transaction Coordinator.
endian: The byte-ordering of multibyte values. Big
endian: high-order byte at starting address; little endian: low-order
byte at starting address.
event: RTR or application-generated information about
an application or RTR.
event driven: A processing model in which the
application receives messages and events by registering handlers with
the transaction controller. These handlers are derived from the C++
foundation class message and event-handler classes.
event handler: A C++ API-derived object used in
event-driven processing that processes events.
facility: The mapping between nodes and roles used by
RTR and established when the facility is created.
facility manager: A C++ API management class that
creates and deletes facilities.
facility member: A defined entity within a facility. A
facility member is a role and node combined. Can be a client, router or
server.
failover: The ability to continue operation on a
second system when the first has failed or become disconnected.
failure tolerant: Software that enables an application
to continue when failures such as node or site outages occur. Failover
is automatic.
fault tolerant: Hardware built with redundant
components to ensure that processing survives component failure.
frontend: FE, the physical node in an RTR facility
where the client application runs.
FTP: File transfer protocol.
inquorate: Nodes/roles that cannot participate in a
facility's transactions are inquorate.
journal: A file containing transactional messages used
for recovery.
key range: An attribute of a key segment, for example
a range A to E or F to K.
key segment: An attribute of a partition that defines
the type and range of values that the partition handles.
LAN: Local area network.
link: A communications path between two nodes in a
network.
local node: The node on which a C++ API client or
server application runs. The local node is the computer on which this
instance of the RTR application is executing.
management classes: C++ API classes used by new or
existing applications to manage RTR.
member: See facility member.
message: A logical grouping of information transmitted
between software components, typically over network links.
message handler: A C++ API-derived object used in
event-driven processing that processes messages.
multichannel: An application that uses more than one
channel. A server is usually multichannel.
multithreaded: An application that uses more than one
thread of execution in a single process.
MS DTC: Microsoft DTC; see DTC.
node: A physical system.
nontransactional message: A message containing data
that is not part of a transaction, such as a broadcast or diagnostic
message. See transactional message.
partition: RTR transactions can be sent to a specific
database segment or partition. This is data content routing and handled
by RTR when so programmed in the application and specified by the
system administrator. A partition can be in one of three states:
primary, standby, and shadow.
partition properties: Information about the attributes
of a partition.
polling: A processing method where the application
polls for incoming messages.
primary: The state of the partition servicing the
original data store or database. A primary has a secondary or shadow
counterpart.
process: The basic software entity, including address
space, scheduled by system software, that provides the context in which
an image executes.
properties: Application, transaction and system
information.
property classes: Classes used for obtaining
information about facilities, partitions, and transactions.
quorate: Nodes/roles in a facility that has quorum are
quorate.
quorum: The minimum number of routers and backends in
a facility, usually a majority, who must be active and connected for
the valid completion of processing.
quorum node: A node, specified in a facility
as a router, whose purpose is not to process transactions but to ensure
that quorum negotiations are possible.
quorum threshold: The minimum number of routers and
backends in a facility required to achieve quorum.
roles: Roles are defined for each node in an RTR
configuration based on the requirements of a specific facility. Roles
are frontend, router, or backend.
rollback: When a transaction has been committed on the
primary database but cannot be committed on its shadow, the committed
transaction must be removed or rolled back to restore the database to
its pre-transaction state.
router: The RTR role that manages traffic between RTR
clients and servers.
RTR configuration: The set of nodes, disk drives, and
connections between them used by RTR.
RTR environment: The RTR run-time and system
management areas.
RTRData object: An instance of the C++ API RTRData
class. This object contains either a message or an event. It is used
for both sending and receiving data between client and server
applications.
secondary: See shadow.
server: A server is always a server application or process, one that reacts to a client application's units of work and carries them through to completion. This may involve updating persistent storage such as a database file, toggling the switch on a device, or performing another pre-defined task. In the context of RTR, a server must run on a node defined to have the backend role.
In other contexts, a server may be a physical node, but in RTR and in
this document, physical servers are called backends or nodes.
server classes: C++ foundation classes used for
implementing server applications.
shadow: The state of the server process that services
a copy of the data store or primary database. In the context of RTR,
the shadow method is transactional shadowing, not disk shadowing. Its
counterpart is primary.
SMP: Symmetric MultiProcessing.
standby: The state of the partition that can take over
if the process for which it is on standby is unavailable. It is held in
reserve, ready for use.
TPS: Transactions per second.
transaction: An operation performed on a database, typically causing an update to the database. Analogous in many cases to a business transaction such as executing a stock trade or purchasing an item in a store. A business transaction may consist of one or more than one RTR transaction. A transaction is classified as original, replay, or recovery, depending on how it arrives at the backend:
Original---Transaction arrived on the first attempt from the client.
Replay---Transaction arrived after some failure as the result of a re-send from the client (that is, from the client transaction-replay buffers in the RTRACP).
Recovery---Transaction arrived as the result of a backend-to-backend recovery operation (recovery from the journal).
transaction controller: A transaction controller
processes transactions. A transaction controller may have 0 or 1
transactions active at any moment in time. It is through the
transaction controller that messages and events are sent and received.
transactional message: A message containing
transactional data.
transactional shadowing: A process by which identical
transactional data are written to separate disks often at separate
sites to increase data availability in the event of site failure. See
also disk shadowing.
two-phase commit: A database commit/rollback concept
that works in two steps: 1. The coordinator asks each local recovery
manager if it is able to commit the transaction. 2. If and only if all
local recovery managers agree that they can commit the transaction, the
coordinator commits the transaction. If one or more recovery managers
cannot commit the transaction, then all are told to roll back the
transaction. Two-phase commit is an all-or-nothing process: either all
of a transaction is committed, or none of it is.
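The two steps above can be sketched in a few lines of Python. This is an illustrative model of the protocol only, not an RTR API; the coordinator and participant classes are hypothetical.

```python
class Participant:
    """A local recovery manager that votes on and applies a transaction."""
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: the coordinator asks whether we can commit.
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"

def two_phase_commit(participants):
    # Phase 1: collect a vote from every local recovery manager.
    if all(p.prepare() for p in participants):
        # Phase 2: unanimous yes, so every participant commits.
        for p in participants:
            p.commit()
        return "committed"
    # Any "no" vote rolls everyone back: all or nothing.
    for p in participants:
        p.rollback()
    return "rolled back"

ok = two_phase_commit([Participant("db1"), Participant("db2")])
bad = two_phase_commit([Participant("db1"),
                        Participant("db2", can_commit=False)])
print(ok, bad)  # committed rolled back
```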
WAN: Wide area network.