Reliable Transaction Router
Getting Started



RTR Server Types

In the RTR environment, in addition to the placement of frontends, routers, and servers, the application designer must determine what server capabilities to use. RTR provides four types of software servers for application use: standby servers, transactional shadow servers, concurrent servers, and callout servers.

These are described in the next few paragraphs. You specify server types to your application in RTR API calls.

RTR server types help to provide continuous availability and a secure transactional environment.


Standby server


The standby server remains idle while the RTR primary backend server performs its work, accepting transactions and updating the database. When the primary server fails, the standby server takes over, recovers any in-progress transactions, updates the database, and communicates with clients until the primary server returns. There can be many instances of a standby server. Activation of the standby server is transparent to the user.

A typical standby configuration is shown in Figure 1-12, Standby Server Configuration. Both physical servers running the RTR backend software are assumed by RTR to connect to the same database. The primary server is typically in use, and the standby server can be either idle or used for other applications, data partitions, or facilities. When the primary server becomes unavailable, the standby server takes over and completes transactions, as shown by the dashed line. Primary server failure can be caused by failure of the server process or of the backend (node).


Standby in a cluster


The intended and most common use of a standby server is in a cluster environment. In a non-cluster environment, seamless failover of standbys is not guaranteed.

Standby servers are "spare" servers which automatically take over from the main backend if it fails. This takeover is transparent to the application.

Figure 1-15 shows a simple standby configuration. The two backend nodes are members of a cluster environment, and are both able to access the database.

For any one key range, the main or primary server (Server) runs on one node while the standby server (Standby) runs on the other node. The standby server process is running, but RTR does not pass any transactions to it. Should the primary node fail, RTR starts passing transactions to the standby server (Standby). Note that one node can hold the primary servers for one key range and the standby servers for another key range, to balance the load across systems. This allows the nodes in a cluster environment to act as standbys for other nodes without leaving hardware idle. When setting up a standby server, both servers must have access to the same journal.

Figure 1-15 Standby Servers



Transactional shadow server


The transactional shadow server places all transactions recorded on the primary server on a second database. The transactional shadow server can be at the same site or at a different site, and must exist in a networked environment.

A transactional shadow server can also have standby servers for greater reliability. When one member of a shadow set fails, RTR remembers the transactions executed at the surviving site in a journal, and replays them when the failed site returns. Only after all journaled transactions are recovered does the recovering site receive new online transactions. Transactional shadowing is done by partition. A transactional shadow configuration can have only two members of the shadow set.

Shadow servers are servers on separate backends which handle the same transactions in parallel on identical copies of the database.

Figure 1-16 shows a simple shadow configuration. The main (BE) Server at Site 1 and the shadow server (Shadow) at Site 2 both receive every transaction for the data partition they are servicing. Should Site 1 fail, Site 2 continues to operate without interruption. Sites can be geographically remote, for example, available at separate locations in a wide area network (WAN).

Figure 1-16 Shadow Servers


Note that each shadow server can also have standby servers.


Concurrent server


The concurrent server is an additional instance of a server application running on the same node. RTR delivers transactions to a free server from the pool of concurrent servers. If one server fails, the transaction in process is replayed to another server in the concurrent pool. Concurrent servers are designed primarily to increase throughput and can exploit Symmetric Multiprocessing (SMP) systems. Figure 1-17, Concurrent Servers, illustrates the use of concurrent servers sending transactions to the same partition on a backend, the partition A-N.

Concurrent servers allow transactions to be processed in parallel to increase throughput. Concurrent servers deal with the same database partition, and may be implemented as multiple channels within a single process or as one channel in separate processes.

Figure 1-17 Concurrent Servers



Callout server


The callout server provides message authentication on transaction requests made in a given facility, and could be used, for example, to provide audit trail logging. A callout server can run on either backend or router nodes. A callout server receives a copy of all messages in a facility. Because the callout server votes on the outcome of each transaction it receives, it can veto any transaction that does not pass its security checks.

A callout server is facility based, not partition based; any message arriving at the facility is routed both to the appropriate server and to the callout server. A callout server is enabled when the facility is defined. Figure 1-18 illustrates the use of a callout server that authenticates every transaction (txn) in a facility.

Figure 1-18 A Callout Server


To authenticate any part of a transaction, the callout server must vote on the transaction, but does not write to the database. RTR does not replay a transaction that is only authenticated.


Authentication


RTR callout servers provide partition-independent processing for authentication. For example, a callout server can enable checks to be carried out on all requests in a given facility.

Callout servers run on backend or router nodes. They receive a copy of every transaction either delivered to or passing through the node.

Callout servers offer the following advantages:

Since this technique relies on backing out unauthorized transactions, it is most suitable when only a small proportion of transactions are expected to fail the security check; otherwise, the cost of backing out failed transactions can affect performance.


Partition


When working with database systems, partitioning the database can be essential to ensuring smooth performance with a minimum of bottlenecks. When you partition your database, you locate different parts of it on different disk drives, both to spread the physical storage across different physical media and to balance access traffic across different disk controllers and drives.

A partition is a segment of your database. For example, in a banking environment, you could partition your database by account number, as shown in Figure 1-19.

Figure 1-19 Bank Partitioning Example



Key range


Once you have decided to partition your database, you use key ranges in your application to specify how to route transactions to the appropriate database partition. A key range is the range of data held in each partition. For example, the key range for the first partition in the bank partitioning example goes from 00001 to 19999. You can assign a partition name in your application program or have it set by the system manager. Note that the terms key range and partition are sometimes used as synonyms in RTR code examples and samples, but strictly speaking, the key range defines the partition. A partition has both a name (its partition name) and an identifier generated by RTR (the partition ID). The properties of a partition (callout, standby, shadow, concurrent, key segment range) can be defined by the system manager with a CREATE PARTITION command. For details of the command syntax, see the RTR System Manager's Manual.
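To make the routing idea concrete, the following sketch (ordinary C++, not RTR code or syntax) shows how a key value such as an account number selects the partition whose key range contains it. The structure, partition names, and ranges below are hypothetical and purely illustrative; in an RTR application the key ranges are declared to RTR, which then routes each transaction to the backend serving the matching partition.

// Illustrative sketch only: map an account number to the partition
// whose key range contains it. Names and ranges are hypothetical.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

struct KeyRange
{
    std::uint32_t low;        // lowest account number in the partition
    std::uint32_t high;       // highest account number in the partition
    std::string   partition;  // partition holding this range of accounts
};

// Return the name of the partition responsible for a given account number.
std::string route(const std::vector<KeyRange>& ranges, std::uint32_t account)
{
    for (const auto& r : ranges)
        if (account >= r.low && account <= r.high)
            return r.partition;
    return "UNASSIGNED";
}

int main()
{
    // Ranges follow the bank partitioning example: each partition
    // holds one block of account numbers.
    const std::vector<KeyRange> ranges = {
        { 1,     19999, "PARTITION_A" },
        { 20000, 39999, "PARTITION_B" },
        { 40000, 59999, "PARTITION_C" },
    };
    std::cout << route(ranges, 27500) << '\n';   // prints PARTITION_B
    return 0;
}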

A significant advantage of the partitioning shown in the bank example is that you can add more account numbers without making changes to your application; you need only add another server and disk drive for the new account numbers. For example, say you need to add account numbers from 90,000 to 99,999 to the basic configuration of Figure 1-19, Bank Partitioning Example. You can add these accounts and bring them on line easily. The system manager can change the key range with a command, for example, in an overnight operation, or you can plan to do this during scheduled maintenance.

A partition can also have multiple standby servers.


Standby Server Configurations


A node can be configured as a primary server for one key range and as a standby server for another key range. This helps to distribute the work of the standby servers. Figure 1-20 illustrates this use of standbys with distributed partitioning. As shown in Figure 1-20, Application Server A is the primary server for accounts 1 to 19,999 and Application Server B is the standby for these same accounts. Application Server B is the primary for accounts 20,000 to 39,999 and Application Server A can be the standby for these same accounts (not shown in the figure). For clarity, account numbers are shown only for primary servers and one standby server.

Figure 1-20 Standby with Partitioning



Anonymous clients


RTR supports anonymous clients, that is, clients can be set up in a configuration using wildcarded node names.


Tunnel


RTR can also be used with firewall tunneling software, which supports secure internet communication for an RTR connection, either client-to-router, or router-to-backend.


RTR Networking Capabilities

Depending on operating system, RTR uses TCP/IP or DECnet as underlying transports for the virtual network (RTR facilities) and can be deployed in both local area and wide area networks. PATHWORKS 32 is required for DECnet configurations on Windows NT.


Chapter 2
Architectural Concepts

This chapter introduces concepts on basic transaction processing and RTR architecture.


The Three-Layer Model

RTR is based on a three-layer architecture consisting of frontend (FE) roles, backend (BE) roles and router (TR) roles. The roles are shown in Figure 2-1. In this and subsequent diagrams, rectangles represent physical nodes, ovals represent application software, and DB represents the disks storing the database (and usually the database software that runs on the server).

Figure 2-1 The Three Layer Model


Client processes run on nodes defined to have the frontend role. This layer allows computing power to be provided locally at the end-user site for transaction acquisition and presentation.

Server processes (represented by "Server" in Figure 2-1) run on nodes defined to have the backend role. This layer:

The router layer contains no application software unless running callout servers. This layer reduces the number of logical network links required on frontend and backend nodes. It also decouples the backend layer from the frontend layer so that configuration changes in the (frequently changing) user environment have little influence on the transaction processing and database (backend) environment.

The three-layer model can be mapped to any system topology. More than one role can be assigned to any particular node. For example, on a system with few frontends, the router and frontend layers can be combined on the same nodes. During application development and test, all three roles can be combined on one node.

The nodes used by an application and their configuration roles are specified using RTR configuration commands. RTR lets application code be completely location and configuration independent.


RTR Facilities Bridge the Gap

Many applications can use RTR at the same time without interfering with one another. This is achieved by defining a separate facility for each application.

When an application calls the rtr_open_channel() routine to declare a channel as a client or server, it specifies the name of the facility it will use.
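For example, a client application might open its channel on a facility as sketched below. This is a minimal sketch only: the facility name ("ACCOUNTS") is hypothetical, and the rtr_open_channel() argument list shown should be verified against the C API reference for your version of RTR.

// Minimal sketch: open a client channel on the hypothetical facility "ACCOUNTS".
// Verify the rtr_open_channel() arguments against the RTR C API reference.
#include <iostream>
#include "rtr.h"

int main()
{
    rtr_channel_t channel;
    rtr_status_t  status;
    char facility[] = "ACCOUNTS";   /* facility this channel will use */

    status = rtr_open_channel(&channel,
                              RTR_F_OPE_CLIENT,   /* declare the channel as a client */
                              facility,
                              RTR_NO_RCPNAM,      /* no recipient name */
                              RTR_NO_PEVTNUM,     /* no broadcast event subscriptions */
                              RTR_NO_ACCESS,      /* no access string */
                              RTR_NO_NUMSEG,      /* no key segments */
                              RTR_NO_PKEYSEG);
    if (status != RTR_STS_OK)
        std::cerr << "rtr_open_channel failed, status " << status << '\n';
    return 0;
}

A server would open its channel in much the same way, naming the same facility but declaring itself as a server and supplying the key segments that describe the partition it serves.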

See the RTR System Manager's Manual for information on how to define facilities.


Broadcasts

Sometimes an application has a requirement to send unsolicited messages to multiple recipients.

An example of such an application is a commodity trading system, where the clients submit orders and also need to be informed of the latest price changes.

The RTR broadcast capability meets this requirement.

Recipients subscribe to a class of broadcasts; when a sender broadcasts a message in that class, all interested recipients receive the message.

RTR permits clients to broadcast messages to one or more servers, or servers to broadcast to one or more clients. If a server needs to broadcast a message to another server, it must open a second channel as a client.


Flexibility and Growth

RTR allows you to cope easily with changes in:

Since an RTR-based system can be built using multiple systems at each functional layer, it easily lends itself to step-by-step growth, avoiding unused capacity at each stage. With your system still up and running, it is possible to:

This means you do not need to provide spare capacity to allow for growth.

RTR also allows parallel execution. This means that different parts of a single transaction can be processed in parallel by multiple servers.

RTR provides a comprehensive set of monitoring tools to help you evaluate the volume of traffic passing through the system. This can help you respond to unexpected load changes by altering the system configuration dynamically.


Transaction Integrity

RTR greatly simplifies the design and coding of distributed applications, because, with RTR, database actions can be bundled together into transactions.

To ensure that your application deals with transactions correctly, its transactions must be atomic, consistent, isolated, and durable.

These are the ACID properties of transactions. For more detail on these properties, see the Reliable Transaction Router Application Design Guide.


The Partitioned Data Model

One goal in designing for high transaction throughput is reducing the time that users must wait for shared resources.

While many elements of a transaction processing system can be duplicated, one resource that must be shared is the database. Users compete for a shared database in three ways:

This competition can be alleviated by spreading the database across several backend nodes, each node being responsible for a subset of the data, or partition. RTR enables you to implement this partitioned data model, shown roughly in Figure 2-2 where the database has three partitions. RTR routes messages to the correct partition on the basis of an application-defined key. For a more complete description of partitioning as provided with RTR, see the Reliable Transaction Router Application Design Guide.

Figure 2-2 Partitioned Data Model



Object-Oriented Programming

The C++ foundation classes map traditional RTR functional programming concepts into an object-oriented programming model. Using the power and features of these foundation classes requires a basic understanding of the differences between functional and object-oriented programming concepts. Table 2-1 compares the worlds of functional programming and object-oriented programming.

Table 2-1 Functional and Object-Oriented Programming Compared
Functional programming: A program consists of data structures and algorithms.
Object-oriented programming: A program consists of a team of cooperating objects.

Functional programming: The basic programming unit is the function, which, when run, implements an algorithm.
Object-oriented programming: The basic programming unit is the class, which, when instantiated, implements an object.

Functional programming: Functions operate on elemental data types or data structures.
Object-oriented programming: Objects communicate by sending messages.

Functional programming: An application's architecture consists of a hierarchy of functions and sub-functions.
Object-oriented programming: An application's architecture consists of objects that model entities of the problem domain; the relationships among objects can vary.

Objects

In the object-oriented environment, a program or application is a grouping of cooperating objects. The basic programming unit is the class; instantiating, or declaring an instance of, a class implements an object. RTR provides object-oriented programming capabilities with the C++ API, described in the C++ Foundation Classes manual. An object is an instance of a class; in a transaction class, for example, each transaction is an object. An object's state and behavior are determined by the attributes and methods defined in its class. Thus an object or class is defined by its identity, its state (attributes), and its behavior (methods).

The name given at object declaration is its identity. In Example 2-1, the two dog objects King and Fifi are instances of Dog. The Dog class is declared in a header (Dog.h) file and implemented in a .cpp file.

Example 2-1 Objects-Defined Sample

Dog.h:
// Declaration of the Dog class.
class Dog
{
    // ... attributes and methods ...
};

main.cpp:
#include "Dog.h"

int main()
{
    Dog King;   // King is one instance (object) of class Dog
    Dog Fifi;   // Fifi is a second, distinct Dog object
    return 0;
}

Messages

Objects communicate by sending messages. This is done by calling an object's methods.

Some principal categories of messages are:

Class Relationships

Classes can be related in the following ways:

Polymorphism

Polymorphism is the ability of objects, inherited from a common base or parent class, to respond differently to the same message. This is done by defining different implementations of the same method name within the individual child class definitions. For example, a DogArray object declared as "DogArray OurDogs[2];" refers to two element objects of the base class Dog.

If, in a program, OurDogs[n]->Bark() is called in a loop, each element responds with its own implementation of Bark().

King's bark does not sound like Fifi's bark, because each Bark() call invokes a separately defined method within the corresponding child class. Bark() is declared as a virtual method in the parent class (Dog) definition.
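The following sketch illustrates this behavior. The manual's DogArray class is not shown here, so the sketch uses a plain array of base-class pointers, and the child class names are invented for illustration.

// Minimal sketch of Bark() polymorphism; child class names are illustrative.
#include <iostream>

class Dog
{
public:
    virtual void Bark() const { std::cout << "Woof\n"; }   // virtual method in the base class
    virtual ~Dog() = default;
};

class Terrier : public Dog
{
public:
    void Bark() const override { std::cout << "Yip yip\n"; }   // King's bark
};

class Poodle : public Dog
{
public:
    void Bark() const override { std::cout << "Yap\n"; }       // Fifi's bark
};

int main()
{
    Terrier King;
    Poodle  Fifi;
    Dog* OurDogs[2] = { &King, &Fifi };   // base-class pointers to child objects

    // The same Bark() message produces a different response from each object.
    for (Dog* dog : OurDogs)
        dog->Bark();
    return 0;
}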

