Digital TCP/IP Services for OpenVMS
ONC RPC Programming



The simplest routine that creates a client handle is clnt_create:

     clnt=clnt_create(server_host,prognum,versnum,transport); 

The parameters here are the name of the host on which the service resides, the program and version number, and the transport to be used. The transport can be either udp for UDP or tcp for TCP. You can change the default timeouts by using clnt_control. For more information, refer to Section 2.7.

3.3.3 Memory Allocation with XDR

To enable memory allocation, the second parameter of xdr_bytes is a pointer to a pointer to an array of bytes, rather than a pointer to the array itself. If the pointer has the value NULL, xdr_bytes allocates space for the array and returns a pointer to it, putting the size of the array in the third argument. For example, the following XDR routine, xdr_chararr1, handles a fixed array of bytes with length SIZE:

xdr_chararr1(xdrsp, chararr) 
     XDR *xdrsp; 
     char *chararr; 
{ 
     char *p; 
     int len; 
 
     p = chararr; 
     len = SIZE; 
     return (xdr_bytes(xdrsp, &p, &len, SIZE)); 
} 

Here, if space has already been allocated in chararr, this routine can be called from a server like this:

     char array[SIZE]; 
 
     svc_getargs(transp, xdr_chararr1, array); 

If you want XDR to do the allocation, you must rewrite this routine in this way:

xdr_chararr2(xdrsp, chararrp) 
     XDR *xdrsp; 
     char **chararrp; 
{ 
     int len; 
 
     len = SIZE; 
     return (xdr_bytes(xdrsp, chararrp, &len, SIZE)); 
} 

The RPC call might look like this:

     char *arrayptr; 
 
     arrayptr = NULL; 
     svc_getargs(transp, xdr_chararr2, &arrayptr); 
     /* 
      * Use the result here 
      */ 
     svc_freeargs(transp, xdr_chararr2, &arrayptr); 

After using the character array, you can free it with svc_freeargs; svc_freeargs frees no memory if the pointer passed to it has the value NULL. For example, in the earlier routine xdr_finalexample in Section 3.2.5, if finalp->string was NULL, it would not be freed; the same is true for finalp->simplep.

To summarize, each XDR routine is responsible for serializing, deserializing, and freeing memory, as follows:

- When an XDR routine is called from clnt_call, it serializes.
- When an XDR routine is called from svc_getargs, it deserializes.
- When an XDR routine is called from svc_freeargs, it frees memory.

When building simple examples as shown in this section, you can ignore the three modes. See Chapter 4 for examples of more sophisticated XDR routines that determine the mode and any required modification.

3.4 Raw RPC

Raw RPC refers to the use of pseudo-RPC interface routines that do not use any real transport at all. These routines, clntraw_create and svcraw_create, help in debugging and testing the non-communications-oriented aspects of an application before running it over a real network.

Example 3-6 shows their use.

Example 3-6 Debugging and Testing the Noncommunication Parts of an Application
/* 
* A simple program to increment the number by 1 
*/ 
#include <stdio.h> 
#include <rpc/rpc.h> 
#include <rpc/raw.h>    /* required for raw */ 
 
struct timeval TIMEOUT = {0, 0}; 
static void server(); 
 
main(argc, argv) 
     int argc; 
     char **argv; 
{ 
     CLIENT *clnt; 
     SVCXPRT *svc; 
     int num = 0, ans; 
     int exit(); 
 
     if (argc == 2) 
          num = atoi(argv[1]); 
     svc = svcraw_create(); 
 
     if (svc == NULL) { 
          fprintf(stderr,"Could not create server handle\n"); 
          exit(1); 
     } 
 
     svc_register(svc, 200000, 1, server, 0); 
     clnt = clntraw_create(200000, 1); 
 
     if (clnt == NULL) { 
          clnt_pcreateerror("raw"); 
          exit(1); 
     } 
 
     if (clnt_call(clnt, 1, xdr_int, &num, xdr_int, &ans, 
       TIMEOUT) != RPC_SUCCESS) { 
          clnt_perror(clnt, "raw"); 
          exit(1); 
     } 
 
     printf("Client: number returned %d\n", ans); 
     exit(0); 
} 
 
static void 
server(rqstp, transp) 
 
     struct svc_req *rqstp; /* the request */ 
     SVCXPRT *transp; /* the handle created by svcraw_create */ 
{ 
     int num; 
     int exit(); 
 
     switch(rqstp->rq_proc) { 
     case 0: 
          if (svc_sendreply(transp, xdr_void, 0) == FALSE) { 
               fprintf(stderr, "error in null proc\n"); 
               exit(1); 
          } 
          return; 
     case 1: 
          break; 
     default: 
          svcerr_noproc(transp); 
          return; 
     } 
 
     if (!svc_getargs(transp, xdr_int, &num)) { 
          svcerr_decode(transp); 
          return; 
     } 
 
     num++; 
     if (svc_sendreply(transp, xdr_int, &num) == FALSE) { 
          fprintf(stderr, "error in sending answer\n"); 
          exit(1); 
     } 
 
     return; 
} 

3.5 Miscellaneous RPC Features

The following sections describe other useful features for RPC programming.

3.5.1 Using Select on the Server Side

Suppose a process simultaneously responds to RPC requests and performs another activity. If the other activity periodically updates a data structure, the process can set an alarm signal before calling svc_run. However, if the other activity must wait on a file descriptor, the svc_run call does not work. The code for svc_run is as follows:

void 
svc_run() 
{ 
     fd_set readfds; 
     int dtbsz = getdtablesize(); 
 
     for (;;) { 
          readfds = svc_fds; 
          switch (select(dtbsz, &readfds, NULL, NULL, NULL)) { 
 
          case -1: 
               if (errno != EBADF) 
                    continue; 
               perror("select"); 
               return; 
          case 0: 
               continue; 
          default: 
               svc_getreqset(&readfds); 
          } 
     } 
} 

You can bypass svc_run and call svc_getreqset if you know the file descriptors of the sockets associated with the programs on which you are waiting. In this way, you can have your own select that waits on the RPC socket, and you can have your own descriptors. Note that svc_fds is a bit mask of all the file descriptors that RPC uses for services. It can change whenever the program calls any RPC library routine, because descriptors are constantly being opened and closed, for example, for TCP connections.


Note

If you are handling signals in your application, do not make any system call that accidentally sets errno. If this happens, reset errno to its previous value before returning from your signal handler.

3.5.2 Broadcast RPC

The Portmapper required by broadcast RPC is a daemon that converts RPC program numbers into TCP/IP protocol port numbers. The main differences between broadcast RPC and normal RPC are the following:

- Normal RPC expects one answer; broadcast RPC expects many answers, one or more from each responding server.
- Broadcast RPC works only on connectionless protocols that support broadcasting, such as UDP.
- Broadcast RPC filters out all unsuccessful responses; if a version mismatch exists between the broadcaster and a remote service, the user of broadcast RPC never knows.
- All broadcast messages are sent to the Portmapper port, so only services that register themselves with their Portmapper are accessible through broadcast RPC.

In the following example, the procedure eachresult is called each time the program obtains a response. It returns a boolean that indicates whether the user wants more responses. If the argument eachresult is NULL, clnt_broadcast returns without waiting for any replies:

#include <rpc/pmap_clnt.h> 
     . 
     . 
     . 
     enum clnt_stat  clnt_stat; 
     u_long    prognum;        /* program number */ 
     u_long    versnum;        /* version number */ 
     u_long    procnum;        /* procedure number */ 
     xdrproc_t inproc;         /* xdr routine for args */ 
     caddr_t   in;             /* pointer to args */ 
     xdrproc_t outproc;        /* xdr routine for results */ 
     caddr_t   out;            /* pointer to results */ 
     bool_t    (*eachresult)();/* call with each result gotten */ 
     . 
     . 
     . 
     clnt_stat = clnt_broadcast(prognum, versnum, procnum, 
       inproc, in, outproc, out, eachresult); 

In the following example, if done is TRUE, broadcasting stops and clnt_broadcast returns successfully. Otherwise, the routine waits for another response. The request is rebroadcast after a few seconds of waiting. If no responses come back in a default total timeout period, the routine returns with RPC_TIMEDOUT:

     bool_t done; 
     caddr_t resultsp; 
     struct sockaddr_in *raddr; /* Addr of responding server */ 
     . 
     . 
     . 
     done = eachresult(resultsp, raddr); 

For more information, see Section 2.8.1.

3.5.3 Batching

In normal RPC, a client sends a call message and waits for the server to reply by indicating that the call succeeded. This implies that the client must wait idle while the server processes a call. This is inefficient if the client does not want or need an acknowledgment for every message sent.

Through a process called batching, a program can place RPC messages in a "pipeline" of calls to a desired server. To use batching, the following conditions must be true:

- Each RPC call in the pipeline requires no response from the server, and the server does not send a response.
- The pipeline of calls is transported on a reliable byte-stream transport, such as TCP/IP.

Because the server does not respond to every call, the client can generate new calls in parallel with the server executing previous calls. Also, the TCP/IP implementation holds several call messages in a buffer and sends them to the server in one write system call. This overlapped execution greatly decreases the interprocess communication overhead of the client and server processes, and the total elapsed time of a series of calls. Because the batched calls are buffered, the client must eventually do a nonbatched call to flush the pipeline. When the program flushes the connection, RPC sends a normal request to the server. The server processes this request and sends back a reply.

In the following example of server batching, assume that a string-rendering service (in this example, a simple print to stdout) has two similar calls: one provides a string and returns void results, and the other provides a string and returns nothing at all. The service (using the TCP/IP transport) may look like Example 3-7.

Example 3-7 Server Batching
#include <stdio.h> 
#include <rpc/rpc.h> 
#include "render.h" 
 
void renderdispatch(); 
 
main() 
{ 
     SVCXPRT *transp; 
     int exit(); 
 
     transp = svctcp_create(RPC_ANYSOCK, 0, 0); 
     if (transp == NULL){ 
          fprintf(stderr, "can't create an RPC server\n"); 
          exit(1); 
     } 
 
     pmap_unset(RENDERPROG, RENDERVERS); 
 
     if (!svc_register(transp, RENDERPROG, RENDERVERS, 
       renderdispatch, IPPROTO_TCP)) { 
          fprintf(stderr, "can't register RENDER service\n"); 
          exit(1); 
     } 
 
     svc_run();  /* Never returns */ 
     fprintf(stderr, "should never reach this point\n"); 
} 
 
void 
renderdispatch(rqstp, transp) 
 
     struct svc_req *rqstp; 
     SVCXPRT *transp; 
{ 
     char *s = NULL; 
 
     switch (rqstp->rq_proc) { 
     case NULLPROC: 
          if (!svc_sendreply(transp, xdr_void, 0)) 
               fprintf(stderr, "can't reply to RPC call\n"); 
          return; 
     case RENDERSTRING: 
          if (!svc_getargs(transp, xdr_wrapstring, &s)) { 
               fprintf(stderr, "can't decode arguments\n"); 
               /* 
                * Tell client he erred 
                */ 
               svcerr_decode(transp); 
               return; 
          } 
          /* 
           * Code here to render the string "s" 
           */ 
          printf("Render: %s\n", s); 
          if (!svc_sendreply(transp, xdr_void, NULL)) 
               fprintf(stderr, "can't reply to RPC call\n"); 
          break; 
     case RENDERSTRING_BATCHED: 
          if (!svc_getargs(transp, xdr_wrapstring, &s)) { 
               fprintf(stderr, "can't decode arguments\n"); 
               /* 
                * We are silent in the face of protocol errors 
                */ 
               break; 
          } 
          /* 
           * Code here to render string s, but send no reply! 
           */ 
          printf("Render: %s\n", s); 
          break; 
     default: 
          svcerr_noproc(transp); 
          return; 
     } 
     /* 
      * Now free string allocated while decoding arguments 
      */ 
     svc_freeargs(transp, xdr_wrapstring, &s); 
} 

In the previous example, the service could have one procedure that takes the string and a boolean to indicate whether the procedure will respond. For a client to use batching effectively, the client must perform RPC calls on a TCP-based transport, and the actual calls must have the following attributes:

- The result's XDR routine is zero (NULL), and no result is returned.
- The RPC call's timeout is zero.

If a UDP transport is used instead, the client call becomes a message to the server, and the RPC mechanism becomes simply a message-passing system, with no batching possible. In Example 3-8, a client uses batching to supply several strings; batching is flushed when the client gets a null string (EOF).

In this example, the server sends no messages, so the clients cannot be notified of any failures that occur. Therefore, the clients must handle any errors themselves.

Using a UNIX-to-UNIX RPC connection, an example similar to this one was run to render all of the lines (approximately 2000) in the UNIX file /etc/termcap. The rendering service simply discarded the entire file. The example was run in four configurations, which took different amounts of time. In the test environment, running only fscanf on /etc/termcap required 6 seconds. These timings show the advantage of protocols that enable overlapped execution, although such protocols are difficult to design.

Example 3-8 Client Batching
#include <stdio.h> 
#include <rpc/rpc.h> 
#include "render.h" 
 
main(argc, argv) 
     int argc; 
     char **argv; 
{ 
     struct timeval total_timeout; 
     register CLIENT *client; 
     enum clnt_stat clnt_stat; 
     char buf[1000], *s = buf; 
     int exit(), atoi(); 
     char *host, *fname; 
     FILE *f; 
     int renderop; 
 
     host = argv[1]; 
     renderop = atoi(argv[2]); 
     fname = argv[3]; 
 
     f = fopen(fname, "r"); 
     if (f == NULL){ 
          printf("Unable to open file\n"); 
          exit(0); 
     } 
     if ((client = clnt_create(host, 
       RENDERPROG, RENDERVERS, "tcp")) == NULL) { 
          clnt_pcreateerror("clnt_create"); 
          exit(-1); 
     } 
 
     switch (renderop) { 
     case RENDERSTRING: 
          total_timeout.tv_sec = 5; 
          total_timeout.tv_usec = 0; 
          while (fscanf(f,"%s", s) != EOF) { 
               clnt_stat = clnt_call(client, RENDERSTRING, 
                 xdr_wrapstring, &s, xdr_void, NULL, total_timeout); 
               if (clnt_stat != RPC_SUCCESS) { 
                    clnt_perror(client, "batching rpc"); 
                    exit(-1); 
               } 
          } 
          break; 
     case RENDERSTRING_BATCHED: 
          total_timeout.tv_sec = 0;       /* set timeout to zero */ 
          total_timeout.tv_usec = 0; 
          while (fscanf(f,"%s", s) != EOF) { 
               clnt_stat = clnt_call(client, RENDERSTRING_BATCHED, 
                 xdr_wrapstring, &s, NULL, NULL, total_timeout); 
               if (clnt_stat != RPC_SUCCESS) { 
                    clnt_perror(client, "batching rpc"); 
                    exit(-1); 
               } 
          } 
     
          /* Now flush the pipeline */ 
 
 
          total_timeout.tv_sec = 20; 
          clnt_stat = clnt_call(client, NULLPROC, xdr_void, NULL, 
            xdr_void, NULL, total_timeout); 
          if (clnt_stat != RPC_SUCCESS) { 
               clnt_perror(client, "batching rpc"); 
               exit(-1); 
          } 
          break; 
     default: 
          return; 
     } 
 
 
     clnt_destroy(client); 
     fclose(f); 
     exit(0); 
} 

3.6 Authentication of RPC Calls

In the examples presented so far, the client never identified itself to the server, nor did the server require it from the client. Every RPC call is authenticated by the RPC package on the server, and similarly, the RPC client package generates and sends authentication parameters. Just as different transports (TCP/IP or UDP/IP) can be used when creating RPC clients and servers, different forms of authentication can be associated with RPC clients. The default authentication type is none. The authentication subsystem of the RPC package, with its ability to create and send authentication parameters, can support commercially available authentication software.

This manual describes only one type of authentication---authentication through the operating system. The following sections describe client and server side authentication through the operating system.

3.6.1 The Client Side

Assume that a client creates the following new RPC client handle:
     clnt = clntudp_create(address, prognum, versnum, wait, sockp) 

The client handle includes a field describing the associated authentication handle:

     clnt->cl_auth = authnone_create(); 

The RPC client can choose to use authentication that is native to the operating system by setting clnt->cl_auth after creating the RPC client handle:

     clnt->cl_auth = authunix_create_default(); 

This causes each RPC call associated with clnt to carry with it the following authentication credentials structure:

     /* 
      * credentials native to the operating system 
      */ 
     struct authunix_parms { 
          u_long  aup_time;       /* credentials creation time  */ 
          char    *aup_machname;  /* host name where client is  */ 
          int     aup_uid;        /* client's OpenVMS uid       */ 
          int     aup_gid;        /* client's current group id  */ 
          u_int   aup_len;        /* element length of aup_gids */ 
                                  /* (set to 0 on OpenVMS)      */ 
          int     *aup_gids;      /* array of groups user is in */ 
                                  /* (set to NULL on OpenVMS)   */ 
     }; 

In this example, the fields are set by authunix_create_default by invoking the appropriate system calls. Because the program created this new style of authentication, the program is responsible for destroying it (to save memory) with the following:

     auth_destroy(clnt->cl_auth); 

3.6.2 The Server Side

It is difficult for service implementors to handle authentication because the RPC package passes to the service dispatch routine a request that has an arbitrary authentication style associated with it. Consider the fields of a request handle passed to a service dispatch routine:
     /* 
      * An RPC Service request 
      */ 
     struct svc_req { 
          u_long  rq_prog;            /* service program number */ 
          u_long  rq_vers;            /* service protocol vers num */ 
          u_long  rq_proc;            /* desired procedure number */ 
          struct opaque_auth rq_cred; /* raw credentials from wire */ 
          caddr_t rq_clntcred;        /* credentials (read only) */ 
     }; 

The rq_cred is mostly opaque, except for one field, the style of authentication credentials:

     /* 
      * Authentication info.  Mostly opaque to the programmer. 
      */ 
     struct opaque_auth { 
          enum_t    oa_flavor;      /* style of credentials */ 
          caddr_t   oa_base;        /* address of more auth stuff */ 
          u_int     oa_length;      /* not to exceed MAX_AUTH_BYTES */ 
     }; 

The RPC package guarantees the following to the service dispatch routine:

- The request's rq_cred field is well formed, so the service implementor can inspect its oa_flavor field to determine the style of authentication.
- The rq_clntcred field is either NULL or points to a well-formed structure that corresponds to a supported style of authentication credentials (here, only the AUTH_UNIX style).

The rq_clntcred field can therefore be cast to a pointer to an authunix_parms structure. If rq_clntcred is NULL, the service implementor can inspect the other (opaque) fields of rq_cred to determine whether the service knows about an authentication style that is unknown to the RPC package.

Example 3-9 extends the previous remote users service (see Example 3-3) so that it computes results for all users except UID 16.

Example 3-9 Authentication on Server Side
nuser(rqstp, transp) 
     struct svc_req *rqstp; 
     SVCXPRT *transp; 
{ 
     struct authunix_parms *unix_cred; 
     int uid; 
     unsigned long nusers; 
 
     /* 
      * we don't care about authentication for null proc 
      */ 
     if (rqstp->rq_proc == NULLPROC) { 
          if (!svc_sendreply(transp, xdr_void, 0)) 
               fprintf(stderr, "can't reply to RPC call\n"); 
          return; 
     } 
     /* 
      * now get the uid 
      */ 
     switch (rqstp->rq_cred.oa_flavor) { 
     case AUTH_UNIX: 
          unix_cred = (struct authunix_parms *)rqstp->rq_clntcred; 
          uid = unix_cred->aup_uid; 
          break; 
 
     case AUTH_NULL: 
 
     default:        /* return weak authentication error */ 
          svcerr_weakauth(transp); 
          return; 
     } 
     switch (rqstp->rq_proc) { 
     case RUSERSPROC_NUM: 
          /* 
           * make sure client is allowed to call this proc 
           */ 
          if (uid == 16) { 
               svcerr_systemerr(transp); 
               return; 
          } 
          /* 
           * Code here to compute the number of users 
           * and assign it to the variable nusers 
           */ 
          if (!svc_sendreply(transp, xdr_u_long, &nusers)) 
               fprintf(stderr, "can't reply to RPC call\n"); 
          return; 
 
     default: 
          svcerr_noproc(transp); 
          return; 
     } 
} 

As this example shows, it is not customary to check the authentication parameters associated with NULLPROC (procedure 0). Also, if the authentication style is not suitable for your service, have your program call svcerr_weakauth.

The service protocol itself should return a status for access denied; in this example, the protocol cannot, so the service calls the primitive svcerr_systemerr instead. RPC deals only with authentication, not with the access control of an individual service. The services themselves must implement their own access-control policies and reflect those policies as return statuses in their protocols.

3.7 Using the Internet Service Daemon (INETd)

You can start an RPC server from INETd. The only difference from the usual code is that it is best to have the service creation routine called in the following form because INETd passes a socket as file descriptor 0:
     transp = svcudp_create(0);     /* For UDP */ 
     transp = svctcp_create(0,0,0); /* For listener TCP sockets */ 
     transp = svcfd_create(0,0,0);  /* For connected TCP sockets */ 

Also, call svc_register as follows, with the last parameter flag set to 0, because the program is already registered with the Portmapper by INETd:

     svc_register(transp, PROGNUM, VERSNUM, service, 0); 

If you want to exit from the server process and return control to INETd, you must do so explicitly, because svc_run never returns.

To show all the RPC service entries in the UCX services database, use the following command:

UCX> show serv/rpc/perm 
 
Service             Program Number        Versions Supported 
 
MEL                       101010                 1-         10 
TORME                      20202                 1-          2 
UCX> 

To show detailed information about a single RPC service entry in the UCX services database, use the following command:

UCX> SHOW SERVICES/FULL/PERMANENT MEL 
 
Service: MEL 
 
Port:             1111     Protocol:  UDP             Address:  0.0.0.0 
Inactivity:          5     User_name: GEORGE          Process:  MEL 
Limit:               1 
 
File:         NLA0: 
Flags:        Listen 
 
Socket Opts:  Rcheck Scheck 
 Receive:            0     Send:               0 
 
Log Opts:     None 
 File:        not defined 
 
RPC Opts 
 Program number:      101010  Low version:      1   High version:     10 
 
Security 
 Reject msg:  not defined 
 Accept host: 0.0.0.0 
 Accept netw: 0.0.0.0 
UCX> 

For information about how to add RPC servers to the UCX services database, see Digital TCP/IP Services for OpenVMS Management.

3.8 Additional Examples

The following sections present additional examples for server and client sides, TCP, and callback procedures.

3.8.1 Program Versions on the Server Side

By convention, the first version of program PROG is designated as PROGVERS_ORIG and the most recent version is PROGVERS. Suppose there is a new version of the user program that returns an unsigned short result rather than a long result. If you name this version RUSERSVERS_SHORT, then a server that wants to support both versions would register both. It is not necessary to create another server handle for the new version, as shown in this segment of code:

     if (!svc_register(transp, RUSERSPROG, RUSERSVERS_ORIG, 
       nuser, IPPROTO_TCP)) { 
          fprintf(stderr, "can't register RUSER service\n"); 
          exit(1); 
     } 
     if (!svc_register(transp, RUSERSPROG, RUSERSVERS_SHORT, 
       nuser, IPPROTO_TCP)) { 
          fprintf(stderr, "can't register new service\n"); 
          exit(1); 
     } 

You can handle both versions with the same C procedure, as in Example 3-10.

Example 3-10 C Procedure That Returns Two Different Data Types
nuser(rqstp, transp) 
     struct svc_req *rqstp; 
     SVCXPRT *transp; 
{ 
     unsigned long nusers; 
     unsigned short nusers2; 
 
     switch (rqstp->rq_proc) { 
     case NULLPROC: 
          if (!svc_sendreply(transp, xdr_void, 0)) { 
               fprintf(stderr, "can't reply to RPC call\n"); 
               return; 
          } 
          return; 
     case RUSERSPROC_NUM: 
          /* 
           * Code here to compute the number of users 
           * and assign it to the variable, nusers 
           */ 
          nusers2 = nusers; 
          switch (rqstp->rq_vers) { 
          case RUSERSVERS_ORIG: 
               if (!svc_sendreply(transp, xdr_u_long, &nusers)) { 
                    fprintf(stderr,"can't reply to RPC call\n"); 
               } 
               break; 
          case RUSERSVERS_SHORT: 
               if (!svc_sendreply(transp, xdr_u_short, &nusers2)) { 
                    fprintf(stderr,"can't reply to RPC call\n"); 
               } 
               break; 
          } 
          return; 
     default: 
          svcerr_noproc(transp); 
          return; 
     } 
} 

