Digital DCE for OpenVMS VAX and OpenVMS Alpha
Product Guide



3.3.3 Source Compatibility

Compaq recommends that you upgrade all applications developed using DECrpc to DCE-compliant applications. Most RPC applications make relatively few calls directly to the RPC API, so the conversion effort should be minimal. For most of the API calls, there is a one-to-one mapping between the previous form of the API call and the new format, with only the call names and argument types differing.
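
As an illustration of the new-format calls, the following C sketch shows the server initialization sequence a converted application would use. It is a minimal example; the header my_app.h and the interface handle my_app_v1_0_s_ifspec are hypothetical IDL-generated names, and error checking is omitted.

#include <dce/rpc.h>
#include "my_app.h"    /* hypothetical IDL-generated header */

/* Minimal DCE-format server initialization; the DECrpc calls it replaces
 * map one-to-one onto these routines, differing in name and argument types.
 * Error checking is omitted for brevity. */
void start_my_app_server(void)
{
    unsigned32 status;

    /* Listen on all available protocol sequences with dynamic endpoints. */
    rpc_server_use_all_protseqs(rpc_c_protseq_max_reqs_default, &status);

    /* Register the IDL-generated interface with the RPC runtime. */
    rpc_server_register_if(my_app_v1_0_s_ifspec, NULL, NULL, &status);

    /* Service incoming remote procedure calls. */
    rpc_server_listen(rpc_c_listen_max_calls_default, &status);
}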

3.3.4 Product Coexistence

Installation of this kit does not replace an installation of DECrpc Version 1.0 or 1.1. The two products can coexist on the same system, and applications can continue to use DECrpc while other applications can use the Digital DCE RPC. However, the RPC daemon in the Digital DCE kit must replace the Local Location Broker daemon in DECrpc.

There is no conflict of filenames, facility codes, and the like between the Digital DCE kit and DECrpc. However, Compaq recommends that you remove the DECrpc files from the system when you no longer require them, so that new applications are written using Digital DCE RPC. This ensures that you avoid the possibility of conflict with any future DCE versions.

3.3.5 Directory Service Compatibility

Applications using DECrpc use the Location Broker to share server binding information. The Location Broker includes two components: the Local Location Broker (LLB), which maintains binding information for servers on the local host, and the Global Location Broker (GLB), which maintains binding information for servers throughout the network.

The RPC daemon in DCE performs the function of the LLB; that is, the daemon maps RPC calls to processes using dynamic endpoints. The daemon has an RPC interface, different from the one used by the LLB, for registering and looking up mappings. However, it also supports the LLB RPC interface for backward compatibility, so it can replace the LLB for all applications that use the Location Broker calls in the RPC runtime. Note, however, that mappings registered through the new interface and those registered through the LLB interface are maintained in separate databases. Therefore, a mapping registered through the RPC daemon cannot be looked up through the LLB interface, and vice versa.
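
For illustration, the following C sketch registers a server's dynamic endpoints with the RPC daemon's endpoint map, which is the interface that replaces LLB registration. The header my_app.h and the interface handle my_app_v1_0_s_ifspec are hypothetical IDL-generated names, and error checking is omitted.

#include <dce/rpc.h>
#include "my_app.h"    /* hypothetical IDL-generated header */

/* Register the server's dynamic endpoints with the RPC daemon so that
 * clients can be mapped to the correct process. */
void register_with_rpcd(void)
{
    rpc_binding_vector_t *bindings;
    unsigned32            status;

    rpc_server_use_all_protseqs(rpc_c_protseq_max_reqs_default, &status);
    rpc_server_inq_bindings(&bindings, &status);
    rpc_ep_register(my_app_v1_0_s_ifspec, bindings, NULL,
                    (unsigned_char_t *) "my_app server", &status);
    rpc_binding_vector_free(&bindings, &status);
}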

Compaq recommends that you upgrade DECrpc applications as soon as possible to replace use of the Location Broker with the DCE Directory Service. Compaq will discontinue supporting the use of the Location Broker for compatibility in a future release of DCE products.

Note

For DECrpc and DCE RPC to coexist, you must stop the Local Location Broker daemon (llbd) and use the RPC daemon (rpcd) instead. The RPC daemon is started during DCE configuration. Consult DECrpc documentation (UCX or DEC TCP/IP Services) for information about stopping llbd.

3.4 Interoperability with Other DCE Systems

Digital DCE for OpenVMS VAX and OpenVMS Alpha provides RPC interoperability with Compaq's other DCE offerings, with several restrictions. A Digital DCE system must have at least one network transport in common with the DCE client or server with which it communicates. For example, a Digital DCE client system that supports only the DECnet transport cannot communicate with a DCE server that supports only the Internet transports (TCP/IP and UDP/IP).

This release provides RPC interoperability with other vendors' DCE offerings, with similar restrictions to those listed for other Digital DCE offerings.

The Interface Definition Language provides a data type, error_status_t, for communicating error status values in remote procedure calls. Data of the error_status_t type is subject to translation to a corresponding native error code. For example, a "memory fault" error status value returned from a DEC OSF/1 system to an OpenVMS system is translated into the OpenVMS error status value "access violation".

In some cases, information is lost in this translation process. For example, an OpenVMS success or informational message is mapped to a generic success status value on other systems, because most non-OpenVMS systems do not use the same mechanism for successful status values and would interpret the value as an error code.
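
The following C sketch (an illustration, not product code) shows one way an application can turn a received error_status_t value into message text on the local system by using the DCE error facility; the routine name report_remote_status is hypothetical.

#include <stdio.h>
#include <dce/rpc.h>
#include <dce/dce_error.h>

/* Convert an error_status_t value returned by a remote procedure call into
 * message text on the local system. The status value itself is the datum
 * that undergoes the native-code translation described above. */
void report_remote_status(error_status_t remote_status)
{
    dce_error_string_t message;
    int inq_status;

    dce_error_inq_text(remote_status, message, &inq_status);
    if (inq_status == 0)
        printf("Remote status: %s\n", message);
    else
        printf("Remote status: %u (no message text available)\n",
               (unsigned int) remote_status);
}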

3.5 Interoperability with Microsoft RPC

DCE systems can interoperate with non-DCE systems that are running Microsoft RPC. Microsoft supplies a DCE-compatible version of remote procedure call software for use on systems running MS-DOS, Windows, or Windows NT. Microsoft RPC systems can also use a DCE name service, such as the Cell Directory Service (CDS): Microsoft RPC servers can export binding information, and Microsoft RPC clients can import it. Thus, DCE servers can be located and used by Microsoft RPC clients and, similarly, Microsoft RPC servers can be located and used by DCE clients.

Because Microsoft RPC does not include a DCE name service, Digital DCE for OpenVMS VAX and OpenVMS Alpha includes a name service interface daemon (nsid), also known as the PC Nameserver Proxy Agent, which performs DCE name service clerk functions on behalf of Microsoft RPC clients and servers. Microsoft RPC clients and servers locate an nsid using locally maintained nsid binding information. The binding information consists of the transport over which the nsid is available, the nsid's host network address, and, optionally, the endpoint on which the nsid waits for incoming calls from Microsoft RPC clients and servers. You must provide the nsid's transport and host network address (and, optionally, the nsid's endpoint) to Microsoft RPC clients and servers that want to use the DCE Directory Service with Microsoft RPC applications.

Note

Although your DCE cell may have several NSI daemons running, Microsoft RPC users need the binding for only one nsid. The nsid you choose must be running on a system that belongs to the same DCE cell as the DCE systems with which Microsoft RPC systems will communicate.

You can obtain the nsid binding information by running the rpccp show mapping command on the system where the nsid is running. The following example shows how to enter this command on an OpenVMS VAX system where this release is installed. The nsid bindings are those with the annotation NSID: PC Nameserver Proxy Agent V1.0. Select the appropriate endpoint from among these bindings. In the following example, the nsid binding for the TCP/IP network transport is ncacn_ip_tcp:16.20.16.141[4685].


$ rpccp
rpccp> show mapping


 mappings: 
 . 
 . 
 . 
  <OBJECT>          nil 
  <INTERFACE ID>    D3FBB514-0E3B-11CB-8FAD-08002B1D29C3,1.0 
  <STRING BINDING>  ncacn_ip_tcp:16.20.16.141[4685] 
  <ANNOTATION>      NSID: PC Nameserver Proxy Agent V1.0 
 
  <OBJECT>          nil 
  <INTERFACE ID>    D3FBB514-0E3B-11CB-8FAD-08002B1D29C3,1.0 
  <STRING BINDING>  ncacn_dnet_nsp:2.711[RPC03AB0001] 
  <ANNOTATION>      NSID: PC Nameserver Proxy Agent V1.0 
 . 
 . 
 . 

For more information on using PCs with DCE, see Distributing Applications Across DCE and Windows NT.

3.6 Understanding and Using OSF DCE and VMScluster Technologies

This section describes the similarities and differences between VMScluster environments and DCE cells, the limitations on using DCE in a VMScluster system, and DCE and VMScluster configuration issues.

3.6.1 Similarities Between VMScluster Environments and DCE Cells

VMScluster technology as implemented by OpenVMS systems provides some of the same features of distributed computing that OSF DCE provides. Many of the VMScluster concepts apply to DCE, and it is easy to think of a VMScluster system as being a type of DCE cell.

DCE cells and VMScluster environments share a number of attributes.

3.6.2 Differences Between VMScluster Environments and DCE Cells

VMScluster environments differ from DCE cells in two significant ways, described in the following paragraphs.

VMScluster environments support the concept of individual systems as nodes in the extended system. In DCE, individual systems are called hosts. In a VMScluster environment, each node effectively has two addresses: a network node address, which identifies that specific node, and the VMScluster alias address, which identifies the extended system as a whole.

In DCE there is no such dual identity. All network addressing is done directly to a specified host. The DCE cell does not have a separate network address, and it does not perform any forwarding functions. To share resources across hosts, DCE applications can use replication (resource copies) or store the resources in the shared file system, DFS, if it is available.

The VMScluster environment connection-forwarding mechanism permits the entire extended system to appear on the network as a single addressable entity (the VMScluster alias address). Although DCE does not support a connection-forwarding mechanism, DCE can use the Remote Procedure Call (RPC) grouping mechanism to access shared resources in a distributed file system. This mechanism selects, from an available set, one host/server pair that provides access to the shared resource.
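
As a sketch of the RPC grouping mechanism mentioned above, the following C fragment imports a binding through a name service group entry, letting the runtime select one available host/server pair from the group. The entry name /.:/subsys/my_app_group, the header my_app.h, and the interface handle are hypothetical, and error checking is omitted.

#include <dce/rpc.h>
#include "my_app.h"    /* hypothetical IDL-generated header */

/* Import one binding from a name service group entry; the RPC runtime
 * chooses among the group members, giving access to a shared resource
 * through whichever host/server pair is available. */
rpc_binding_handle_t import_from_group(void)
{
    rpc_ns_handle_t       import_context;
    rpc_binding_handle_t  binding = NULL;
    unsigned32            status;

    rpc_ns_binding_import_begin(rpc_c_ns_syntax_default,
        (unsigned_char_t *) "/.:/subsys/my_app_group",  /* hypothetical group entry */
        my_app_v1_0_c_ifspec, NULL, &import_context, &status);
    rpc_ns_binding_import_next(import_context, &binding, &status);
    rpc_ns_binding_import_done(&import_context, &status);

    return binding;
}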

3.6.3 Limitations on Using DCE in a VMScluster System

DCE does not support VMScluster connection forwarding. Instead, DCE requires that all connection requests be made directly to a specific node in the VMScluster rather than to a VMScluster alias.

For example, if you start a DCE application server named whammy on VMScluster node HENDRX in a VMScluster named GUITAR (VMScluster alias name), binding information includes node HENDRX addressing information; it does not include VMScluster alias GUITAR addressing information. In turn, when a client wants to communicate with server whammy, it must retrieve binding information about the server. This binding information must contain address information for physical node HENDRX, not for the VMScluster alias GUITAR.
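
The following C sketch makes this concrete: the client composes its string binding with the physical node name HENDRX rather than the VMScluster alias GUITAR. The endpoint name WHAMMY is hypothetical, and error checking is omitted.

#include <dce/rpc.h>

/* Bind to the whammy server on the specific VMScluster node HENDRX;
 * the VMScluster alias GUITAR must not be used as the network address. */
void bind_to_whammy(rpc_binding_handle_t *binding)
{
    unsigned_char_t *string_binding;
    unsigned32       status;

    rpc_string_binding_compose(NULL,
        (unsigned_char_t *) "ncacn_dnet_nsp",
        (unsigned_char_t *) "HENDRX",      /* physical node, not the alias GUITAR */
        (unsigned_char_t *) "WHAMMY",      /* hypothetical well-known endpoint */
        NULL, &string_binding, &status);

    rpc_binding_from_string_binding(string_binding, binding, &status);
    rpc_string_free(&string_binding, &status);
}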

DCE makes use of VMScluster technology in several ways.

Although DCE installation and daemon processes are handled in a standard VMScluster manner, you must configure each VMScluster node individually to run DCE services. Some DCE services require node-specific information to be stored in the nonshared file system.

3.6.4 DCE and VMScluster Configuration Issues

Although DCE cells and VMScluster environments each maintain exclusive lists of member hosts (nodes), the boundaries of the two environments do not need to match. In a VMScluster environment, each node can be a member of only one extended cluster system; similarly, in DCE, each host is a member of only one cell. However, a cell and a VMScluster do not need to contain the same set of systems.

For security reasons, you should not have some members of a VMScluster belong to one cell and other members of a VMScluster belong to another cell. However, members of multiple VMScluster environments can be members of one DCE cell.


Chapter 4
Using Digital DCE with DECnet

The following sections describe information you need to know when using Digital DCE for OpenVMS VAX and OpenVMS Alpha with DECnet software.

Digital DCE for OpenVMS VAX and OpenVMS Alpha supports DECnet Phase IV networking. It also supports DECnet/OSI (DECnet Phase V).

4.1 DECnet and DCE Startup and Shutdown Sequences

Before you stop or restart DECnet, first stop the DCE services; after DECnet is running again, restart the DCE services. Follow these steps to shut down DECnet and DCE on a system running DCE applications:

  1. Stop any DCE applications that are running.
  2. Stop the DCE services.
    If you are performing a system shutdown, the DCE services are stopped with the following commands, placed before the network transport shutdown commands in the site-specific shutdown procedure SYS$MANAGER:SYSHUTDWN.COM:


    $ @SYS$STARTUP:DCE$SHUTDOWN.COM NOCONFIRM
    $ @SYS$MANAGER:DCE$RPC_SHUTDOWN.COM NOCONFIRM
    

    This ensures that both DCE services and DECnet shut down in the correct order.
    If, however, you must shut down DECnet but are not performing a system shutdown, first stop the DCE services with these commands:


    $ @SYS$MANAGER:DCE$SETUP stop
    $ @SYS$MANAGER:DCE$RPC_SHUTDOWN clean
    

  3. If you are not performing a system shutdown, you can then stop DECnet interactively with one of the following commands:
    To shut down DECnet Phase IV, use the following command:


    $ MCR NCP SET EXECUTOR STATE SHUT
    

    To shut down DECnet/OSI, use the following command:


    $ @SYS$MANAGER:NET$SHUTDOWN
    

Here is the sequence to follow when you start DECnet on a system that is also running DCE applications:

  1. Start DECnet Phase IV with the following command (usually executed from system startup procedures):


    $ @SYS$MANAGER:STARTNET.COM
    

    If you are running DECnet/OSI, start it with the following command instead:


    $ @SYS$STARTUP:NET$STARTUP.COM
    

  2. Make sure the DCE services are started.
    Check to see that the DCE startup command procedure is invoked by the site-specific startup procedure. In SYS$MANAGER:SYSTARTUP_V5.COM, make sure the following line is placed after the network transport startup commands:


    $ @SYS$STARTUP:DCE$STARTUP.COM
    

    DCE startup can occur only after successful completion of the DECnet startup procedure.
    If you need to start the DCE services, but are not performing a system reboot, you can start DCE with this command:


    $ @SYS$MANAGER:DCE$SETUP start
    

  3. After the DCE services are started, you can restart your DCE applications.

4.2 Running DCE Server Applications Using DECnet

Users running server applications that support DECnet need to consider the server account requirements and the DECnet endpoint naming conventions described in the following sections.

4.2.1 Server Account Requirements

A DCE server application listening for client requests using the ncacn_dnet_nsp protocol sequence must be able to create a DECnet server endpoint (known as a named object in DECnet). To create the endpoint, the server application must run from an account that either holds the NET$DECLAREOBJECT rights identifier or has the SYSNAM privilege enabled.

If the NET$DECLAREOBJECT rights identifier does not already exist on your system, installation of Digital DCE for OpenVMS VAX or OpenVMS Alpha creates it for you.

Use the OpenVMS Authorize Utility (AUTHORIZE) to display the rights identifier, as follows:


$ RUN SYS$SYSTEM:AUTHORIZE
UAF> SHOW /IDENTIFIER NET$DECLAREOBJECT

    Name                             Value           Attributes
    NET$DECLAREOBJECT                %X91F50005      DYNAMIC

If a server application must run from an account that does not have the SYSNAM privilege and does not hold the NET$DECLAREOBJECT rights identifier, use AUTHORIZE to grant the rights identifier to the account. For example:


$ RUN SYS$SYSTEM:AUTHORIZE
UAF> GRANT/IDENTIFIER NET$DECLAREOBJECT uic/account-specification

If the server account does not have the rights identifier NET$DECLAREOBJECT or the SYSNAM privilege, the RPC use-protocol-sequence API routines such as rpc_server_use_all_protseqs() and rpc_server_use_protseq() return the status code rpc_s_cant_listen_socket for the ncacn_dnet_nsp (DECnet) protocol sequence.
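
The following C sketch (an illustration, not product code) shows how a server can detect this condition when it requests the DECnet protocol sequence; error handling is simplified.

#include <stdio.h>
#include <dce/rpc.h>

/* Request the DECnet protocol sequence and report the failure that occurs
 * when the account lacks the NET$DECLAREOBJECT identifier and SYSNAM. */
void use_decnet_protseq(void)
{
    unsigned32 status;

    rpc_server_use_protseq((unsigned_char_t *) "ncacn_dnet_nsp",
                           rpc_c_protseq_max_reqs_default, &status);

    if (status == rpc_s_cant_listen_socket)
        printf("Cannot create DECnet endpoint; check NET$DECLAREOBJECT or SYSNAM.\n");
}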

4.2.2 DECnet Endpoint Naming

To prevent RPC interoperability problems between DECnet-VAX and DECnet-ULTRIX hosts, Compaq recommends that you specify all well-known server endpoints completely in uppercase characters, using a maximum of 15 characters.

The following example shows an IDL file using an uppercase endpoint:


 [uuid(43D2681B-A000-0000-0D00-00C663000000),
  version(1),
  endpoint("ncadg_ip_udp:[2001]",
           "ncacn_ip_tcp:[2001]",
           "ncacn_dnet_nsp:[APP_SERVER]")
 ]
 interface my_app

When a server calls the RPC use-protocol-sequence API routines such as rpc_server_use_all_protseqs_ep() and rpc_server_use_protseq_if(), DECnet on OpenVMS creates ncacn_dnet_nsp endpoints in uppercase characters, regardless of how the endpoint was specified. DECnet on OpenVMS also converts to uppercase the endpoints in all incoming and outgoing RPC requests.

DECnet-ULTRIX, however, performs no case conversion on ncacn_dnet_nsp endpoints. This difference can prevent client requests from reaching a server.

For example, an ULTRIX DCE server listening for client requests over the ncacn_dnet_nsp protocol sequence with the endpoint app_server is not able to receive requests from an OpenVMS DCE client. Even though the OpenVMS client uses the endpoint app_server to create a binding handle (by using a string binding or from an import), DECnet on OpenVMS converts the endpoint in the outgoing RPC request to uppercase APP_SERVER. Because the ULTRIX DCE server application is listening on the lowercase app_server endpoint, the client request is rejected.
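
For illustration, an OpenVMS client using a string binding would specify the endpoint in uppercase, matching what DECnet on OpenVMS places in the outgoing request. In the following C sketch, the node name MYNODE is hypothetical, the endpoint APP_SERVER comes from the IDL example above, and error checking is omitted.

#include <dce/rpc.h>

/* Create a binding handle from a string binding whose DECnet endpoint is
 * specified in uppercase, as DECnet on OpenVMS transmits it. */
void bind_to_app_server(rpc_binding_handle_t *binding)
{
    unsigned32 status;

    rpc_binding_from_string_binding(
        (unsigned_char_t *) "ncacn_dnet_nsp:MYNODE[APP_SERVER]",  /* MYNODE is hypothetical */
        binding, &status);
}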

4.3 DECnet String Binding Formats Supported in This Release

To support the use of string bindings in this release, Compaq has added the following DECnet value to the list of supported protocol sequences:

ncacn_dnet_nsp

Unlike TCP/IP and UDP/IP, DECnet allows named endpoints. An example of a named endpoint for the DECnet protocol sequence is TESTNAME. Compaq recommends that you use uppercase names of no more than 15 characters.

An endpoint can also be specified as a DECnet object number, for example, #17. The # (number sign) character must precede an object number.

At present, there are no DECnet Phase IV options.
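
For illustration, the two endpoint forms appear in string bindings as follows; MYNODE is a hypothetical node name, and the DECnet address 2.711 is taken from the earlier rpccp example. No options field appears, because there are no DECnet Phase IV options.

 ncacn_dnet_nsp:MYNODE[TESTNAME]
 ncacn_dnet_nsp:2.711[#17]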

