Document revision date: 5 July 2000

Compaq DCE for OpenVMS VAX and OpenVMS Alpha
Installation and Configuration Guide


Chapter 4
Configuring a DCE Cell

This chapter describes the steps necessary to set up a DCE cell and introduces the DCE system configuration utility for Compaq DCE for OpenVMS VAX and OpenVMS Alpha. Note that DCE must be configured before you can use its services.

4.1 Overview of the DCE Cell

A cell is the basic DCE unit. It is a group of networked systems and resources that share common DCE services. Usually, the systems in a cell are in the same geographic area, but cell boundaries are not limited by geography. A cell can contain from one to several thousand systems. The boundaries of a cell are typically determined by its purpose, as well as by security, administrative, and performance considerations.

A DCE cell is a group of systems that share a namespace under a common administration. The configuration procedure allows you to configure your system as a DCE client, create a new DCE cell, add a master Cell Directory Service (CDS) server, add a replica CDS server, and add a Distributed Time Service (DTS) local server. When you create a new cell, you automatically configure a Security server.

You do not need to create a DCE cell if you are using only the DCE Remote Procedure Call (RPC) and your applications use only explicit RPC string bindings to provide the binding information that connects servers to clients. If other systems in your network are already using DCE services, there may be an existing cell that your system can join. If you are not sure, consult your network administrator to find out which DCE services are already in use in your network.

At a minimum, a cell configuration includes the DCE Cell Directory Service, the DCE Security Service, and the DCE Distributed Time Service. One system in the cell must provide a DCE Directory Service server to store the cell namespace database. You can choose to install both the Cell Directory Server and the Security Server on the system from which you invoked the procedure, or you can split the two servers and put them on different systems.


You must run the installation and configuration procedures on the system where you are creating a cell before you install and configure DCE on the systems that are joining the cell.

4.1.1 Creating a Cell

All DCE systems participate in a cell. If you are installing DCE and there is no cell to join, the first system on which you install the software is also the system on which you create the cell. Remember that this system is also the DCE Security Server. You can also make this system your Cell Directory Server.

When you create a cell, you must name it. The cell name must be unique across your global network. The name is used by all cell members to indicate the cell in which they participate. The configuration procedure provides a default name that is unique and is easy to remember. If you choose a name other than the default, the name must be unique. If you want to ensure that separate cells can communicate, the cell name must follow BIND or X.500 naming conventions.

4.1.2 Joining a Cell

Once the first DCE system is installed and configured and a cell is created, you can install and configure the systems that join that cell. During configuration, you need the name of the cell you are joining. Ask your network administrator for the cell name.

4.1.3 Defining a Cell Name

You need to define a name for your DCE cell that is unique in your global network and is the same on all systems that participate in this cell. The DCE naming environment supports two kinds of names: global names and local names. All entries in the DCE Directory Service have a global name that is universally meaningful and usable from anywhere in the DCE naming environment. All Directory Service entries also have a cell-relative name that is meaningful and usable only from within the cell in which that entry exists.

If you plan to connect this cell to other DCE cells in your network, either now or in the future, it is important that you choose an appropriate name for this cell. You cannot change the name of the cell once the cell has been created. If you are not sure how to choose an appropriate name for your DCE cell, consult Chapter 9 of the Compaq DCE for OpenVMS VAX and OpenVMS Alpha Product Guide, or the section on global names in the OSF DCE Administration Guide --- Introduction.

Before you can register the cell in X.500, you must ensure that the Compaq X.500 Directory Service kit is installed on your CDS server.

Compaq recommends the following convention for creating DCE cell names: the Internet name of your host system, followed by the suffix -cell, followed by the Internet address of your organization. For example, if the Internet name of your system is myhost, your cell name, in DCE syntax, would be the host name with -cell appended, followed by your organization's Internet address. This convention helps ensure that the name is unique and easy to remember.

If there is already a cell name defined in a previously existing DCE system configuration, do not change it unless you are removing this system from the cell in which it is currently a member and you are joining a different cell.

When the configuration procedure prompts you for the name of your DCE cell, type the cell name without the /.../ prefix; the prefix is added automatically. For example, if the full global name selected for the cell, in DCE name syntax, begins with /.../, enter only the portion that follows that prefix.
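The prefix handling can be sketched as a small helper (illustrative only; the configuration procedure performs this normalization itself, and the cell name shown is a hypothetical example):

```python
# Illustrative sketch: normalize a DCE cell name the way the
# configuration procedure expects it to be typed -- without the
# global /.../ prefix, which the procedure adds automatically.
GLOBAL_PREFIX = "/.../"

def cell_name_for_prompt(full_global_name: str) -> str:
    """Return the cell name as it should be typed at the prompt."""
    if full_global_name.startswith(GLOBAL_PREFIX):
        return full_global_name[len(GLOBAL_PREFIX):]
    return full_global_name

# Hypothetical cell name, for illustration only:
print(cell_name_for_prompt("/.../mycell.example.com"))  # mycell.example.com
```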

4.1.4 Defining a Host Name

You need to define a name for your system that is unique within your DCE cell. You should use the default host name, which is the Internet host name (the part of the name before the first dot (.)). The following example shows the default host name derived from the system's Internet name:

Please enter your DCE host name [myhost]: 
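Deriving the default host name can be sketched as follows (myhost.example.com is a hypothetical Internet name used only for illustration):

```python
# Illustrative sketch: the default DCE host name is the Internet
# host name, that is, the portion before the first dot.
def default_dce_host_name(internet_name: str) -> str:
    return internet_name.split(".", 1)[0]

print(default_dce_host_name("myhost.example.com"))  # myhost
```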

4.1.5 Intercell Naming Using DNS

This section provides tips on defining a cell name in the Domain Name System (DNS). Names in DNS are associated with one or more data structures called resource records. The resource records define cells and are stored in a data file. For TCP/IP Services for OpenVMS, this file is called SYS$SPECIFIC:[TCPIP$BIND]<domain name>.DB.

If you are using a UNIX DNS BIND server, the file is called /etc/namedb/hosts.db. To create a cell entry, you must edit the data file and create two resource records for each CDS server that maintains a replica of the cell namespace root. The following example shows the entries for a cell whose master CDS server maintains the root replica. The BIND server must be authoritative for the domains of the cell name. The BIND master server requires entries of the following form in its data file:

        IN  A
        IN  MX   1
        IN  TXT  "1 c8f5f807-487c-11cc-b499-08002b32b0ee Master /.../


TXT records must span only one line. The TXT entry in the example above may appear wrapped across several lines only because of its length. Use your text editor to ensure that the record actually occupies a single line (widening your editing window helps), and check that the quotation marks are placed correctly and that the host name is at the end of the record.
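With hypothetical names filled in (a cell mycell.example.com served by master CDS server myhost.example.com, and an invented address and clearinghouse UUID, all placeholders for illustration only), the entries might look like this:

```
mycell.example.com.  IN  A    192.0.2.10
mycell.example.com.  IN  MX   1  myhost.example.com.
mycell.example.com.  IN  TXT  "1 c8f5f807-487c-11cc-b499-08002b32b0ee Master /.../mycell.example.com/myhost_ch c84946a6-487c-11cc-b499-08002b32b0ee myhost.example.com"
```

The TXT record must be entered as a single line in the actual data file.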

The information to the right of the TXT column in the Hesiod text entry (that is, 1 c8f5f807-48...) comes directly from the cdscp show cell /.: as dns command. For example, to obtain the information that goes in the TXT record, go to a host in the ruby cell and enter the cdscp show cell /.: as dns command. When the system displays the requested information, cut and paste it into the record. This method ensures that you do not introduce any typing errors.
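As a quick sanity check before restarting the server, the formatting rules above (single line, correctly placed quotation marks, host name last) can be verified with a short script; the record text below is a hypothetical example:

```python
# Illustrative sketch: check that a DCE cell TXT record obeys the
# formatting rules described above -- it must occupy a single line,
# its quotation marks must be balanced, and the host name must be
# the last field inside the quotes.
def check_txt_record(record: str, host: str) -> bool:
    if "\n" in record:                 # must span only one line
        return False
    if record.count('"') != 2:         # quotes placed correctly
        return False
    inside = record.split('"')[1]      # text between the quotes
    fields = inside.split()
    return bool(fields) and fields[-1] == host

# Hypothetical record, for illustration only:
rec = ('mycell.example.com. IN TXT "1 c8f5f807-487c-11cc-b499-08002b32b0ee '
       'Master /.../mycell.example.com/myhost_ch '
       'c84946a6-487c-11cc-b499-08002b32b0ee myhost.example.com"')
print(check_txt_record(rec, "myhost.example.com"))  # True
```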

To ensure that the records that you have entered are valid, restart the DNS Bind server process.

4.1.6 Intercell Naming Using LDAP/X.500

This section provides tips on defining a cell name in LDAP/X.500.

Cells that will communicate with each other must be part of the same LDAP/X.500 namespace; that is, they must share a common root in the namespace tree. For example, the cells /c=us/o=compaq/ou=laser-cell and /c=us/o=compaq/ou=ruby-cell share the root /c=us/o=compaq and would be able to participate in intercell communications.
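The shared-root requirement can be sketched as a small check (illustrative only, using the example names above):

```python
# Illustrative sketch: two X.500-style names share a common root
# if their leading relative distinguished names (RDNs) match.
def common_root(name_a: str, name_b: str) -> str:
    """Return the shared leading RDNs of two X.500-style names."""
    rdns_a = name_a.strip("/").split("/")
    rdns_b = name_b.strip("/").split("/")
    shared = []
    for a, b in zip(rdns_a, rdns_b):
        if a != b:
            break
        shared.append(a)
    return "/" + "/".join(shared) if shared else ""

print(common_root("/c=us/o=compaq/ou=laser-cell",
                  "/c=us/o=compaq/ou=ruby-cell"))  # /c=us/o=compaq
```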

If your cell is part of an X.500 namespace, answer Yes to the question "Do you want to register the DCE cell in X.500?". If your cell is part of an LDAP namespace, answer Yes to the question "Do you want to register the DCE cell in LDAP?". Additional information about Intercell operations can be found in Chapter 9 of the Compaq DCE for OpenVMS VAX and OpenVMS Alpha Product Guide.

4.2 The DCE System Configuration Utility --- DCE$SETUP.COM

The DCE$SETUP command procedure begins the configuration process. Many of the system configuration utility prompts have default values associated with them. The default responses are based on your existing configuration, if you have one. Otherwise, default values for the most common DCE system configurations are provided. At each prompt, press RETURN to take the default displayed in brackets, type a question mark (?) for help, or supply the requested information.

The system configuration utility sets up the DCE environment on your node so that you can use DCE services, and leads you through the process of creating or joining a cell.


If you are installing Compaq DCE for OpenVMS VAX or OpenVMS Alpha Version 3.0 over a previous version of DCE, you do not have to reconfigure DCE after the installation. Before the installation, stop the DCE daemons with the following command:


Then, after the installation, enter the following command:


You must configure DCE if you are installing it for the first time, or reconfigure it if you are installing a new version over DCE Version 1.0.

If you are installing DCE over an existing Compaq DCE for OpenVMS VAX or OpenVMS Alpha, perform the following steps:

  1. Stop the DCE daemons with the following command:


  2. If installing DCE over version 1.5 of Compaq DCE for OpenVMS VAX or OpenVMS Alpha, also perform the following step to stop the RPC daemon:


  3. After the installation, enter the following command:


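On kits that follow the usual DCE$SETUP convention, the stop/start sequence is likely similar to the following (the procedure location and option names are assumptions based on the menu described in Section 4.2; check your kit's release notes for the exact commands):

```
$ @SYS$MANAGER:DCE$SETUP STOP      ! stop the DCE daemons before installing
$ ...                              ! install the new kit
$ @SYS$MANAGER:DCE$SETUP START     ! restart the daemons afterward
```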
4.2.1 Configuring LDAP, NSI, and GDA

The Lightweight Directory Access Protocol (LDAP) provides access to X.500 directory services without the overhead of the full Directory Access Protocol (DAP). The simplicity of LDAP, along with the powerful capabilities it inherits from DAP, has made it the de facto standard for Internet directory services over TCP/IP.

Inside a cell, the directory service is accessed mostly through the name service interface (NSI), implemented as part of the run-time library. Cross-cell directory service is controlled by a global directory agent (GDA), which looks up foreign cell information on behalf of the application in either the Domain Name System (DNS) or an X.500 database. Once that information is obtained, the application contacts the foreign CDS in the same way as the local CDS.

Once LDAP is configured, applications can request directory services from either CDS or LDAP or both. LDAP is provided as an optional directory service that is independent of CDS and duplicates CDS functionality. LDAP is for customers looking for an alternative to CDS that offers TCP/IP and Internet support.

With LDAP directory service available, GDA can look up foreign cell information by communicating through LDAP to either an LDAP-aware X.500 directory service or a standalone LDAP directory service, in addition to DNS and DAP.

Note that DCE for OpenVMS provides its own client implementation of LDAP. Before installing DCE, a DCE administrator must obtain LDAP server software and install it as an LDAP server in the environment. Then, during the DCE installation and configuration procedure, the administrator must choose LDAP and explicitly configure the LDAP directory service for the cell.

4.2.2 Kerberos 5 Security

The DCE authentication service is based on Kerberos 5. The Kerberos Key Distribution Center (KDC) is part of the DCE Security Server secd . The authorization information that is created by the DCE for OpenVMS privilege server is passed in the Kerberos 5 ticket's authorization field.

DCE provides a Kerberos configuration program (DCE$KCFG.EXE) to assist in the interoperability between DCE Kerberos and standard Kerberos. To find out more information about the kcfg program, use the following two commands.

To display individual command switches and their arguments enter:

kcfg -? 

To display a short description of the command and what it does enter:

kcfg -h 

This provides information on the configuration file management, principal registration, and service configuration.


The dcesetup configuration script marks all tickets as forwardable by default. If tickets are not forwardable, the Kerberos Key Distribution Center (KDC) server does not provide authentication and authorization information to the telnet process. The kinit -f command marks tickets as forwardable.

All machines within a cell that plan to use Kerberos-enabled tools need to check and possibly modify the registry and the krb5 configuration with the kcfg executable.

To make sure that Kerberos Version 4 interoperates with Kerberos Version 5, an administrator can use the kcfg -k command to change krb.conf entries. This command needs to be entered on each machine in the cell.

The registry must contain a principal entry that describes the host machine of the KDC server. This principal entry is of the form host/<hostname>. The principal and the associated keytable entry can be created with kcfg -p, which verifies that the host entry exists and creates it if it does not.

4.2.3 Starting the System Configuration Utility

You must be logged in as a privileged user. The SHOW command requires only the NETMBX and TMPMBX privileges. All other commands require the WORLD, SYSPRV, CMKRNL, and SYSNAM privileges. The CONFIG command requires the BYPASS privilege.

You can use the same command to perform an initial configuration or to reconfigure DCE. See the Appendix for several sample configurations. To start the system configuration utility, at the DCL prompt enter the following command:


The DCE System Management Main Menu appears:

                    DCE System Management Main Menu 
                           DCE for OpenVMS Alpha V3.0 
      1)  Configure     Configure DCE services on this system 
      2)  Show          Show DCE configuration and active daemons 
      3)  Stop          Terminate all active DCE daemons 
      4)  Start         Start all DCE daemons 
      5)  Restart       Terminate and restart all DCE daemons 
      6)  Clean         Terminate all active DCE daemons and remove 
                         all temporary local DCE databases 
      7)  Clobber       Terminate all active DCE daemons and remove 
                         all permanent local DCE databases 
      8)  Test          Run Configuration Verification Program 
      0)  Exit          Exit this procedure 
      ?)  Help          Display helpful information 
Please enter your selection: 

Enter 1 to view the DCE Configuration Menu. To skip the previous menu and go directly to the DCE Configuration Menu, enter the following command:
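Assuming the conventional DCE$SETUP invocation (an assumption; the exact procedure location may vary by kit), the two ways to reach the configuration menu would be:

```
$ @SYS$MANAGER:DCE$SETUP          ! display the main menu, then select 1
$ @SYS$MANAGER:DCE$SETUP CONFIG   ! go directly to the DCE Configuration Menu
```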


For information on how to configure a DCE cell or how to add a client, see Chapter 5. For information on modifying an existing configuration, see Chapter 6.

