
The Data Sharing Model

Some of the DCE services are based on the data sharing model, in which data is shared by distributing copies of it throughout the system. Like RPC, data sharing assumes the existence of clients and servers. Data sharing, however, focuses on distributed data rather than distributed execution. In RPC, the client's procedure is executed on the server. In data sharing, the server's data is sent to the client. For example, if a client wants to access a file, a copy of the file is sent from the server to the client. The client then proceeds to access the file locally. Data sharing can be built on top of RPC, using RPC as the communications mechanism between the client and server, and as the means of transferring data.
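
To make the pattern concrete, the following sketch shows a client that obtains a copy of a file from a server and then works on it locally. The fetch_file_copy() routine and the file name are hypothetical stand-ins for an RPC stub and its arguments; they are not part of any DCE interface.

    /* Sketch of the data sharing pattern: the client obtains a copy of a
     * server-side file and then operates on it locally.  fetch_file_copy()
     * stands in for an RPC stub generated from a hypothetical interface;
     * it is not part of any real DCE API. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical RPC stub: asks the server for the contents of `name`
     * and returns a locally allocated copy (NULL on failure). */
    static char *fetch_file_copy(const char *name, size_t *len)
    {
        /* In a real system this would marshal the request, send it to
         * the file server, and unmarshal the reply.  Here we fake one. */
        const char *fake = "contents of the server's master copy\n";
        *len = strlen(fake);
        char *copy = malloc(*len + 1);
        if (copy != NULL)
            memcpy(copy, fake, *len + 1);
        return copy;
    }

    int main(void)
    {
        size_t len;
        char *local = fetch_file_copy("/docs/report.txt", &len);
        if (local == NULL)
            return 1;

        /* All subsequent access is local: no further network traffic is
         * needed until the copy must be written back or refreshed. */
        printf("local copy (%zu bytes): %s", len, local);
        free(local);
        return 0;
    }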

Data sharing usually entails having multiple copies of the same data; for example, a master copy of a file on a file server, and a copy of the file on one or more client machines. As a result, copies of data may diverge; a client may change its copy in ways that make it inconsistent with the copy on the server. Therefore, distributed services based on the data sharing model usually include mechanisms for keeping copies of data consistent.
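
The sketch below illustrates one common consistency mechanism, version-based revalidation: the client records a version number with its cached copy and checks it against the server before use. The version scheme and the server_version() and refetch() routines are hypothetical; each DCE service implements consistency in its own way.

    /* Sketch of a consistency check: the client revalidates the version
     * of its cached copy against the server before using it.  The
     * server_version() and refetch() routines stand in for RPCs; the
     * versioning scheme is illustrative, not a DCE protocol. */
    #include <stdbool.h>
    #include <stdio.h>

    struct cached_file {
        const char *name;
        long        version;   /* version of the copy held locally */
        char        data[256]; /* the cached contents themselves   */
    };

    /* Hypothetical RPC: returns the current version of the master copy. */
    static long server_version(const char *name)
    {
        (void)name;
        return 7;              /* pretend the server is at version 7 */
    }

    /* Hypothetical RPC: refreshes the cached copy from the server. */
    static void refetch(struct cached_file *f, long new_version)
    {
        snprintf(f->data, sizeof f->data, "contents at version %ld",
                 new_version);
        f->version = new_version;
    }

    /* Use the cache only if it still matches the master copy. */
    static bool revalidate(struct cached_file *f)
    {
        long latest = server_version(f->name);
        if (latest != f->version) {
            refetch(f, latest);  /* copy had diverged; bring it up to date */
            return false;        /* caller learns the copy was stale       */
        }
        return true;             /* cached copy is still consistent        */
    }

    int main(void)
    {
        struct cached_file f = { "/docs/report.txt", 5, "old contents" };
        bool fresh = revalidate(&f);
        printf("%s: %s (%s)\n", f.name, f.data, fresh ? "fresh" : "was stale");
        return 0;
    }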

In addition, services that implement data sharing must be able to synchronize concurrent access to data. For example, two clients may each want to modify a given record in a database. The server that manages the database must either prevent them from making conflicting modifications, or decide which modification takes precedence.
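
One way a server can arbitrate such conflicts is to reject any update that was based on a stale view of the record, as in the sketch below. The version check shown is an illustrative policy, not the mechanism used by any particular DCE service.

    /* Sketch of how a server might serialize conflicting updates to one
     * record: an update is accepted only if the client saw the current
     * version, so one of two competing writes is rejected. */
    #include <stdio.h>

    struct record {
        long version;          /* bumped on every successful update */
        char value[64];
    };

    /* Apply an update only if the client saw the current version;
     * otherwise it conflicts with an update already applied. */
    static int update_record(struct record *r, long seen_version,
                             const char *new_value)
    {
        if (seen_version != r->version)
            return -1;                       /* conflict: caller must retry */
        snprintf(r->value, sizeof r->value, "%s", new_value);
        r->version++;
        return 0;
    }

    int main(void)
    {
        struct record r = { 1, "initial" };

        /* Both clients read version 1, then both try to write. */
        int a = update_record(&r, 1, "client A's change");   /* succeeds */
        int b = update_record(&r, 1, "client B's change");   /* rejected */

        printf("A: %d, B: %d, record now: \"%s\" (v%ld)\n",
               a, b, r.value, r.version);
        return 0;
    }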

Two DCE services are based on the data sharing model. The first is the Directory Service. Both DCE directory services, CDS and GDS, maintain caches on the client. The caches contain copies of data that users on the client have recently accessed. Subsequent accesses to that data can then be satisfied locally from the cache, rather than over the network to the server.
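
The sketch below shows the general shape of such a client-side cache: a lookup is satisfied locally when possible and goes to the server only on a miss. The fixed-size table, the lookup_on_server() stub, and the example name and binding string are hypothetical and do not reflect the internals of CDS or GDS.

    /* Sketch of a directory client's cache: answer from the local cache
     * on a hit, ask the server and remember the answer on a miss. */
    #include <stdio.h>
    #include <string.h>

    #define CACHE_SLOTS 4

    struct cache_entry {
        char name[64];     /* directory entry name                */
        char binding[64];  /* the attribute the client asked for  */
    };

    static struct cache_entry cache[CACHE_SLOTS];
    static int next_slot;

    /* Hypothetical RPC to the directory server, used only on a miss. */
    static const char *lookup_on_server(const char *name)
    {
        (void)name;
        return "ncadg_ip_udp:192.0.2.17[]";   /* illustrative reply */
    }

    static const char *lookup(const char *name)
    {
        /* Hit: answer from the local cache, no network traffic. */
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (strcmp(cache[i].name, name) == 0)
                return cache[i].binding;

        /* Miss: ask the server, then remember the answer locally. */
        const char *binding = lookup_on_server(name);
        struct cache_entry *e = &cache[next_slot++ % CACHE_SLOTS];
        snprintf(e->name, sizeof e->name, "%s", name);
        snprintf(e->binding, sizeof e->binding, "%s", binding);
        return e->binding;
    }

    int main(void)
    {
        printf("first lookup : %s\n", lookup("/.:/subsys/printers/laser1"));
        printf("second lookup: %s\n", lookup("/.:/subsys/printers/laser1"));
        return 0;
    }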

The DCE Distributed File Service is also based on the data sharing model. A DFS client maintains a cache of files that have recently been accessed by a user on the system. DFS servers distribute and revoke tokens, which represent a client's capability to perform operations on files. Through careful token management, the DFS server can ensure that its clients do not perform conflicting operations on shared files, and that they do not see inconsistent copies of the same file.
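
The sketch below illustrates the token idea: read tokens may be shared among clients, but the server revokes conflicting tokens before granting a write token. The types and the grant and revocation policy shown are illustrative, not the DFS token protocol.

    /* Sketch of token management in the style DFS uses: read tokens are
     * handed out freely, but conflicting tokens are revoked before a
     * write token is granted, so no client works on a stale copy. */
    #include <stdio.h>

    #define MAX_CLIENTS 8

    enum token { TOKEN_NONE, TOKEN_READ, TOKEN_WRITE };

    static enum token held[MAX_CLIENTS];    /* token held by each client */

    /* Hypothetical callback to a client telling it to drop its token. */
    static void revoke(int client)
    {
        printf("  revoking token held by client %d\n", client);
        held[client] = TOKEN_NONE;
    }

    static void grant(int client, enum token want)
    {
        for (int i = 0; i < MAX_CLIENTS; i++) {
            if (i == client || held[i] == TOKEN_NONE)
                continue;
            /* Reads may share; a write conflicts with every other token. */
            if (want == TOKEN_WRITE || held[i] == TOKEN_WRITE)
                revoke(i);
        }
        held[client] = want;
    }

    int main(void)
    {
        grant(0, TOKEN_READ);     /* client 0 caches the file for reading */
        grant(1, TOKEN_READ);     /* client 1 may read the same file      */
        printf("client 2 asks to write:\n");
        grant(2, TOKEN_WRITE);    /* read tokens are revoked first        */
        return 0;
    }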

Data sharing, like RPC, enables users and programmers to communicate transparently in a distributed system.