
Choosing to Thread

The choice of whether to multithread is really a question of specific application design, and only general guidelines can be supplied here. Application programmers need to be aware that, depending on the threads implementation and the underlying hardware, concurrency may be more apparent than real for many applications. If threads are being time-sliced on a single processor, nonblocking activities will not go any faster because they are multithreaded. In fact, given the overhead of the threads implementation itself, they may be slower. Even on a multiprocessor, with the DCE user-space threads implementation, all threads in a single process contend for the same processor.

On the other hand, if multiple threads are carrying out activities that may block - and this includes making RPCs to remote hosts - then multithreading will probably be beneficial. For example, multiple concurrent RPCs to several hosts may allow a local client to achieve true parallelism. Note, however, that concurrent RPCs to a single server instance may not be any more efficient if the server itself cannot get any real benefit from multithreading of the manager code.

RPC servers are multithreaded by default, since multithreading is an obvious way for servers to handle multiple calls simultaneously. Even if the manager code and underlying implementation do not permit true parallelism, manager multithreading may at least allow a fairer distribution of processing time among competing clients. For example, a client whose call can complete quickly need not wait for a client whose call consumes a lot of processor time. For this to occur, threads must make use of one of the time-sliced scheduling policies (including the default policy). On the other hand, if all calls make use of approximately similar resources, then multithreading may become simply an additional, possibly expensive, form of queueing unless the application or the environment permits real parallelism.
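A thread can inspect the scheduling policy it is actually running under, which is one way to confirm whether a time-sliced policy is in effect. The sketch below uses the modern POSIX names; DCE threads exposed slightly different interfaces (such as pthread_attr_setsched), so treat this purely as an illustration of the idea.

```c
#include <pthread.h>
#include <sched.h>

/* Return the scheduling policy of the calling thread, e.g. SCHED_OTHER
   (the usual time-shared default), SCHED_RR, or SCHED_FIFO.
   Returns -1 on error. */
int current_policy(void)
{
    struct sched_param param;
    int policy;

    if (pthread_getschedparam(pthread_self(), &policy, &param) != 0)
        return -1;
    return policy;
}
```

On most systems an unprivileged server's threads will report the default time-shared policy, which already provides the fairness behavior described above; the real-time policies generally require special privileges to select.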

In summary, the developer must consider the following questions in order to decide whether an application will benefit from multithreading:

· Are the threaded operations likely to block, for example, because they make blocking I/O calls or RPCs? If so, then multithreading is likely to be beneficial in any implementation or hardware environment.

· Can the underlying hardware and RPC implementation support threads on more than one processor within a single process? If not, then multithreading cannot achieve real parallelism for processor-intensive operations. The DCE user-space threads implementation restricts all threads of a single process to contending for a single processor and so cannot provide real parallelism for such operations.

· Even if the answer to both of the first two questions is yes, will the use of a time-slicing thread scheduling policy permit fairer distribution of server resources among contending clients? If so, then server manager multithreading may be beneficial.

Even if, according to these criteria, multithreading is likely to benefit an application, the programmer still needs to consider the cost, in terms of additional complexity, of writing multithreaded code. In general, most server manager code will probably benefit from multithreading, which is provided by default by DCE. Most server applications will therefore choose to be multithreaded and incur the extra costs of creating thread-safe code. Whether client code will find the extra complexity of multithreading worthwhile really depends on a careful assessment of the listed criteria for each program design. There is no way to predict what a "typical" client will do.