Updated: 11 December 1998

Guide to DECthreads



A.3.1 DECthreads Use of Kernel Threads

DIGITAL UNIX kernel threads are created as they are needed by the application. The number of kernel threads that DECthreads creates is limited by normal DIGITAL UNIX configuration limits regarding user and system thread creation. Normally, however, DECthreads creates one kernel thread for each actual processor on the system and the kernel creates an additional kernel thread on behalf of the process for bookkeeping operations.

DECthreads does not delete these kernel threads or let them terminate. Kernel threads not currently needed are retained in an idle state until they are needed again. When the process terminates, all kernel threads in the process are reclaimed by the kernel.

The DECthreads scheduler can schedule any user thread onto any kernel thread, so a user thread can run on different kernel threads at different times. Normally this poses no problem. Be aware, however, that the kernel thread ID reported by the dbx or Ladebug debuggers can change at any time.

A.3.2 Support for Real-Time Scheduling

DECthreads supports DIGITAL UNIX real-time scheduling. This allows you to set the scheduling policy and priority of threads. By default, threads are created using process contention scope. This means that the full range of POSIX.1c scheduling policy and priority is available. However, threads running in process contention scope do not preempt lower-priority threads in another process. For example, a thread in process contention scope with SCHED_FIFO policy and PRI_FIFO_MAX priority will not preempt a thread in another process running with SCHED_FIFO and PRI_FIFO_MIN.

In contrast, system contention scope means that each thread created by the program has a direct and unique binding to one kernel execution context. A system contention scope thread competes against all threads in the system and will preempt any thread with lower priority. For this reason, the priority range of threads in system contention scope is restricted unless running with root privilege.

Specifically, a thread with SCHED_FIFO policy cannot run at a priority higher than 18 without privilege, since doing so could lock out all other users on the system until the thread blocked. Threads using any other scheduling policy (including SCHED_RR) can run at priority 19 without privilege because they are subject to periodic timeslicing by the system. For more information, see the DIGITAL UNIX Realtime Programming Guide.

If your program lacks necessary privileges, attempting to call the following routines for a thread in system contention scope returns the error value [EPERM]:
pthread_attr_setschedpolicy() (error returned by pthread_create() at thread creation)
pthread_attr_setschedparam() (error returned by pthread_create() at thread creation)
pthread_setschedparam()

Prior to DIGITAL UNIX Version 4.0, all threads used only system contention scope. In DIGITAL UNIX Version 4.0, all threads created using the pthread interface, by default, have process contention scope.

A.4 Thread Cancelability of System Services

DIGITAL UNIX supports the required system cancelation points specified by the POSIX.1c standard. In addition, critical non-POSIX functions supported by DIGITAL UNIX (such as select()) have also been defined as cancelation points.

For legacy multithreaded applications, note that threads created using the cma or d4 interfaces will not be cancelable at any system call. (Here "system call" means any function without the pthread_ prefix.) If system call cancelation is required, you must write code using the DECthreads pthread interface. None of the system calls should be called with asynchronous cancelation enabled.

For more information, see Section 2.3.7.

A.4.1 Current Cancelation Points

The following functions are cancelation points:
accept()

aio_suspend()
close()
connect()
creat()
fcntl()
fsync()
mq_receive()
mq_send()
msync()
nanosleep()
open()
pause()
pthread_cond_timedwait()
pthread_cond_wait()
pthread_delay_np()
pthread_join()
pthread_testcancel()
read()
readv()
recv()
recvfrom()

recvmsg()
select()
sem_wait()
send()
sendmsg()
sendto()
shutdown()
sigwaitinfo()
sigsuspend()
sigtimedwait()
sigwait()
sleep()
system()
tcdrain()
wait()
waitpid()
write()
writev()

A.4.2 Future Cancelation Points

The following list contains POSIX functions that are not cancelation points in this release of DECthreads but might be in a future release (as specified by the POSIX.1c standard). Please code your programs accordingly.
closedir()

ctermid()
fclose()
fflush()
fgetc()
fgets()
fopen()
fprintf()
fputc()
fputs()
fread()
freopen()
fscanf()
fseek()
ftell()
fwrite()
getc()
getc_unlocked()
getchar()
getcwd()
getgrgid()
getgrgid_r()
getgrnam()
getgrnam_r()
getlogin()
getlogin_r()
getpwnam()

getpwnam_r()
gets()
lseek()
opendir()
perror()
printf()
putc()
putc_unlocked()
putchar()
putchar_unlocked()
puts()
readdir()
remove()
rename()
rewind()
rewinddir()
scanf()
tmpfile()
tmpnam()
ttyname()
ttyname_r()
ungetc()
unlink()

Note that appropriate non-POSIX functions that do not appear in the preceding list might become cancelation points in the future. DIGITAL UNIX will also implement new cancelation points as they are specified by future revisions of the relevant formal or consortium standards.

A.5 Using Signals

This section discusses signal handling based on the POSIX.1c standard.

DIGITAL UNIX Version 4.0 introduces the full POSIX.1c signal model. In previous versions, "synchronous" signals (those resulting from execution errors, such as SIGSEGV and SIGILL) could have different signal actions for each thread. Prior to DIGITAL UNIX Version 3.2, all threads shared a common, processwide signal mask, which meant one thread could not receive a signal while another had the signal blocked.

Under DIGITAL UNIX Version 4.0 and later, all signal actions are processwide. That is, when any thread uses sigaction or equivalent to set a signal handler, or to modify the signal action (for example, to ignore a signal), that action will affect all threads. Each thread has a private signal mask so that it can block signals without affecting the behavior of other threads.

Prior to DIGITAL UNIX Version 4.0, asynchronous signals were processed only in the main thread. In DIGITAL UNIX Version 4.0, any thread that does not have the signal masked can process the signal. To preserve binary compatibility, a thread created by a DECthreads cma or d4 interface routine starts with all asynchronous signals blocked.

A.5.1 POSIX sigwait Service

The POSIX 1003.1c sigwait() service allows any thread to block until one of a specified set of signals is delivered. A thread can wait for any of the asynchronous signals except for SIGKILL and SIGSTOP.

For example, rather than handling Ctrl/C in the normal way, you can create a thread that blocks in a sigwait() call for SIGINT. On receiving the signal, this thread could then cancel the other threads to shut down the program's current activities.

Following are two reasons for avoiding signals:

In a multithreaded program, signal handlers cannot be used in a modular way because there is only one signal handler routine for all of the threads in an application. If two threads install different signal handlers for the same signal, all threads dispatch to the most recently installed handler when the signal is received.

Most applications should avoid using asynchronous programming techniques in conjunction with threads. For example, techniques that rely on timer and I/O signals are usually more complicated and error-prone than simply waiting synchronously within a thread. Furthermore, most of the threads services are not supported for use in signal handlers, and most run-time library functions cannot be used reliably inside a signal handler.

Some I/O intensive code may benefit from asynchronous I/O, but these programs will generally be more difficult to write and maintain than "pure" threaded code.

A thread should not wait for a synchronous signal. This is because synchronous signals are the result of an error during the execution of a thread, and if the thread is waiting for a signal, then it is not executing. Therefore, a synchronous signal cannot occur for a particular thread while it is waiting, and the thread will wait forever.

The POSIX.1c standard requires that the thread block the signals for which it will wait before calling sigwait().

A.5.2 Handling Synchronous Signals as Exceptions

For the signals traditionally representing synchronous errors in the program, DECthreads catches the signal and converts it into an equivalent exception. This exception is then propagated up the call stack in the current thread and can be caught and handled using the normal exception catching mechanisms.

Table A-3 lists DIGITAL UNIX signals that are reported as DECthreads exceptions by default. If any thread declares an action for one of these signals (using sigaction(2) or equivalent), no thread in the process can receive the exception.

Table A-3 Signals Reported as Exceptions
Signal Exception
SIGILL pthread_exc_illinstr_e
SIGIOT pthread_exc_SIGIOT_e
SIGEMT pthread_exc_SIGEMT_e
SIGFPE pthread_exc_aritherr_e
SIGBUS pthread_exc_illaddr_e
SIGSEGV pthread_exc_illaddr_e
SIGSYS pthread_exc_SIGSYS_e
SIGPIPE pthread_exc_SIGPIPE_e

A.6 Thread Stack Guard Areas

When creating a thread based on a thread attributes object, DECthreads potentially rounds up the value specified in the object's guardsize attribute. DECthreads does so based on the value of the configurable system variable PAGESIZE (see <sys/mman.h>). The default value of the guardsize attribute in a thread attributes object is a number of bytes equal to the setting of PAGESIZE.

A.7 Dynamic Activation

Dynamic activation of the DECthreads run-time environment, or of code that depends on DECthreads, is currently not supported.


Appendix B
Considerations for OpenVMS Systems

This appendix discusses DECthreads issues and restrictions specific to the OpenVMS operating system.

B.1 Overview

Under OpenVMS, DECthreads offers these application programming interfaces: the POSIX.1c style (pthread) interface, the thread-independent services (tis) interface, and, for existing multithreaded applications, the cma and d4 interfaces.

B.2 Compiling Under OpenVMS

The DECthreads C language header files shown in Table B-1 provide interface definitions for the DECthreads pthread and tis interfaces.

Table B-1 DECthreads Header Files
Header File Interface
pthread.h POSIX.1c style routines
tis.h Compaq proprietary thread-independent services routines

Include only one of these header files in your module.

Special compiler definitions are not required when compiling threaded applications that use the pthread interface or the tis interface.

B.3 Linking OpenVMS Images

DECthreads is supplied only as shareable images, not as object libraries.

When you link an image that calls DECthreads routines, you must link against the appropriate images listed in Table B-2.

Table B-2 DECthreads Images
Image Routine Library
PTHREAD$RTL.EXE POSIX.1c style interface
CMA$TIS_SHR.EXE Thread-independent services

The image files PTHREAD$RTL.EXE, CMA$TIS_SHR.EXE, CMA$RTL.EXE, and CMA$LIB_SHR.EXE are included in the IMAGELIB library, making it unnecessary to specify those images (unless you are using the /NOSYSLIB switch with the linker) in a Linker options file.

When you link an image that utilizes the CMA$OPEN_LIB_SHR.EXE and CMA$OPEN_RTL.EXE images, they must be specified in a Linker options file.

Note

While this version of DECthreads for OpenVMS supports upward compatibility of source and binaries for the DECthreads d4 interface, DECthreads does not support upward compatibility for object files.

For instance, under OpenVMS V7.0 and higher, to link object files that were compiled under OpenVMS V6.2, follow these steps:

  1. Copy CMA$OPEN_RTL.EXE from SYS$SHARE for OpenVMS V6.2 into the directory with your object files compiled under the current OpenVMS version. During linking, it provides the locations of the transfer vector entries (OpenVMS VAX) or symbol vector entries (OpenVMS Alpha) in CMA$OPEN_RTL.EXE for the older OpenVMS version.
  2. Instead of specifying SYS$SHARE:CMA$OPEN_RTL/SHARE in your link options files, specify CMA$OPEN_RTL/SHARE. Be careful about the placement of this option in the options file: if you are including other images that link against PTHREAD$RTL, place it at or near the beginning.
  3. Link your program.
  4. Delete CMA$OPEN_RTL.EXE from your object directory for the current OpenVMS version.

B.4 Using DECthreads with AST Routines

An asynchronous system trap, or AST, is an OpenVMS mechanism for reporting an asynchronous event to a process. The following are restrictions concerning the use of ASTs with DECthreads:

B.5 Dynamic Activation

Certain run-time libraries use conditional synchronization mechanisms. These mechanisms typically are enabled during image initialization when the run-time library is loaded, and only if the process is multithreaded (that is, if the DECthreads core run-time library PTHREAD$RTL has been linked in). If the process is not multithreaded, the synchronization is disabled.

If your application were to dynamically activate PTHREAD$RTL, any run-time library that uses conditional synchronization may not behave reliably. Thus, dynamic activation of the DECthreads core run-time library PTHREAD$RTL is not supported.

If your application must dynamically activate an image that depends upon PTHREAD$RTL (that is, the image must run, or can be run, in a multithreaded environment), you must build the application by explicitly linking the image calling LIB$FIND_IMAGE_SYMBOL against PTHREAD$RTL.

Use the OpenVMS command ANALYZE/IMAGE to determine whether an image depends upon PTHREAD$RTL. For more information see your OpenVMS documentation.

B.6 Default and Minimum Thread Stack Size

As of OpenVMS Version 7.2, DECthreads has increased the default thread stack size for both OpenVMS Alpha and OpenVMS VAX. Applications that create threads using the default stack size (or a size calculated from the default) will be unaffected by this change.

As of OpenVMS Version 7.2, DECthreads has increased the minimum thread stack size (based on the PTHREAD_STACK_MIN constant) for OpenVMS VAX only. Existing applications that were built using a version prior to OpenVMS Version 7.2 and that base their thread stack sizes on this minimum must be recompiled.

B.7 Requesting a Specific, Absolute Thread Stack Size

Prior to OpenVMS Version 7.2, when an application requested to allocate a thread stack of a specific, absolute size, DECthreads would increase the size by a certain quantity, then round up that sum to an integral number of pages. This process resulted in the actual stack size being considerably larger than the caller's request, possibly by more than one page.

Starting with OpenVMS Version 7.2, when an application requests DECthreads to allocate a thread stack of a specific, absolute size, no additional space is added, but the allocation is still rounded up to an integral number of pages.

Any application that uses default-sized stacks is unlikely to experience problems due to this change. Similarly, any application that sets its thread stack allocations in terms of either the DECthreads default or the allowable minimum stack size is unlikely to experience problems due to this change; however, depending on the allocation calculation used, the application might receive more memory for thread stacks.

Starting with OpenVMS Version 7.2, any thread that is created with a stack allocation of a specific, absolute size might fail during execution because of insufficient stack space. This failure indicates an existing bug in the application that was made manifest by the change in DECthreads.

When the application requests to allocate a thread stack of a specific size, it must allow not only for the space that the application itself requires, but also for sufficient stack space for context switches and other DECthreads activity. DECthreads uses this additional stack space only occasionally, such as during timeslice interruptions. Because of timing vagaries, a thread with inadequate stack space might encounter no problems during development and testing; for instance, the thread might fail only if a timeslice occurs while it is at its maximum stack utilization, a situation that might never arise during in-house testing. In a different system environment, such as a production environment, the timing might differ, possibly resulting in occasional failures when certain conditions are met.



Copyright © Compaq Computer Corporation 1998. All rights reserved.
