Document revision date: 19 July 1999
Scheduling means evaluating and changing the states of the process's threads. As your multithreaded program runs, DECthreads detects whether each thread is ready to execute, is waiting for completion of a system call, has terminated, and so on.
Also, for each thread DECthreads regularly checks whether that thread's scheduling priority and scheduling policy, when compared with those of the process's other threads, entail forcing a change in that thread's state. Remember that scheduling priority specifies the "precedence" of a thread in the application. Scheduling policy provides a mechanism to control how DECthreads interprets that priority as your program runs.
To understand this section, you must be familiar with the concepts presented in these sections:
A thread's scheduling priority falls within a range of values, depending on its scheduling policy. To specify the minimum or maximum scheduling priority for a thread, use the sched_get_priority_min() or sched_get_priority_max() routines---or use the appropriate nonportable symbol such as PRI_OTHER_MIN or PRI_OTHER_MAX. Priority values are integers, so you can specify a value between the minimum and maximum priority using an appropriate arithmetic expression.
For example, to specify a scheduling priority value that is midway between the minimum and maximum for the SCHED_OTHER scheduling policy, use the following expression (coded appropriately for your programming language):
   pri_other_mid = ( sched_get_priority_min(SCHED_OTHER) +
                     sched_get_priority_max(SCHED_OTHER) ) / 2
where pri_other_mid represents the priority value you want to set.
Avoid using literal numerical values to specify a scheduling priority
setting, because the range of priorities can change from implementation
to implementation. Values outside the specified range for each
scheduling policy might be invalid.
2.3.6.2 Effects of Scheduling Policy
To demonstrate the results of the different scheduling policies, consider the following example: A program has four threads, A, B, C, and D. For each scheduling policy, three scheduling priorities have been defined: minimum, middle, and maximum. The threads have the following priorities:
   Thread A:   minimum priority
   Thread B:   middle priority
   Thread C:   middle priority
   Thread D:   maximum priority
On a uniprocessor system, only one thread can run at any given time. The ordering of execution depends upon the relative scheduling policies and priorities of the threads. Given a set of threads with fixed priorities such as the previous list, their execution behavior is typically predictable. However, on a symmetric multiprocessor (or SMP) system the execution behavior is far less predictable. Although the four threads have differing priorities, a multiprocessor system might execute two or more of these threads simultaneously.
When you design a multithreaded application that uses scheduling priorities, it is critical to remember that scheduling is not the same as synchronization. That is, you cannot assume that a higher-priority thread can access shared data without interference from lower-priority threads. For example, if one thread has a FIFO scheduling policy and the highest scheduling priority setting, while another has a background scheduling policy and the lowest scheduling priority setting, DECthreads might allow the two threads to run at the same time. As a corollary, on a four-processor system you also cannot assume that the four highest-priority threads are executing simultaneously at any particular moment.
The following figures demonstrate how DECthreads schedules a set of threads on a uniprocessor based on whether each thread has the FIFO, RR, or throughput setting for its scheduling policy attribute. Assume that all waiting threads are ready to execute when the current thread waits or terminates and that no higher-priority thread is awakened while a thread is executing (that is, executing during the flow shown in each figure).
Figure 2-1 shows a flow with FIFO scheduling.
Figure 2-1 Flow with FIFO Scheduling
Thread D executes until it waits or terminates. Next, although thread B and thread C have the same priority, thread B starts because it has been waiting longer than thread C. Thread B executes until it waits or terminates, then thread C executes until it waits or terminates. Finally, thread A executes.
Figure 2-2 shows a flow with RR scheduling.
Figure 2-2 Flow with RR Scheduling
Thread D executes until it waits or terminates. Next, thread B and thread C are time sliced, because they both have the same priority. Finally, thread A executes.
Figure 2-3 shows a flow with Default scheduling.
Figure 2-3 Flow with Default Scheduling
Threads D, B, C, and A are time sliced, even though thread A has a lower priority than the others. Thread A receives less execution time than threads B, C, or D if any of those threads is ready to execute as often as thread A is. However, the default scheduling policy protects thread A against being blocked from executing indefinitely.
Because low-priority threads eventually run, the default scheduling
policy protects against occurrences of thread starvation and priority
inversion, which are discussed in Section 3.5.2.
2.3.7 Canceling a Thread
Canceling a thread means to request the termination of a target thread as soon as possible. A thread can request the cancelation of another thread or itself.
Thread cancelation is a three-stage operation:
The DECthreads pthread and tis interfaces implement thread cancelation using DECthreads exceptions. Using the DECthreads exception package, a thread to which a cancelation request has been delivered can explicitly catch the thread cancelation exception (pthread_cancel_e) defined by DECthreads and perform cleanup actions accordingly. After catching this exception, the exception handler code should always reraise the exception, to avoid breaking the "contract" that cancelation leads to thread termination.
Chapter 5 describes the DECthreads exception package.
2.3.7.2 Thread Return Value After Cancelation
When DECthreads terminates a thread due to cancelation, it writes the
return value PTHREAD_CANCELED into the thread's thread object.
This is because cancelation prevents the thread from calling
pthread_exit() or returning from its start routine.
2.3.7.3 Controlling Thread Cancelation
Each thread controls whether it can be canceled (that is, whether it receives requests to terminate) and how quickly it terminates after receiving the cancelation request, as follows:
A thread's cancelability state determines whether it receives a cancelation request. When created, a thread's cancelability state is enabled. If the cancelability state is disabled, the thread does not receive cancelation requests.
If the thread's cancelability state is enabled, use the pthread_testcancel() routine to request the delivery of any pending cancelation request. This routine enables the program to permit cancelation to occur at places where it might not otherwise be permitted, and it is especially useful within very long loops to ensure that cancelation requests are noticed within a reasonable time.
If its cancelability state is disabled, the thread cannot be terminated by any cancelation request. This means that a thread could wait indefinitely if it does not come to a normal conclusion; therefore, exercise care.
After a thread has been created, use the pthread_setcancelstate() routine to change its cancelability state.
After a thread has been created, use the pthread_setcanceltype() routine to change its cancelability type, which determines whether it responds to a cancelation request at cancelation points (synchronous cancelation), or at any point in its execution (asynchronous cancelation).
Initially, a thread's cancelability type is deferred, which means that the thread receives a cancelation request only at cancelation points---for example, when a call to the pthread_cond_wait() routine is made. If you set a thread's cancelability type to asynchronous, the thread can receive a cancelation request at any time.
If the cancelability state is disabled, the thread cannot be canceled regardless of the cancelability type. Setting the cancelability type to deferred or asynchronous is relevant only when the thread's cancelability state is enabled.
A cancelation point is a routine that delivers a posted cancelation request to that request's target thread. The POSIX.1c standard specifies routines that are cancelation points.
The following routines in the DECthreads pthread interface are cancelation points:
The following routines in the DECthreads tis interface are cancelation points:
Other routines that are also cancelation points are mentioned in the operating system-specific appendixes of this guide. Refer to the following topics on thread cancelability for system services:
When a cancelation request is delivered to a thread, the thread could be holding some resources, such as locked mutexes or allocated memory. Your program must release these resources before the thread terminates.
DECthreads provides two equivalent mechanisms that can do the cleanup during cancelation, as follows:
Because it is impossible to predict exactly when an asynchronous cancelation request will be delivered, it is extremely difficult for a program to recover properly. For this reason, an asynchronous cancelability type should be set only within regions of code that do not need to clean up in any way, such as straight-line code or tight looping code that is compute-bound and that makes no calls and holds no resources.
While a thread's cancelability type is asynchronous, it should not call any routine unless it is explicitly documented as "safe for asynchronous cancelation." In particular, you can never use asynchronous cancelability type in code that allocates or frees memory, or that locks or unlocks mutexes---because the cleanup code cannot reliably determine the state of the resource.
In general, you should assume that no run-time library routine is safe for asynchronous cancelation unless it is explicitly documented to the contrary. Only one DECthreads routine is safe for asynchronous cancelation: pthread_setcanceltype().
For additional information about accomplishing asynchronous cancelation for your platform, see Section A.4, Section B.9, and Section C.5.
2.3.7.7 Example of Thread Cancelation Code
Example 2-1 shows a thread control and cancelation example.
Example 2-1 pthread Cancel

   /*
    * Pthread Cancel Example
    */

   /*
    * Outermost cancelation state
    */
   {
      . . .
      int   s, outer_c_s, inner_c_s;
      . . .
      /* Disable cancelation, saving the previous setting. */
      s = pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, &outer_c_s);
      if (s == EINVAL)
         printf ("Invalid Argument!\n");
      else if (s == 0)
         . . .
      /* Now cancelation is disabled. */
      . . .
      /* Enable cancelation. */
      {
         . . .
         s = pthread_setcancelstate (PTHREAD_CANCEL_ENABLE, &inner_c_s);
         if (s == 0)
            . . .
         /* Now cancelation is enabled. */
         . . .
         /* Enable asynchronous cancelation this time. */
         {
            . . .
            /* Enable asynchronous cancelation. */
            int outerasync_c_s, innerasync_c_s;
            . . .
            s = pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS,
                                       &outerasync_c_s);
            if (s == 0)
               . . .
            /* Now asynchronous cancelation is enabled. */
            . . .
            /* Now restore the previous cancelation state (by
             * reinstating original asynchronous type cancel). */
            s = pthread_setcanceltype (outerasync_c_s, &innerasync_c_s);
            if (s == 0)
               . . .
            /* Now asynchronous cancelation is disabled,
             * but synchronous cancelation is still enabled. */
         }
         . . .
      }
      . . .
      /* Restore to original cancelation state. */
      s = pthread_setcancelstate (outer_c_s, &inner_c_s);
      if (s == 0)
         . . .
      /* The original (outermost) cancelation state is now reinstated. */
   }
2.4 Synchronization Objects
In a multithreaded program, you must use synchronization objects
whenever there is a possibility of conflict in accessing shared data.
The following sections discuss two kinds of DECthreads synchronization
objects: mutexes and condition variables.
2.4.1 Mutexes
A mutex (or mutual exclusion) object is used by multiple threads to ensure the integrity of a shared resource that they access, most commonly shared data, by allowing only one thread to access it at a time.
A mutex has two states, locked and unlocked. A locked mutex has an owner---the thread that locked the mutex. It is illegal to unlock a mutex not owned by the calling thread.
For each piece of shared data, all threads accessing that data must use the same mutex: each thread locks the mutex before it accesses the shared data and unlocks the mutex when it is finished accessing that data. If the mutex is locked by another thread, the thread requesting the lock either waits for the mutex to be unlocked or returns, depending on the lock routine called (see Figure 2-4).
Figure 2-4 Only One Thread Can Lock a Mutex
Each mutex must be initialized before use. DECthreads supports static
initialization at compile time, using one of the macros provided in the
pthread.h header file, as well as dynamic initialization at
run time by calling pthread_mutex_init(). This routine allows
you to specify an attributes object, which allows you to specify the
mutex type. The types of mutexes are described in the following
sections.
2.4.1.1 Normal Mutex
A normal mutex is locked exactly once by a thread. If a thread tries to lock the mutex again without first unlocking it, the thread deadlocks, waiting for itself to release the lock.
This is the most efficient form of mutex. When using interface and function inlining (optional), you can often lock and unlock a normal mutex without a call to DECthreads.
A normal mutex usually does not check thread ownership---that is, a
deadlock will result if the owner attempts to "relock" the
mutex. The system usually will not report an erroneous attempt to
unlock a mutex not owned by the calling thread.
2.4.1.2 Default Mutex
This is the name reserved by the Single UNIX Specification, Version 2,
for a vendor's POSIX.1c threads implementation's default mutex type.
For the DECthreads pthread interface, the
"normal" mutex type is implemented as the "default"
mutex type. Be aware that this mutex type might not be the default for
other implementations of the Single UNIX Specification, Version 2,
where any of errorcheck, recursive, or even nonportable mutex types
might be the default.
2.4.1.3 Recursive Mutex
A recursive mutex can be locked more than once by a given thread without causing a deadlock. The thread must call the pthread_mutex_unlock() routine the same number of times that it called the pthread_mutex_lock() routine before another thread can lock the mutex.
When a thread first successfully locks a recursive mutex, it owns that mutex and the lock count is set to 1. Any other thread attempting to lock the mutex blocks until the mutex becomes unlocked. If the owner of the mutex attempts to lock the mutex again, the lock count is incremented, and the thread continues running.
When an owner unlocks a recursive mutex, the lock count is decremented. The mutex remains locked and owned until the count reaches zero. It is an error for any thread other than the owner to attempt to unlock the mutex.
A recursive mutex is useful when a thread requires exclusive access to a piece of data, but must call another routine (or itself) that also requires exclusive access to the data. A recursive mutex allows nested attempts to lock the mutex to succeed rather than deadlock.
This type of mutex is called "recursive" because it allows you a capability not permitted by a normal (default) mutex. However, its use requires more careful programming. For instance, if a recursively locked mutex were used with a condition variable, the unlock performed for a pthread_cond_wait() or pthread_cond_timedwait() would not actually release the mutex. In that case, no other thread can satisfy the condition of the predicate, and the thread would wait indefinitely. See Section 2.4.2 for information on the condition variable wait and timed wait routines.