OpenVMS Programming Concepts Manual
This section defines kernel threads and describes the advantages of using them. It also describes the kernel threads model and its features, as well as the design changes made to the OpenVMS operating system.
For information about the concepts and implementation of user threads with DECthreads, see the Guide to DECthreads.
By using threads as a programming model, you can gain the following advantages:
With kernel threads, the OpenVMS operating system implements the following two features:
- Multiple execution contexts within a process
- Efficient use of the OpenVMS and DECthreads schedulers
1.6.2.1 Multiple Execution Contexts Per Process
Before the implementation of kernel threads, the scheduling model for the OpenVMS operating system was per process; the only scheduling context was the process itself, with a single execution context per process. A threaded application could create thousands of threads, many of which were potentially ready to execute at the same time. But because an OpenVMS process had only a single execution context, only one of those application threads was actually running at any one time. If such a multithreaded application ran on a multiprocessor system, it could not make use of more than a single CPU.
After the implementation of kernel threads, the scheduling model allows
for multiple execution contexts within a process; that is, more than
one application thread can be executing concurrently. These execution
contexts are called kernel threads. Kernel threads allow a
multithreaded application to have a thread executing on every CPU in a
multiprocessor system. Kernel threads, therefore, allow a threaded
application to take advantage of multiple CPUs in a symmetric
multiprocessing (SMP) system.
1.6.2.2 Efficient Use of the OpenVMS and DECthreads Schedulers
It is the function of the user mode thread manager to schedule individual user mode application threads. On OpenVMS, DECthreads is the user mode threading package of choice. Before the implementation of kernel threads, DECthreads multiplexed user mode threads on the single OpenVMS execution context, the process. DECthreads implemented parts of its scheduling by using a periodic timer AST. When the AST was delivered and the thread manager gained control, the thread manager could select a new application thread for execution. But because the thread manager could not detect that a thread had entered an OpenVMS wait state, the entire application blocked until that periodic AST was delivered, delaying the thread manager from regaining control and scheduling another thread. Worse, once the thread manager did gain control, it could reschedule a previously preempted thread without knowing that the thread was still in a wait state. This lack of integration between the OpenVMS and DECthreads schedulers could result in wasted CPU resources.
After the implementation of kernel threads, the scheduling model
provides for scheduler callbacks. A scheduler callback is an upcall
from the OpenVMS scheduler to the thread manager whenever a thread
changes state. This upcall allows the OpenVMS scheduler to inform the
thread manager that the current thread is stalled and that another
thread should be scheduled. Upcalls also inform the thread manager that
an event a thread is waiting on has completed. With kernel threads, the
two schedulers are better integrated, minimizing application thread
scheduling delays.
1.6.3 Kernel Threads Model and Design Features
This section presents the kernel threads model that OpenVMS Alpha
implements and describes the operating system design changes made to
support it.
1.6.3.1 Kernel Threads Model
The OpenVMS kernel threads model maps many user threads onto a few
kernel threads and integrates the two schedulers. With this model,
many user threads share a small number of execution contexts, or
kernel threads. The kernel threads have no knowledge of the
individual threads within an application. The thread manager
multiplexes those user threads onto an execution context, and a
single process can have multiple execution contexts. This model also
integrates the user mode thread manager scheduler with the OpenVMS
scheduler.
1.6.3.2 Kernel Threads Design Features
Design additions and modifications were made to the following features of OpenVMS Alpha:
- Process structure
- Access to inner modes
- Scheduling
- ASTs
- Event flags
- Process control services
1.6.3.2.1 Process Structure
With the implementation of OpenVMS kernel threads, every process is a
threaded process with at least one kernel thread. Every kernel thread
gets a set of stacks, one for each access mode. Quotas and limits are
maintained and enforced at the process level. The process virtual
address space remains per process and is shared by all threads. The
scheduling entity moves from the process to the kernel thread. In
general, ASTs are delivered directly to the kernel threads. Event flags
and locks remain per process. See Section 1.6.4 for more information.
1.6.3.2.2 Access to Inner Modes
With the implementation of kernel threads, a single threaded process
continues to function exactly as it has in the past. A multithreaded
process may have multiple threads executing in user mode or in user
mode ASTs, as is also possible for supervisor mode. Except in cases
where an activity in inner mode is considered thread
safe, a multithreaded process may have only a single thread
executing in an inner mode at any one time. Multithreaded processes
retain the normal preemption of inner mode execution by ASTs of more inner modes. A
special inner mode semaphore serializes access to inner mode.
1.6.3.2.3 Scheduling
With the implementation of kernel threads, the OpenVMS scheduler
concerns itself with kernel threads, not processes. At certain
points in the OpenVMS executive where the scheduler would otherwise
place a kernel thread into a wait state, it can instead transfer
control to the thread manager.
This transfer of control, known as a callback or upcall, allows the
thread manager the chance to reschedule stalled application threads.
1.6.3.2.4 ASTs
With the implementation of kernel threads, ASTs are not delivered to
the process. They are delivered to the kernel thread on which the event
was initiated. Inner mode ASTs are generally delivered to the kernel
thread already in inner mode. If no thread is in inner mode, the AST is
delivered to the kernel thread that initiated the event.
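The following Fortran sketch (the program and routine names are illustrative only) shows the application-visible effect of this rule: a user mode AST declared with the SYS$DCLAST system service executes in the context of the kernel thread that requested it, and here it wakes that same thread from hibernation.

      PROGRAM AST_DEMO
! Status variable and system routines
      INTEGER*4 STATUS
      INTEGER*4 SYS$DCLAST, SYS$HIBER
      EXTERNAL WAKE_AST
! Queue a user mode AST to the calling kernel thread
      STATUS = SYS$DCLAST (WAKE_AST, %VAL(1),)
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Hibernate; the AST executes on this kernel thread and
! wakes it (a wake issued early is retained as pending)
      STATUS = SYS$HIBER ()
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
      END

      SUBROUTINE WAKE_AST (PARAM)
! AST routine; runs on the kernel thread that called
! SYS$DCLAST. The AST parameter is not used here.
      INTEGER*4 PARAM
      INTEGER*4 STATUS, SYS$WAKE
      STATUS = SYS$WAKE (,)
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
      END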
1.6.3.2.5 Event Flags
With the implementation of kernel threads, event flags continue to
function on a per-process basis, maintaining compatibility with
existing application behavior.
1.6.3.2.6 Process Control Services
With the implementation of kernel threads, many process control
services continue to function at the process level. SYS$SUSPEND and
SYS$RESUME system services, for example, continue to change the
scheduling state of the entire process, including all of its threads.
Other services such as SYS$HIBER and SYS$SCHDWK act on individual
kernel threads instead of the entire process.
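The following minimal sketch illustrates this thread-level behavior; the 10-second delta time is arbitrary. SYS$SCHDWK schedules a wakeup for the calling kernel thread, and SYS$HIBER then stalls only that kernel thread, not the entire process.

      PROGRAM THREAD_NAP
! Status variable and system routines
      INTEGER*4 STATUS
      INTEGER*4 SYS$BINTIM, SYS$SCHDWK, SYS$HIBER
! Quadword buffer for the binary delta time
      INTEGER*4 DAYTIM(2)
! Convert a 10-second delta time to binary format
      STATUS = SYS$BINTIM ('0 00:00:10.00', DAYTIM)
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Schedule a wakeup for the calling kernel thread
      STATUS = SYS$SCHDWK (,, DAYTIM,)
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Hibernate; only this kernel thread is stalled
      STATUS = SYS$HIBER ()
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
      END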
1.6.4 Kernel Threads Process Structure
This section describes the following components that make up a kernel
threads process:
- Process control block (PCB) and process header (PHD)
- Kernel thread block (KTB)
- Floating-point registers and execution data (FRED) block
- Kernel threads region
- Per-kernel-thread stacks
- Per-kernel-thread data cells
1.6.4.1 Process Control Blocks (PCBs) and Process Headers (PHDs)
Two primary data structures exist in the OpenVMS executive that describe the context of a process:
- the software process control block (PCB)
- the process header (PHD)
The PCB contains fields that identify the process to the system. It holds the context that pertains to quotas and limits, scheduling state, privileges, AST queues, and identifiers. In general, any information that must remain resident at all times is in the PCB; therefore, the PCB is allocated from nonpaged pool.
The PHD contains fields that pertain to a process's virtual address
space. The PHD consists of the working set list and the process section
table. The PHD also contains the hardware process control block (HWPCB)
and a floating-point register save area. The HWPCB contains the
hardware execution context of the process. The PHD is allocated as part
of a balance set slot, and it can be outswapped.
1.6.4.1.1 Effect of a Multithreaded Process on the PCB and PHD
With multiple execution contexts within the same process, the multiple threads of execution share the same address space but have some independent software and hardware context. This change affects the PCB and PHD structures and any code that references them.
Before the implementation of kernel threads, the PCB contained much context that was per process. With the introduction of multiple threads of execution, much of that context becomes per thread. To accommodate per-thread context, a new data structure, the kernel thread block (KTB), is created, and the per-thread context is moved out of the PCB. The PCB continues to contain context common to all threads, such as quotas and limits; the new per-kernel-thread structure contains the scheduling state, priority, and AST queues.
The PHD contains the HWPCB, which gives a process its single execution
context. The HWPCB remains in the PHD; this HWPCB is used by a process
when it is first created. This execution context is also called the
initial thread. A single threaded process has only this one execution
context. A new structure, the floating-point registers and execution
data block (FRED), is created to contain the hardware context of the
newly created kernel threads. Since all threads in a process share the
same address space, the PHD continues to describe the entire virtual
memory layout of the process.
1.6.4.2 Kernel Thread Block (KTB)
The kernel thread block (KTB) is a new per-kernel-thread data structure. The KTB contains all per-thread software context moved from the PCB. The KTB is the basic unit of scheduling, a role previously performed by the PCB, and is the data structure placed in the scheduling state queues. Since the KTB is the logical extension of the PCB, the SCHED spinlock synchronizes access to the KTB and the PCB.
Typically, a multithreaded process has as many KTBs as there are CPUs on the system. More precisely, the number of KTBs is limited by the value of the system parameter MULTITHREAD. If MULTITHREAD is zero, kernel threads support in the OpenVMS kernel is disabled; user-level threading is still possible with DECthreads, and the environment is identical to the OpenVMS environment before the OpenVMS Version 7.0 release. If MULTITHREAD is nonzero, it represents the maximum number of execution contexts, or kernel threads, that a process can own, including the initial one.
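As an illustration, a program can read its own kernel thread limit with the SYS$GETJPI system service. The following sketch assumes the JPI$_MULTITHREAD item code from the $JPIDEF definition module; the item-list structure is the same one used in Example 2-1 in Chapter 2.

      PROGRAM SHOW_THREAD_LIMIT
! Status variable and system routine
      INTEGER*4 STATUS, SYS$GETJPIW
! Receives the kernel thread limit for this process
      INTEGER*4 MAX_KT
! Include FORSYSDEF symbol definitions:
      INCLUDE '($JPIDEF)'
! Define itmlst structure
      STRUCTURE /ITMLST/
       UNION
        MAP
         INTEGER*2 BUFLEN
         INTEGER*2 CODE
         INTEGER*4 BUFADR
         INTEGER*4 RETLENADR
        END MAP
        MAP
         INTEGER*4 END_LIST
        END MAP
       END UNION
      END STRUCTURE
! Declare itmlst
      RECORD /ITMLST/ JPILIST(2)
! Request JPI$_MULTITHREAD for the calling process
      JPILIST(1).BUFLEN    = 4
      JPILIST(1).CODE      = JPI$_MULTITHREAD
      JPILIST(1).BUFADR    = %LOC (MAX_KT)
      JPILIST(1).RETLENADR = 0
      JPILIST(2).END_LIST  = 0
      STATUS = SYS$GETJPIW (,,, JPILIST,,,)
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
      WRITE (*,*) 'Kernel thread limit: ', MAX_KT
      END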
The KTB, in reality, is not an independent structure from the PCB. Both
the PCB and KTB are defined as sparse structures. The fields of the PCB
that move to the KTB retain their original PCB offsets in the KTB. In
the PCB, these fields are unused. In effect, if the two structures are
overlaid, the result is the PCB as it currently exists with new fields
appended at the end. The PCB and KTB for the initial thread occupy the
same block of nonpaged pool; therefore, the KTB address for the initial
thread is the same as for the PCB.
1.6.4.3 Floating-Point Registers and Execution Data Block (FRED)
To allow for multiple execution contexts, not only are additional KTBs required to maintain the software context, but additional HWPCBs must also be created to maintain the hardware context. Allocated with each HWPCB is a 256-byte block for preserving the contents of the floating-point registers across context switches, and another 128 bytes for per-kernel-thread data.
The combined structure that contains the HWPCB, floating-point register
save area, and per-kernel thread data is called the floating-point
registers and execution data (FRED) block. These structures reside in
the process's balance set slot. This allows the FREDs to be outswapped
with the process header.
1.6.4.4 Kernel Threads Region
Much process context resides in P1 space, taking the form of data cells
and the process stacks. Some of these data cells need to be per kernel
thread, as do the stacks. During initialization of the multithread
environment, a kernel thread region in P1 space is initialized to
contain the per-kernel-thread data cells and stacks. The region begins
at the boundary between P0 and P1 space, at address 40000000 (hexadecimal), and it
grows toward higher addresses and the initial thread's user stack. The
region is divided into per-kernel-thread areas. Each area contains
pages for data cells and the four stacks.
1.6.4.5 Per-Kernel-Thread Stacks
A process is created with four stacks: one for each access mode. All four of these stacks are located in P1 space. Stack sizes are either fixed, determined by a SYSGEN parameter, or expandable. The SYSGEN parameter KSTACKPAGES controls the size of the kernel stack and continues to control all kernel stack sizes, including those created for the new execution contexts of kernel threads. The executive stack is a fixed size of two pages; with the kernel threads implementation, the executive stack for new execution contexts continues to be two pages in size. The supervisor stack is a fixed size of four pages; with the kernel threads implementation, the supervisor stack for new execution contexts is reduced to two pages in size.
For the user stack, a more complex situation exists. OpenVMS allocates
P1 space from high to lower addresses. The user stack is placed after
the lowest P1 space address allocated. This allows the user stack to
expand on demand toward P0 space. With the introduction of multiple
sets of stacks, the locations of these stacks impose a limit on the
size of each area in which they can reside. With the implementation of
kernel threads, the user stack is no longer boundless. The initial user
stack remains semi-boundless; it still grows toward P0 space, but the
limit is the per-kernel thread region instead of P0 space.
1.6.4.6 Per-Kernel-Thread Data Cells
Several pages in P1 space contain process state in the form of data
cells. A number of these cells must have a per-kernel-thread
equivalent. These data cells do not all reside on pages with the same
protection. Because of this, the per-kernel-thread area reserves two
pages for these cells. Each page has a different page protection; one
page protection is user read, user write (URUW); the other is user
read, executive write (UREW).
1.6.4.7 Summary of Process Data Structures
Process creation results in a PCB/KTB pair, a PHD/FRED pair, and a set of stacks. All processes begin with a single kernel thread, the initial thread.
A multithreaded process always begins as a single threaded process. A multithreaded process contains a PCB/KTB pair and a PHD/FRED pair for the initial thread; for its other threads, it contains additional KTBs, additional FREDs, and additional sets of stacks. When the multithreaded application exits, the process returns to its single threaded state, and all additional KTBs, FREDs, and stacks are deleted.
Chapter 2
Process Communication
This chapter describes communication mechanisms used within a process and between processes. It also describes programming with intra-cluster communication (ICC). It contains the following sections:
Section 2.1 describes communication within a process.
Section 2.2 describes communication between processes.
Section 2.3 describes intra-cluster communication.
The operating system allows your process to communicate within itself and with other processes. Processes can be either wholly independent or cooperative. This chapter presents considerations for developing applications that require the concurrent execution of many programs, and shows how you can use process communication to share data and to synchronize the execution of cooperating programs.
2.1 Communication Within a Process
Communicating within a process, from one program component to another, can be performed using the following methods:
- Local event flags
- Logical names
- Global symbols
- Common area
For passing information among chained images, you can use all four methods because the image reading the information executes immediately after the image that deposited it. Only the common area allows you to pass data reliably from one image to another in the event that another image's execution intervenes between the two communicating images.
For communicating within a single image, you can use event flags, logical names, and symbols. For synchronizing events within a single image, use event flags. See Chapter 16 for more information about synchronizing events.
Because permanent mailboxes and permanent global sections are not
deleted when the creating image exits, they also can be used to pass
information from the current image to a later executing image. However,
Compaq recommends that you use the common area because it uses fewer
system resources than the permanent structures and does not require
privilege. (You need the PRMMBX privilege to create a permanent mailbox
and the PRMGBL privilege to create a permanent global section.)
2.1.1 Using Local Event Flags
Event flags are status-posting bits maintained by the operating system
for general programming use. Programs can set, clear, and read event
flags. By setting and clearing event flags at specific points, one
program component can signal when an event has occurred. Other program
components can then check the event flag to determine when the event
has been completed. For more information about using local and common
event flags for synchronizing events, refer to Chapter 16.
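The following program is a minimal sketch of the pattern within a single image: LIB$GET_EF allocates a free local event flag, SYS$SETEF marks the event as having occurred, and SYS$WAITFR blocks until the flag is set. In a real application, the set and the wait would occur in different program components, such as an AST routine and the main line.

      PROGRAM EVENT_FLAG_DEMO
! Status variable and system routines
      INTEGER*4 STATUS, FLAG
      INTEGER*4 LIB$GET_EF, LIB$FREE_EF
      INTEGER*4 SYS$SETEF, SYS$WAITFR
! Allocate a free local event flag
      STATUS = LIB$GET_EF (FLAG)
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! One program component signals that the event occurred
      STATUS = SYS$SETEF (%VAL(FLAG))
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Another component waits until the event has occurred
      STATUS = SYS$WAITFR (%VAL(FLAG))
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Return the event flag when finished
      STATUS = LIB$FREE_EF (FLAG)
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
      END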
2.1.2 Using Logical Names
Logical names can store up to 255 bytes of data. When you need to pass
information from one program to another within a process, you can
assign data to a logical name when you create the logical name; then,
other programs can access the contents of the logical name. See
Chapter 12 for more information about logical name system services.
2.1.2.1 Using Logical Name Tables
If both processes are part of the same job, you can place the logical
name in the process logical name table (LNM$PROCESS) or in the job
logical name table (LNM$JOB). If a subprocess is prevented from
inheriting the process logical name table, you must communicate using
the job logical name table. If the processes are in the same group,
place the logical name in the group logical name table LNM$GROUP
(requires GRPNAM or SYSPRV privilege). If the processes are not in the
same group, place the logical name in the system logical name table
LNM$SYSTEM (requires SYSNAM or SYSPRV privilege). You can also use
symbols, but only between a parent and a spawned subprocess that has
inherited the parent's symbols.
2.1.2.2 Using Access Modes
You can create a logical name under three access modes: user,
supervisor, or executive. If you create a process logical name in user
mode, it is deleted after the image exits. If you create a logical name
in supervisor or executive mode, it is retained after the image exits.
Therefore, to share data within the process from one image to the next,
use supervisor-mode or executive-mode logical names.
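The following sketch illustrates the technique. The logical name SAVED_VALUE and its equivalence string are illustrative only, and note that an inner access mode specified from a user mode image is honored only if the caller holds the SYSNAM privilege; otherwise SYS$CRELNM maximizes the requested mode with the caller's mode. The PSL$C_SUPER access-mode value comes from the $PSLDEF definition module.

      PROGRAM SAVE_ACROSS_IMAGES
! Status variable and system routine
      INTEGER*4 STATUS, SYS$CRELNM
! Access mode for the logical name
      INTEGER*4 ACMODE
! Include FORSYSDEF symbol definitions:
      INCLUDE '($LNMDEF)'
      INCLUDE '($PSLDEF)'
      EXTERNAL LNM$_STRING
! Define itmlst structure
      STRUCTURE /ITMLST/
       UNION
        MAP
         INTEGER*2 BUFLEN
         INTEGER*2 CODE
         INTEGER*4 BUFADR
         INTEGER*4 RETLENADR
        END MAP
        MAP
         INTEGER*4 END_LIST
        END MAP
       END UNION
      END STRUCTURE
! Declare itmlst
      RECORD /ITMLST/ LNMLIST(2)
      CHARACTER*5 VALUE
      VALUE = 'HELLO'
! Supervisor mode, so the name survives image exit
      ACMODE = PSL$C_SUPER
      LNMLIST(1).BUFLEN    = 5
      LNMLIST(1).CODE      = %LOC (LNM$_STRING)
      LNMLIST(1).BUFADR    = %LOC (VALUE)
      LNMLIST(1).RETLENADR = 0
      LNMLIST(2).END_LIST  = 0
      STATUS = SYS$CRELNM (,
     2                     'LNM$PROCESS',  ! Logical name table
     2                     'SAVED_VALUE',  ! Logical name
     2                     ACMODE,         ! Access mode
     2                     LNMLIST)        ! Equivalence string
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
      END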
2.1.2.3 Creating and Accessing Logical Names
Perform the following steps to create and access a logical name:
1. Create the logical name and assign it an equivalence string, using the SYS$CRELNM system service.
2. Translate the logical name to retrieve its value, using the SYS$TRNLNM system service.
3. When you have finished using the logical name, delete it, using the SYS$DELLNM system service.
Example 2-1 creates a spawned subprocess to perform an iterative calculation. The logical name REP_NUMBER specifies the number of times that REPEAT, the program executing in the subprocess, should perform the calculation. Because both the parent process and the subprocess are part of the same job, REP_NUMBER is placed in the job logical name table LNM$JOB. (Note that logical names are case sensitive; specifically, LNM$JOB is a system-defined logical name that refers to the job logical name table, whereas lnm$job is not.) To satisfy the references to LNM$_STRING, the example includes the file $LNMDEF.
Example 2-1 Performing an Iterative Calculation with a Spawned Subprocess
      PROGRAM CALC

! Status variable and system routines
      INTEGER*4 STATUS,
     2          SYS$CRELNM,
     2          LIB$GET_EF,
     2          LIB$SPAWN
! Define itmlst structure
      STRUCTURE /ITMLST/
       UNION
        MAP
         INTEGER*2 BUFLEN
         INTEGER*2 CODE
         INTEGER*4 BUFADR
         INTEGER*4 RETLENADR
        END MAP
        MAP
         INTEGER*4 END_LIST
        END MAP
       END UNION
      END STRUCTURE
! Declare itmlst
      RECORD /ITMLST/ LNMLIST(2)
! Spawn mask and completion event flag
      INTEGER*4 MASK, FLAG
! Number to pass to REPEAT.FOR
      CHARACTER*3 REPETITIONS_STR
      INTEGER REPETITIONS
! Symbols for LIB$SPAWN and SYS$CRELNM
! Include FORSYSDEF symbol definitions:
      INCLUDE '($LNMDEF)'
      EXTERNAL CLI$M_NOLOGNAM,
     2         CLI$M_NOCLISYM,
     2         CLI$M_NOKEYPAD,
     2         CLI$M_NOWAIT,
     2         LNM$_STRING
      .
      .   ! Set REPETITIONS_STR
      .
! Set up and create logical name REP_NUMBER in job table
      LNMLIST(1).BUFLEN    = 3
      LNMLIST(1).CODE      = %LOC (LNM$_STRING)
      LNMLIST(1).BUFADR    = %LOC (REPETITIONS_STR)
      LNMLIST(1).RETLENADR = 0
      LNMLIST(2).END_LIST  = 0
      STATUS = SYS$CRELNM (,
     2                     'LNM$JOB',      ! Logical name table
     2                     'REP_NUMBER',,  ! Logical name
     2                     LNMLIST)        ! List specifying
                                           ! equivalence string
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Execute REPEAT.FOR in a subprocess
      MASK = %LOC (CLI$M_NOLOGNAM) .OR.
     2       %LOC (CLI$M_NOCLISYM) .OR.
     2       %LOC (CLI$M_NOKEYPAD) .OR.
     2       %LOC (CLI$M_NOWAIT)
      STATUS = LIB$GET_EF (FLAG)
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
      STATUS = LIB$SPAWN ('RUN REPEAT',,,MASK,,,,FLAG)
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
      .
      .
      .
REPEAT.FOR
      PROGRAM REPEAT
! Repeats a calculation REP_NUMBER of times,
! where REP_NUMBER is a logical name

! Status variables and system routines
      INTEGER STATUS,
     2        SYS$TRNLNM,
     2        SYS$DELLNM
! Number of times to repeat
      INTEGER*4 REITERATE,
     2          REPEAT_STR_LEN
      CHARACTER*3 REPEAT_STR
! Item list for SYS$TRNLNM
! Define itmlst structure
      STRUCTURE /ITMLST/
       UNION
        MAP
         INTEGER*2 BUFLEN
         INTEGER*2 CODE
         INTEGER*4 BUFADR
         INTEGER*4 RETLENADR
        END MAP
        MAP
         INTEGER*4 END_LIST
        END MAP
       END UNION
      END STRUCTURE
! Declare itmlst
      RECORD /ITMLST/ LNMLIST (2)
! Define item code
      EXTERNAL LNM$_STRING
! Set up and translate the logical name REP_NUMBER
      LNMLIST(1).BUFLEN    = 3
      LNMLIST(1).CODE      = %LOC (LNM$_STRING)
      LNMLIST(1).BUFADR    = %LOC (REPEAT_STR)
      LNMLIST(1).RETLENADR = %LOC (REPEAT_STR_LEN)
      LNMLIST(2).END_LIST  = 0
      STATUS = SYS$TRNLNM (,
     2                     'LNM$JOB',      ! Logical name table
     2                     'REP_NUMBER',,  ! Logical name
     2                     LNMLIST)        ! List requesting
                                           ! equivalence string
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
! Convert equivalence string to integer
! BN causes spaces to be ignored
      READ (UNIT = REPEAT_STR (1:REPEAT_STR_LEN),
     2      FMT = '(BN,I3)') REITERATE
! Calculations
      DO I = 1, REITERATE
      .
      .
      .
      END DO
! Delete logical name
      STATUS = SYS$DELLNM ('LNM$JOB',      ! Logical name table
     2                     'REP_NUMBER',)  ! Logical name
      IF (.NOT. STATUS) CALL LIB$SIGNAL (%VAL(STATUS))
      END