
OpenVMS Alpha Guide to Upgrading Privileged-Code Applications



4.6 64-Bit Support in Example Driver

This section summarizes changes made to the example device driver (LRDRIVER.C) to support 64-bit buffer addresses on all I/O functions.

This sample driver is available in the SYS$EXAMPLES directory.

  1. All functions are declared as capable of supporting a 64-bit P1 parameter.
  2. The 64-bit buffered I/O packet header defined by bufiodef.h is used instead of a privately defined structure that corresponds to the 32-bit buffered I/O packet header.
  3. The pointer to the caller's set mode buffer is defined as a 64-bit pointer.
  4. IRP$Q_QIO_P1 is used instead of IRP$L_QIO_P1.
  5. The EXE_STD$ALLOC_BUFIO_64 routine is used instead of EXE_STD$DEBIT_BYTCNT_ALO to allocate the buffered I/O packet.

No infrastructure changes were necessary in this driver. The original version could simply have been recompiled and relinked, and it would have continued to work correctly with 32-bit buffer addresses.

4.6.1 Example: Declaring 64-Bit Functions

Original:


ini_fdt_act(...,IO$_WRITELBLK,lr$write,BUFFERED); 
... 
ini_fdt_act(...,IO$_SENSECHAR,exe_std$sensemode, 
                                       BUFFERED); 

64-Bit Version:


ini_fdt_act(...,IO$_WRITELBLK,lr$write,BUFFERED_64); (1)
ini_fdt_act(...,IO$_WRITEPBLK,lr$write,BUFFERED_64); 
ini_fdt_act(...,IO$_WRITEVBLK,lr$write,BUFFERED_64); 
ini_fdt_act(...,IO$_SETMODE,lr$setmode,BUFFERED_64); (2)
ini_fdt_act(...,IO$_SETCHAR,lr$setmode,BUFFERED_64); 
ini_fdt_act(...,IO$_SENSEMODE,exe_std$sensemode, 
                                       BUFFERED_64); (3)
ini_fdt_act(...,IO$_SENSECHAR,exe_std$sensemode, 
                                       BUFFERED_64); 

  1. Source changes required to LR$WRITE routine
  2. Source changes required to LR$SETMODE routine
  3. No user buffer, no $QIO parameters
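
The FDT routines named in callouts (1) and (2) require the source changes shown in Sections 4.6.3 and 4.6.4. As a hedged illustration only, their prototypes might be declared as follows; the parameter list (pointers to the IRP, PCB, UCB, and CCB) is an assumption and is not taken from LRDRIVER.C, which remains the authoritative source in SYS$EXAMPLES.


/* Sketch only; the parameter list is assumed, not copied from LRDRIVER.C. */ 
int lr$write   (IRP *irp, PCB *pcb, UCB *ucb, CCB *ccb); 
int lr$setmode (IRP *irp, PCB *pcb, UCB *ucb, CCB *ccb); 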

4.6.2 Example: Declaring 64-Bit Buffered I/O Packet

Original:


typedef struct _sysbuf_hdr {       (1)
    char *pkt_datap; 
    char *usr_bufp; 
    short pkt_size; 
    short :16; 
} SYSBUF_HDR; 

64-Bit Version:


#include <bufiodef.h>     (2)

  1. A locally defined type, SYSBUF_HDR, was necessary for the buffered I/O packet header.
  2. The new bufiodef.h header file defines the BUFIO type, which includes both the 32-bit and 64-bit buffered I/O packet header cells.

4.6.3 Example: Changes to LR$WRITE

Original:


char *qio_bufp;         (1)
SYSBUF_HDR *sys_bufp; 
 
qio_bufp = (char *) irp->irp$l_qio_p1;  (2)
sys_buflen = qio_buflen + sizeof(SYSBUF_HDR);  (3)
 
status = exe_std$debit_bytcnt_alo(sys_buflen,  (4)
                                  pcb, 
                                  &sys_buflen, 
                                  (void **) &sys_bufp); 
 
irp->irp$l_svapte = (void *) sys_bufp;  (5)
irp->irp$l_boff = sys_buflen; 
sys_datap = (char *) sys_bufp + sizeof(SYSBUF_HDR); (6)

  1. Define 32-bit pointer to caller's buffer
  2. Pointer is initialized using the 32-bit $QIO P1 value
  3. Size of buffered I/O packet includes header size
  4. Allocate pool for buffered I/O packet
  5. Connect the buffered I/O packet to IRP
  6. Compute pointer to data region within packet

64-Bit Version:


CHAR_PQ qio_bufp;       (1)
BUFIO *sys_bufp; 
 
qio_bufp   = (CHAR_PQ) irp->irp$q_qio_p1;    (2)
sys_buflen = qio_buflen + BUFIO$K_HDRLEN64;  (3)
status = exe_std$alloc_bufio_64(irp,         (4)
                                pcb, 
                                (VOID_PQ) qio_bufp, 
                                sys_buflen); 
sys_bufp  = irp->irp$ps_bufio_pkt;           (5)
sys_datap = sys_bufp->bufio$ps_pktdata;      (6)

  1. Define a 64-bit pointer to caller's buffer.
  2. Pointer is initialized using the 64-bit $QIO P1 value. No source changes on references, for example:


    exe_std$writechk(irp,pcb,ucb,qio_bufp,qio_buflen); 
    memcpy (sys_datap, qio_bufp, qio_buflen); 
    

  3. Size of buffered I/O packet includes 64-bit header size.
  4. Allocate pool for a 64-bit buffered I/O packet and connect it to the IRP.
  5. Get pointer to the buffered I/O packet.
  6. Get pointer to data region within packet.

4.6.4 Example: Changes to LR$SETMODE

Original:


SETMODE_BUF *setmode_bufp;  (1)
setmode_bufp = (SETMODE_BUF *) irp->irp$l_qio_p1; (2)

64-Bit Version:


#pragma __required_pointer_size __save 
#pragma __required_pointer_size __long (3)
typedef SETMODE_BUF *SETMODE_BUF_PQ; (4)
#pragma __required_pointer_size __restore (5)
 
SETMODE_BUF_PQ setmode_bufp; (6)
setmode_bufp = (SETMODE_BUF_PQ) irp->irp$q_qio_p1; (7)

  1. 32-bit pointer to a SETMODE_BUF.
  2. Pointer is initialized using the 32-bit $QIO P1 value.
  3. Change pointer size to 64 bits.
  4. Define a type for a 64-bit pointer to a SETMODE_BUF structure.
  5. Restore saved pointer size, 32 bits.
  6. Define a 64-bit pointer to a SETMODE_BUF structure.
  7. Pointer is initialized using the 64-bit $QIO P1 value.
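
The same pragma pattern presumably underlies the CHAR_PQ and VOID_PQ types used in Section 4.6.3. As an illustration only (the real types come from the system header files), a driver could define additional 64-bit pointer types for its own structures in the same way; MY_EXTENSION is a hypothetical structure, not part of LRDRIVER.C:


#pragma __required_pointer_size __save          /* save the current pointer size      */ 
#pragma __required_pointer_size __long          /* switch to 64-bit pointers          */ 
typedef struct _my_extension *MY_EXTENSION_PQ;  /* 64-bit pointer to a driver structure */ 
typedef char *MY_CHAR_PQ;                       /* 64-bit pointer to char             */ 
#pragma __required_pointer_size __restore       /* restore the saved (32-bit) size    */ 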

4.6.5 Example: Changes to LR$STARTIO

Original:


ucb->ucb$r_ucb.ucb$l_svapte = 
               (char *) ucb->ucb$r_ucb.ucb$l_svapte + 
               sizeof(SYSBUF_HDR);           (1)

64-Bit Version:


ucb->ucb$r_ucb.ucb$l_svapte = 
               (char *) ucb->ucb$r_ucb.ucb$l_svapte + 
               BUFIO$K_HDRLEN64;             (2)

  1. Skip 32-bit buffered I/O packet header.
  2. Skip 64-bit buffered I/O packet header.
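
Taken together, Sections 4.6.3 and 4.6.5 imply a simple relationship between the packet base and its data region: the data region starts BUFIO$K_HDRLEN64 bytes past the packet base, which is the same byte that BUFIO$PS_PKTDATA points to. The fragment below is illustrative only, is not part of LRDRIVER.C, and assumes (as the example code suggests) that IRP$L_SVAPTE still addresses the base of the packet, that is, the same address as IRP$PS_BUFIO_PKT:


/* Illustration only; not part of LRDRIVER.C. */ 
BUFIO *pkt = irp->irp$ps_bufio_pkt;                         /* 64-bit buffered I/O packet   */ 
char  *datap_by_offset  = (char *) pkt + BUFIO$K_HDRLEN64;  /* packet base + header length  */ 
char  *datap_by_pointer = pkt->bufio$ps_pktdata;            /* header cell pointing at data */ 
/* datap_by_offset and datap_by_pointer address the same byte. */ 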


Chapter 5
Modifying User-Written System Services

An application can contain certain routines that perform privileged functions, called user-written system services. This chapter describes the OpenVMS Alpha Version 7.0 changes that can affect user-written system services.

For more information about how to create user-written system services, see the OpenVMS Programming Concepts Manual.

As part of the 64-bit virtual addressing support, the Alpha system service dispatcher automatically performs a sign-extension check on service arguments to ensure that only 32-bit sign extended virtual addresses are passed. This sign-extension check prevents an application from passing a 64-bit virtual address to system services that are not equipped to handle 64-bit virtual addresses. This sign-extension check occurs for the system services (regardless of mode) provided by Compaq as well as for user-written system services.
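
Conceptually, an argument passes the sign-extension check if truncating it to 32 bits and sign-extending the result back to 64 bits reproduces the original value. The following fragment only illustrates that test; the dispatcher's actual implementation is internal to OpenVMS:


/* Illustration of the sign-extension test applied to a 64-bit argument.
 * Relies on the usual truncate-then-sign-extend conversion behavior of
 * DEC C on Alpha.
 */ 
static int is_32bit_sign_extended (unsigned __int64 arg) 
{ 
    return (unsigned __int64) (__int64) (int) arg == arg; 
} 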

Although the sign-extension check occurs by default, it is possible to disable the check for services that can properly handle 64-bit virtual addresses. A new flag, PLV$M_64_BIT_ARGS (see Table 5-2), can be specified when creating a user-written system service that is designed to accept 64-bit virtual addresses. The system service dispatcher purposely omits the sign-extension check when this flag is set for a particular service. Table 5-1 shows the components of the Alpha Privileged Library Vector that are new or changed as of OpenVMS Alpha Version 7.0.

Table 5-1 Components of the Alpha Privileged Library Vector

Component: User-supplied rundown routine for executive mode services
Symbol: PLV$PS_EXEC_RUNDOWN_HANDLER
Description: May contain the address of a user-supplied rundown routine that performs image-specific cleanup and resource deallocation. When the image linked against the user-written system service is run down by the system, this rundown routine is invoked. Unlike exit handlers, the routine is always called when a process or image exits. Image rundown code calls the routine with a JSB instruction, and it returns with an RSB instruction; the routine is called in executive mode at IPL 0.

Component: Kernel Routine Flags Vector
Symbol: PLV$PS_KERNEL_ROUTINE_FLAGS
Description: Contains either the address of an array of longwords that contain the defined flags associated with each kernel mode system service, or a zero. Table 5-2 contains a description of the available flags.

Component: Executive Routine Flags Vector
Symbol: PLV$PS_EXEC_ROUTINE_FLAGS
Description: Contains either the address of an array of longwords that contain the defined flags associated with each executive mode system service, or a zero. Table 5-2 contains a description of the available flags.

Table 5-2 Flags for 64-Bit User-Written Services

Flag: PLV$M_WAIT_CALLERS_MODE
Description: Informs the system service dispatcher that the service can return the status SS$_WAIT_CALLERS_MODE. This flag can only be specified for kernel mode services.

Flag: PLV$M_WAIT_CALLERS_NO_REEXEC
Description: Informs the system service dispatcher that the service can return the status SS$_WAIT_CALLERS_MODE but should not reexecute the service. This flag can only be specified for kernel mode services.

Flag: PLV$M_CLRREG
Description: Informs the system service dispatcher to clear the scratch integer registers before returning to the system service requestor. A security-related service may set this flag to ensure that sensitive information is not left in scratch registers. This flag can be specified for both kernel and executive mode system services.

Flag: PLV$M_RETURN_ANY
Description: Informs the system service dispatcher that the service can return arbitrary values in R0. This flag can only be specified for kernel mode system services.

Flag: PLV$M_WCM_NO_SAVE
Description: Informs the system service dispatcher that the service has taken steps to save the contents of the scratch integer registers. In this case, the dispatcher will not take the extra steps to save and restore these registers. This flag can only be specified for kernel mode system services.

Flag: PLV$M_STACK_ARGS
Description: Use of this flag is reserved to Compaq.

Flag: PLV$M_THREAD_SAFE
Description: Informs the system service dispatcher that the service requires no explicit synchronization. It is assumed by the dispatcher that the service provides its own internal data synchronization and that multiple kernel threads can safely execute other inner-mode code in parallel. This flag can be specified for both kernel and executive mode system services.

Flag: PLV$M_64_BIT_ARGS
Description: Informs the system service dispatcher that the service can accept 64-bit virtual addresses. When set, the dispatcher will not perform the sign-extension check on the service arguments. The sign-extension check is the method used to guarantee that only 32-bit, sign-extended virtual addresses are passed to system services. This check is enabled by default. This flag can be specified for both kernel and executive mode system services.

Flag: PLV$M_CHECK_UPCALL
Description: Use of this flag is reserved to Compaq.

Example 5-1 illustrates how to create a PLV on Alpha systems using C.

Example 5-1 Creating a Privileged Library Vector (PLV) for C on Alpha Systems

 
/* "Forward routine" declarations */ 
int     first_service(), 
        second_service(), 
        third_service(), 
        fourth_service(); 
int     rundown_handler(); 
 
/* Kernel and exec routine lists: */ 
int (*(kernel_table[]))() = { 
        first_service, 
        second_service, 
        fourth_service}; 
 
int (*(exec_table[]))() = { 
        third_service}; 
 
/* 
** Kernel and exec flags.  The flag settings below enable second_service 
** and third_service to be 64-bit capable.  First_service and fourth_service 
** cannot accept a 64-bit pointer.  Attempts to pass 64-bit pointers to 
** these services will result in a return status of SS$_ARG_GTR_32_BITS. 
** The PLV$M_64_BIT_ARGS flag instructs the system service dispatcher to 
** bypass sign-extension checking of the service arguments for a particular 
** service. 
*/ 
int 
    kernel_flags [] = { 
        0, 
        PLV$M_64_BIT_ARGS, 
        0}, 
 
    exec_flags [] = { 
        PLV$M_64_BIT_ARGS}; 
 
/* 
** The next two defines allow the kernel and executive routine counts 
** to be filled in automatically after lists have been declared for 
** kernel and exec mode.  They must be placed before the PLV 
** declaration and initialization, and for this module will be 
** functionally equivalent to: 
** 
** #define KERNEL_ROUTINE_COUNT 3 
** #define EXEC_ROUTINE_COUNT 1 
** 
*/ 
 
#define EXEC_ROUTINE_COUNT sizeof(exec_table)/sizeof(int *) 
#define KERNEL_ROUTINE_COUNT sizeof(kernel_table)/sizeof(int *) 
 
/* 
** Now build and initialize the PLV structure.  Since the PLV must have 
** the VEC psect attribute, and must be the first thing in that psect, 
** we use the strict external ref-def model which allows us to put the 
** PLV structure in its own psect.  This is like the globaldef 
** extension in VAX C, where you can specify in what psect a global 
** symbol may be found; unlike globaldef, it allows the declaration 
** itself to be ANSI-compliant.  Note that the initialization here 
** relies on the change-mode-specific portion (plv$r_cmod_data) of the 
** PLV being declared before the portions of the PLV which are specific 
** to message vector PLVs (plv$r_msg_data) and system service intercept 
** PLVs (plv$r_ssi_data). 
** 
*/ 
 
#ifdef __ALPHA 
#pragma extern_model save 
#pragma extern_model strict_refdef "USER_SERVICES" 
#endif 
extern const PLV user_services = { 
        PLV$C_TYP_CMOD,         /* type */ 
        0,                      /* version */ 
        { 
        {KERNEL_ROUTINE_COUNT,  /* # of kernel routines */ 
        EXEC_ROUTINE_COUNT,     /* # of exec routines */ 
        kernel_table,           /* kernel routine list */ 
        exec_table,             /* exec routine list */ 
        rundown_handler,        /* kernel rundown handler */ 
        rundown_handler,        /* exec rundown handler */ 
        0,                      /* no RMS dispatcher */ 
        kernel_flags,           /* kernel routine flags */ 
        exec_flags}             /* exec routine flags */ 
        } 
        }; 
#ifdef __ALPHA 
#pragma extern_model restore 
#endif 
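
Example 5-1 does not show the service routines themselves. The following sketch is not part of the Compaq example; it shows one way second_service, which kernel_flags marks as 64-bit capable, might declare a 64-bit pointer argument, using the same pointer-size pragma pattern as Section 4.6.4. The assumption that the dispatcher passes the caller's arguments through as ordinary C parameters is just that, an assumption.


/* Hedged sketch only; not part of the Compaq example above. */ 
#include <ssdef.h>                        /* SS$_NORMAL */ 
 
#pragma __required_pointer_size __save 
#pragma __required_pointer_size __long 
typedef char *CHAR_PQ;                    /* 64-bit pointer to char */ 
#pragma __required_pointer_size __restore 
 
int second_service (CHAR_PQ user_buf, int buflen) 
{ 
    /* Because PLV$M_64_BIT_ARGS is set for this service, user_buf may 
     * hold a full 64-bit virtual address; validate and use it here. 
     */ 
    return SS$_NORMAL; 
} 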
 


Chapter 6
Kernel Threads Process Structure

This chapter describes the components that make up a kernel threads process.

For more information about kernel threads features, see the OpenVMS Alpha Version 7.0 Bookreader version of the OpenVMS Programming Concepts Manual.

6.1 Process Control Blocks (PCBs) and Process Headers (PHDs)

Two primary data structures exist in the OpenVMS executive that describe the context of a process: the software process control block (PCB) and the process header (PHD).

The PCB contains fields that identify the process to the system. The PCB comprises contexts that pertain to quotas and limits, scheduling state, privileges, AST queues, and identifiers. In general, any information that must be resident at all times is in the PCB. Therefore, the PCB is allocated from nonpaged pool.

The PHD contains fields that pertain to a process's virtual address space. The PHD consists of the working set list and the process section table. The PHD also contains the hardware process control block (HWPCB) and a floating-point register save area. The HWPCB contains the hardware execution context of the process. The PHD is allocated as part of a balance set slot, and it can be outswapped.

6.1.1 Effect of a Multithreaded Process on the PCB and PHD

With multiple execution contexts within the same process, the multiple threads of execution all share the same address space but have some independent software and hardware context. This change to a multithreaded process impacts the PCB and PHD structures and any code that references them.

Before the implementation of kernel threads, the PCB contained much context that was per process. With the introduction of multiple threads of execution, much of that context becomes per thread. To accommodate per-thread context, a new data structure, the kernel thread block (KTB), holds the context removed from the PCB. However, the PCB continues to contain context common to all threads, such as quotas and limits. The new per-kernel-thread structure contains the scheduling state, priority, and AST queues.

The PHD contains the HWPCB, which gives a process its single execution context. The HWPCB remains in the PHD; this HWPCB is used by a process when it is first created. This execution context is also called the initial thread. A single-threaded process has only this one execution context. Since all threads in a process share the same address space, the PHD continues to describe the entire virtual memory layout of the process.

A new structure, the floating-point registers and execution data (FRED) block, contains the hardware context for newly created kernel threads.

6.2 Kernel Thread Blocks (KTBs)

The kernel thread block (KTB) is a new per-kernel thread data structure. The KTB contains all per-thread context moved from the PCB. The KTB is the basic unit of scheduling, a role previously performed by the PCB, and is the data structure placed in the scheduling state queues. Since the KTB is the logical extension of the PCB, the SCHED spinlock synchronizes access to the KTB and the PCB.

Typically, the number of KTBs a multithreaded process has matches the number of CPUs on the system. More precisely, the number of KTBs is limited by the value of the system parameter MULTITHREAD. If MULTITHREAD is zero, kernel threads support within OpenVMS is disabled. With kernel threads disabled, user-level threading is still possible with DECthreads; the environment is identical to that of OpenVMS releases prior to the implementation of kernel threads. If MULTITHREAD is nonzero, it represents the maximum number of execution contexts, or kernel threads, that a process can own, including the initial one.

In reality, the KTB is not a structure independent of the PCB. Both the PCB and KTB are defined as sparse structures. The fields of the PCB that move to the KTB retain their original PCB offsets in the KTB; in the PCB, these fields are unused. In effect, if the two structures are overlaid, the result is the PCB as it currently exists with new fields appended at the end. The PCB and KTB for the initial thread occupy the same block of nonpaged pool; therefore, the KTB address for the initial thread is the same as the PCB address.
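
The overlay relationship can be restated in a short conceptual fragment. This is not OpenVMS source; the C type and member names (PCB, KTB, ktb$l_pcb) are assumed to follow the system definition files, and pcb is assumed to be a valid PCB pointer:


/* Conceptual only.  The initial thread's KTB and the PCB occupy the same 
 * block of nonpaged pool, so the two addresses are identical, and 
 * KTB$L_PCB (which overlays PCB$L_INITIAL_KTB) points back at the PCB. 
 */ 
KTB *initial_ktb = (KTB *) pcb;               /* same pool block              */ 
PCB *owner_pcb   = initial_ktb->ktb$l_pcb;    /* equals pcb for this thread   */ 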

6.2.1 KTB Vector

When a process becomes multithreaded, a vector similar to the PCB vector is created in pool. This vector contains the list of pool addresses for the kernel thread blocks in use by the process. The KTB vector entries are reused as kernel threads are created and deleted. An unused entry contains a zero. The vector entry number is used as a kernel thread ID. The first entry always contains the address of the KTB for the initial thread, which is by definition kernel thread ID zero. The kernel thread ID is used to build unique PIDs for the individual kernel threads. Section 6.3.1 describes PID changes for kernel threads.

To implement these changes, the following four new fields have been added to the PCB: PCB$L_INITIAL_KTB, PCB$L_KTBVEC, PCB$L_KT_COUNT, and PCB$L_KT_HIGH.

The PCB$L_INITIAL_KTB field actually overlays the new KTB$L_PCB field. For a single threaded process, PCB$L_KTBVEC is initialized to contain the address of PCB$L_INITIAL_KTB. The PCB$L_INITIAL_KTB always contains the address of the initial thread's KTB. As a process transitions from being single threaded to multithreaded and back, PCB$L_KTBVEC is updated to point to either the KTB vector in pool or PCB$L_INITIAL_KTB.

The PCB$L_KT_COUNT field counts the valid entries in the KTB vector. The PCB$L_KT_HIGH field gives the highest vector entry number in use.
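
As a conceptual illustration (not actual OpenVMS source), a lookup of a KTB by kernel thread ID might follow the rules above like this. The exact declarations of the PCB cells and of the KTB vector are assumptions; pcb and thread_id are assumed to be a valid PCB pointer and a kernel thread ID already in scope:


/* Conceptual sketch: PCB$L_KTBVEC points at the KTB vector (or at 
 * PCB$L_INITIAL_KTB for a single-threaded process), the vector is indexed 
 * by kernel thread ID, entry 0 is the initial thread, and an unused entry 
 * contains zero. 
 */ 
KTB **ktbvec = (KTB **) pcb->pcb$l_ktbvec; 
KTB  *ktb    = 0; 
 
if (thread_id <= pcb->pcb$l_kt_high)          /* highest entry number in use */ 
    ktb = ktbvec[thread_id];                  /* zero if the entry is unused */ 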

6.2.2 Floating-Point Registers and Execution Data Blocks (FREDs)

To allow for multiple execution contexts, not only are additional KTBs required to maintain the software context, but additional HWPCBs must also be created to maintain the hardware context. Each HWPCB has a 256-byte block allocated with it for preserving the contents of the floating-point registers across context switches. Another 128 bytes are allocated for per-kernel thread data. Presently, only a clone of the PHD$L_FLAGS2 field is defined.

The combined structure that contains the HWPCB, floating-point register save area, and per-kernel thread data is called the floating-point registers and execution data (FRED) block. It is 512 bytes in length. These structures reside in the process's balance set slot. This allows the FREDs to be outswapped with the process header. On the first page allocated for FRED blocks, the first 512 bytes are reserved for the inner-mode semaphore.
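
Based only on the sizes given above, the FRED block can be pictured roughly as follows. The field order and the 128-byte HWPCB size (512 minus 256 minus 128) are inferences, not the system definition:


/* Rough sketch of the 512-byte FRED block; NOT the actual definition. */ 
typedef struct _fred_sketch { 
    char hwpcb [128];           /* hardware PCB (size inferred)              */ 
    char fp_save_area [256];    /* floating-point register save area         */ 
    char per_kt_data [128];     /* per-kernel-thread data, e.g. a clone of 
                                   PHD$L_FLAGS2                              */ 
} FRED_SKETCH;                  /* 512 bytes total */ 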

