Updated: 11 December 1998
OpenVMS Version 7.2
New Features Manual
The OpenVMS Registry can control access to OpenVMS Registry keys in two ways: through OpenVMS security or through NT credentials.
COM applications require read access to the COM registry keys, and COM developers require read and write access to them. In COM Version 1.0 for OpenVMS (without authentication), COM for OpenVMS does not have NT credentials; as a result, COM Version 1.0 for OpenVMS uses OpenVMS security to control access to the OpenVMS Registry. Because you cannot control this OpenVMS Registry access on a per-key basis, you must grant all COM applications read access to the entire OpenVMS Registry and grant all COM developers write access to the entire OpenVMS Registry. This means that the entire OpenVMS Registry, including the PATHWORKS portion not used by COM for OpenVMS, is accessible to COM applications and developers.
In COM Version 1.1 for OpenVMS (with NTLM authentication), COM for OpenVMS will use NT credentials to control access to specific OpenVMS Registry keys, removing the need for the OpenVMS privileges and rights identifiers. This will protect those parts of the OpenVMS Registry that are not used by COM for OpenVMS.
4.4 Common File Qualifier Routines
OpenVMS Version 7.2 contains the UTIL$CQUAL routines, which allow you to parse the command line for qualifiers related to certain file attributes and to match the files you are processing against the selection criteria retrieved from the command line. The utility routines handle file selection and the associated terminal I/O so that the application writer does not have to implement these operations explicitly.
The common file qualifier routines begin with the characters UTIL$CQUAL. Your program calls these routines using the OpenVMS Calling Standard. When you call a UTIL$CQUAL routine, you must provide all the required arguments. Upon completion, the routine returns its completion status as a condition value.
The following table lists the common file qualifier routines.
Routine Name | Description
---|---
UTIL$CQUAL_FILE_PARSE | Parses the command line for file qualifiers and obtains the associated values. Returns a context value that is used when calling the matching and ending routines.
UTIL$CQUAL_FILE_MATCH | Compares the routine's file input to the command line data obtained from the parse routine call.
UTIL$CQUAL_FILE_END | Deletes all virtual memory allocated during the command line parse routine call.
UTIL$CQUAL_CONFIRM_ACT | Prompts a user for a response from SYS$COMMAND.
Follow these steps to use the common file qualifier routines:
1. Call UTIL$CQUAL_FILE_PARSE to parse the command line for file qualifiers and their values; it returns a context value for use in the subsequent calls.
2. For each file you process, call UTIL$CQUAL_FILE_MATCH to compare the file against the selection criteria obtained from the command line.
3. When all files have been processed, call UTIL$CQUAL_FILE_END to release the virtual memory allocated by the parse call.
You may optionally call UTIL$CQUAL_CONFIRM_ACT to ask for user confirmation without calling the other common qualifier routines. A sketch of this calling sequence follows.
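The following fragment sketches this calling sequence in C. The argument lists shown are abbreviated placeholders, not the real prototypes; the actual interfaces are documented in the OpenVMS Utility Routines Manual.

    /* Sketch of the common file qualifier calling sequence.  The
     * routines are declared without prototypes and called with
     * abbreviated, assumed argument lists for illustration only. */
    #include <ssdef.h>

    unsigned int util$cqual_file_parse();
    unsigned int util$cqual_file_match();
    unsigned int util$cqual_file_end();

    int process_files(void)
    {
        unsigned int context = 0;
        unsigned int status;

        /* 1. Parse the command line once for the file qualifiers. */
        status = util$cqual_file_parse(&context /* , ... */);
        if (!(status & 1)) return status;

        /* 2. For each candidate file, test it against the criteria:
         *    status = util$cqual_file_match(&context, ...file info...); */

        /* 3. Release the memory allocated by the parse call. */
        return util$cqual_file_end(&context);
    }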
For more information about the common file qualifier routines, refer to
the OpenVMS Utility Routines Manual.
4.5 DECthreads
This section contains information about new DECthreads features for
OpenVMS Version 7.2.
4.5.1 Yellow Zone Stack Overflow (Alpha only)
DECthreads now provides a special protected area of memory between the usable stack region and the guard page. When this area is accessed, the OpenVMS executive unprotects the memory and a stack overflow exception is delivered to the user thread. This feature enables applications to catch stack overflow conditions and attempt recovery or graceful termination.
4.5.2 Read-Write Locks (Alpha and VAX)
DECthreads now supports use of multiple readers, single writer locks, also known as read-write locks, allowing many threads to have simultaneous read-only access to data while allowing only one thread to have write access at any given time. Such locks are typically used to protect data that is read more frequently than written.
As part of this support, new read-write lock routines (the pthread_rwlock_* and pthread_rwlockattr_* families) have been added to the DECthreads POSIX 1003.1c library.
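The sketch below shows the read-write lock pattern in C, using the standard POSIX read-write lock interface that these DECthreads routines follow.

    /* Read-write lock sketch: many concurrent readers, one writer. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t cache_lock;
    static int cached_value;

    int read_value(void)
    {
        int v;
        pthread_rwlock_rdlock(&cache_lock);   /* shared: many readers */
        v = cached_value;
        pthread_rwlock_unlock(&cache_lock);
        return v;
    }

    void write_value(int v)
    {
        pthread_rwlock_wrlock(&cache_lock);   /* exclusive: one writer */
        cached_value = v;
        pthread_rwlock_unlock(&cache_lock);
    }

    int main(void)
    {
        pthread_rwlock_init(&cache_lock, NULL);
        write_value(42);
        printf("%d\n", read_value());
        pthread_rwlock_destroy(&cache_lock);
        return 0;
    }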
4.5.3 Improved Debugging Support
DECthreads has improved its debugging support, allowing Compaq and third-party debuggers to better integrate with DECthreads. The 'pthread' debugging interface, previously available only via SDA, is now available from the OpenVMS Debugger as well.
4.6 DIGITAL DCE Remote Procedure Call (RPC) Functionality
Information on Microsoft's NT Lan Manager (NTLM) is provided as a preview of functionality that will be available in a future version of Digital DCE for OpenVMS (Alpha only). This advance documentation will help you in future planning.
Beginning with OpenVMS Version 7.2, Remote Procedure Call (RPC) functionality is integrated into the operating system. RPC provides connectivity between individual procedures in an application across heterogeneous systems in a transparent way. This functionality is based on the DIGITAL Distributed Computing Environment (DCE) RPC and lets you program to either the Microsoft RPC Application Programming Interface (API) or the DIGITAL DCE RPC API. Using RPC, an application can interoperate with either DIGITAL DCE or Microsoft RPC applications. If security is required, you can use either Microsoft's NTLM security or (by installing the full DIGITAL DCE Run-Time Kit) DCE Security.
The RPC daemon allows client/server RPC applications to register their specific endpoints. Registering endpoints allows remote clients of the application to find the server application's entry point on the system.
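The fragment below sketches how a server might register its endpoints using the standard DCE RPC API calls. The interface handle my_app_v1_0_s_ifspec is a hypothetical name standing in for the handle an application's IDL-generated header would provide.

    /* Server-side endpoint registration sketch (DCE RPC API). */
    #include <dce/rpc.h>

    extern rpc_if_handle_t my_app_v1_0_s_ifspec;   /* hypothetical IDL handle */

    void start_server(void)
    {
        unsigned32 status;
        rpc_binding_vector_t *bindings;

        /* Register the interface with the RPC runtime. */
        rpc_server_register_if(my_app_v1_0_s_ifspec, NULL, NULL, &status);

        /* Listen on all protocol sequences the runtime supports. */
        rpc_server_use_all_protseqs(rpc_c_protseq_max_reqs_default, &status);

        /* Record this server's endpoints in the RPC daemon's endpoint
         * map so remote clients can find them. */
        rpc_server_inq_bindings(&bindings, &status);
        rpc_ep_register(my_app_v1_0_s_ifspec, bindings, NULL,
                        (unsigned_char_p_t) "sample server", &status);

        /* Service incoming calls. */
        rpc_server_listen(rpc_c_listen_max_calls_default, &status);
    }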
The DIGITAL DCE Application Developer's Kit (separately licensed) is
required to develop RPC applications, but the resulting applications
may be installed on any OpenVMS Version 7.2 system, or any supported
OpenVMS system where the full DIGITAL DCE Run-Time Kit is installed.
The DIGITAL DCE Run-Time Kit, although shipped on the OpenVMS
CD-ROM, is licensed with the OpenVMS operating system
and does not require the purchase of additional software licenses.
4.6.1 Starting and Stopping RPC
The RPC daemon can be started or stopped with two new command files, DCE$RPC_STARTUP.COM and DCE$RPC_SHUTDOWN.COM, located in the directory SYS$COMMON:[SYSMGR].
To start the RPC daemon, execute the DCE$RPC_STARTUP.COM procedure. The following option may be specified:
To stop the RPC daemon, execute the DCE$RPC_SHUTDOWN.COM procedure. The following options may be specified in any order:
Do not stop the RPC daemon if any DCE components or RPC applications are running on the system.
4.6.2 Managing RPC Endpoints
RPC endpoints can be managed using the RPC Control Program (RPCCP). To invoke this utility, enter the following command:
$ RUN SYS$SYSTEM:DCE$RPCCP.EXE
You can type HELP at the utility's prompt for information about parameter usage.
4.6.3 Limiting RPC Transports
The RPC daemon can limit what protocols will be used by RPC applications. To restrict the protocols that can be used, set the logical name RPC_SUPPORTED_PROTSEQS to contain the valid protocols. Valid protocols are ncadg_ip_udp, ncacn_ip_tcp, and ncacn_dnet_nsp. Separate each protocol with a colon. For example:
$ define RPC_SUPPORTED_PROTSEQS "ncacn_ip_tcp:ncacn_dnet_nsp"
This definition prevents RPC applications from registering endpoints that use UDP (ncadg_ip_udp).
4.6.4 For More Information
Refer to the OpenVMS Version 7.2 Release Notes for important information for existing users of DIGITAL DCE for OpenVMS.
For additional information about DIGITAL DCE for OpenVMS, use online help or refer to the following documentation:
4.7 Fast I/O and Buffer Objects for Global Sections (Alpha Only)
As of OpenVMS Alpha Version 7.2, VLM applications can use Fast I/O for memory shared by processes through global sections. In prior versions of OpenVMS Alpha, buffer objects could be created only for process-private virtual address space. Database applications in which multiple processes share a large cache can now create buffer objects for the following types of global sections:
Buffer objects enable Fast I/O system services, which can be used to read and write very large amounts of shared data to and from I/O devices at an increased rate. By reducing the CPU cost per I/O request, Fast I/O increases performance for I/O operations. Fast I/O improves the ability of VLM applications, such as database servers, to handle larger capacities and higher data throughput rates.
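A buffer object is created over memory that is already mapped (for example, through a global section). The sketch below assumes the SYS$CREATE_BUFOBJ_64 system service with the argument order shown; treat the call as illustrative and consult the OpenVMS Alpha Guide to 64-Bit Addressing and VLM Features for the exact interface.

    /* Sketch: turn already-mapped shared pages into a buffer object
     * so the Fast I/O services can be used against them.  The
     * argument list shown is an assumption. */
    #include <starlet.h>

    int make_buffer_object(void *start, unsigned __int64 length)
    {
        void *ret_va;
        unsigned __int64 ret_len;
        unsigned __int64 handle;   /* assumed buffer handle format */

        return sys$create_bufobj_64(start, length, 0 /* acmode */,
                                    0 /* flags */, &ret_va, &ret_len,
                                    &handle);
    }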
For more information about how to use Fast I/O and buffer objects for
global sections, see the OpenVMS Alpha Guide to 64-Bit Addressing and VLM Features.
4.8 Fast Path Support (Alpha Only)
Fast Path is an optional, high-performance feature designed to improve I/O performance. Fast Path creates a streamlined path to the device. Fast Path is of interest to any application where enhanced I/O performance is desirable. Two examples are database systems and real-time applications, where the speed of transferring data to disk is often a vital concern.
Using Fast Path features does not require source-code changes. Minor interface changes are available for expert programmers who want to maximize Fast Path benefits.
Beginning with OpenVMS Alpha Version 7.1, Fast Path supports disk I/O for the CIXCD and the CIPCA ports. These ports provide access to CI storage for XMI- and PCI-based systems. In Version 7.0, Fast Path supported disk I/O for the CIXCD port only.
Fast Path is not available on the OpenVMS VAX operating system.
For more information, see Table 1-1 and Section 4.21.5 in this
manual, and refer to the OpenVMS I/O User's Reference Manual.
4.9 Fast Skip for SCSI Tape Drives
If you access your tape drive via your own user-written program, you can use a new modifier, IO$M_ALLOWFAST, to control the behavior of the IO$_SKIPFILE function.
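The fragment below sketches such a request: it assigns a channel to a tape device and issues IO$_SKIPFILE with the IO$M_ALLOWFAST modifier. The device name TAPE$ is hypothetical, and error handling is abbreviated.

    /* Sketch: skip 'count' tape marks, permitting the driver to use
     * fast skip-by-filemarks when the drive supports it. */
    #include <descrip.h>
    #include <iodef.h>
    #include <starlet.h>

    int skip_files(int count)
    {
        $DESCRIPTOR(tape, "TAPE$:");      /* hypothetical device name */
        unsigned short chan;
        unsigned short iosb[4];
        int status;

        status = sys$assign(&tape, &chan, 0, 0);
        if (!(status & 1)) return status;

        /* P1 = number of tape marks to skip. */
        status = sys$qiow(0, chan, IO$_SKIPFILE | IO$M_ALLOWFAST,
                          iosb, 0, 0, count, 0, 0, 0, 0, 0);
        return (status & 1) ? iosb[0] : status;
    }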
In the past, OpenVMS has always skipped files on tape by using skip-record commands to perform the motion. This has essentially simulated skip-by-filemarks, and while this functions correctly, this type of skipping can be quite slow at times as compared to what a tape can achieve by using skip-by-filemarks commands.
In Version 7.2, OpenVMS allows the tape driver to be set up to use skip-by-filemarks where the user permits it. Modern tape drives keep track of the end of logical data, so the tape class driver (MKDRIVER) can use these facilities to position correctly when a drive supports them. (If it does not, the old skip-by-records method is used.)
Skipping by filemarks, when used, correctly senses the end of tape provided the tape ends in a double EOF. Utilities such as BACKUP and file-structured COPY, which use ANSI-formatted tapes, need the end positioning to be set correctly, and they work correctly with the new skip method. They already continue skip-file operations across null files where such files exist.
Some third-party utilities may, however, depend on the documented behavior of stopping a skip-by-files on double EOF marks on tape. To accommodate this, OpenVMS assumes by default the use of the old skip-by-records form of tape motion.
Because this form of positioning may be as much as 100 times slower than skip-by-files, OpenVMS provides two additional features:
4.10 High-Performance Sort/Merge Utility (Alpha Only)
This section briefly describes new features and capabilities pertaining to the command line interface and the callable interface (SOR routines) of the OpenVMS Alpha high-performance Sort/Merge utility. This information is of interest to both general users and programmers.
For more information about these new features, and about using the
OpenVMS Alpha high-performance Sort/Merge utility, refer to the
OpenVMS User's Manual and the OpenVMS Utility Routines Manual.
4.10.1 High-Performance Sorting with Threads
Support for threads has been added to the high-performance Sort/Merge utility. This enables enhanced performance by taking advantage of multiple processors on an SMP configured system.
To obtain best performance when using the high-performance Sort/Merge
utility on a multi-processor machine, your program should be linked
with the /THREADS_ENABLE qualifier.
4.10.2 Indexed Sequential Output File Organization
Support for indexed sequential output file organization has been added to the high-performance Sort/Merge utility.
You may now specify the /INDEXED_SEQUENTIAL file organization qualifier.
4.10.3 Output File Overlay
Support for output file overlay has been added to the high-performance Sort/Merge utility.
You may now specify the /OVERLAY qualifier to overlay or write an
output file to an existing empty file.
4.10.4 Statistical Summary Information
Partial support for statistical summary information has been added to the high-performance Sort/Merge utility.
You may now specify the /STATISTICS qualifier with the following fields:
4.11 Intra-Cluster Communication (ICC) System Services (Alpha and VAX)
The new intra-cluster communication (ICC) system services, available on Alpha and VAX, form an application programming interface (API) for process-to-process communications. For large data transfers, the ICC system services are the highest-performance OpenVMS application communication mechanism, superior to standard network transports and mailboxes.
The ICC system services enable application program developers to create distributed applications with connections between different processes on a single system, or between processes on different systems within a single OpenVMS Cluster system.
The ICC system services do not require a network product. The communication uses memory or System Communication Services (SCS).
The ICC system services:
The ICC system services provide the following benefits:
The new system services enable application program developers to create connections between processes on the same or different systems within a single OpenVMS Cluster system.
These services include SYS$ICC_OPEN_ASSOC, SYS$ICC_CLOSE_ASSOC, SYS$ICC_CONNECT(W), SYS$ICC_ACCEPT, SYS$ICC_REJECT, SYS$ICC_DISCONNECT(W), SYS$ICC_TRANSMIT(W), SYS$ICC_RECEIVE(W), SYS$ICC_REPLY(W), and SYS$ICC_TRANSCEIVE(W).
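A minimal client-side exchange might look like the sketch below. The ICC services take more arguments than shown here; the argument lists are abbreviated assumptions, "ICC_DEMO" is a hypothetical association name, and the real interfaces are documented in the OpenVMS System Services Reference Manual.

    /* Client-side ICC sketch: connect, transmit, disconnect.  The
     * routines are declared without prototypes and called with
     * abbreviated, assumed argument lists for illustration only. */
    #include <descrip.h>

    unsigned int sys$icc_connectw(), sys$icc_transmitw(),
                 sys$icc_disconnectw();

    int talk_to_server(void)
    {
        unsigned int conn = 0;
        unsigned int ios_icc[3];
        $DESCRIPTOR(server, "ICC_DEMO");   /* hypothetical name */
        char msg[] = "hello";
        unsigned int status;

        /* Connect from the default association to the named server. */
        status = sys$icc_connectw(ios_icc, 0, 0, 0, &conn, &server
                                  /* , ...remaining arguments... */);
        if (!(status & 1)) return status;

        /* Send one message, then tear the connection down. */
        status = sys$icc_transmitw(conn, ios_icc, 0, 0, msg, sizeof msg);
        sys$icc_disconnectw(conn, ios_icc, 0, 0, 0, 0);
        return status;
    }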
Refer to the OpenVMS System Services Reference Manual: GETQUI--Z for additional information on the ICC system
services.
4.11.3 Programming with ICC
Refer to the OpenVMS Programming Concepts Manual for information on programming with ICC.
4.11.4 ICC System Management and Security
Refer to the OpenVMS System Manager's Manual for information on ICC system management and
security.
4.12 Java Development Kit (Alpha Only)
The Java Development Kit (JDK) is shipped with the OpenVMS operating system. This kit can be used to develop and run Java applets and programs on OpenVMS Alpha systems.
The JDK for OpenVMS systems contains a just-in-time (JIT) compiler. The JIT compiler provides on-the-fly compilation of your application's Java byte-codes and runtime calls into native Alpha machine code. This results in significantly faster execution of your Java application compared with running it using the Java interpreter. The JIT runs by default when you enter the JAVA command.
The JDK implements Java threads on top of native (POSIX) threads. This allows different Java threads in your application to run on different processors, provided that you have a multiprocessor machine. It also means that your Java application will run properly when linked with native methods or native APIs (such as DCE) that are also implemented using POSIX threads.
For more information, see the Java documentation in the following directory on your OpenVMS Alpha system where the JDK is installed:
SYS$COMMON:[SYSHLP.JAVA]INDEX.HTML
4.13 Kernel Threads (Alpha Only)
OpenVMS Alpha Version 7.2 includes the following new kernel threads capabilities:
4.13.1 Increase in Kernel Threads per Process
In the initial release of kernel threads support, OpenVMS allowed a maximum of 16 kernel threads per process. This enabled an application to have threads executing on up to 16 CPUs at one time. With OpenVMS Alpha Version 7.2, the number of kernel threads that can be created per process has been increased to 256. The maximum value for the MULTITHREAD system parameter has also been increased to 256.
4.13.2 New Method for Changing Kernel Thread Priorities
The SYS$SETPRI system service and the SET PROCESS/PRIORITY DCL command both take a process identification value (PID) as an input and therefore affect only a single kernel thread at a time. If you want to change the base priorities of all kernel threads in a process, a separate call to SYS$SETPRI or invocation of the SET PROCESS/PRIORITY command must be done for each thread.
In OpenVMS Alpha Version 7.2, a new value for the 'policy' parameter to the SYS$SETPRI system service has been added. If JPI$K_ALL_THREADS is specified, the call to SYS$SETPRI changes the base priorities of all kernel threads in the target process.
Also, the ALL_THREADS qualifier has been added to the SET PROCESS/PRIORITY DCL command, which provides the same support.
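In C, a call that raises the base priority of every kernel thread in the current process might look like the sketch below; the ordering follows the documented SYS$SETPRI argument list (pidadr, prcnam, pri, prvpri, policy, prvpol), and passing the policy value directly, as well as the <jpidef.h> header location, are assumptions.

    /* Sketch: set the base priority of all kernel threads at once. */
    #include <jpidef.h>     /* assumed home of JPI$K_ALL_THREADS */
    #include <starlet.h>
    #include <stddef.h>

    int set_all_thread_priorities(unsigned int new_pri)
    {
        unsigned int pid = 0;   /* 0 = current process */

        /* JPI$K_ALL_THREADS applies the change to every kernel
         * thread in the target process, not just one. */
        return sys$setpri(&pid, NULL, new_pri, NULL,
                          JPI$K_ALL_THREADS, NULL);
    }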
4.13.3 Detecting Thread Stack Overflow
The default user stack in a process can expand on demand to be quite large, so single-threaded applications do not typically run out of user stack. When an application is written using DECthreads, each thread gets its own user stack of a fixed size. If the application developer underestimates the stack requirements, the application may fail due to a thread overflowing its stack. This failure is typically reported as an access violation and is very difficult to diagnose. To address this problem, yellow stack zones have been introduced in OpenVMS Version 7.2 and are available to applications using DECthreads.
Yellow stack zones are a mechanism by which the stack overflow can be signaled back to the application. The application can then choose either to provide a stack overflow handler or do nothing. If the application does nothing, this mechanism helps pinpoint the failure for the application developer. Instead of an access violation being signaled, a stack overflow error is signaled.