Directs the compiler to generate, within portions of the source module, special Alpha assembly code for VAX MACRO instructions that rely on VAX guarantees of operation atomicity or granularity.
.[NO]PRESERVE argument-list
argument-list
One or more of the symbolic arguments listed in the following table:
Option | Description |
---|---|
GRANULARITY | Preserves the rules of VAX granularity of writes. Specifying .PRESERVE GRANULARITY causes the compiler to use Alpha Load-locked and Store-conditional instruction sequences in code it generates for VAX instructions that perform byte, word, or unaligned longword writes. |
ATOMICITY | Preserves atomicity of VAX modify operations. Specifying .PRESERVE ATOMICITY causes the compiler to use Load-locked and Store-conditional instruction sequences in code it generates for instructions with modify operands. |
The .PRESERVE and .NOPRESERVE directives cause the compiler to generate, within portions of the source module, special Alpha assembly code for VAX MACRO instructions that rely on VAX guarantees of operation atomicity or granularity (see Section 2.10).

Use of .PRESERVE or .NOPRESERVE without specifying GRANULARITY or ATOMICITY affects both options. When preservation of both granularity and atomicity is enabled and the compiler encounters a VAX coding construct that requires both guarantees, it enforces atomicity over granularity.
Alternatively, you can use the /PRESERVE and /NOPRESERVE compiler qualifiers to affect the atomicity and granularity in generated code throughout an entire MACRO source module.
Atomicity is guaranteed for multiprocessing systems as well as uniprocessing systems when you specify .PRESERVE ATOMICITY.
When the .PRESERVE directive is present, you can use the /RETRY_COUNT qualifier on the command line to control the number of times the compiler-generated code retries a granular or atomic update.
Warning
If .PRESERVE ATOMICITY is turned on, any unaligned data reference results in a fatal reserved operand fault (see Section 2.10.5). If .PRESERVE GRANULARITY is turned on, unaligned word references to addresses that the compiler assumes are aligned also cause a fatal reserved operand fault.
        INCW    1(R0)
This instruction, when compiled with .PRESERVE GRANULARITY, retries the insertion of the new word value if the operation is interrupted. When compiled with .PRESERVE ATOMICITY, it also refetches the initial value and increments it again if interrupted. If both options are specified, the generated code does the latter.
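As an illustrative sketch (the register and displacement are arbitrary), the directive pair can bracket just the update that needs the guarantee, leaving code generation for the rest of the module unchanged:

        .PRESERVE ATOMICITY             ; compiler emits a load-locked/store-conditional retry sequence
        INCW    1(R0)                   ; the read-modify-write of the word at 1(R0) is now atomic
        .NOPRESERVE ATOMICITY           ; turn atomicity preservation back off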
This directive allows the user to override the compiler's alignment assumptions and to declare implicit reads and writes of registers.
.SET_REGISTERS argument-list
argument-list
One or more of the arguments listed in the following table. For each argument, you can specify one or more registers.
Option | Description |
---|---|
aligned=<> | Declares one or more registers to be aligned on longword boundaries. |
unaligned=<> | Declares one or more registers to be unaligned. Because this is an explicit declaration, this unaligned condition will not produce a fault at run time. |
read=<> | Declares one or more registers, which otherwise the compiler could not detect as input registers, to be read. |
written=<> | Declares one or more registers, which otherwise the compiler could not detect as output registers, to be written to. |
The aligned and unaligned qualifiers to this directive allow the user to override the compiler's alignment assumptions. Using the directive for this purpose in certain cases can produce more efficient code (see Section 4.1).

The read and written qualifiers to this directive allow implicit reads and writes of registers to be declared. They are generally used to declare the register usage of called routines and are useful for documenting your program.
With one exception, the .SET_REGISTERS directive remains in effect (ensuring proper alignment processing) until the routine ends or until you change the value in the register. The exception can occur under certain conditions when a flow path joins the code following a .SET_REGISTERS directive.
The following example illustrates such an exception. R2 is declared aligned, and at a subsequent label, 10$, which occurs before the next write to the register, another flow path joins the code. R2 is treated as unaligned following the label because it is unaligned on the other path.
        INCL    R2                      ; R2 is now unaligned
        .
        .
        .
        BLBC    R0, 10$
        .
        .
        .
        MOVL    R5, R2
        .SET_REGISTERS ALIGNED=R2
        MOVL    R0, 4(R2)
10$:    MOVL    4(R2), R3               ; R2 considered unaligned
                                        ; due to BLBC branch

The .SET_REGISTERS directive and its read and written qualifiers are required on every routine call that passes or returns data in any register from R2 through R12 if you specify the command line qualifier and option /OPTIMIZE=VAXREGS. That is because the compiler allows the use of unused VAX registers as temporary registers when you specify /OPTIMIZE=VAXREGS.
#1

        DIVL    R0, R1
        .SET_REGISTERS ALIGNED=R1
        MOVL    8(R1), R2               ; Compiler will use aligned load.
In this example, the compiler would normally consider R1 unaligned after the division. Any memory references using R1 as a base register (until it is changed again) would use unaligned load/stores. If it is known that the actual value will always be aligned, performance could be improved by adding a .SET_REGISTERS directive, as shown.
#2

        MOVL    4(R0), R1               ; Stored memory addresses assumed
        .SET_REGISTERS UNALIGNED=R1     ; aligned, so explicitly set it unaligned
        MOVL    4(R1), R2               ; to avoid run-time fault.
In this example, R1 would be considered longword aligned after the first MOVL. If the value is actually unaligned, an alignment fault would occur at run time on the memory reference that follows. To prevent this, use the .SET_REGISTERS directive, as shown.
#3

        .SET_REGISTERS READ=<R3,R4>, WRITTEN=R5
        JSB     DO_SOMETHING_USEFUL
In this example, the read and written attributes explicitly declare register uses that the compiler cannot detect. R3 and R4 are input registers to the JSB target routine, and R5 is an output register. This is particularly useful if the routine containing the JSB does not use these registers itself, or if the .SET_REGISTERS directive and the JSB are embedded in a macro. When compiled with /FLAG=HINTS, routines that use the macro then have R3 and R4 listed as possible input registers, even if those registers are not used in the routine itself.
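For instance, a sketch along these lines (the macro name is hypothetical; DO_SOMETHING_USEFUL is the routine from the example) carries the declaration with the call wherever the macro is expanded:

        .MACRO  CALL_DO_SOMETHING_USEFUL
        .SET_REGISTERS READ=<R3,R4>, WRITTEN=R5 ; declare the routine's register interface
        JSB     DO_SOMETHING_USEFUL
        .ENDM   CALL_DO_SOMETHING_USEFUL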
This directive associates an alignment attribute with a symbol definition for a register offset. You can use this directive when you know the alignment of the base register. This attribute guarantees to the compiler that the base register has the same alignment, which enables the compiler to generate optimal code.
.SYMBOL_ALIGNMENT argument-list
argument-list
One of the arguments listed in the following table.
Option | Description |
---|---|
long | Declares longword alignment for any symbol that you declare after this directive. |
quad | Declares quadword alignment for any symbol that you declare after this directive. |
none | Turns off the alignment specified by the preceding .SYMBOL_ALIGNMENT directive. |
The .SYMBOL_ALIGNMENT directive is used to associate an alignment attribute with the fields in a structure when you know the base alignment. It is used in pairs: the first .SYMBOL_ALIGNMENT directive associates either longword (long) or quadword (quad) alignment with the symbol or symbols that follow, and the second directive, .SYMBOL_ALIGNMENT none, turns it off.

Any time a reference is made with a symbol that has an alignment attribute, the base register of that reference, in effect, inherits the symbol's alignment. The compiler also resets the base register's alignment to longword for subsequent alignment tracking. This alignment guarantee enables the compiler to produce more efficient code sequences.
OFFSET1 = 4
        .SYMBOL_ALIGNMENT LONG
OFFSET2 = 8
OFFSET3 = 12
        .SYMBOL_ALIGNMENT QUAD
OFFSET4 = 16
        .SYMBOL_ALIGNMENT NONE
OFFSET5 = 20
        .
        .
        .
        CLRL    OFFSET2(R8)
        .
        .
        .
        MOVL    R2, OFFSET4(R6)
For OFFSET1 and OFFSET5, the compiler uses only its own tracking information to decide whether Rn in OFFSET1(Rn) is aligned. For the other references, the base register is treated as longword aligned (OFFSET2 and OFFSET3) or quadword aligned (OFFSET4).
After each use of OFFSET2 or OFFSET4, the base register in the reference is reset to longword alignment. In this example, the alignment of R8 and R6 will be reset to longword, although the reference to OFFSET4 will use the stronger quadword alignment.
This appendix describes the two sets of built-ins provided with the MACRO-32 Compiler for OpenVMS Alpha. They are:
Both sets of built-ins are presented in tables. The second column of each table specifies the operands the built-in expects; the first letter of each operand code gives the access type (A for address, R for read, M for modify, W for write), and the second letter gives the data type (B for byte, W for word, L for longword, Q for quadword).
Be careful when mixing built-ins with VAX MACRO instructions that operate on the same registers. The code generated by the compiler expects registers to contain 32-bit sign-extended values, but it is possible to create 64-bit register values that are not in this format. Subsequent longword operations on these registers could produce incorrect results. Therefore, make sure to return registers to 32-bit sign-extended format before using them in VAX MACRO instructions as source operands. (Loading the register with a new value using a VAX MACRO instruction, such as MOVL, returns it to this format.)
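A minimal sketch of one way to do this, assuming only the low 32 bits of the result are needed afterward (register choices are arbitrary; EVAX_SEXTL is taken from Table C-1):

        EVAX_ADDQ R1, R2, R3            ; quadword add; R3 may no longer hold a sign-extended longword
        EVAX_SEXTL R3, R3               ; sign-extend the low longword back to canonical 32-bit form
        ADDL2   R3, R4                  ; R3 is again safe as a longword source operand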
Ported VAX MACRO code sometimes requires access to Alpha native instructions to deal directly with a 64-bit quantity or to include an Alpha instruction that has no VAX equivalent. The compiler provides built-ins to allow you access to these instructions.
You use these built-ins in the same way that you use native VAX instructions, using any VAX operand mode. For example, EVAX_ADDQ 8(R0),(SP)+,R1 is legal. The only exception is that the first operand of any Alpha load/store built-in (EVAX_LD*, EVAX_ST*) must be a register.
It is recommended that you place any built-in within an ".IF DF,EVAX" conditional code block unless the module is Alpha specific. Built-ins can also appear in Alpha-specific portions of the macro definitions described in Appendix D.
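For example, a sketch along these lines (the registers and the operation are illustrative) keeps the built-ins out of a VAX assembly of the same source:

        .IF DF,EVAX                     ; assembled only when the EVAX symbol is defined (Alpha compilation)
        EVAX_LDQ  R1, (R0)              ; first operand of a load/store built-in must be a register
        EVAX_ADDQ R1, R2, R1            ; 64-bit add
        EVAX_STQ  R1, (R0)
        .ENDC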
The following byte and word built-ins are included in the MACRO-32 compiler, starting with OpenVMS Alpha Version 7.1:
Code that contains the byte and word built-ins runs best on an Alpha computer that implements these instructions in hardware. If you run such code on an OpenVMS Alpha system that implements them by software emulation, the following limitations exist:
%SYSTEM-I-EMULATED, an instruction not implemented on this processor was emulated
Furthermore, if the code with these built-ins executes on a system without either the byte and word software emulator or a processor that implements the byte and word instructions in hardware, it will incur a fatal exception, such as the following:
%SYSTEM-F-OPCDEC, opcode reserved to Digital fault at PC=0000000000020068, PS=0000001B
Table C-1 summarizes the Alpha built-ins supported by the compiler.
Memory references in the MACRO-32 compiler built-ins are always assumed to be quadword aligned except in EVAX_SEXTB, EVAX_SEXTW, EVAX_LDBU, EVAX_LDWU, EVAX_STB, EVAX_STW, EVAX_LDQU, and EVAX_STQU.
Built-in | Operands | Description |
---|---|---|
EVAX_SEXTB | <RQ,WB> | Sign extend byte |
EVAX_SEXTW | <RQ,WW> | Sign extend word |
EVAX_SEXTL | <RQ,WL> | Sign extend longword |
EVAX_LDBU | <WQ,AB> | Load zero-extended byte from memory |
EVAX_LDWU | <WQ,AW> | Load zero-extended word from memory |
EVAX_LDLL | <WL,AL> | Load longword locked |
EVAX_LDAQ | <WQ,AQ> | Load address of quadword |
EVAX_LDQ | <WQ,AQ> | Load quadword |
EVAX_LDQL | <WQ,AQ> | Load quadword locked |
EVAX_LDQU | <WQ,AQ> | Load unaligned quadword |
EVAX_STB | <RQ,AB> | Store byte from register to memory |
EVAX_STW | <RQ,AW> | Store word from register to memory |
EVAX_STLC | <ML,AL> | Store longword conditional |
EVAX_STQ | <RQ,AQ> | Store quadword |
EVAX_STQC | <MQ,AQ> | Store quadword conditional |
EVAX_STQU | <RQ,AQ> | Store unaligned quadword |
EVAX_ADDQ | <RQ,RQ,WQ> | Quadword add |
EVAX_SUBQ | <RQ,RQ,WQ> | Quadword subtract |
EVAX_MULQ | <RQ,RQ,WQ> | Quadword multiply |
EVAX_UMULH | <RQ,RQ,WQ> | Unsigned quadword multiply high |
EVAX_AND | <RQ,RQ,WQ> | Logical product |
EVAX_OR | <RQ,RQ,WQ> | Logical sum |
EVAX_XOR | <RQ,RQ,WQ> | Logical difference |
EVAX_BIC | <RQ,RQ,WQ> | Bit clear |
EVAX_ORNOT | <RQ,RQ,WQ> | Logical sum with complement |
EVAX_EQV | <RQ,RQ,WQ> | Logical equivalence |
EVAX_SLL | <RQ,RQ,WQ> | Shift left logical |
EVAX_SRL | <RQ,RQ,WQ> | Shift right logical |
EVAX_SRA | <RQ,RQ,WQ> | Shift right arithmetic |
EVAX_EXTBL | <RQ,RQ,WQ> | Extract byte low |
EVAX_EXTWL | <RQ,RQ,WQ> | Extract word low |
EVAX_EXTLL | <RQ,RQ,WQ> | Extract longword low |
EVAX_EXTQL | <RQ,RQ,WQ> | Extract quadword low |
EVAX_EXTBH | <RQ,RQ,WQ> | Extract byte high |
EVAX_EXTWH | <RQ,RQ,WQ> | Extract word high |
EVAX_EXTLH | <RQ,RQ,WQ> | Extract longword high |
EVAX_EXTQH | <RQ,RQ,WQ> | Extract quadword high |
EVAX_INSBL | <RQ,RQ,WQ> | Insert byte low |
EVAX_INSWL | <RQ,RQ,WQ> | Insert word low |
EVAX_INSLL | <RQ,RQ,WQ> | Insert longword low |
EVAX_INSQL | <RQ,RQ,WQ> | Insert quadword low |
EVAX_INSBH | <RQ,RQ,WQ> | Insert byte high |
EVAX_INSWH | <RQ,RQ,WQ> | Insert word high |
EVAX_INSLH | <RQ,RQ,WQ> | Insert longword high |
EVAX_INSQH | <RQ,RQ,WQ> | Insert quadword high |
EVAX_TRAPB | <> | Trap barrier |
EVAX_MB | <> | Memory barrier |
EVAX_RPCC | <WQ> | Read process cycle counter |
EVAX_CMPEQ | <RQ,RQ,WQ> | Integer signed compare, equal |
EVAX_CMPLT | <RQ,RQ,WQ> | Integer signed compare, less than |
EVAX_CMPLE | <RQ,RQ,WQ> | Integer signed compare, less equal |
EVAX_CMPULT | <RQ,RQ,WQ> | Integer unsigned compare, less than |
EVAX_CMPULE | <RQ,RQ,WQ> | Integer unsigned compare, less equal |
EVAX_BEQ | <RQ,AQ> | Branch equal |
EVAX_BLT | <RQ,AQ> | Branch less than |
EVAX_BNE | <RQ,AQ> | Branch not equal |
EVAX_CMOVEQ | <RQ,RQ,WQ> | Conditional move/equal |
EVAX_CMOVNE | <RQ,RQ,WQ> | Conditional move/not equal |
EVAX_CMOVLT | <RQ,RQ,WQ> | Conditional move/less than |
EVAX_CMOVLE | <RQ,RQ,WQ> | Conditional move/less or equal |
EVAX_CMOVGT | <RQ,RQ,WQ> | Conditional move/greater than |
EVAX_CMOVGE | <RQ,RQ,WQ> | Conditional move/greater or equal |
EVAX_CMOVLBC | <RQ,RQ,WQ> | Conditional move/low bit clear |
EVAX_CMOVLBS | <RQ,RQ,WQ> | Conditional move/low bit set |
EVAX_MF_FPCR | <WQ> | Move from floating-point control register |
EVAX_MT_FPCR | <WQ,RQ> | Move to floating-point control register |
EVAX_ZAP | <RQ,RQ,WQ> | Zero bytes |
EVAX_ZAPNOT | <RQ,RQ,WQ> | Zero bytes with NOT mask |
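As a hedged illustration of how several of these built-ins combine (QUAD_A, QUAD_B, and QUAD_SUM are hypothetical quadword-aligned storage locations), a full 64-bit add might be written as:

        EVAX_LDQ  R1, QUAD_A            ; load first 64-bit operand
        EVAX_LDQ  R2, QUAD_B            ; load second 64-bit operand
        EVAX_ADDQ R1, R2, R3            ; quadword add
        EVAX_STQ  R3, QUAD_SUM          ; store the 64-bit result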