Document revision date: 19 July 1999
Specifies the prolog version number of the output indexed file.
/PROLOG=n
n
Specifies the prolog number: 1, 2, or 3. If you specify 2 for n, the output file will be either a Prolog 1 or a Prolog 2 file.
If you specify 3, the Convert utility creates a Prolog 3 file for output. Prolog 3 files accept multiple keys (or alternate keys), all data types, and segmented keys. The only restriction to using a Prolog 3 file applies to files containing overlapping key segments for the primary key. In this case, you would have to use a Prolog 2 file.
If you do not specify the /PROLOG qualifier, the Convert utility uses the prolog version of the first input file. If the input file is not indexed, the utility uses the RMS default. To see what this default is on your system, enter the DCL command SHOW RMS_DEFAULT. The /PROLOG qualifier overrides the value given with the FDL attribute KEY PROLOG.
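The same choice can also be recorded in an FDL file with the KEY PROLOG attribute, which the /PROLOG qualifier overrides on the command line. A minimal fragment might look like this (a sketch; the attribute value shown is illustrative):

```
KEY 0
        PROLOG                  3
```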
$ CONVERT/PROLOG=3 INFILE_2 OUTFILE_3
This command loads the records from the Prolog 2 input file INFILE_2 into the Prolog 3 output file OUTFILE_3. Both the input and output file are indexed files.
Specifies whether each input record is to be read from the file a second time and compared to the record originally read.
/READ_CHECK
/NOREAD_CHECK (DEFAULT)
$ CONVERT/READ_CHECK Q3_SALES.DAT YTD_SALES.DAT
This command specifies that the records from the input file Q3_SALES.DAT are to be read and checked by the file processor, and then loaded into the output file YTD_SALES.DAT.
Specifies whether the input file is to be opened for sharing with other processes during the conversion.
/SHARE
/NOSHARE (DEFAULT)
You can use the /SHARE qualifier to generate a rough backup of a file that some applications keep open for sharing. However, another process can alter records while the Convert utility is running, so the consistency of the output file cannot be guaranteed.
$ CONVERT/SHARE SYSUAF.DAT BACKUP.DAT
This command indicates that the input file SYSUAF.DAT is open for sharing with other processes at the same time its records are being loaded into the output file BACKUP.DAT.
Increases the Convert utility's performance by reducing the number of required passes through the input data. This is accomplished by placing alternate key information into the CONVWORK file.
/SECONDARY=n
n
Specifies the number of alternate keys that will be loaded to the CONVWORK file with each pass through the input data. The default number of alternate keys written to the CONVWORK file is 1.
This qualifier is valid when you are fast-loading a file with more than one alternate key. While the primary key is being loaded, the first alternate key of the file is passed to the Sort utility through the record interface. In addition, the number of alternate keys specified by /SECONDARY is extracted and placed into the CONVWORK file for subsequent Sort and Load operations.
$ CONVERT/SECONDARY=2 Q3_SALES.DAT YTD_SALES.DAT
This command causes the Convert utility to load the records from the input file Q3_SALES.DAT into the output file YTD_SALES.DAT. The second and third alternate keys are placed into the CONVWORK file while the primary key is being loaded, reducing the number of passes through the input data needed to process them.
Specifies whether the input file is to be sorted before being loaded into an indexed file. The sort is done according to the primary key of the output file.
/SORT (DEFAULT)
/NOSORT
Two procedures can improve the sort performance:
- Increasing the size of the working set for the duration of the sort. The general rule is to use as large a working set as your working set quota allows. To set this value, use the DCL command SET WORKING_SET. To see what your authorized quota is, enter the SHOW WORKING_SET command.
- Placing the input file, the output file, and the temporary work files on separate disk devices. The default operation is to place the work files on your default device, which could cause the Convert utility to run out of disk space. To specify the location of the work files, enter a command in the following form:
ASSIGN device-name: SORTWORKn
The n represents the number of the work file, from 0 to 9. The colon is required after the device name. For example, the following two ASSIGN commands would place the work files on disks named TMPD and DEVD:
$ ASSIGN TMPD: SORTWORK0
$ ASSIGN DEVD: SORTWORK1
Using more than two work files is not particularly advantageous unless you have to use many smaller ones in order to fit on crowded disks. You can control the number of work files with the /WORK_FILES qualifier.
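The two tuning steps above can be sketched as a short DCL sequence (the working-set value and device names are hypothetical):

```
$ SET WORKING_SET/EXTENT=4096   ! enlarge the working set, up to your authorized quota
$ ASSIGN TMPD: SORTWORK0        ! place the temporary work files
$ ASSIGN DEVD: SORTWORK1        ! on separate disk devices
$ CONVERT/SORT IN_INX.DAT OUT_INX.DAT
```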
For more information about using the Sort utility with the Convert utility, see the /FAST_LOAD qualifier.
#1
$ CONVERT/SORT IN_INX.DAT OUT_INX.DAT
This command causes the records in the input indexed file IN_INX.DAT to be sorted according to the primary key values before being loaded into the output indexed file OUT_INX.DAT.
#2
$ CONVERT/NOSORT/FDL=REORG INX.DAT INX.DAT
This command reorganizes the file INX.DAT according to the attributes specified in the FDL file REORG.FDL. The primary keys are not sorted because INX.DAT is already ordered by the primary key, and the primary key definition did not change.
Determines whether statistics about the file conversion are to be displayed.
/STATISTICS [=keyword]
/NOSTATISTICS (DEFAULT)
BRIEF
Displays a summary of the file conversion at the completion of the operation.
FULL
Displays summary information at the completion of each key load, containing Sort and Load statistics for the key. A summary of the file conversion is also displayed at the completion of the operation.
The statistics produced by the Convert utility upon completion are as follows:
- Number of files processed
- Total records processed
- Total exception records
- Total valid records
- Elapsed time
- Buffered I/O count
- Direct I/O count
- Page faults
- CPU time
If you specify the /STATISTICS qualifier without specifying a keyword, CONVERT defaults to /STATISTICS=BRIEF.
$ CONVERT/STATISTICS=FULL Q3_SALES.DAT YTD_SALES.DAT
This command causes the Convert utility to load the records from the input file Q3_SALES.DAT into the output file YTD_SALES.DAT. Statistics about the Sort and Load of each key are displayed as each key is processed. At the completion of the file conversion, the set of summary statistics is displayed.
Specifies whether records that exceed the maximum record length for variable-length records, or records that exceed the specified record length for fixed-length records, are to be truncated.
/TRUNCATE
/NOTRUNCATE (DEFAULT)
If you specify /NOTRUNCATE and a long record is encountered, the record is not written to the output file. If you specify the /EXCEPTIONS_FILE qualifier, the entire record is written to the exceptions file.
#1
$ CONVERT/TRUNCATE INFILE.DAT OUTFILE.DAT
In response to this command, CONVERT truncates input file records to conform to the specifications of the output file.
#2
$ CONVERT/NOTRUNCATE/EXCEPTIONS_FILE=EXFILE INFILE OUTFILE
This command causes the Convert utility to write input file records that exceed the size specifications of the output file to the exceptions file.
Specifies the number of temporary work files to be used during the sort process.
/WORK_FILES=n
n
Specifies the number of work files you want. You can specify 0 or any value from 1 through 10. The default number of work files used during a sort is 2.
This qualifier is valid when you are fast-loading a file with multiple keys or when you specify the /SORT qualifier. For more information about sorting, see both the /SORT and the /FAST_LOAD qualifiers.
$ CONVERT/WORK_FILES=0 UPDATE.DAT MASTER.DAT
This command loads the records from the input file UPDATE.DAT into the output file MASTER.DAT without using any work files.
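Conversely, on a system with crowded disks you might spread the sort across several smaller work files on separate devices (device names are hypothetical):

```
$ ASSIGN DISK1: SORTWORK0
$ ASSIGN DISK2: SORTWORK1
$ ASSIGN DISK3: SORTWORK2
$ ASSIGN DISK4: SORTWORK3
$ CONVERT/WORK_FILES=4 UPDATE.DAT MASTER.DAT
```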
Specifies whether all writes are to be checked by comparing the new disk records with the original records in memory.
/WRITE_CHECK
/NOWRITE_CHECK (DEFAULT)
If you specify this qualifier, each new record on the disk is read and then compared with the original record in memory.
$ CONVERT/WRITE_CHECK UPDATE.DAT MASTER.DAT
In response to this command, the Convert utility loads the records from the input file UPDATE.DAT into the output file MASTER.DAT, and then compares the output records with the input for accuracy.
#1
$ CONVERT/NOCREATE/TRUNCATE/EXCEPTIONS_FILE=EXFILE VARFILE.DAT FIXFILE.DAT
This command causes the Convert utility to copy records from a file with variable-length records (VARFILE.DAT) to a file with fixed-length records (FIXFILE.DAT). Records longer than the fixed length are truncated, and short records are copied to the exceptions file EXFILE.EXC.
#2
$ CONVERT FILE.IDX FILE.IDX
This command creates the output file FILE.IDX with a version number one higher than that of the input file. The output file is a copy of the input file, but it is a clean copy without bucket splits, RRVs (record reference vectors), or pointers to deleted records. The performance of the output file is also improved.
Note that the Convert utility establishes new record file addresses (RFAs) during such reorganizations.
#3
$ CONVERT/FDL=TEST.FDL TRNTO::DBA1:[EXP]SUB.DAT OUT.DAT
This command creates a new sequential file OUT.DAT with stream record format at the local node, according to the specification in the previously created FDL file TEST.FDL. The input file SUB.DAT at remote node TRNTO is sequential with variable-length record format. The Convert utility copies records from SUB.DAT to OUT.DAT, changing the format of the records.
The contents of the FDL file TEST.FDL are as follows:
SYSTEM
        SOURCE                  VAX/VMS
FILE
        ORGANIZATION            SEQUENTIAL
RECORD
        BLOCK_SPAN              YES
        CARRIAGE_CONTROL        CARRIAGE_RETURN
        FORMAT                  STREAM
        SIZE                    0
#4
$ CONVERT MASTER.DAT DENVER::DB1:[PROD]MASTER.SAV
This command creates a new file called MASTER.SAV at remote node DENVER from the file MASTER.DAT at the local node. Because the /FDL qualifier is not used, the new file has the same file organization and record format as the original file. The action of this CONVERT command is similar to that performed by the COPY command. However, CONVERT transfers the file record by record and thus does not use block I/O.
#5
$ CONVERT/APPEND SALES.TMP KANSAS::[200,2]SALES.CMD
This command causes records from the file SALES.TMP at the local node to be added sequentially to the end of the output file SALES.CMD at remote node KANSAS. The file SALES.TMP is sequential with variable-length record format, and the file SALES.CMD is sequential with stream record format. When the Convert utility loads records from the input file to the output file, it changes the record format.
#6
$ CONVERT/FDL=FIXED/PAD=0/TRUNCATE INFILE.VAR OUTFILE.FIX
This command creates the fixed format file OUTFILE.FIX and then loads it with records from the variable input file INFILE.VAR. Before they are loaded, any short records from the input file are padded with an ASCII 0 character, and any long records are truncated.
#7
$ CONVERT/FDL=SYS$INPUT FORT.DAT STREAM.DAT
FILE
        ORGANIZATION            SEQUENTIAL
RECORD
        CARRIAGE_CONTROL        CARRIAGE_RETURN
        FORMAT                  STREAM
[Ctrl/Z]
This command converts the FORTRAN carriage control file FORT.DAT to a stream file that prints or types identically. The number of records may differ, and the FORTRAN carriage control information is removed from the records.
#8
$ CONVERT/FDL=SYS$INPUT FORT.DAT VAR.DAT
FILE
        ORGANIZATION            SEQUENTIAL
RECORD
        CARRIAGE_CONTROL        CARRIAGE_RETURN
        FORMAT                  VARIABLE
[Ctrl/Z]
This command converts the FORTRAN carriage control file FORT.DAT to a variable-length record file. The FORTRAN carriage control information is preserved as the first data byte, and the number of records in the output and input files is the same.