7.6 Creating Write Bitmaps With the DISMOUNT and MOUNT Commands
The DCL commands DISMOUNT and MOUNT are used to create write bitmaps. The MOUNT command is also used to start a minicopy operation using a write bitmap (see Section 7.7).
7.6.1 Creating a Write Bitmap With DISMOUNT
To create a write bitmap, you must specify the /POLICY=MINICOPY[=OPTIONAL] qualifier with the DISMOUNT command. If you specify /POLICY=MINICOPY=OPTIONAL, a write bitmap is created if there is sufficient memory. The disk is dismounted, regardless of whether a write bitmap is created.
The following example shows the use of the /POLICY=MINICOPY=OPTIONAL qualifier with the DISMOUNT command:
$ DISMOUNT $4$DUA1 /POLICY=MINICOPY=OPTIONAL
This command removes $4$DUA1 from the shadow set and starts logging writes to a write bitmap, if possible.
If you specify /POLICY=MINICOPY only (that is, if you omit =OPTIONAL)
and there is not enough memory on the node to create a write bitmap,
the dismount fails.
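For example, the following command (using the same device as in the previous example) removes $4$DUA1 from the shadow set only if a write bitmap can be created:
$ DISMOUNT $4$DUA1 /POLICY=MINICOPY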
7.6.2 Creating a Write Bitmap With MOUNT
You can also create a write bitmap with the MOUNT command when you mount the shadow set minus one of its members and specify the /POLICY=MINICOPY[=OPTIONAL] qualifier.
The write bitmap created with this command is used for a minicopy operation when you later mount one of the former members of the shadow set into the set.
If you specify the /POLICY=MINICOPY=OPTIONAL qualifier and the shadow
set is already mounted on another node in the cluster, the MOUNT
command succeeds but a write bitmap is not created.
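The following sketch shows a MOUNT command that creates a write bitmap; the device and volume names are hypothetical. The shadow set is mounted with only one of its two former members, so the write bitmap tracks writes until the missing member is returned:
$ MOUNT DSA4/SHADOW=$4$DUA2/POLICY=MINICOPY=OPTIONAL volume_label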
7.7 Starting a Minicopy Operation
If a write bitmap exists for a shadow set member, a minicopy operation starts by default when you specify the MOUNT command to return a shadow set member to the shadow set. This is equivalent to using the /POLICY=MINICOPY=OPTIONAL qualifier to the MOUNT command. If a write bitmap is not available, a full copy occurs.
An example of using the /POLICY=MINICOPY=OPTIONAL qualifier with the MOUNT command follows:
$ MOUNT DSA5/SHAD=$4$DUA0/POLICY=MINICOPY=OPTIONAL volume_label
If the shadow set (DSA5) is already mounted and a write bitmap exists for this shadow set member ($4$DUA0), the command adds the device $4$DUA0 to the shadow set with a minicopy operation. If a write bitmap is not available, this command adds $4$DUA0 with a full copy.
To ensure that a MOUNT command succeeds only if a minicopy can take
place, specify /POLICY=MINICOPY only (that is, omit =OPTIONAL). If a
write bitmap is not available, the mount will fail.
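For example, the following command (using the same devices as in the previous example) adds $4$DUA0 to the shadow set only if a minicopy operation can be performed:
$ MOUNT DSA5/SHAD=$4$DUA0/POLICY=MINICOPY volume_label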
7.8 Master and Local Write Bitmaps
In an OpenVMS Cluster system, a master write bitmap is created on the node that issues the DISMOUNT or MOUNT command that creates the write bitmap. When a master write bitmap is created, a local write bitmap is automatically created on all other nodes in the cluster on which the shadow set is mounted, provided the nodes have sufficient memory.
A master write bitmap contains a record of all the writes to the shadow set from every node in the cluster that has the shadow set mounted. A local write bitmap tracks all the writes that the local node issues to a shadow set.
Note that if a node with a local bitmap writes to the same logical block number (LBN) of a shadow set more than once, only the LBN of the first write is sent to the master write bitmap. The minicopy operation uses the LBN for the update, not the number of changes to the same LBN.
When there is not enough memory on a node to create a local write
bitmap, the node sends a message for each write directly to the master
write bitmap. This will degrade application write performance.
7.9 System Parameters for Managing Write Bitmap Messages and Shadow Set Limit
System parameters are available for managing the update traffic between a master write bitmap and its corresponding local write bitmaps in an OpenVMS Cluster system. Another new system parameter controls whether write bitmap system messages are sent to the operator console and, if so, the volume of messages. These system parameters are dynamic; that is, they can be changed on a running system. They are shown in Table 3-3.
In addition, a new volume shadowing system parameter, SHADOW_MAX_UNIT, is provided for specifying the maximum number of shadow sets that can exist on a node. This parameter is described in Table 3-1.
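The following is a minimal sketch of setting SHADOW_MAX_UNIT by appending it to MODPARAMS.DAT and running AUTOGEN; the value 150 is purely illustrative, so consult Table 3-1 for the supported range and default before changing it:
$ OPEN/APPEND MODP SYS$SYSTEM:MODPARAMS.DAT
$ WRITE MODP "SHADOW_MAX_UNIT = 150    ! illustrative value only"
$ CLOSE MODP
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK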
The system parameters for managing write bitmap message traffic control whether the messages are buffered and then packaged in a single SCS message to update the master write bitmap or whether each one is sent immediately. The system parameters are used to set the upper and lower thresholds of message traffic and a time interval during which the traffic is measured.
The writes issued by each remote node are, by default, sent one by one in individual SCS messages to the node with the master write bitmap. This is known as single-message mode.
If the writes sent by a remote node reach an upper threshold of
messages during a specified interval, single-message mode switches to
buffered-message mode. In buffered-message mode, the
messages (up to nine) are collected for a specified interval and then
sent in one SCS message. During periods of increased message traffic,
grouping multiple messages to send in one SCS message to the master
write bitmap is generally more efficient than sending each message
separately.
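Because the write bitmap message parameters are dynamic, they can be changed with SYSGEN on a running system. The following is a minimal sketch; the parameter name WBM_MSG_UPPER is an assumption here, so check Table 3-3 for the actual parameter names, units, and defaults. USE ACTIVE selects the in-memory parameter set, and WRITE ACTIVE applies the change to the running system:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SET WBM_MSG_UPPER 100
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT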
7.10 Managing Write Bitmaps With DCL Commands
The SHOW DEVICE, SHOW CLUSTER, and DELETE commands have been extended
for managing write bitmaps.
7.10.1 Determining Write Bitmap Support and Activity
You can find out whether a write bitmap exists for a shadow set by using the DCL command SHOW DEVICE/FULL device-name. If a shadow set supports write bitmaps, "device supports bitmaps" is displayed, along with either "bitmaps active" or "no bitmaps active". If the device does not support write bitmaps, no message pertaining to write bitmaps is displayed.
The following command example shows that no write bitmap is active:
$ SHOW DEVICE/FULL DSA0

Disk DSA0:, device type RAM Disk, is online, mounted, file-oriented device,
    shareable, available to cluster, error logging is enabled, device supports
    bitmaps (no bitmaps active).

    Error count                    0    Operations completed                 47
    Owner process                 ""    Owner UIC                      [SYSTEM]
    Owner process ID        00000000    Dev Prot            S:RWPL,O:RWPL,G:R,W
    Reference count                2    Default buffer size                 512
    Total blocks                1000    Sectors per track                    64
    Total cylinders                1    Tracks per cylinder                  32

    Volume label              "TST0"    Relative volume number                0
    Cluster size                   1    Transaction count                     1
    Free blocks                  969    Maximum files allowed               250
    Extend quantity                5    Mount count                           1
    Mount status              System    Cache name      "_$252$DUA721:XQPCACHE"
    Extent cache size             64    Maximum blocks in extent cache       96
    File ID cache size            64    Blocks currently in extent cache      0
    Quota cache size               0    Maximum buffers in FCP cache        404
    Volume owner UIC        [SYSTEM]    Vol Prot    S:RWCD,O:RWCD,G:RWCD,W:RWCD

  Volume Status:  ODS-2, subject to mount verification, file high-water
      marking, write-back caching enabled.

Disk $252$MDA0:, device type RAM Disk, is online, member of shadow set DSA0:.

    Error count                    0    Shadow member operation count       128
    Allocation class             252

Disk $252$MDA1:, device type RAM Disk, is online, member of shadow set DSA0:.

    Error count                    0    Shadow member operation count       157
    Allocation class             252
You can find out the ID of each write bitmap on a node with the DCL command SHOW DEVICE/BITMAP device-name. The /BITMAP qualifier cannot be combined with other SHOW DEVICE qualifiers except /FULL. The SHOW DEVICE/BITMAP display can be brief or full; brief is the default.
If no bitmap is active, no bitmap ID is displayed; instead, the phrase "no bitmaps active" is displayed.
The following example shows a SHOW DEVICE/BITMAP display:
$ SHOW DEVICE/BITMAP DSA1

Device      BitMap       Size      Percent of
 Name         ID        (Bytes)    Full Copy
DSA1:       00010001       652        11%
The following example shows a SHOW DEVICE/BITMAP/FULL display:
$ SHOW DEVICE DSA12/BITMAP/FULL

Device   Bitmap     Size    Percent of  Active  Creation                Master  Cluster  Local  Delete   Bitmap
 Name      ID      (bytes)  Full Copy           Date/Time               Node    Size     Set    Pending  Name
DSA12:  00010001     652       11%       Yes    5-MAY-2000 13:30:25:30  300F2    127      2%     No      SHAD$TEST
You can specify bitmap information in the SHOW CLUSTER display by issuing the ADD BITMAPS command, as shown in the following example:
$ SHOW CLUSTER/CONTINUOUS
Command > ADD BITMAPS
Command > ADD CSID

View of Cluster from system ID 57348  node: WPCM1         14-FEB-2000 13:38:53

            SYSTEMS               MEMBERS
  NODE      SOFTWARE     CSID     STATUS     BITMAPS
  CSGF1     VMS X6TF     300F2    MEMBER     MINICOPY
  HSD30Y    HSD YA01     300E6
  HS1CP2    HSD V31D     300F4
  CSGF2     VMS X6TF     300D0    MEMBER     MINICOPY
In this example, MINICOPY means that nodes CSGF1 and CSGF2 are capable
of supporting minicopy operations. If a cluster node does not support
minicopy, the term UNSUPPORTED replaces MINICOPY in the display, and
the minicopy function is disabled in the cluster.
7.10.4 Deleting Write Bitmaps
After a minicopy operation is completed, the corresponding write bitmap is automatically deleted.
There may also be times when you want to delete one or more write bitmaps yourself, for example, to release the memory consumed by a bitmap that is no longer needed.
You can delete write bitmaps with the DCL command DELETE and the /BITMAP qualifier, specifying the ID of the bitmap you want to delete. For example:
$ DELETE/BITMAP/LOG 00010001
%DELETE-I-DELETED, 00010001 deleted
7.11 Performance Implications of Write Bitmaps
Two aspects of write bitmaps affect performance: the message traffic that occurs between local and master write bitmaps, and the size requirements of each bitmap.
The message traffic can be adjusted by changing the message mode; single-message mode is the default. Buffered-message mode can improve overall system performance, but recording each process's write in the master write bitmap usually takes longer. These modes are described in detail in Section 7.9.
Additional memory is required to support write bitmaps, as described in Section 1.3.1. Depending on your system's current memory usage, you may need to add memory.
7.12 Guidelines for Using a Shadow Set Member for Backup
Volume Shadowing for OpenVMS can be used as an online backup mechanism. With proper application design and proper operating procedures, shadow set members removed from mounted shadow sets constitute a valid backup.
To obtain a copy of a file system or application database for backup purposes using Volume Shadowing for OpenVMS, the standard recommendation has been to determine that the virtual unit is not in a merge state, to dismount the virtual unit, then to remount the virtual unit minus one member. Prior to OpenVMS Version 7.3, there was a documented general restriction on dismounting an individual shadow set member for backup purposes from a virtual unit that is mounted and in active use. This restriction relates to data consistency of the file system, application data, or database located on that virtual unit, at the time the member is removed.
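A sketch of that traditional sequence follows; the device names, volume label, and member list are hypothetical and must match your configuration:
$ SHOW DEVICE/FULL DSA4                       ! confirm the virtual unit is not in a merge state
$ DISMOUNT DSA4                               ! dismount the entire virtual unit
$ MOUNT DSA4/SHADOW=($4$DUA1) volume_label    ! remount the virtual unit minus one member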
However, Compaq recognizes that this restriction is unacceptable when
true 24x7 application availability is a requirement, and that it is
unnecessary if appropriate data-consistency measures can be ensured
through a combination of application software and system management
practice.
7.12.1 Removing a Shadow Set Member for Backup
With currently supported OpenVMS releases, DISMOUNT can be used to remove members from shadow sets for the purpose of backing up data, provided that the following requirements are met:
Follow these steps to remove the member:
7.12.2 Data Consistency
Removal of a shadow set member results in what is called a crash-consistent copy. That is, the copy of the data on the removed member is of the same level of consistency as what would result if the system had failed at that instant. The ability to recover from a crash-consistent copy is ensured by a combination of application design, system and database design, and operational procedures. The procedures to ensure recoverability depend on application and system design and will be different for each site.
The conditions that might exist at the time of a system failure range
from no data having been written, to writes that occurred but were not
yet written to disk, to all data having been written. The following
sections describe components and actions of the operating system that
may be involved if a failure occurs and there are outstanding writes,
that is, writes that occurred but were not written to disk. You must
consider these issues when establishing procedures to ensure data
consistency in your environment.
7.12.3 Application Activity
To achieve data consistency, application activity should be suspended
and no operations should be in progress. Operations in progress can
result in inconsistencies in the backed-up application data. While many
interactive applications tend to become quiet if there is no user
activity, the reliable suspension of application activity requires
cooperation from the application itself. Journaling and transaction
techniques can be used to address in-progress inconsistencies but must
be used with extreme care. In addition to specific applications,
miscellaneous interactive use of the system that might affect the data
to be backed up must also be suspended.
7.12.4 RMS Considerations
Applications that use RMS file access must be aware of the following
issues.
7.12.4.1 Caching and Deferred Writes
RMS can, at the application's option, defer disk writes to some time after it has reported completion of an update to the application. The data on disk will be updated in response to other demands on the RMS buffer cache and to references to the same or nearby data by cooperating processes in a shared file environment.
Writes to sequential files are always buffered in memory and are not
written to disk until the buffer is full.
7.12.4.2 End of File
The end-of-file pointer of a sequential file is normally updated only
when the file is closed.
7.12.4.3 Index Updates
The update of a single record in an indexed file may result in multiple
index updates. Any of these updates can be cached at the application's
option. Splitting a shadow set with an incomplete index update will
result in inconsistencies between the indexes and data records. If
deferred writes are disabled, RMS orders writes so that an incomplete
index update may result in a missing update but never in a corrupt
index. However, if deferred writes are enabled, the order in which
index updates are written is unpredictable.
7.12.4.4 Run-Time Libraries
The I/O libraries of various languages use a variety of RMS buffering
and deferred write options. Some languages allow application control
over the RMS options.
7.12.4.5 $FLUSH
Applications can use the $FLUSH service to guarantee data consistency.
The $FLUSH service guarantees that all updates completed by the
application (including end of file for sequential files) have been
recorded on the disk.
7.12.4.6 Journaling and Transactions
RMS provides optional roll-forward, roll-back, and recovery unit
journals, and supports transaction recovery using the OpenVMS
transaction services. These features can be used to back out
in-progress updates from a removed shadow set member. Using such
techniques requires careful data and application design. It is critical
that virtual units containing journals be backed up along with the base
data files.
7.12.5 Mapped Files
OpenVMS allows access to files as backing store for virtual memory
through the process and global section services. In this mode of
access, the virtual address space of the process acts as a cache on the
file data. OpenVMS provides the $UPDSEC service to force updates to the
backing file.
7.12.6 Database Systems
Database management systems, such as those from Oracle, are well suited to backup by splitting shadow sets, since they have full journaling and transaction recovery built in. Before dismounting shadow set members, an Oracle database should be put into "backup mode" using SQL commands of the following form:
ALTER TABLESPACE tablespace-name BEGIN BACKUP;
This command establishes a recovery point for each component file of the tablespace. The recovery point ensures that the backup copy of the database can subsequently be recovered to a consistent state. Backup mode is terminated with commands of the following form:
ALTER TABLESPACE tablespace-name END BACKUP;
It is critical to back up the database logs and control files as well
as the database data files.
7.12.7 Base File System
The base OpenVMS file system caches free space. However, all file
metadata operations (such as create and delete) are made with a
"careful write-through" strategy so that the results are
stable on disk before completion is reported to the application. Some
free space may be lost, which can be recovered with an ordinary disk
rebuild. If file operations are in progress at the instant the shadow
member is dismounted, minor inconsistencies may result that can be
repaired with ANALYZE/DISK. The careful write ordering ensures that any
inconsistencies do not jeopardize file integrity before the disk is
repaired.
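For example, the following sketch (the device name and volume label are hypothetical) mounts the removed copy outside the shadow set, recovers the cached free space, and repairs any minor inconsistencies; review the /OVERRIDE=SHADOW_MEMBERSHIP step and the repair commands against your own backup procedures:
$ MOUNT/OVERRIDE=SHADOW_MEMBERSHIP $4$DUA1: volume_label
$ SET VOLUME/REBUILD $4$DUA1:               ! recover cached free space
$ ANALYZE/DISK_STRUCTURE/REPAIR $4$DUA1:    ! repair minor file system inconsistencies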
7.12.8 $QIO File Access and VIOC
OpenVMS maintains a virtual I/O cache (VIOC) to cache file data. However, this cache is write through. OpenVMS Version 7.3 introduces extended file cache (XFC), which is also write through.
File writes using the $QIO service are completed to disk before
completion is reported to the caller.
7.12.9 Multiple Shadow Sets
Multiple shadow sets present the biggest challenge to splitting shadow
sets for backup. While the removal of a single shadow set member is
instantaneous, there is no way to remove members of multiple shadow
sets simultaneously. If the data that must be backed up consistently
spans multiple shadow sets, application activity must be suspended
while all shadow set members are being dismounted. Otherwise, the data
will not be crash consistent across the multiple volumes. Command
procedures or other automated techniques are recommended to speed the
dismount of related shadow sets. If multiple shadow sets contain
portions of an Oracle database, putting the database into backup mode
ensures recoverability of the database.
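A minimal command procedure sketch of the kind recommended above follows; the shadow set members are hypothetical, and application activity should already be suspended, or the database placed in backup mode, before it runs:
$ ! REMOVE_BACKUP_MEMBERS.COM -- remove one member from each related shadow set
$ DISMOUNT $4$DUA10: /POLICY=MINICOPY=OPTIONAL   ! member of DSA10
$ DISMOUNT $4$DUA20: /POLICY=MINICOPY=OPTIONAL   ! member of DSA20
$ DISMOUNT $4$DUA30: /POLICY=MINICOPY=OPTIONAL   ! member of DSA30
$ EXIT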
7.12.10 Host-Based RAID
The OpenVMS software RAID driver presents a special case for multiple
shadow sets. A software RAID set may be constructed of multiple shadow
sets, each consisting of multiple members. With the management
functions of the software RAID driver, it is possible to dismount one
member of each of the constituent shadow sets in an atomic operation.
Management of shadow sets used under the RAID software must always be
done using the RAID management commands to ensure consistency.
7.12.11 OpenVMS Cluster Operation
All management operations used to attain data consistency must be performed for all members of an OpenVMS Cluster system on which the affected applications are running.