The SHOW DEVICE, SHOW CLUSTER, and DELETE commands have been extended
for managing write bitmaps.
7.10.1 Determining Write Bitmap Support and Activity
You can find out whether a write bitmap exists for a shadow set by using the DCL command SHOW DEVICE/FULL device-name. If a shadow set supports write bitmaps, device supports bitmaps is displayed along with either bitmaps active or no bitmaps active. If the device does not support write bitmaps, no message pertaining to write bitmaps is displayed.
The following command example shows that no write bitmap is active:
$ SHOW DEVICE/FULL DSA0

Disk DSA0:, device type RAM Disk, is online, mounted, file-oriented device,
    shareable, available to cluster, error logging is enabled, device supports
    bitmaps (no bitmaps active).

    Error count                    0    Operations completed                 47
    Owner process                 ""    Owner UIC                      [SYSTEM]
    Owner process ID        00000000    Dev Prot            S:RWPL,O:RWPL,G:R,W
    Reference count                2    Default buffer size                 512
    Total blocks                1000    Sectors per track                    64
    Total cylinders                1    Tracks per cylinder                  32

    Volume label              "TST0"    Relative volume number                0
    Cluster size                   1    Transaction count                     1
    Free blocks                  969    Maximum files allowed               250
    Extend quantity                5    Mount count                           1
    Mount status              System    Cache name      "_$252$DUA721:XQPCACHE"
    Extent cache size             64    Maximum blocks in extent cache       96
    File ID cache size            64    Blocks currently in extent cache      0
    Quota cache size               0    Maximum buffers in FCP cache        404
    Volume owner UIC        [SYSTEM]    Vol Prot    S:RWCD,O:RWCD,G:RWCD,W:RWCD

  Volume Status:  ODS-2, subject to mount verification, file high-water
      marking, write-back caching enabled.

Disk $252$MDA0:, device type RAM Disk, is online, member of shadow set DSA0:.

    Error count                    0    Shadow member operation count       128
    Allocation class             252

Disk $252$MDA1:, device type RAM Disk, is online, member of shadow set DSA0:.

    Error count                    0    Shadow member operation count       157
    Allocation class             252
You can find out the ID of each write bitmap on a node with the DCL command SHOW DEVICE/BITMAP device-name. The /BITMAP qualifier cannot be combined with other SHOW DEVICE qualifiers except /FULL. The SHOW DEVICE/BITMAP display can be brief or full; brief is the default.
If no bitmap is active, no bitmap ID is displayed; instead, the phrase no bitmaps active is displayed.
The following example shows a SHOW DEVICE/BITMAP display:
$ SHOW DEVICE/BITMAP DSA1

Device      BitMap       Size     Percent of
 Name         ID        (Bytes)   Full Copy
DSA1:       00010001       652       11%
The following example shows a SHOW DEVICE/BITMAP/FULL display:
$ SHOW DEVICE DSA12/BITMAP/FULL

Device  Bitmap     Size  Percent of Active Creation                Master Cluster Local Delete  Bitmap
 Name     ID     (bytes)  Full Copy        Date/Time               Node   Size    Set   Pending Name
DSA12:  00010001     652     11%     Yes   5-MAY-2000 13:30:25:30  300F2   127     2%    No     SHAD$TEST
You can specify bitmap information in the SHOW CLUSTER display by issuing the ADD BITMAPS command, as shown in the following example:
$ SHOW CLUSTER/CONTINUOUS
Command > ADD BITMAPS
Command > ADD CSID

View of Cluster from system ID 57348  node: WPCM1        14-FEB-2000 13:38:53

 SYSTEMS                        MEMBERS
 NODE     SOFTWARE     CSID     STATUS    BITMAPS
 CSGF1    VMS X6TF     300F2    MEMBER    MINICOPY
 HSD30Y   HSD YA01     300E6
 HS1CP2   HSD V31D     300F4
 CSGF2    VMS X6TF     300D0    MEMBER    MINICOPY
In this example, MINICOPY means that nodes CSGF1 and CSGF2 are capable
of supporting minicopy operations. If a cluster node does not support
minicopy, the term UNSUPPORTED replaces MINICOPY in the display, and
the minicopy function is disabled in the cluster.
7.10.4 Deleting Write Bitmaps
After a minicopy operation is completed, the corresponding write bitmap is automatically deleted.
There may be times when you want to delete one or more write bitmaps.
You can delete write bitmaps with the DCL command DELETE with the /BITMAP qualifier. You use the /BITMAP qualifier to specify the ID of the bitmap you want to delete. For example:
$ DELETE/BITMAP/LOG 00010001
%DELETE-I-DELETED, 00010001 deleted
7.11 Performance Implications of Write Bitmaps
There are two aspects of write bitmaps that affect performance: the message traffic that occurs between local and master write bitmaps, and the size requirements of each bitmap.
The message traffic can be adjusted by changing the message mode. Single message mode is the default. Buffered message mode can improve overall system performance, but recording each process's write in the master write bitmap usually takes longer. These modes are described in detail in Section 7.9.
Additional memory is required to support write bitmaps, as described in Section 1.3.1. Depending on your system's memory usage, you may need to add memory to support them.
7.12 Guidelines for Using a Shadow Set Member for Backup
Volume Shadowing for OpenVMS can be used as an online backup mechanism. With proper application design and proper operating procedures, shadow set members removed from mounted shadow sets constitute a valid backup.
To obtain a copy of a file system or application database for backup purposes using Volume Shadowing for OpenVMS, the standard recommendation has been to determine that the virtual unit is not in a merge state, to dismount the virtual unit, then to remount the virtual unit minus one member. Prior to OpenVMS Version 7.3, there was a documented general restriction on dismounting an individual shadow set member for backup purposes from a virtual unit that is mounted and in active use. This restriction relates to data consistency of the file system, application data, or database located on that virtual unit, at the time the member is removed.
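As an illustration of that traditional procedure, here is a minimal sketch. The virtual unit DSA4:, the member devices $252$DUA5: and $252$DUA6:, and the volume label DATA1 are hypothetical names; the shadow set is assumed to have two members, with $252$DUA6: becoming the backup copy:

$ DISMOUNT/CLUSTER DSA4:                        ! dismount the entire virtual unit
$ MOUNT/SYSTEM DSA4:/SHADOW=($252$DUA5:) DATA1  ! remount it minus member $252$DUA6:

The removed member, $252$DUA6:, then holds a static copy of the data that can be backed up.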
However, Compaq recognizes that this restriction is unacceptable when
true 24x7 application availability is a requirement, and that it is
unnecessary if appropriate data-consistency measures can be ensured
through a combination of application software and system management
practice.
7.12.1 Removing a Shadow Set Member for Backup
With currently supported OpenVMS releases, DISMOUNT can be used to remove members from shadow sets for the purpose of backing up data, provided that the following requirements are met:
Follow these steps to remove the member:
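As an illustration of the removal itself, here is a minimal sketch; the member device $252$DUA716: and its shadow set DSA716: are hypothetical names. The /POLICY=MINICOPY qualifier of DISMOUNT removes the member and creates a write bitmap, so that the member can later be returned to the shadow set with a minicopy rather than a full copy:

$ DISMOUNT/POLICY=MINICOPY $252$DUA716:   ! remove member; start bitmap for later minicopy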
Removal of a shadow set member results in what is called a crash-consistent copy. That is, the copy of the data on the removed member is of the same level of consistency as what would result if the system had failed at that instant. The ability to recover from a crash-consistent copy is ensured by a combination of application design, system and database design, and operational procedures. The procedures to ensure recoverability depend on application and system design and will be different for each site.
7.12.2 Data Consistency
The conditions that might exist at the time of a system failure range
from no data having been written, to writes that occurred but were not
yet written to disk, to all data having been written. The following
sections describe components and actions of the operating system that
may be involved if a failure occurs and there are outstanding writes,
that is, writes that occurred but were not yet written to disk. You must
consider these issues when establishing procedures to ensure data
consistency in your environment.
7.12.3 Application Activity
To achieve data consistency, application activity should be suspended
and no operations should be in progress. Operations in progress can
result in inconsistencies in the backed-up application data. While many
interactive applications tend to become quiet if there is no user
activity, the reliable suspension of application activity requires
cooperation from the application itself. Journaling and transaction
techniques can be used to address in-progress inconsistencies but must
be used with extreme care. In addition to specific applications,
miscellaneous interactive use of the system that might affect the data
to be backed up must also be suspended.
7.12.4 RMS Considerations
Applications that use RMS file access must be aware of the following
issues.
7.12.4.1 Caching and Deferred Writes
RMS can, at the application's option, defer disk writes to some time after it has reported completion of an update to the application. The data on disk will be updated in response to other demands on the RMS buffer cache and to references to the same or nearby data by cooperating processes in a shared file environment.
Writes to sequential files are always buffered in memory and are not
written to disk until the buffer is full.
7.12.4.2 End of File
The end-of-file pointer of a sequential file is normally updated only
when the file is closed.
7.12.4.3 Index Updates
The update of a single record in an indexed file may result in multiple
index updates. Any of these updates can be cached at the application's
option. Splitting a shadow set with an incomplete index update will
result in inconsistencies between the indexes and data records. If
deferred writes are disabled, RMS orders writes so that an incomplete
index update may result in a missing update but never in a corrupt
index. However, if deferred writes are enabled, the order in which
index updates are written is unpredictable.
7.12.4.4 Run-Time Libraries
The I/O libraries of various languages use a variety of RMS buffering
and deferred write options. Some languages allow application control
over the RMS options.
7.12.4.5 $FLUSH
Applications can use the $FLUSH service to guarantee data consistency.
The $FLUSH service guarantees that all updates completed by the
application (including end of file for sequential files) have been
recorded on the disk.
7.12.4.6 Journaling and Transactions
RMS provides optional roll-forward, roll-back, and recovery unit
journals, and supports transaction recovery using the OpenVMS
transaction services. These features can be used to back out
in-progress updates from a removed shadow set member. Using such
techniques requires careful data and application design. It is critical
that virtual units containing journals be backed up along with the base
data files.
7.12.5 Mapped Files
OpenVMS allows access to files as backing store for virtual memory
through the process and global section services. In this mode of
access, the virtual address space of the process acts as a cache on the
file data. OpenVMS provides the $UPDSEC service to force updates to the
backing file.
7.12.6 Database Systems
Database management systems, such as those from Oracle, are well suited to backup by splitting shadow sets, since they have full journaling and transaction recovery built in. Before dismounting shadow set members, an Oracle database should be put into "backup mode" using SQL commands of the following form:
ALTER TABLESPACE tablespace-name BEGIN BACKUP;
This command establishes a recovery point for each component file of the tablespace. The recovery point ensures that the backup copy of the database can subsequently be recovered to a consistent state. Backup mode is terminated with commands of the following form:
ALTER TABLESPACE tablespace-name END BACKUP;
It is critical to back up the database logs and control files as well
as the database data files.
7.12.7 Base File System
The base OpenVMS file system caches free space. However, all file
metadata operations (such as create and delete) are made with a
"careful write-through" strategy so that the results are
stable on disk before completion is reported to the application. Some
free space may be lost, which can be recovered with an ordinary disk
rebuild. If file operations are in progress at the instant the shadow
member is dismounted, minor inconsistencies may result that can be
repaired with ANALYZE/DISK. The careful write ordering ensures that any
inconsistencies do not jeopardize file integrity before the disk is
repaired.
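As an illustration, here is a minimal sketch of mounting and repairing the backup copy; the member device $252$DUA6: and the volume label DATA1 are hypothetical names:

$ MOUNT/OVERRIDE=SHADOW_MEMBERSHIP $252$DUA6: DATA1  ! mount the removed member directly
$ ANALYZE/DISK_STRUCTURE/REPAIR $252$DUA6:           ! repair minor file system inconsistencies
$ SET VOLUME/REBUILD $252$DUA6:                      ! recover cached free space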
7.12.8 $QIO File Access and VIOC
OpenVMS maintains a virtual I/O cache (VIOC) to cache file data. However, this cache is write through. OpenVMS Version 7.3 introduces extended file cache (XFC), which is also write through.
File writes using the $QIO service are completed to disk before
completion is reported to the caller.
7.12.9 Multiple Shadow Sets
Multiple shadow sets present the biggest challenge to splitting shadow
sets for backup. While the removal of a single shadow set member is
instantaneous, there is no way to remove members of multiple shadow
sets simultaneously. If the data that must be backed up consistently
spans multiple shadow sets, application activity must be suspended
while all shadow set members are being dismounted. Otherwise, the data
will not be crash consistent across the multiple volumes. Command
procedures or other automated techniques are recommended to speed the
dismount of related shadow sets. If multiple shadow sets contain
portions of an Oracle database, putting the database into backup mode
ensures recoverability of the database.
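A minimal sketch of such a command procedure follows; the shadow sets DSA1: and DSA2: and their members $252$DUA10: and $252$DUA20: are hypothetical names, and any required suspension of application activity is assumed to happen before the procedure runs:

$! SPLIT_SETS.COM -- dismount one member of each related shadow set
$! in quick succession, minimizing the window of inconsistency
$! between the removed members.
$ DISMOUNT/POLICY=MINICOPY $252$DUA10:   ! member of DSA1:
$ DISMOUNT/POLICY=MINICOPY $252$DUA20:   ! member of DSA2:
$ EXIT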
7.12.10 Host-Based RAID
The OpenVMS software RAID driver presents a special case for multiple
shadow sets. A software RAID set may be constructed of multiple shadow
sets, each consisting of multiple members. With the management
functions of the software RAID driver, it is possible to dismount one
member of each of the constituent shadow sets in an atomic operation.
Management of shadow sets used under the RAID software must always be
done using the RAID management commands to ensure consistency.
7.12.11 OpenVMS Cluster Operation
All management operations used to attain data consistency must be
performed for all members of an OpenVMS Cluster system on which the
affected applications are running.
7.12.12 Testing
Testing alone cannot guarantee the correctness of a backup procedure.
However, testing is a critical component of designing any backup and
recovery process.
7.12.13 Restoring Data
Too often, organizations concentrate on the backup process with little
thought to how their data will be restored. Remember that the ultimate
goal of any backup strategy is to recover data in the event of a
disaster. Restore and recovery procedures must be designed and tested
as carefully as the backup procedures.
7.12.14 Revalidation of Data Consistency Methods
The discussion in this section is based on features and behavior of OpenVMS Version 7.3 and applies to prior versions as well. Future versions of OpenVMS may have additional features or different behavior that affect the procedures necessary for data consistency. Sites that upgrade to future versions of OpenVMS must reevaluate their procedures and be prepared to make changes or to employ nonstandard settings in OpenVMS to ensure that their backups remain consistent.
This chapter explains how to accomplish system maintenance tasks on a
standalone system or an OpenVMS Cluster system that uses volume
shadowing. Refer to Chapter 3 for information about setting up and
booting a system to use volume shadowing.
8.1 Upgrading the Operating System on a System Disk Shadow Set
It is important to upgrade the operating system at a time when your system can afford to have its shadowing support disabled. This is because you cannot upgrade to new versions of the OpenVMS operating system on a shadowed system disk. If you attempt to upgrade a system disk while it is an active member of a shadow set, the upgrade procedure will fail.
Procedure for Upgrading Your Operating System
This procedure is divided into four parts. Part 1 describes how to prepare a shadowed system disk for the upgrade. Part 2 describes how to perform the upgrade. Part 3 describes how to enable volume shadowing on the upgraded system. Part 4 shows how to boot other nodes in an OpenVMS Cluster system with and without volume shadowing.
Part 1: Preparing a Shadowed System Disk
If you need to change the volume label of a disk that is mounted across the cluster, be sure you change the label on all nodes in the OpenVMS Cluster system. For example, you could propagate the volume label change to all nodes in the cluster with one SYSMAN utility command, after you define the environment as the cluster:
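A minimal sketch of such a command sequence follows; the label NEWLABEL and the virtual unit DSA0: are placeholder names:

$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> DO SET VOLUME/LABEL=NEWLABEL DSA0:
SYSMAN> EXIT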
You cannot perform an upgrade on a shadowed system disk. If your system is set up to boot from a shadow set, you must disable shadowing of the system disk before performing the upgrade. This requires changing SYSGEN parameter values interactively, using the SYSGEN utility:
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE upgrade-disk:[SYSn.SYSEXE]ALPHAVMSSYS.PAR   ! on Alpha systems
SYSGEN> USE upgrade-disk:[SYSn.SYSEXE]VAXVMSSYS.PAR     ! on VAX systems
SYSGEN> SET SHADOW_SYS_DISK 0
SYSGEN> WRITE upgrade-disk:[SYSn.SYSEXE]ALPHAVMSSYS.PAR ! on Alpha systems
SYSGEN> WRITE upgrade-disk:[SYSn.SYSEXE]VAXVMSSYS.PAR   ! on VAX systems
Even if you plan to use the upgraded system disk to upgrade the operating system on other OpenVMS Cluster nodes, you should complete the upgrade on one node before altering parameters for other nodes. Proceed to Part 2.
Part 2: Performing the Upgrade
Part 3: Enabling Volume Shadowing on the Upgraded System
Once the upgrade is complete and the upgraded node has finished running AUTOGEN, you can enable shadowing for the upgraded node using the following steps.
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET SHADOWING 2
SYSGEN> SET SHADOW_SYS_DISK 1
SYSGEN> SET SHADOW_SYS_UNIT 54
SYSGEN> WRITE CURRENT
Part 4: Booting Other Nodes in the OpenVMS Cluster from the Upgraded Disk
If other nodes boot from the upgraded disk, the OpenVMS upgrade procedure automatically upgrades and runs AUTOGEN on each node when it is booted. The procedure for booting other nodes from the upgraded disk differs based on whether the upgraded disk has been made a shadow set.