Archive Backup System
for OpenVMS
Possession, use, or copying of the software described in this documentation is authorized only pursuant to a valid written license from COMPAQ, an authorized sublicenser, or the identified licenser.
While COMPAQ believes the information included in this publication is correct as of the date of publication, it is subject to change without notice.
Compaq Computer Corporation makes no representations that the interconnection of its products in the manner described in this document will not infringe existing or future patent rights, nor do the descriptions contained in this document imply the granting of licenses to make, use, or sell equipment or software in accordance with the description.
Copyright 2000 Compaq Computer Corporation.
All rights reserved.
Printed in the United States of America.
DEC, DIGITAL, MSCP, OpenVMS, StorageWorks, TK, VAX, VMScluster, and the DIGITAL logos are registered in the United States Patent and Trademark Office.
Compaq and the Compaq Logo are registered in the United States Patent and Trademark Office.
DECconnect, HSZ, StorageWorks, VMS, and OpenVMS are trademarks of Compaq Computer Corporation.
AIX is a registered trademark of International Business Machines Corporation.
FTP Software is a trademark of FTP SOFTWARE, INC.
HP is a registered trademark of Hewlett-Packard Company.
NT is a trademark of Microsoft Corporation.
Oracle, Oracle Rdb, and Oracle RMU are all registered trademarks of Oracle Corporation.
PostScript is a registered trademark of Adobe Systems, Inc.
RDF is a trademark of Touch Technologies, Inc.
SGI is a registered trademark of Chemical Bank.
Solaris is a registered trademark of Sun Microsystems, Inc.
StorageTek is a registered trademark of Storage Technology Corporation.
SunOS is a trademark of Sun Microsystems, Inc.
UNIX is a registered trademark in the United States and other countries, licensed exclusively through X/Open Company Ltd.
Windows and Windows NT are both trademarks of Microsoft Corporation.
All other trademarks and registered trademarks are the property of their respective holders.
1.1 ABS Operational Environment 1-2
1.6 Hierarchical System Management for OpenVMS Support 1-8
1.7 ABS Supports Stacker Configured Devices 1-8
1.8 ABS Provides Fast Tape Positioning 1-8
3.1 Deciding What Data to Save 3-2
3.2 Deciding When to Save Data 3-3
3.3 Deciding Where to Save Data 3-4
3.3.2 Files-11 Archive Type 3-5
3.3.3 Customizing the Storage Policies Provided by ABS 3-5
3.4 Deciding How to Move Data 3-6
3.4.1 Customizing the Environment Policies Provided By ABS 3-7
4.1 Central Security Domain 4-1
4.2 Backup Management Domains 4-3
4.2.1 Centralized Backup Management Domain 4-4
5.1 How ABS Implements Its System Backup Strategy 5-1
5.1.1 System Backup Process 5-1
5.2 How ABS Implements Its User Backup Strategy 5-2
5.2.1 User Process Context 5-2
5.2.2 User Profile Process 5-2
5.3 Differences Between System and User Backup Operations 5-4
5.4 Configuring ABS for OpenVMS Client Backup Operations 5-5
5.4.1 Creating ABS Policy Objects For OpenVMS Client System Backup Operations 5-5
5.4.2 Creating Save Requests for OpenVMS Client System Backup Operations 5-6
5.4.3 Creating ABS Policy Objects for OpenVMS Client User Backup Operations 5-7
5.4.4 Creating Save Requests for OpenVMS Client User Backup Operations 5-9
5.5 Configuring ABS for NT and UNIX Client Backup Operations 5-9
5.5.1 Creating ABS Policy Objects For NT and UNIX Client System Backup Operations 5-9
5.5.2 Creating Save Requests for NT and UNIX Client System Backup Operations 5-11
5.6 Oracle Rdb Databases and Storage Areas Backup Operations 5-12
5.6.1 Saving Individual Storage Areas 5-12
5.6.2.1 Oracle Rdb Database Catalog Entries 5-13
5.6.2.2 Oracle Rdb Storage Area Catalog Entries 5-13
5.6.3 Searching for Storage Areas in the Catalog 5-14
6.1 Displaying ABS GUI On an OpenVMS System 6-1
7.1 Using ABS Policy Worksheets 7-1
7.3 Creating an ABS Storage Policy 7-2
7.5.1.5 Criteria Under Which ABS Creates Volume Sets 7-4
7.5.1.6 Clear Volume Set List From Storage Policy 7-5
7.7 Catalog and Execution Node 7-6
7.7.1 Selecting ABS Catalog 7-6
7.7.2 Selecting the Node of Execution 7-6
8.1 Using ABS Policy Worksheets 8-1
8.3 Creating an ABS Environment Policy 8-2
8.4 Environment Policy Name 8-2
8.5 Save and Restore Environment Options 8-2
8.5.1.1 How to Notify and Who to Notify 8-2
8.5.1.3 Type of Notification 8-3
8.5.4 Pre- and Post- Processing Commands 8-4
9.2.1 Save Request Restrictions 9-3
9.2.2 Pre- and Post- Processing Commands 9-4
9.3.1 Immediately Executing the Save Request 9-6
9.3.2 Repetitive Scheduling of Save Request 9-6
10.1 Restore Request Name 10-1
10.2 What Data To Restore 10-1
10.2.1 Restore Request Restrictions 10-3
10.2.2 Pre- and Post- Processing Commands 10-4
10.2.3 Selection Criteria 10-6
11.1 Setting the Scheduler Interface Option 11-1
11.2 Changing Between Scheduler Interface Options 11-1
11.3 Scheduler Interface Option INT_QUEUE_MANAGER 11-2
11.4 Scheduler Interface Option EXT_QUEUE_MANAGER 11-2
11.5 Scheduler Interface Option EXT_SCHEDULER 11-3
11.6 Scheduler Interface Option DECSCHEDULER 11-3
13.1.2 Entering the Correct Lookup Syntax 13-1
13.1.3 Node of Original Data 13-2
13.1.4 Storage or Catalog Name 13-3
15.1 Creating An ABS Catalog 15-1
15.1.1 Creating a BRIEF type catalog 15-2
15.1.2 Creating a FULL_RESTORE type catalog 15-2
15.1.3 Creating An SLS Type Catalog 15-3
15.1.4 Creating a Catalog using Staging Operation 15-4
15.5 Improving Catalog Performance 15-5
15.5.1 Converting ABS Catalogs 15-5
15.5.2 Moving Target Catalogs to a Different Disk 15-5
15.5.3 Moving Staging Catalog Entries 15-6
17.1 The MDMS Management Domain 17-1
17.1.1.1 Database Performance 17-3
17.1.1.3 Moving the MDMS Database 17-4
17.1.2.1 Server Availability 17-5
17.1.2.2 The MDMS Account 17-5
17.1.3 The MDMS Start Up File 17-6
17.1.3.1 MDMS$DATABASE_SERVERS - Identifies Domain Database Servers 17-7
17.1.3.2 MDMS$LOGFILE_LOCATION 17-7
17.1.3.3 MDMS Shut Down and Start Up 17-7
17.1.4 Managing an MDMS Node 17-8
17.1.4.1 Defining a Node's Network Connection 17-8
17.1.4.2 Defining How the Node Functions in the Domain 17-8
17.1.4.3 Enabling Interprocess Communication 17-9
17.1.4.4 Describing the Node 17-9
17.1.4.5 Communicating with Operators 17-9
17.1.5 Managing Groups of MDMS Nodes 17-9
17.1.6 Managing the MDMS Domain 17-10
17.1.6.1 Domain Configuration Parameters 17-10
17.1.6.2 Domain Options for Controlling Rights to Use MDMS 17-11
17.1.6.3 Domain Default Volume Management Parameters 17-11
17.1.7 MDMS Domain Configuration Issues 17-12
17.1.7.1 Adding a Node to an Existing Configuration 17-12
17.1.7.2 Removing a Node from an Existing Configuration 17-12
17.2 Configuring MDMS Drives, Jukeboxes and Locations 17-13
17.2.1 Configuring MDMS Drives 17-13
17.2.1.1 How to Describe an MDMS Drive 17-13
17.2.1.2 How to Control Access to an MDMS Drive 17-13
17.2.1.3 How to Configure an MDMS Drive for Operations 17-13
17.2.1.4 Determining Drive State 17-14
17.2.1.5 Adding and Removing Managed Drives 17-14
17.2.2 Configuring MDMS Jukeboxes 17-14
17.2.2.1 How to Describe an MDMS Jukebox 17-14
17.2.2.2 How to Control Access to an MDMS Jukebox 17-14
17.2.2.3 How to Configure an MDMS Jukebox for Operations 17-14
17.2.2.4 Attribute for DCSC Jukeboxes 17-15
17.2.2.5 Attributes for MRD Jukeboxes 17-15
17.2.2.6 Determining Jukebox State 17-15
17.2.2.7 Magazines and Jukebox Topology 17-15
17.2.3 Summary of Drive and Jukebox Issues 17-17
17.2.3.1 Enabling MDMS to Automatically Respond to Drive and Jukebox Requests 17-17
17.2.3.2 Creating a Remote Drive and Jukebox Connection 17-17
17.2.3.3 How to Add a Drive to a Managed Jukebox 17-18
17.2.3.4 Temporarily Taking a Managed Device From Service 17-18
17.2.3.5 Changing the Names of Managed Devices 17-18
18.1 MDMS User Interfaces 18-1
18.1.1 Command Line Interface 18-1
18.1.1.1 Command Structure 18-1
18.1.1.2 Process Symbols and Logical Names for DCL Programming 18-1
18.1.1.3 Creating, Changing, and Deleting Object Records With the CLI 18-1
18.1.1.4 Add and Remove Attribute List Values With the CLI 18-2
18.1.1.5 Operational CLI Commands 18-2
18.1.1.6 Asynchronous Requests 18-3
18.1.2 Graphic User Interface 18-3
18.1.2.1 Object Oriented Tasks 18-3
18.2 Access Rights for MDMS Operations 18-5
18.2.1 Description of MDMS Rights 18-5
18.2.1.1 Low Level Rights 18-5
18.2.1.2 High Level Rights 18-5
18.2.2 Granting MDMS Rights 18-6
18.3 Creating, Modifying, and Deleting Object Records 18-8
18.3.1 Creating Object Records 18-8
18.3.1.2 Differences Between the CLI and GUI for Naming Object Records 18-8
18.3.2 Inheritance on Creation 18-9
18.3.3 Referring to Non-Existent Objects 18-9
18.3.4 Rights for Creating Objects 18-9
18.3.5 Modifying Object Records 18-9
18.3.6 Protected Attributes 18-9
18.3.7 Rights for Modifying Objects 18-10
18.3.8 Deleting Object Records 18-10
18.3.9 Reviewing Managed Objects for References to Deleted Objects 18-10
18.3.10 Reviewing DCL Command Procedures for References to Deleted Objects 18-11
19.1 The RDF Installation 19-1
19.3.1 Starting Up and Shutting Down RDF Software 19-2
19.3.2 The RDSHOW Procedure 19-2
19.3.4 Showing Your Allocated Remote Devices 19-2
19.3.5 Showing Available Remote Devices on the Server Node 19-2
19.3.6 Showing All Remote Devices Allocated on the RDF Client Node 19-3
19.4 Monitoring and Tuning Network Performance 19-3
19.4.2 DECnet-Plus (Phase V) 19-4
19.4.3 Changing Network Parameters 19-4
19.4.4 Changing Network Parameters for DECnet (Phase IV) 19-4
19.4.5 Changing Network Parameters for DECnet-Plus(Phase V) 19-5
19.4.6 Resource Considerations 19-6
19.4.7 Controlling RDF's Effect on the Network 19-7
19.4.8 Surviving Network Failures 19-8
19.5 Controlling Access to RDF Resources 19-9
19.5.1 Allow Specific RDF Clients Access to All Remote Devices 19-9
19.5.2 Allow Specific RDF Clients Access to a Specific Remote Device 19-9
19.5.3 Deny Specific RDF Clients Access to All Remote Devices 19-9
19.5.4 Deny Specific RDF Clients Access to a Specific Remote Device 19-10
20.1.2 Volume States by Manual and Automatic Operations 20-2
20.1.2.1 Creating Volume Object Records 20-2
20.1.2.2 Initializing a Volume 20-3
20.1.2.3 Allocating a Volume 20-3
20.1.2.4 Holding a Volume 20-4
20.1.2.5 Freeing a Volume 20-4
20.1.2.6 Making a Volume Unavailable 20-4
20.1.3 Matching Volumes with Drives 20-4
20.1.4 Magazines for Volumes 20-4
20.1.5 Symbols for Volume Attributes 20-5
20.2.1 Setting Up Operator Communication 20-6
20.2.1.1 Set OPCOM Classes by Node 20-6
20.2.1.2 Identify Operator Terminals 20-6
20.2.1.3 Enable Terminals for Communication 20-6
20.2.2 Activities Requiring Operator Support 20-6
20.3 Serving Clients of Managed Media 20-7
20.3.1 Maintaining a Supply of Volumes 20-7
20.3.1.1 Preparing Managed Volumes 20-7
20.3.2 Servicing a Stand Alone Drive 20-9
20.3.3 Servicing Jukeboxes 20-9
20.3.3.1 Inventory Operations 20-9
20.3.4 Managing Volume Pools 20-10
20.3.4.1 Volume Pool Authorization 20-11
20.3.4.2 Adding Volumes to a Volume Pool 20-11
20.3.4.3 Removing Volumes from a Volume Pool 20-11
20.3.4.4 Changing User Access to a Volume Pool 20-12
20.3.4.5 Deleting Volume Pools 20-12
20.3.5 Taking Volumes Out of Service 20-12
20.3.5.1 Temporary Volume Removal 20-12
20.3.5.2 Permanent Volume Removal 20-12
20.4 Rotating Volumes from Site to Site 20-13
20.4.1 Required Preparations for Volume Rotation 20-13
20.4.2 Sequence of Volume Rotation Events 20-13
20.5 Scheduled Activities 20-15
20.5.1 Logical Controlling Scheduled Activities 20-15
20.5.2 Job Names of Scheduled Activities 20-15
21.1 Creating Jukeboxes, Drives, and Volumes 21-1
21.2 Deleting Jukeboxes, Drives, and Volumes 21-4
A.1 Efficiently Configuring Your System A-1
A.2 Preparing for Disaster Recovery A-2
C.1 Database Cleanup Utility C-1
C.1.1 Starting Up the Database Cleanup Utility C-1
C.1.2 Changing the Database Cleanup Utility Default Behavior C-1
C.1.3 Database Cleanup Utility Log File C-2
C.1.4 Shutting Down the Database Cleanup Utility C-2
C.2 Catalog Cleanup Utility C-2
C.2.1 Starting Up the Catalog Cleanup Utility C-2
C.2.2 Changing the Catalog Cleanup Utility Default Behavior C-2
C.2.3 Catalog Cleanup Utility Log File C-3
C.2.4 Shutting Down the Catalog Cleanup Utility C-3
F.1 Logical Names Provide Additional Tracing F-1
F.2 Troubleshooting Assistance for NT Clients F-1
F.3 Verifying NT and UNIX Client Quotas F-2
F.4 Considerations for Saving Large Disks on UNIX and NT Clients F-2
F.5 Using The Same Volume Set For Multiple Types of ABS Clients F-4
F.7 New Logical Name Added To Increase Stack Size On Alpha Systems F-5
F.8 Additional Error Messages F-5
F.10 Logical To Assist with Server Connection Problems F-5
F.11 AUDIT Flags in ABS$POLICY_CONFIG.DAT F-5
J.2 Why Convert from SLS to ABS? J-1
J.2.1 Consolidated Policy Management J-2
J.2.2 More Intuitive Policy Organization J-2
J.2.3 Better Logging and Diagnostic Capabilities J-2
J.2.5 Automatic Full and Incremental Operations J-2
J.2.6 More Versatile User-Requested Operations J-3
J.2.7 Disk Storage Classes J-3
J.3 SLS and ABS System Backup Policy Overview J-3
J.3.1 SLS Policy with ABS Equivalents J-3
J.3.1.1 System Backup Policy Configuration J-3
J.3.1.2 Defining Your System Backup Policy J-4
J.3.2 ABS Overview with SLS Equivalents J-5
J.3.2.1 Policy Configuration J-5
J.3.2.3 Execution Environment J-7
J.4 SLS and ABS Operation Overview J-11
J.4.1.1 SBK Symbols for Scheduling J-11
J.4.1.2 ABS Scheduler Interface Options J-12
J.4.2 Types of Operations J-12
J.4.2.2 Full and Incremental Operations J-15
J.4.2.3 Selective Operations J-15
J.4.2.4 User Requested Operations J-16
J.4.3 Media and Device Management J-17
J.4.3.1 New Media Manager J-17
J.4.3.2 Volume Set Management J-18
J.4.3.3 Consistency of Volume and Drive Management J-18
J.4.4.3 Restoring Data with ABS from SLS History Sets J-19
J.5.1 Steps for Conversion J-20
J.5.1.1 Convert the MDMS Database J-20
J.5.1.2 Determine Your Use of SLS J-20
J.5.1.3 Converting SLS System Backups to ABS J-21
J.5.1.4 Converting User Backup policy J-26
J.5.1.5 Monitor ABS Activity J-26
J.5.1.6 Restoring from SLS History Sets J-27
J.6 Conversion Utility Reference J-27
J.6.2 Output Command File Naming and Contents J-27
K.1 Comparing STORAGE and MDMS Commands K-1
L.1.1 Configuration Step 1 Example - Defining Locations L-2
L.1.2 Configuration Step 2 Example - Defining Media Type L-2
L.1.3 Configuration Step 3 Example - Defining Domain Attributes L-2
L.1.4 Configuration Step 4 Example - Defining MDMS Database Nodes L-3
L.1.5 Configuration Step 5 Example - Defining a Client Node L-5
L.1.6 Configuration Step 6 Example - Creating a Jukebox L-5
L.1.7 Configuration Step 7 Example - Defining a Drive L-5
L.1.8 Configuration Step 8 Example - Defining Pools L-7
L.1.9 Configuration Step 9 Example - Defining Volumes using the /VISION qualifier L-7
M.1 Operational Differences Between SLS/MDMS V2 & MDMS V3 M-11
M.1.3 Rights and Privileges M-13
M.2 Converting SLS/MDMS V2.X Symbols and Database M-110
M.2.1 Executing the Conversion Command Procedure M-111
M.2.2 Resolving Conflicts During the Conversion M-111
M.3 Things to Look for After the Conversion M-114
M.4 Using SLS/MDMS V2.x Clients With the MDMS V3 Database M-118
M.4.1 Limited Support for SLS/MDMS V2 during Rolling Upgrade M-118
M.4.2 Upgrading the Domain to MDMS V3 M-118
M.4.3 Reverting to SLS/MDMS V2 M-119
M.5 Convert from MDMS Version 3 to a V2.X Volume Database M-120
N.1 Example Oracle Database N-1
N.2 Backing up a Closed Database N-2
N.2.1 Creating ABS Environment and Storage Policies for a Closed Database Backup N-2
N.2.2 Creating ABS Save Requests for a Closed Database Backup N-4
N.3 Backing Up an Open Database N-5
N.3.1 Creating ABS Environment and Storage Policies for an Open Database Backup N-6
N.3.2 Creating ABS Save Requests for an Open Database Backup N-11
Table 2-1 Differences Between ABS OpenVMS and UNIX or NT Clients 2-3
Table 3-1 Deciding What Data To Save 3-3
Table 3-3 Customizing ABS Provided Storage Policies 3-6
Table 3-4 Customizing ABS Provided Environment Policies 3-7
Table 5-1 System Backup Process 5-1
Table 5-2 User Backup Process 5-3
Table 5-3 Major Differences Between System and User Backup Operations 5-4
Table 5-4 Creating Storage and Environment Policies for OpenVMS Client System Backup Operations 5-6
Table 5-5 Creating System Backup Save or Restore Requests For OpenVMS Client 5-7
Table 5-6 Creating Storage and Environment Policies for OpenVMS Client User Backup Operations 5-8
Table 5-7 Creating Save Requests for OpenVMS Client User Backup Operations 5-9
Table 5-8 Creating Storage and Environment Policies for NT/UNIX Client System Backup Operations 5-10
Table 5-9 Creating Storage and Environment Policies for NT/UNIX Client System Backup Operations 5-11
Table 6-1 Displaying ABS GUI on an OpenVMS System 6-1
Table 6-2 Displaying the GUI On an NT System Using eXcursion and DCL Commands 6-2
Table 6-3 Displaying ABS GUI Using eXcursion Menu Options 6-2
Table 7-1 Creating an ABS Storage Policy 7-2
Table 7-2 Selecting Tape or Disk Storage 7-3
Table 7-3 Options to Save the Data 7-5
Table 7-4 Enabling Access Control to the Storage Policy 7-7
Table 8-1 Creating an ABS Environment Policy 8-2
Table 8-2 Selecting the Notification Options 8-3
Table 8-3 Enabling Access to an ABS Environment Policy 8-8
Table 9-1 Adding Disk or File Names To A Save Request 9-2
Table 9-2 Correctly Entering the Disk Name or File Name 9-2
Table 9-3 Enabling Access To An ABS Save Request 9-10
Table 10-1 Adding Disk or File Names To A Restore Request 10-1
Table 10-2 Entering The Correct Syntax For A Restore Request 10-2
Table 10-3 Enabling Access Control To A Restore Request 10-8
Table 12-1 Requirements for Modifying and Deleting Policies and Requests 12-1
Table 12-2 Modifying or Deleting an ABS Policy or Request 12-2
Table 13-1 Entering The Correct Syntax For A Lookup Operation 13-1
Table 13-2 Finding Saved Data By Date 13-3
Table 15-1 Creating an ABS Catalog For SLS Restores 15-4
Table 15-2 Moving Target Catalogs to a Different Disk 15-6
Table 16-1 MDMS Object Records and What they Manage 16-1
Table 17-1 MDMS Database Files and Their Contents 17-2
Table 17-2 How to Back Up the MDMS Database Files 17-3
Table 17-3 Processing MDMS Database Files for an Image Backup 17-4
Table 17-4 How to Move the MDMS Database 17-5
Table 17-5 MDMS$SYSTARTUP.COM Logical Assignments 17-6
Table 17-6 Network Node Names for MDMS$DATABASE_NODES 17-7
Table 17-7 Default Volume Management Parameters 17-11
Table 17-8 Adding a Node to an Existing Configuration 17-12
Table 17-9 Actions for Configuring Remote Drives 17-18
Table 17-10 Changing the Names of Managed Devices 17-18
Table 18-1 Operational CLI Commands 18-2
Table 18-2 Operational Actions With the GUI 18-4
Table 18-3 Reviewing and Setting MDMS Rights 18-6
Table 18-4 Low Level Rights 18-10
Table 18-5 Reviewing Managed Objects for References to Deleted Objects 18-10
Table 18-6 Reviewing DCL Commands for References to Deleted Objects 18-11
Table 19-1 How to Change Network Parameters 19-5
Table 20-1 MDMS Volume State Transitions 20-2
Table 20-2 Setting Up Operator Communication 20-6
Table 20-3 Operator Management Features 20-6
Table 20-4 Configuring MDMS to Service a Stand Alone Drive 20-9
Table 20-5 How to Create Volume Object Records with INVENTORY 20-10
Table 20-6 Sequence of Volume Rotation Events 20-14
Table 21-1 Creating Devices and Volumes 21-2
Table 21-2 Deleting Devices and Volumes 21-4
Table 21-3 Rotating Volumes Between Sites 21-6
Table 21-4 Servicing Jukeboxes 21-8
Table A-1 Disaster Recovery Tasks A-2
Table B-1 Start Time Formats B-1
Table C-1 Shutting Down the Database Cleanup Utility C-2
Table C-2 Defining the Catalog Cleanup Utility Logical Names C-3
Table E-1 Storage Policy Worksheet E-1
Table E-2 Environment Policy Worksheet E-2
Table E-3 Save Request Worksheet E-3
Table F-1 Assigning a System Variable for NT Troubleshooting F-2
Table F-2 Modifying the Blocking Factor F-3
Table I-1 Comparing SLS and ABS Backup Attributes I-1
Table J-1 DCL Symbols and ABS Equivalent J-3
Table J-2 Storage Class Parameter and SBK File Equivalent J-6
Table J-3 ABS and SBK Equivalent J-7
Table J-4 Save Request and SBK Equivalent J-9
Table J-5 Restore Request Parameter Information J-10
Table J-6 ABS Parameter and SLS Equivalent J-11
Table J-7 Storage Class Parameter J-23
Table J-8 Execution Environment Parameter J-24
Table J-9 SBK Symbols in ABS Terminology J-28
Table J-10 ABS Storage Classes and SLS SBK Equivalent J-31
Table J-11 ABS Execution Environment Parameter and SLS SBK Equivalent J-32
Table J-12 ABS Save Request Parameter and SLS SBK Equivalent J-33
Table K-1 Comparing MDMS Version 2 and Version 3 Commands K-1
Table K-2 Comparing MDMS V2 Forms and MDMS V3 Features K-2
Table K-3 Comparison of TAPESTART.COM to MDMS Version 3 Features K-4
Table M-1 Volume Attributes M-18
Figure 1-1 ABS Operational Environment 1-3
Figure 1-2 ABS Save or Restore Request 1-5
Figure 2-1 ABS OpenVMS Client-Server Configuration 2-1
Figure 2-2 ABS UNIX or NT Client Configuration 2-3
Figure 3-2 Storage Policy/Archive Type Association 3-5
Figure 4-1 Central Security Domain on an OpenVMS Cluster 4-2
Figure 4-2 Centralized Backup Management Domain On An OpenVMS Cluster 4-4
Figure 4-3 Distributed Backup Management Domain on an OpenVMS Cluster 4-5
Figure 4-4 Combined Backup Management Domains on an OpenVMS Cluster 4-7
Figure 6-1 ABS Main Window 6-2
Figure 12-1 Modify or Delete Requests And Policies Window 12-2
Figure 17-1 The MDMS Domain 17-2
Figure 17-2 Groups in the MDMS Domain 17-10
Figure 17-3 Jukebox Topology 17-16
Figure 17-5 Volume Locations 17-19
Figure 17-6 Named Locations 17-20
Figure 20-1 Volume States 20-1
Figure 20-3 Pools and Volumes 20-11
Figure 21-1 Configuring Volumes and Drives 21-1
Figure 21-2 Volume Rotation 21-6
Figure 21-3 Magazine Placement 21-7
This document is intended for storage administrators who are experienced OpenVMS system managers. This document should be used in conjunction with the Introduction to OpenVMS System Management manual.
The following conventions are used in this document:
The following related products may be mentioned in this document:
The following documents are part of the Archive Backup System for OpenVMS documentation set:
Archive Backup System for OpenVMS (ABS) is a software product that allows you to save and restore data in a heterogeneous environment. ABS provides you with the ability to perform anything from full system backup operations to user-requested or user-created backup operations. ABS ensures data safety and integrity by providing a secure environment for save and restore operations.
Not all ABS features are available if you have only an ABS-OMT license. The following features are not implemented with an ABS-OMT license:
ABS enables you to implement a backup policy that allows you to save the data through automatic or repetitively scheduled save operations. It also enables you to save data randomly using a one-time-only save operation. ABS allows you to use different scheduler interface options to schedule requests. This feature allows you to customize the scheduling of the save or restore request to your system configuration.
Save and restore operations are accomplished by using two of the policy objects recognized by ABS: a save request and a restore request. These policy objects allow you to save data from online storage to either an offline volume or another disk and, if necessary, to restore that data to either its original location or a different output location.
ABS uses a media manager called Media and Device Management Services (MDMS) for OpenVMS. The MDMS software is provided with ABS software and must be installed and configured before installing ABS. Together, ABS and MDMS minimize the amount of user interaction required to locate saved data and to manage volumes and tape drives used for ABS save and restore operations.
ABS tracks the location of data when saved as a result of an ABS save request. This information is kept in an ABS catalog. Upon request, ABS accesses the catalog to locate or restore the data and coordinates the media management responsibilities with MDMS.
This chapter provides information about the various components that make up an ABS operational environment. The information includes the following items:
An ABS operational environment contains the following components:
ABS interfaces are described in the ABS Interfaces section of this chapter.
Figure 1-1, ABS Operational Environment, illustrates some of the components of the ABS environment.
Figure 1-1 also illustrates the following items:
The following ABS policy objects are components of ABS software:
The following sections describe ABS policy objects.
A save request defines the data to be saved and executes upon immediate invocation or through an automatic, repetitive schedule. You can create save requests using either the ABS GUI or the CLI.
A save request defines the following criteria:
To meet your storage management requirements, you will need to create save requests that fulfill those requirements. Chapter 9, Creating Save Requests, describes how to plan for and create a save request.
A restore request restores data from offline storage back to online storage. Restore requests are created using either the GUI or DCL and are executed immediately or at a specified time.
A restore request defines the following criteria:
To meet your storage management requirements, you will need to create restore requests that fulfill those requirements. Chapter 10, Creating Restore Requests, describes how to plan for and create a restore request.
Figure 1-2, ABS Save or Restore Request, illustrates the path of a save or restore request.
A save or restore request is invoked through the GUI or through the CLI (DCL).
A storage policy defines the volumes (media type) and archive characteristics where you can safely store data. Each storage policy has a unique name, contains a set of archive characteristics, and is created and configured by users who have the proper privileges and access right identifiers (typically the storage administrator). A storage policy allows you to specify a simple storage policy name rather than a complicated set of characteristics.
Each storage policy defines the following:
After the installation of ABS is complete, ABS provides storage policies. Section 3.3.3 lists the storage policies provided by the ABS installation procedure.
The environment policy defines the criteria under which save and restore requests are executed. The criteria defined in an environment policy include:
After the installation of ABS is complete, ABS provides several environment policies. Section 3.4.1 lists the environment policies provided by the ABS installation procedure.
An ABS catalog is a database that contains history information about save requests and can be assigned to one or more storage policies. Each time a save request is initiated through a particular storage policy, the save request history is recorded in the ABS catalog associated with the storage policy.
The information contained in an ABS catalog includes:
The ABS Catalogs figure shows the relationship between an ABS catalog and an ABS storage policy.
After the installation of ABS is complete, ABS provides a default catalog named ABS_CATALOG. By default, this catalog is associated with all storage policies unless it is changed by the creator of the storage policy. All ABS catalogs, both the default catalog and user-created catalogs, support lookup and restore capabilities.
To meet your storage management requirements, you may need to create catalogs other than the one provided by ABS. Chapter 15, Creating ABS Catalogs, describes how to plan for and create an ABS catalog.
An archive file system is the file system that stores ABS save sets. ABS enables you to specify, through the storage policy, which archive file system to use for storing ABS save sets.
ABS uses various backup agents to save and restore data. The backup agent is determined by the type of data, such as VMS files, Oracle Rdb databases, Oracle Rdb storage areas, UNIX files, or NT files. The backup agent is responsible for the actual data movement operation, while ABS is responsible for invoking the correct backup agent and recording the information about the save operation.
ABS supports systems that have HSM installed. ABS and HSM can be configured together to permit HSM to perform shelving and/or preshelving of OpenVMS file data, while nightly backups under ABS copy only the file system metadata. This cooperative configuration is referred to as "Backup Via Shelving".
To facilitate this cooperative operation, the VMS BACKUP engine that ABS employs uses the following command qualifiers:
On systems where HSM is installed, ABS exhibits the following default behavior:
ABS supports stacker-configured devices when it encounters free volumes already mounted on the drive. ABS uses such a volume if it meets the media type criteria.
ABS provides fast tape positioning, which considerably reduces the time needed to position a tape. When positioning to the end of a volume that has a large amount of data stored on it, the time savings over versions of ABS prior to V2.2 are significant. Positioning time for a restore request also decreases, depending on the file's location on the volume.
If you are using a tape drive that does not support fast tape positioning, you may see errors such as:
ABS_SKIPMARKS_FAILED, Skip tape marks failure
In those cases, define the logical ABS_NO_FAST_SKIP on the node where the failures occur.
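For example, on an affected node you might define the logical name system-wide as follows (the equivalence value shown is an assumption; this manual does not state what value the logical must have):
$ DEFINE/SYSTEM ABS_NO_FAST_SKIP TRUE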
ABS allows the use of different scheduler interfaces. By default, ABS uses the programming interface to the OpenVMS Queue Manager to schedule save and restore requests. The scheduler interface options that can be used are INT_QUEUE_MANAGER, EXT_QUEUE_MANAGER, EXT_SCHEDULER, and DECSCHEDULER.
The scheduler interface is invoked when a save or restore request is created; you can either start the request immediately (the only option for a restore request) or implement a repetitive schedule for save requests.
The scheduler interface is used to:
For more information on the scheduling options, see Chapter 11, Scheduling Requests.
ABS Version 3.0A provides the following interfaces:
ABS provides a client-server technology that ensures the ABS policy database is secure and available only to authorized ABS clients. The ABS server is the node or OpenVMS Cluster system where the ABS policy engine and policy database reside.
ABS recognizes OpenVMS, UNIX, and NT clients. The following sections describe the differences between the functions and responsibilities of these types of ABS clients.
Figure 2-1, ABS OpenVMS Client-Server Configuration, illustrates an OpenVMS client-server configuration as interpreted by ABS software. In this illustration, the following components are shown:
In Figure 2-1, ABS interprets the OpenVMS client-server configuration as follows:
Figure 2-2, ABS UNIX or NT Client Configuration, illustrates a UNIX or NT client-server configuration. In this illustration, the following components are shown:
In Figure 2-2, ABS interprets the UNIX or NT client configuration as follows:
The main difference between an ABS UNIX or NT client and an ABS OpenVMS client is shown in Table 2-1, Differences Between ABS OpenVMS and UNIX or NT Clients.
The components required to configure a UNIX or NT client system are a gtar executable file (provided with the ABS software) and the TCP/IP Services networking software (a prerequisite for UNIX or NT client support). See the Archive Backup System for OpenVMS Installation Guide for instructions about installing and configuring a UNIX or NT client system.
There are several decisions you must make to customize Archive Backup System for OpenVMS (ABS) software so it fulfills your storage management requirements. You define your ABS policy by customizing the ABS policy objects provided by the installation procedure or by creating new ABS policy objects.
Consider the following items when planning your ABS policy:
To configure an ABS policy that meets your specific needs, weigh the considerations described in the following sections and then create or modify ABS policy objects that meet those needs. The ABS Policy figure shows the options you can set on ABS policy objects to implement an ABS policy that meets your storage management requirements.
Part of configuring your ABS policy is deciding your backup strategy. Save requests enable you to specify the data that you need to save, either to ensure data safety or to meet business requirements. You may need to create several different save requests to ensure complete implementation of your backup strategy.
Once your save requests are created, you can modify or delete those save requests (provided they meet deletion criteria) in order to maintain your ABS policy.
ABS supports the following types of save requests:
Table 3-1, Deciding What Data To Save, provides some guidelines for deciding which type of save requests you need to create to make sure that you are correctly implementing your backup strategy.
Once you have decided what data you need to save, you must consider when and how often to save that data. ABS offers a variety of scheduling options. It also allows you to interrupt those schedules when necessary. Consider the following items before deciding when and how often to schedule save requests:
Each save request enables you to specify the start time and the interval at which you want to execute the save request. These options on a save request are Start Time and Schedule. This, along with other save request options, enables you to set up a backup strategy that fulfills your storage management needs.
Chapter 9, Creating Save Requests, explains how to create save requests, and Section 9.3.2 describes the scheduling options available for those save requests.
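As a sketch of how these options combine, the following DCL command uses qualifiers shown in the examples later in this manual (the disk name and request name are illustrative) to create a save request that runs nightly at 23:00 with a weekly full and daily incremental cycle:
$ ABS SAVE DISK$USER1: /NAME=NIGHTLY_USER1 -
_$ /START=23:00/INTERVAL=DAILY_FULL_WEEKLY/STORAGE=SYSTEM_BACKUPS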
Storage policies define a set of characteristics for data that is saved using ABS. Storage policy characteristics include where (on which volumes) to store data, how long to store the data, which catalog to use for recording the location of data, and whether to enable or restrict access to the data by other users.
As the storage administrator, you can allow access to or restrict access from a storage policy, or you can create storage policies for the sole purpose of allowing users to create their own save and restore requests. The manner in which you allow access to a storage policy determines who can create save and restore requests using that storage policy.
A storage policy also controls volume selection. Each storage policy has an archive type associated with it to specify how a volume is located. An archive type includes the type of a volume (either tape or disk) that the storage policy uses.
ABS supports the following archive types:
The Archive Type table describes the archive types that ABS supports, the types of volumes supported by each archive type, and the storage policy association.
Each site has its own reasons for using specific types of volumes to store saved data. For example, the type of volume that you choose for long-term storage may have different characteristics than the type of volume that you choose for short-term storage or disaster recovery storage.
The following sections describe the supported archive types and their association with a storage policy.
MDMS is an archive type that manages volumes and tape drives. When it is associated with a storage policy, MDMS uses volumes that are defined as follows:
Files-11 is an archive type that allows you to store saved data on an OpenVMS Files-11 structured online disk. Because disk storage is costly, you must consider the long-term effects of using this archive type. The recommended and default archive type is MDMS.
Figure 3-2, Storage Policy/Archive Type Association, shows how a storage policy is associated with an archive type, and how the archive type is associated with the type of volume that the storage policy uses.
After the installation of ABS has successfully completed, the following storage policies are resident on your system:
With the exception of USER_BACKUPS, all the storage policies provided by the ABS installation procedure have the same characteristics. To customize a storage policy, see Table 3-3, Customizing ABS Provided Storage Policies, which uses the ABS_ARCHIVE storage policy as an example.
If you do not want to modify the storage policies provided by ABS, you may need to create additional storage policies to implement an ABS policy that meets your storage management needs. See Chapter 7, Creating Storage Policies, for instructions about creating new storage policies.
Up to this point, you have considered what data to save (save request), when to save the data (save request start time and schedule), and where you will store the data (storage policy). Now you must consider how you want to execute save and restore requests, known as the environment policy.
An environment policy defines the characteristics of the environment in which save and restore requests execute. Those characteristics include such things as who to notify, how many drives to use, listing and logging options, and so forth.
After the installation of ABS has successfully completed, ABS provides the following environment policies:
With the exception of USER_BACKUPS_ENV, all of the environment policies have the same characteristics. To customize the environment policies to meet your storage management needs, see the instructions in Table 3-4, Customizing ABS Provided Environment Policies.
If you do not want to modify the environment policies provided by ABS, you may need to create additional ones to meet your storage management needs. See Chapter 8 , Creating Environment Policies for instructions about creating new environment policies.
When you install ABS, you are asked for a list of the nodes where the policy engine will run. This list is put into the ABS$SYSTEM:ABS$POLICY_CONFIG.DAT file. There is a separate line for each nodename.
If you later decide to remove one of the node names, you must edit this file and remove the line for that node (for example, node1::):
ABS$POLICY_ENGINE_LOCATION = node1::
Be sure that there is at least one line specifying the ABS$POLICY_ENGINE_LOCATION in this file or ABS will not run.
If you wish to add another policy engine node, you may add a line to this file, but you must make sure that the policy engine image (ABS$SYSTEM:ABS$POLICY_ENGINE.EXE) is available on that node.
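For example, a configuration file that runs the policy engine on two nodes might contain lines such as the following (the node names are illustrative):
ABS$POLICY_ENGINE_LOCATION = NODE1::
ABS$POLICY_ENGINE_LOCATION = NODE2::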
The information in this chapter describes how to configure a secure operating environment for ABS. It also provides example scenarios for setting up central security domains and backup management domains, the related ABS policy objects, and the access controls to set to allow access to ABS policy objects.
When considering data safety, you must make the following decisions:
A central security domain is the node or OpenVMS Cluster system where the ABS server software is installed and where the ABS policy database resides. After the installation of ABS, all ABS policy objects are located in the ABS policy database. The central security domain controls creating, modifying, and deleting all ABS policy objects, especially storage and environment policies.
Depending upon your business needs, you can choose to have either a single central security domain or multiple central security domains.
In a distributed environment, ABS assumes that the systems on which it executes are, as a whole, reasonably secure. This means that only trusted backup management personnel (such as the storage administrator or an authorized operator) with a direct need are granted the following:
ABS imposes the following restrictions for user backup operations:
A backup management domain is ABS's concept of backup management control. Any node or OpenVMS Cluster system that can create ABS save requests is considered a backup management domain. These systems have ABS client software installed. You can restrict the control of backup management to a single backup management domain, or you can enable control to several backup management domains. Typically, each backup management domain is controlled by one storage administrator.
ABS conceptually categorizes the following backup management domains:
The following sections describe how to configure central security domains and backup management domains to create a secure environment for ABS operations. Each section contains a scenario that details a hardware configuration, the placement of the central security domain, associated ABS policy objects, and the security controls that should be set on those policy objects to minimize the potential for unauthorized access to data.
A centralized backup management domain allows backup policy and schedules to be created only within a single backup management domain. This backup management domain is also the central security domain. A centralized backup management domain allows you to control your ABS policy in one, centralized location. However, this backup management domain cannot create save or restore requests for remote nodes or OpenVMS Cluster systems outside the backup management domain.
In this scenario, storage and environment policies are created and maintained within the central security domain. A single backup management domain, confined to the central security domain, can create, modify, and delete save requests. This represents ABS backup management control in its simplest form.
The distributed backup management domain allows ABS policy objects to be stored in a central location (the central security domain), but responsibility for backup management control (creating and modifying save requests) is distributed among remote nodes or remote OpenVMS Cluster systems.
This type of backup management control distributes backup management responsibility to backup management domains outside the central security domain.
In this scenario, storage and environment policies are defined and maintained only from the central security domain. Save and restore requests can be created from the central security domain or from the backup management domain. These remote backup management domains are connected to the central security domain through the DECnet software. The save and restore requests reside on the central security domain (in ABS policy database), but the save or restore operation is executed on the remote system.
Restriction:
A backup management domain cannot create a save request for another backup management domain. A backup management domain can only create a save request for itself.
Depending upon your business needs, you may choose to combine backup management control strategies. You can install ABS server software on multiple nodes or OpenVMS Cluster systems, creating multiple central security domains.
In this scenario, a network has two OpenVMS Cluster systems that have ABS server software installed. Because of the two ABS servers, there are two different central security domains on the network.
The OpenVMS Cluster system (CLSTRA) has several backup management domains while the other OpenVMS Cluster system (CLSTRB) has only one backup management domain, the central security domain.
ABS's strategy for protecting data is to provide the most common types of backup strategies that customers require: system backups and user backups.
The information in this chapter describes how ABS defines its system and user backup strategies.
To implement a system backup strategy, ABS provides policy objects that define the characteristics required for system backup operations. These ABS policy objects are the SYSTEM_BACKUPS storage policy and the SYSTEM_BACKUPS_ENV environment policy.
When a save request is created that uses the SYSTEM_BACKUPS storage policy, ABS creates volume sets (according to the consolidation criteria) that are owned by ABS and places the data specified by the save request on those volumes.
Table 5-1, System Backup Process, describes how system backup operations are implemented using ABS.
To implement a user backup strategy, ABS provides storage and environment policies that define the characteristics that enable users to create their own save and restore requests. These default policy objects are the USER_BACKUPS storage policy and the USER_BACKUPS_ENV environment policy.
ABS enables a storage administrator to create a save request in the context of a specific user, but without granting that user the privileges necessary to execute the save request (such as BYPASS or CMKRNL).
Create an environment that specifies a user profile, and add an additional privilege mask that grants the necessary privileges required to execute the save request. This restricts the user to executing save requests using only that environment policy.
When a save or restore request executes in an environment policy that has a user profile assigned to it, ABS creates the subprocess where the request executes. This subprocess contains the user name, UIC, privileges, and access right identifiers of the user who is specified in the user profile.
The privileges associated with the subprocess default to those identified in the UAF record for the user. However, ABS also allows an additional privilege mask to be specified in the user profile, and places these additional privileges in the authorized privilege mask of the subprocess.
Therefore, if the subprocess executes a SET PROCESS/PRIVILEGES command (in one of the PROLOGUE commands) specifying any of these additional privileges, they are granted.
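For example, if the user profile's additional privilege mask includes BYPASS, a prologue command such as the following sketch would succeed in the request's subprocess (the privilege named is illustrative):
$ SET PROCESS/PRIVILEGES=BYPASS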
Table 5-2, User Backup Process, describes the process used by ABS for user backup operations.
Table 5-3, Major Differences Between System and User Backup Operations, shows the major differences between a system and a user backup operation.
System backup operations: the context of the user process submitting the save request is set to ABS; the default storage policy is SYSTEM_BACKUPS and the default environment policy is SYSTEM_BACKUPS_ENV.
User backup operations: the context of the user process submitting the save request is set to the original requester (the user creating the save request); the default storage policy is USER_BACKUPS and the default environment policy is USER_BACKUPS_ENV.
To configure ABS policy objects to be able to perform OpenVMS client system and user backups, review the following sections.
You can modify the storage and environment policies provided by ABS as described in the other chapters, or you can create additional storage and environment policies for OpenVMS client system backup operations.
The following list describes how the access controls must be set on the storage and environment policies intended for OpenVMS client system backup operations:
The examples in Table 5-4, Creating Storage and Environment Policies for OpenVMS Client System Backup Operations, show you how to create ABS storage and environment policies that enable system backup operations for an ABS client node.
You can create system save requests for an OpenVMS client node from either the ABS OpenVMS server node or from the ABS OpenVMS client node. Table 5-5, Creating System Backup Save or Restore Requests For OpenVMS Client, describes how to do both.
You can modify existing storage and environment policies, or you can create other storage and environment policies for OpenVMS client user backup operations.
The following list describes how the access controls must be set on the storage and environment policies intended for user backup operations:
The user must have WRITE access to the storage policy to create a save request, and READ access to the storage policy to create a restore request. Thus, setting SHOW access on the environment policy allows the requester to use this default profile, but the access on the storage policy determines what type of operations the user can perform.
To create user save and restore requests for an ABS OpenVMS client system, you must first create (or modify) storage and environment policies on the ABS server system that meet those needs.
The examples in Table 5-6, Creating Storage and Environment Policies for OpenVMS Client User Backup Operations, show you how to create ABS storage and environment policies so that a nonprivileged user can create a save or restore request for an OpenVMS client.
You can create save requests for the OpenVMS ABS client node only from the client node itself. Table 5-7, Creating Save Requests for OpenVMS Client User Backup Operations, describes how to do this.
To configure ABS policy objects to be able to perform NT or UNIX client system backup operations, review the following sections.
You can modify the storage and environment policies provided by ABS as described in other chapters, or you can create additional storage and environment policies for NT or UNIX client system backup operations.
The following list describes how the access controls must be set on the storage and environment policies intended for NT or UNIX client system backup operations:
The examples in Table 5-8, Creating Storage and Environment Policies for NT/UNIX Client System Backup Operations, show you how to create ABS storage and environment policies that enable system backup operations for an ABS NT or UNIX client node.
You can create system save requests for an NT or UNIX client node from the ABS OpenVMS server node. Table 5-9 describes how to do this.
ABS supports backup and restore operations of Oracle Rdb databases and storage areas. ABS uses "file types" to distinguish the type of data being saved or restored. This ensures that ABS invokes the correct backup agent for the save or restore request. File types are an option on the GUI; if you are using DCL, specify them with the /OBJECT_TYPE qualifier.
ABS provides the following file types for Oracle Rdb databases and storage areas:
The following example shows how you might create a full save request that performs subsequent nightly incremental saves of an Oracle Rdb database:
$ ABS SAVE "DISK$RDB_DISK:[RDB_60_DATABASE]RDB_60_DATABASE.RDB" -
_$ /OBJECT_TYPE="RDB_V6.0_DATABASE"/INTERVAL=DAILY_FULL_WEEKLY/START=22:00 -
_$ /STORAGE=SYSTEM_BACKUPS/NAME=RDB_60_DATABASE_BACKUP
Result:
This command causes ABS to create a job for the request named RDB_60_DATABASE_BACKUP that runs every night at 22:00 (10:00 p.m.). The first time the job runs, ABS does a full backup operation of the Oracle Rdb database. Each subsequent night for the next six nights, ABS performs an incremental backup operation of the Oracle Rdb database. On the seventh night, the cycle is repeated. All save sets are written to the storage policy named SYSTEM_BACKUPS.
To save individual storage areas in an Oracle Rdb database, you must add the /INCLUDE qualifier to the Oracle Rdb database name specified in the save request. You must include this qualifier whether you are using the GUI or the DCL interface.
The following example shows how to back up only AREA3 of an Oracle Rdb database using DCL:
$ ABS SAVE -
_$ "DISK$RDB_DISK:[RDB_60_DATABASE]RDB_60_DATABASE.RDB/INCLUDE=AREA3"-
_$ /OBJECT_TYPE="RDB_V6.0_STORAGE_AREA"/INTERVAL=DAILY_FULL_WEEKLY -
_$ /START=21:00/STORAGE=SYSTEM_BACKUPS/NAME="RDB_60_DB_AREA3_BACKUP"
Result:
This command causes ABS to create a job for the request named RDB_60_DB_AREA3_BACKUP that runs nightly at 21:00 (9:00 p.m.). The first time the job runs, ABS performs a full backup operation of the storage area. Each subsequent night for the next six nights, ABS performs an incremental backup operation of the storage area. On the seventh night, the cycle is repeated. All save sets are written into the storage policy named SYSTEM_BACKUPS.
If you want to specify more than one storage area, you can include a comma-separated list of storage area names:
Example:
$ ABS SAVE -
_$ "DISK$RDB_DISK:[RDB_60_DATABASE]RDB_60_DATABASE.RDB/INCLUDE=(AREA1,AREA3)" -
_$ /OBJECT_TYPE="RDB_V6.0_STORAGE_AREA"/INTERVAL=DAILY_FULL_WEEKLY -
_$ /START=21:00/STORAGE=SYSTEM_BACKUPS/NAME="RDB_60_DB_AREA3_BACKUP"
For a save request that specifies an Oracle Rdb database, ABS creates multiple entries in the ABS catalog:
Because ABS creates catalog entries for each storage area within an Oracle Rdb database, you can restore any individual storage area from the save set that contains the Oracle Rdb database data.
Catalog entries for the Oracle Rdb database have the same format as a file specification:
DISK:[DIRECTORY]DATABASE_NAME.RDB
Catalog entries for the Oracle Rdb storage areas consist of the Oracle Rdb database name to which the storage area belongs, plus an /AREA qualifier indicating the name of the storage area:
DISK:[DIRECTORY]DATABASE_NAME.RDB/AREA=STORAGE_AREA_NAME
For example, suppose an Oracle Rdb V6.0 database named RDB_60_DATABASE resides in DISK$RDB_DISK:[DATABASE]. Also suppose that this database contains three storage areas named AREA1, AREA2 and AREA3. If you create a save request that specifies this Oracle Rdb database, the save request also saves the storage areas within the database.
In this situation, ABS creates the following catalog entries:
Object type: RDB_V6.0_DATABASE
Object name: DISK$RDB_DISK:[DATABASE]RDB_60_DATABASE.RDB
Object type: RDB_V6.0_STORAGE_AREA
Object name: DISK$RDB_DISK:[DATABASE]RDB_60_DATABASE.RDB/AREA=AREA1
Object type: RDB_V6.0_STORAGE_AREA
Object name: DISK$RDB_DISK:[DATABASE]RDB_60_DATABASE.RDB/AREA=AREA2
Object type: RDB_V6.0_STORAGE_AREA
Object name: DISK$RDB_DISK:[DATABASE]RDB_60_DATABASE.RDB/AREA=AREA3
If you create a save request that specifies only a storage area, ABS only creates entries in the catalog for the storage area and not for its associated Oracle Rdb database.
To find catalog entries for Oracle Rdb storage areas, you can use either the GUI or DCL.
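For example, a DCL lookup for a single storage area might look like the following sketch (the use of /OBJECT_TYPE and /FULL with the LOOKUP command mirrors the save and lookup examples elsewhere in this manual and is an assumption):
$ ABS LOOKUP "DISK$RDB_DISK:[DATABASE]RDB_60_DATABASE.RDB/AREA=AREA2" -
_$ /OBJECT_TYPE="RDB_V6.0_STORAGE_AREA"/FULL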
Using ABS, you can restore entire Oracle Rdb databases or individual storage areas. There are two types of restore requests, full and selective.
Recommendation:
To restore Oracle Rdb databases or storage areas, it is recommended that you create a full restore request. This causes ABS not only to restore the most recent full backup of the data, but also to apply any subsequent incremental backup save sets.
Example of restoring an Oracle Rdb database using DCL:
$ ABS RESTORE "DISK$SLSRMU2:[RDB_60_DATABASE]RDB_60_DATABASE.RDB" -
_$/OBJECT_TYPE="RDB_V6.0_DATABASE"/FULL
Requirement:
The files associated with the Oracle Rdb database must have been deleted from the disk before you issue this command; otherwise, include the /CONFLICT=NEW qualifier on the DCL command line.
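For example, the same restore can be issued against files still on disk by adding the qualifier (a sketch based on the example above):
$ ABS RESTORE "DISK$SLSRMU2:[RDB_60_DATABASE]RDB_60_DATABASE.RDB" -
_$ /OBJECT_TYPE="RDB_V6.0_DATABASE"/FULL/CONFLICT=NEW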
Example of restoring an Oracle Rdb storage area using DCL:
$ ABS RESTORE -
_$"DISK$SLSRMU2:[RDB_60_DATABASE]RDB_60_DATABASE.RDB/AREA=SLS_DEV1_AREA5" -
_$/OBJ="RDB_V6.0_STORAGE_AREA"/FULL
Requirement:
The files associated with the storage area must have been deleted from the disk before you issue this command; otherwise, include the /CONFLICT=NEW qualifier on the DCL command line.
ABS supports cataloging information from copied backup savesets on tape, or from tapes created by VMS BACKUP, into ABS catalogs. This allows you to look up and restore files from savesets created outside of ABS.
To allow this functionality, a new object type, VMS_SAVESET, was created.
To catalog the saveset information, you must create a save request with the tape volume, a colon, and the saveset name (or a wildcard) as the include specification (for example, Tape001:mysaveset.sav), an object type of VMS_SAVESET, and a selective movement type:
$ ABS SAVE tape001:mysaveset.sav -
/OBJECT_TYPE=VMS_SAVESET -
/SELECTIVE -
/NAME=mysaveset_catalog -
/STORAGE_CLASS=my_sc -
/ENVIRONMENT=my_env -
/START=01-JUL-2000
or
$ ABS SAVE tape001:* -
/OBJECT_TYPE=VMS_SAVESET -
/SELECTIVE -
/NAME=mysaveset_catalog -
/STORAGE_CLASS=my_sc -
/ENVIRONMENT=my_env -
/START=01-JUL-2000
ABS will load the tape listed in the include specification and then perform a BACKUP/LIST of the contents, loading the information into the ABS catalog defined in the storage_class. The original date of the saveset is preserved in the catalog.
Recommended Implementation:
It is recommended that you create a new catalog to store this data. You should also create a new storage_class to be used by these cataloging operations.
This will allow you to restore from the copied tapes or from the original tapes by selecting the appropriate storage_class for the restore request.
For example:
Several ABS save requests were saved on tape ABS000 using the SYSTEM_BACKUPS storage_class. Saveset Manager was used to copy that tape to another tape, TAP000.
Before cataloging the data, do the following:
ABS will execute the request, cataloging the information in the COPIED_TAPES catalog.
To restore the data on ABS000 or TAP000, decide which copy you wish to restore and specify the appropriate storage_class in the restore request. For example, to restore from the original tapes, specify the SYSTEM_BACKUPS storage_class; to restore from the copy, specify the COPIED_SC storage_class. The ABS LOOKUP command with the /FULL qualifier shows the volumes used for the data.
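For example, a restore request against the copy might look like the following sketch (the file specification is illustrative, and the /STORAGE_CLASS qualifier spelling follows the save request examples above):
$ ABS RESTORE DISK$USER1:[SMITH]LOGIN.COM /STORAGE_CLASS=COPIED_SC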
The ABS GUI allows you to create, modify, and delete ABS policies and requests. It also allows you to find data that was previously saved using ABS.
ABS enables you to display the GUI on the OpenVMS server or client system, on an NT client system, or on any system that supports the X Window System for Motif.
Use the following procedure to display the ABS GUI on an OpenVMS system.
1. Set the display to your current node:
$ SET DISPLAY/CREATE/NODE=OpenVMS_node_name
Make sure that you have added the node from which you are accessing the GUI to the session manager security options.
2. Start the GUI. ABS displays the ABS GUI main window. See ABS Main Window.
To display the GUI on an NT system using the eXcursion software, follow the procedure in one of the following tables:
Displaying the GUI On an NT System Using eXcursion and DCL Commands describes how to display the GUI using eXcursion and DCL commands.
Displaying ABS GUI Using eXcursion Menu Options describes how to display the GUI using the eXcursion menu options.
ABS GUI windows contain the following standard buttons:
A storage policy defines the type of archive file system, type of media, and archive characteristics for ABS save sets. ABS provides the following preconfigured storage policies:
Archive Backup System for OpenVMS Installation Guide describes the characteristics of these preconfigured storage policies. To meet your storage management needs, you may have to create additional storage policies. Review the information in this chapter to determine how to create a storage policy that meets your site-specific needs.
This chapter describes the following information:
Chapter 7, Chapter 8, and Chapter 9 are designed for use with the worksheets provided in Appendix E. Use the worksheets as scratch areas to help you create and manage your ABS storage and environment policies.
Follow these steps to use the worksheets along with the information in this chapter:
Use the procedure in Table 7-1 to create an ABS storage policy.
Each storage policy must have a unique name and be made up of alphanumeric characters, hyphens (-), underscores (_), or a combination thereof. Assign the storage policy a name that reflects the purpose of the storage policy. For example, to meet your long-term storage requirements you may want to create storage policies with the names of 1_YEAR, 5_YEARS, PROJECT_X_DATA, and so forth.
Character Limit:
The storage policy name cannot exceed 31 characters. Because the matching environment policy name is typically the storage policy name appended with _ENV, it is recommended that a storage policy name not exceed 27 characters.
To make sure that you retain the data for the amount of time indicated in the storage policy name, set the Retain Data For option to match those time frames. See Retain Data For for information about setting the retention period.
The Save Data To option enables you to choose whether you want to save data to a volume in the MDMS database, or whether you want to save data to an OpenVMS disk.
Use the procedure in Table 7-2 to select where you want to store data for this storage policy.
1. Click the box next to Tape or Disk. Your selection becomes highlighted.
2. If you selected Tape, click the box next to Tape Options and see Tape Options for instructions.
3. If you selected Disk, click the box next to Disk Options and enter the OpenVMS disk and directory specification in the Root Directory to Save Data To box.
By selecting Tape Options, you are instructing ABS to use volumes and drives managed by the Media and Device Management software (described in Archive Backup System for OpenVMS Installation Guide and in Part II of this manual):
For the Media Type option, enter the media type name (previously configured in MDMS) that you want this storage policy to use. Specifying a media type name instructs ABS to use only this type of media for any save requests that use this storage policy.
Requirement:
Supplying a valid media type is required.
Example of how a media type is defined in MDMS:
$ MDMS CREATE MEDIA_TYPE TK85K
Additional Information:
For additional information about media types, drives, pools, and location, refer to Chapter 18, Basic MDMS Operations.
The Pool option enables you to enter a pool name that has been previously created in MDMS. A pool contains free volumes that only certain users can access, such as ABS. Chapter 18 describes how to create pools in MDMS. Even though a media type may be available to several pools, you may want to restrict a storage policy to only one pool of volumes.
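For illustration, a restricted pool might be created in MDMS like this (a sketch; the /AUTHORIZED_USERS qualifier and its values are assumptions here, so refer to Chapter 18 for the exact syntax):
$ MDMS CREATE POOL ABS_POOL/AUTHORIZED_USERS=(NODE1::ABS)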
The Drives option allows you to enter a specific drive or a comma-separated list of drives to use for this storage policy. Specify each drive by its MDMS drive name, rather than its VMS device name; the drive names must already be defined in MDMS.
Recommendation:
Do not assign drive names to this option, but instead, configure MDMS so that the appropriate media types are paired with the desired drives.
The Location option enables you to specify the location of the volumes that you want to assign to the storage policy. The location is defined in MDMS.
The criteria that ABS uses to create volume sets are determined by the options described in the following sections. It is important to note that the values of all of these options are desired values, not absolutes.
For example, if all of the disk or file names specified in a save request have not been backed up before the specified criteria are met, ABS continues the backup operation to the same volume set, even though the criteria may be exceeded during the save operation.
See the ABS Command Reference Manual for more information on setting the criteria using DCL. The qualifier in the ABS CREATE STORAGE_CLASS command is /CONSOLIDATION=(INTERVAL,COUNT,SIZE).
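As an illustration only, a storage class that starts a new volume set every ten days might be created as follows (the keyword=value form is inferred from the qualifier summary above, and required qualifiers such as the media type are omitted; check the ABS Command Reference Manual for the exact syntax):
$ ABS CREATE STORAGE_CLASS 1_YEAR -
_$/CONSOLIDATION=(INTERVAL=10,COUNT=0,SIZE=0)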
Enter the number of days (in OpenVMS time format) that you want between creations of new volume sets. If you assign the value 10, a new volume set is created every ten days.
Enter the number of save sets that you want to allow per volume set. For example, if you set this option to one (1), ABS creates a new volume set for each save set. If this value is set to 10, ABS creates a new volume set for every ten save sets.
Recommendation:
It is recommended that you enter zero (0) for this option. By entering zero, a new volume set is created based upon the values assigned to the criteria options described in Days Before Creating a New Volume Set and Volumes Per Volume Set.
This option is only available when you modify an existing storage policy. During the create process, this option is grayed out.
There may be a time when you want to remove the reference to a particular volume set and start a new one. ABS allows you to do this. Follow these steps:
Use the Retain Data For option to assign the period of time for which to retain data saved using this storage policy. ABS provides two options, Days and Expiration On; they are mutually exclusive. You can assign the number of days to retain the data, or you can assign an exact date on which you want the saved data to expire. By default, ABS retains data for 365 days.
Use one of the following options to define how long to save the data:
Each ABS storage policy uses an ABS catalog to record the history of saved data. Specify the ABS catalog that you want this storage policy to use; the save request executes on the node specified for the Execute Save Operation On option.
ABS uses its catalogs to locate data saved using ABS. ABS provides a default catalog named ABS_CATALOG.
To select the catalog to use for the storage policy, click the box next to Write History Information To and select one of the ABS catalogs from the list.
Requirements:
If you wish to use a catalog other than one of the default catalogs provided by ABS, make sure that you:
Chapter 15, Creating ABS Catalogs, provides information about creating ABS catalogs.
This option allows you to configure the storage policy so that you can execute more than one save request simultaneously.
For example, if you create three save requests that are scheduled to start at the same time, those save requests can run simultaneously provided there are enough media management resources (such as tape drives and free volumes) to support multiple backup operations. In this case, you would set the Number of Streams option to 3. Valid values range from 1 to 36.
To increase or decrease this option, place the pointer on the slide rule, hold down MB1, and slide up or down to change the value.
The Storage Policy Access Control option enables you to authorize users to access the storage policy and to grant those users certain types of access control.
The user who creates the storage policy is automatically granted access to it with all of the access controls (READ, WRITE, SHOW, SET, DELETE, and CONTROL). To add other users and provide them with access control to the storage policy, use the procedure described in Table 7-4.
Once you have entered all the information for the storage policy, do the following:
An environment policy defines the environment in which save and restore requests are executed. ABS provides the following default environment policies:
Archive Backup System for OpenVMS Installation Guide describes the characteristics of these default environment policies. To meet your storage management needs, you may have to create additional environment policies. Review the information in this chapter to determine how to create an environment policy that meets your site-specific needs.
The information in this chapter includes:
Chapter 7, Chapter 8, and Chapter 9 are designed for use with the worksheets provided in Appendix E. Use the worksheets as described in Section 7.1.
Use the procedure described in See Creating an ABS Environment Policy to create an ABS environment policy:
Each environment policy must have a unique name made up of alphanumeric characters, hyphens (-), underscores (_), or a combination thereof. The environment policy name should be a storage policy name with the characters _ENV appended to it. For example, if you create a storage policy named 5_YEARS, the matching environment policy name should be 5_YEARS_ENV. ABS looks for matching storage and environment policy names. If there is no matching environment policy name, ABS uses the default environment policy named ABS_DEFAULT_ENV.
Restriction:
The environment policy name cannot exceed 31 characters.
The Save and Restore Environment Options enable you to set up the conditions under which save and restore requests execute using this environment policy. The following sections describe these conditions and how to set them.
The Who to Notify option enables you to specify several methods of notification and the conditions under which notifications about ABS save and restore requests are sent. You can notify personnel when ABS save and restore requests complete successfully, or when and why they fail.
To enable these options, use the procedure in Selecting the Notification Options.
An ABS environment policy provides several conditions for when to notify. To set these conditions, do the following:
The Data Verification options provide you with the ability to specify default data safety checks for ABS save and restore requests that use this environment policy. Data verification includes checks such as full data verification, redundancy checks, and tape read verification. These options help ensure data safety by performing various types of data verification during execution of ABS save and restore requests.
To enable one or more Data Verification options, do the following:
The Listing option allows you to specify the default listing file behavior for save and restore requests that use this ABS environment policy.
The Processing Commands option enables you to specify pre- and postprocessing commands. A preprocessing command executes once, prior to the start of the save or restore request, and a postprocessing command executes once, after the completion of the save or restore request. This option accepts a platform-specific string.
ABS generates logical names that may be used within the prologue and epilogue (pre- and postprocessing commands) for the entire save request. These may be referenced in a command procedure that is executed as a prologue or epilogue for the environment policy.
These logical names are defined in the process JOB table. They exist for the duration of the save request; once the save request is complete, the logicals are no longer available.
One such logical contains the name of the device used by the save request. If multiple output devices were used, this logical is a comma-separated list.
The Original File option enables you to select the default behavior for the original data (OpenVMS disk or file) being saved.
To set the original file option, do the following:
The Retry Options enable you to specify the number of times, and how often, a save or restore request should be retried before operator intervention is required.
Restriction:
If the retry count is set to 0 (zero), the minutes between retry attempts also must be set to 0 (zero).
Default:
If you do not assign a value to this option, the default values are 3 for Retry Count and 15 minutes for Minutes between Retry Attempts.
Hint:
Each time a retry attempt occurs, the ABS environment policy generates a warning message. If you want to be notified of the retry attempt, select Warnings Occur in the When option (refer to Who to Notify).
The User Profile option enables you to configure an environment policy that allows save and/or restore requests to run in the context of either the ABS process or the user's process, provided the users are authorized to create their own save and restore requests.
The Open Files option allows you to save files that are open during the execution of the save request. If you select Hot Backup, this allows you to save Oracle Rdb databases that are open. Ignore Writers enables you to save OpenVMS, UNIX, and NT files that are open.
The Tape Drives option enables you to set the number of tape drives used for each ABS save and restore request that uses this environment policy. If the requested number of tape drives is available, ABS allocates those drives for the save or restore request. If the requested number is not available, ABS uses the number of tape drives that are available.
For example, if you create a save request that uses an environment policy where the number of tape drives is set to 3, ABS allocates three drives when the save request job begins. This means those three drives are unavailable to other applications during the time the save request is executing.
Default:
The default value is 1. This value cannot be 0 (zero).
Recommendation:
Use the default value of 1. Allocating more than one drive per save request will constrain the tape drives. This means those tape drives will not be available to other applications while the save request is executing.
ABS supports the following types of compression for UNIX clients:
Recommendation:
It is recommended that you use the default UNIX environment policy or create an ABS environment policy with the desired compression options for all of your UNIX save requests. Do not mix compression types for UNIX save requests.
For example, use the same ABS environment policy for all of your UNIX save requests. This way, all UNIX data saved using ABS will have the same compression option set.
If you use different types of compression options on different save requests (UNIX save requests use different ABS environment policies), the saved data will be compressed differently. When you attempt to restore the UNIX data, you must know which compression option was used to save the original UNIX data. ABS is unable to restore the file without being instructed to use the proper compression.
Restriction:
This qualifier is not valid for NT client save and restore operations.
ABS provides you with the ability to either back up the UNIX symbolic links only, or to follow the UNIX symbolic links and back up the data as well.
Restriction:
This option is valid only for UNIX client operations.
ABS allows you to save only the root file system (such as the disk the root directory resides on), or an entire filesystem type if the filesystem spans physical devices. ABS supports the following options:
The Environment Policy Access Control option enables you to authorize users to access the environment policy and to grant those users certain types of access control.
The user who creates the environment policy is automatically granted access to it with all of the access controls (READ, WRITE, SHOW, SET, DELETE, and CONTROL). To add other users and provide them access to the environment policy, use the procedure in Enabling Access to an ABS Environment Policy.
Once you have entered all the information for the environment policy, do the following:
Save requests are ABS policy objects that define the data that you want to save and when to save that data. A save request uses a storage policy that defines the volumes on which the data will be saved, and an environment policy that defines the conditions under which the save request will execute.
Each save request must have a unique name and be made up of alphanumeric characters, hyphens (-), underscores (_), or a combination thereof. The default save request name is the user name appended by the current date and time.
To change the default save request name:
Restriction:
A save request name cannot exceed 40 characters. Because ABS generates the corresponding log file name by appending the save request name with eight additional characters, it is recommended that you limit the save request name to 32 characters.
Use this option to specify the file name, disk name, or set of file or disk names that you want to save:
To add a file name, disk name, or a set of file or disk names to the save request, use the procedure in See Adding Disk or File Names To A Save Request.
1. Click Add... in the What Data To Save area. ABS displays the What Data To Save window and defaults to the What to Save option.
2. In the left-hand column, select the type of data to save. For example, to save an entire OpenVMS disk, click Entire OpenVMS Disk. To save an NT file, click NT Individual File.
3. Select the node where the file or disk resides. This is the first option in the right-hand column. If the node is not available from the listed nodes, click Other and enter the node name in the box.
4. Enter the file name or disk name that you want to save. You may enter multiple file or disk names (up to eight) as a comma-separated list; however, all names in the list must be the same file type (OpenVMS, Oracle Rdb, NT, or UNIX). To correctly enter the disk or file name, see Correctly Entering the Disk Name or File Name. For restrictions on adding disk names and file names to a save request, see Save Request Restrictions.
5. To exclude a specific file or set of files from the save request, click the box next to Data to Exclude and enter the file name. For example, enter *.COM to exclude all files with a .COM file extension.
To correctly enter the disk or file name according to file type, see the instructions in See Correctly Entering the Disk Name or File Name.
ABS imposes the following restrictions when adding disk names or file names to a save request:
Use the pre- and post- processing commands to submit commands that take action either before or after each file or disk name entered for the save request.
Enter a command string for these options:
ABS generates these logical names for use within a prologue or epilogue (pre- or post- processing command) for a SAVE or RESTORE request. A set of "_n" logicals will be generated for each include specification in the request. The series will begin with 1 and continue for each include specification within the request. These may be referenced in a command procedure which is executed as a prologue or epilogue for the save or restore request.
These logical names are defined in the process JOB table. They exist during the save request. Once the save request is complete the logicals will no longer be available.
If you do not assign a string to this option, ABS does not execute any pre- or post processing commands.
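As a sketch, a postprocessing command procedure might do nothing more than record the job-table logicals that ABS defines (the wildcard filter is illustrative; check your save request log file for the actual logical names):
$ ! EPILOGUE.COM -- hypothetical postprocessing command procedure
$ WRITE SYS$OUTPUT "Include specification processed"
$ SHOW LOGICAL/JOB ABS*  ! list the ABS-defined job-table logicals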
Enter the specific selection criteria for the disk or file name you are adding to the save request. You can save data specifically by a before or since date.
Enter any backup-agent-specific qualifiers for the save request. Backup agent qualifiers are determined by the type of file or disk that you are backing up, such as OpenVMS, Oracle Rdb, UNIX, or NT files. Qualifiers supplied here may supersede qualifiers set by ABS, so use this option with extreme caution.
ABS provides the option of immediately executing the save request (described in See Immediately Executing the Save Request), or setting up the save request to execute on a repetitive schedule (described in See Repetitive Scheduling of Save Request). ABS provides several scheduling options for a save request.
To schedule a save request to execute on a repetitive schedule, use the following procedure:
Notice also in Example B that ABS repeats the full backup operation until a successful full backup is achieved on Wednesday. If one of the incremental backup operations fails, ABS skips to the next level of incremental backup; unlike the full backup operation, ABS does not repeat the same level of incremental backup during the 7-day cycle.
In Example B, the Level 4 incremental backup operation failed on Friday. On Saturday, ABS resumes with a Level 5 incremental backup operation. The contents of the incremental backups are nevertheless correct, because ABS backs up all files that are new or modified since the last successful full backup or the last successful lower-level incremental backup.
The save request log file contains the following BACKUP command issued by ABS for Saturday, 05-APR-1997:
$ BACKUP/.../SINCE="03-APR-1997 02:00:00.00"
Because the last successful lower level incremental backup operation was performed on 03-APR-1997, all changes to any file since the date and time specified in the BACKUP command are included in the backup operation.
ABS enables you to create and use specific policies that define which volumes and tape drives to use for save requests, and in what type of environment to execute the save requests.
A save request must use a storage policy and an execution policy. ABS supplies default policies, but you can use different policies if you so choose.
To change the storage policy or execution policy, click on the box next to each policy and select one of the policies displayed in the list box.
The Save Request Access Control option enables you to authorize other users to access the save request, and to enable those users with specific access controls.
The default is the user who creates the save request.
Click the box next to the access control that you want to enable for the user you are adding or modifying:
Once you have entered all of the information for the save request, click OK on the main Save Request window to submit the save request to the ABS policy database.
Result:
ABS displays the Submit Save Request window, which contains the basic information for the save request.
Restore requests are ABS policy objects that define the data that you want to restore. A restore request references an ABS catalog that contains the backup information about that data. It also uses an ABS environment policy that defines the conditions under which the restore request will execute.
Each restore request has a unique name and is made of alphanumeric characters, hyphens (-), underscores (_), or a combination thereof. The default restore request name is the user name appended by the current date and time.
Use this option to specify the file name, disk name, or set of file or disk names that you want to restore:
Upon initial creation of a new restore request, the What Data To Restore area is, by default, blank. Once you have added a file name or disk name, it appears in the What Data To Restore area.
To add a file name, disk name, or a set of file or disk names, use the procedure in See Adding Disk or File Names To A Restore Request.
1. Click Add... in the What Data To Restore area.
Result: ABS displays the What Data To Restore window.
2. In the left-hand column, select the type of data to restore. For example, to restore an entire OpenVMS disk, click Entire OpenVMS Disk. To restore an NT file, click NT Individual File.
3. Enter the file name or disk name that you want to restore. You may enter multiple file or disk names (up to eight) as a comma-separated list; however, all names in the list must be the same file type (OpenVMS, Oracle Rdb, NT, or UNIX). To enter the disk and file names correctly, see Entering The Correct Syntax For A Restore Request. For restrictions on adding disk names and file names to a restore request, see Restore Request Restrictions.
4. To exclude a specific file or set of files from the restore request, click the box next to Data to Exclude and enter the file name. For example, enter *.COM to exclude all files with a .COM file extension.
See Entering The Correct Syntax For A Restore Request describes how to enter the correct syntax for a restore request according to the file type.
ABS imposes the following restrictions when adding file names or disk names to a restore request:
Recommendation:
Unless you are restoring an entire OpenVMS disk, specify the full path name. Include the disk name, directory name, and file name.
If you are restoring an entire OpenVMS disk, specify only the disk name and include the trailing colon:
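For example (a hypothetical disk name):
DISK$USER1: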
Use the pre- and post- processing commands to submit commands (platform specific) that take action either before or after each file or disk name entered for the restore request.
Enter a command string for these options:
ABS generates these logical names for use within a prologue or epilogue (pre- or post- processing command) for a SAVE or RESTORE request. A set of "_n" logicals will be generated for each include specification in the request. The series will begin with 1 and continue for each include specification within the request. These may be referenced in a command procedure which is executed as a prologue or epilogue for the save or restore request.
These logical names are defined in the process JOB table. They exist during the save request. Once the save request is complete the logicals will no longer be available.
Default:
If you do not assign a string to this option, ABS does not execute any pre- or postprocessing commands.
ABS enables you to specify a storage policy to use for the restore request, and enables you to specify under what conditions to execute the restore request.
The Storage Policy option allows you to specify the name of the storage policy that contains the data that you want to restore. Specifying a storage policy name instructs ABS to search for the data in the ABS catalog assigned to that storage policy.
Default storage policy:
If you do not specify this option, ABS searches the default storage class, SYSTEM_BACKUPS.
An environment policy defines under what conditions to execute the restore request. For example, some conditions may be in what context to run the restore request (in the context of the user or ABS), who to notify when the restore request completes, under which error conditions to notify a user, and so forth. See Chapter 8 , Creating Environment Policies for more details.
Requirement:
A restore request must use a storage policy and an environment policy. ABS provides default policies, but you can use different policies if you so choose. Click on Storage Policy and Execution Policy and select one of the policies displayed in the list box.
The Restore To option enables you to specify an output location for the restored data. Use the following procedure:
The Access Control option enables you to authorize other users to access the restore request and to grant those users specific access controls.
By default, access is granted to the user who creates the restore request. To add other users and access controls, see the procedure in Enabling Access Control To A Restore Request.
Once you have entered all the information for the restore request, click OK on the main window to submit the restore request to ABS policy database.
Result:
ABS displays the Submit Restore Request window, which contains the basic information for the restore request. Click Submit to submit the restore request, or click Cancel to cancel the submit operation. If you cancel, the main window reappears and you can modify any information, or click Cancel again to cancel creating the restore request.
Save and restore requests are executed by calling the command procedure ABS$SYSTEM:COORDINATOR.COM with the request's universal ID as parameter 1. Typically this is done in a detached process created either through the OpenVMS Queue Manager or through a third-party scheduler program. ABS supports five options for scheduling requests, all of which are described in this chapter.
During installation, you are asked to choose the scheduler interface to be used by ABS for scheduling requests. The chosen scheduler interface is stored in the file ABS$SYSTEM:ABS$POLICY_CONFIG.DAT:
! Scheduler option, must be one of the list below:
! (written by KITINSTAL at installation)
! NONE
! DECSCHEDULER
! EXT_SCHEDULER
! INT_QUEUE_MANAGER
! EXT_QUEUE_MANAGER
!
ABS$SCHEDULER = INT_QUEUE_MANAGER
You can change the scheduler setting by editing this file with a text editor. ABS must be restarted for the change to take effect.
Before changing the scheduler interface option from either EXT_SCHEDULER or DECSCHEDULER, make sure that ABS jobs are no longer being scheduled through the old scheduler interface: either delete all ABS jobs from the scheduler database or place the jobs on hold.
No preliminary change is necessary when switching from either INT_QUEUE_MANAGER or EXT_QUEUE_MANAGER because ABS jobs do not remain in the Queue Manager's database.
Once ABS has been restarted and the new scheduler interface option is either EXT_SCHEDULER or DECSCHEDULER, you must set the start time for all requests that should be scheduled. At this point, all save requests should become jobs in the new scheduler's database.
If switching to either INT_QUEUE_MANAGER or EXT_QUEUE_MANAGER no extra step is necessary. ABS will call the Queue Manager when necessary to create the batch job for the request.
This option causes ABS to call the OpenVMS Queue Manager to create, delete, modify, and show jobs for save and restore requests. All jobs are queued to a batch queue called ABS$<node_name> (for example, ABS$FUDGE), where node_name is the execution node name for the request. With this option, the policy engine has an active scheduler function that calls the OpenVMS Queue Manager to create jobs for requests that are due to run.
For a request which is scheduled to run on a node which is not the current node and which is not in the current OpenVMS cluster, ABS sends a message to the ABS$COORD_CLEAN process on the remote node. The batch job for the request is then created, deleted or modified on the remote node by the ABS$COORD_CLEAN process.
No further setup is necessary to use this option. On your ABS VMS Client nodes, be sure that the ABS$COORD_CLEAN process is running and the ABS$<node_name> queue is working.
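A quick check might look like this (the node name FUDGE is taken from the example above):
$ SHOW QUEUE ABS$FUDGE/ALL
$ SHOW PROCESS ABS$COORD_CLEAN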
Information about ABS scheduling activities is logged into the ABS$LOG:ABS$POLICY_<node_name>.LOG file. To receive more information in the log file, you may define a logical:
$ DEFINE/SYSTEM ABS_SCHEDULER_LOGGING TRUE
An OPCOM message is also sent to the TAPE operator in case of ABS scheduling failures.
This option causes ABS to execute the command procedure ABS$SYSTEM:ABS$EXT_QUEUE_MANAGER.COM. The command procedure uses DCL to interface with the OpenVMS Queue Manager to create, delete, modify, or show a batch job for a request. With this option, the policy engine has an active scheduler function that calls the command procedure to create jobs for requests that are due to run.
The template file ABS$SYSTEM:ABS$EXT_QUEUE_MANAGER.TEMPLATE provided during installation should be copied to ABS$SYSTEM:ABS$EXT_QUEUE_MANAGER.COM; it can be used as is or modified if necessary. However, take care if you modify the template command procedure: a failure in the command procedure could cause save or restore requests to fail. For debugging purposes, the command procedure runs in a subprocess of the ABS$POLICY process and creates a log file called ABS$LOG:ABS$EXT_QUEUE_MANAGER_<request_name>.LOG. This log may be used for troubleshooting problems with the command procedure.
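For example:
$ COPY ABS$SYSTEM:ABS$EXT_QUEUE_MANAGER.TEMPLATE -
_$ ABS$SYSTEM:ABS$EXT_QUEUE_MANAGER.COM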
For a request which is scheduled to run on a node which is not the current node and which is not in the current OpenVMS cluster, ABS sends a message to the ABS$COORD_CLEAN process at the remote node. The ABS$COORD_CLEAN process then spawns a subprocess to execute the local copy of ABS$SYSTEM:ABS$EXT_QUEUE_MANAGER.COM.
No further setup is necessary to use this option. On your ABS VMS client nodes, be sure that the ABS$COORD_CLEAN process is running and the ABS$<node_name> queue is working.
Use this option as an alternative to INT_QUEUE_MANAGER when the OpenVMS Queue Manager should execute requests but modifications are necessary to allow for more sophisticated schedules; for example, a save request that should never run on the first day of a month.
See the description in the template procedure about the input and output parameters.
Information about ABS scheduling activities is logged into the ABS$LOG:ABS$POLICY_<node_name>.LOG file. To receive more information in the log file, you may define a logical:
$ DEFINE/SYSTEM ABS_SCHEDULER_LOGGING TRUE
An OPCOM message is also sent to the TAPE operator in case of ABS scheduling failures.
This option causes ABS to execute the command procedure ABS$SYSTEM:ABS$EXT_SCHEDULER.COM. The command procedure uses DCL to interface with a third-party scheduler product to create, delete, modify, or show a scheduled job for a request.
The template file ABS$SYSTEM:ABS$EXT_SCHEDULER.TEMPLATE provided during installation is an example of how to interface with POLYCENTER Scheduler. The template file should be copied to ABS$SYSTEM:ABS$EXT_SCHEDULER.COM. This command procedure needs to be carefully adjusted to work with any other scheduler product; a failure in the command procedure could cause save or restore requests to fail. The command procedure runs in a subprocess of the ABS$POLICY process and creates a log file called ABS$LOG:ABS$EXT_SCHEDULER_<request_name>.LOG. This log may be used for troubleshooting problems with the command procedure.
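For example:
$ COPY ABS$SYSTEM:ABS$EXT_SCHEDULER.TEMPLATE -
_$ ABS$SYSTEM:ABS$EXT_SCHEDULER.COM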
In contrast to EXT_QUEUE_MANAGER, ABS calls this interface only once to create a new job for a request. ABS assumes that the external scheduler does its own rescheduling of requests and can schedule requests to run on a remote node.
Information about ABS scheduling activities is logged into the ABS$LOG:ABS$POLICY_<node_name>.LOG file. To receive more information in the log file, you may define a logical:
$ DEFINE/SYSTEM ABS_SCHEDULER_LOGGING TRUE
An OPCOM message is also sent to the TAPE operator in case of ABS scheduling failures.
This option should be used to tie ABS into an existing scheduler product.
This option causes ABS to call the POLYCENTER Scheduler V2.1b (DECscheduler) programming interface to create, delete, modify or show a job for a request.
Refer to the POLYCENTER Scheduler documentation on how to setup and manage the scheduler.
This option is qualified for use with POLYCENTER Scheduler V2.1b only. It may work with other versions of DECscheduler, but correct operation is not guaranteed.
This option causes ABS not to call any scheduler to create, delete, modify, or show a job for a request.
The SHOW SAVE/SYMBOLS and SHOW RESTORE/SYMBOLS commands provide a means of obtaining scheduling information for a request and a means to execute that request through the following DCL command line sequence:
$ ABS SHOW SAVE my_request/SYMBOLS
$ SUBMIT ABS$SYSTEM:COORDINATOR /PARAMETER="''ABS_UID'"/USER=ABS
This executes the request 'my_request' immediately on the current node.
This option should be used if neither of the other options can be used or there is a simple requirement for executing requests.
ABS calls the scheduler interface from process ABS$POLICY.
Option INT_QUEUE_MANAGER: ABS uses the programming interface to the OpenVMS Queue Manager. A scheduler thread is started within the ABS$POLICY process to submit due requests to the OpenVMS Queue Manager. The request will be submitted to batch queue ABS$<execution_node>. If the batch queue is available on the local node or is within the OpenVMS cluster the ABS$POLICY process calls the local Queue Manager. For remote nodes the ABS$POLICY process forwards the request via DECnet to the ABS$COORD_CLEAN process running on the execution node. The ABS$COORD_CLEAN process then submits the request to the local batch queue ABS$<execution_node>.
Failures to call the OpenVMS Queue Manager will be logged in ABS$LOG:ABS$POLICY_<node_name>.LOG or ABS$LOG:ABS$COORD_CLEANUP_<node_name>.LOG. The scheduler thread in the ABS$POLICY process sends OPCOM messages to operator TAPES if it fails to schedule a request.
Option EXT_QUEUE_MANAGER: This option uses the same method as INT_QUEUE_MANAGER to schedule jobs locally or remotely, but instead of calling the programming interface of the OpenVMS Queue Manager, a subprocess is created to run the command procedure ABS$SYSTEM:ABS$EXT_QUEUE_MANAGER.COM. The command procedure is responsible for issuing the DCL commands to create, delete, modify, and show batch jobs. The command procedure must also return status about the commands and, in some cases, additional information. See the command procedure file for more details.
Failures to execute the command procedure are logged in ABS$LOG:ABS$POLICY_<node_name>.LOG or ABS$LOG:ABS$COORD_CLEANUP_<node_name>.LOG. The scheduler thread in the ABS$POLICY process sends OPCOM messages to operator TAPES if it fails to schedule a request. Each activation of the command procedure creates a log file ABS$LOG:ABS$EXT_QUEUE_MANAGER_<request_name>.LOG. The request name portion of the log file name may be truncated to form a valid OpenVMS file specification.
Option EXT_SCHEDULER: This option uses the same method as EXT_QUEUE_MANAGER to interface with the scheduler. A subprocess is created to run the command procedure ABS$SYSTEM:ABS$EXT_SCHEDULER.COM. The command procedure is responsible for issuing the DCL commands to create, delete, modify, and show jobs for a third-party scheduler product. The command procedure must also return status about the commands and, in some cases, additional information. See the command procedure template file ABS$SYSTEM:ABS$EXT_SCHEDULER.TEMPLATE for more details. In contrast to option EXT_QUEUE_MANAGER, ABS assumes that the third-party scheduler product reschedules all requests, locally and remotely, so ABS does not call the scheduler when a request is due to run.
Failures to execute the command procedure are logged in ABS$LOG:ABS$POLICY_<node_name>.LOG. Each activation of the command procedure creates a log file ABS$LOG:ABS$EXT_SCHEDULER_<request_name>.LOG. The request name portion of the log file name may be truncated to form a valid OpenVMS file specification.
Option DECSCHEDULER: ABS uses the programming interface to POLYCENTER Scheduler V2.1b. ABS calls this scheduler only once locally to create any request since this scheduler product can reschedule jobs locally and remote.
Failures to call the POLYCENTER Scheduler will be logged in ABS$LOG:ABS$POLICY_<node_name>.LOG.
ABS allows you to modify and delete ABS policies and requests that already exist in the ABS policy database. To modify policies and requests, see the requirements listed in Requirements for Modifying and Deleting Policies and Requests.
To modify or delete an ABS policy or request, invoke ABS GUI as described in Chapter 6 , Displaying ABS Graphical User Interface . Click Modify or Delete Requests & Policies.
Result:
ABS displays the Modify or Delete Requests And Policies window, illustrated in See Modify or Delete Requests And Policies Window.
Use the following procedure to modify or delete an existing ABS request or policy.
Click the box next to Data to Lookup and enter the disk or file name that you want to find. You can enter up to eight disk or file names as a comma-separated list.
Recommendation:
It is recommended that you do not mix file types in one lookup operation. For example, do not specify OpenVMS disk or file names along with UNIX or NT disk or file names in one lookup operation.
The file name syntax depends upon the file type, as described in File Type.
This option allows you to specify the type of file to search for. The default is All, but this type of lookup is not recommended. Instead, click the box next to File Type and select the type of file, such as VMS files. See Entering the Correct Lookup Syntax to determine how to enter the lookup syntax correctly.
See Entering The Correct Syntax For A Lookup Operation describes how to enter the correct syntax for the lookup operation.
This option enables you to specify the node name where saved data originally resided. Use the following procedure to enter the node name:
If you know the storage policy name or catalog name where the saved data resides, you can constrain the search to that particular storage policy or catalog.
Restriction:
These options are mutually exclusive. You can select either a storage policy name or a catalog name, but not both.
Archived Dates to Search option enables you to constrain the lookup operation to an exact specified date, before a specified date, or after a specified date.
See Finding Saved Data By Date for how to constrain the lookup operation by a specified date.
Restriction:
If you are looking for data that was saved using SLS, only the On Exact Date option is valid. This is because SLS does not supply any other dates.
To submit the lookup operation once you have entered the correct data, use the following procedure:
ABS allows you to monitor the status of an active ABS job from the GUI. To view the status of an active job, use the following procedure:
An ABS catalog contains historical information about ABS save operations. This historical information includes the location of data that was saved using ABS. For this purpose, ABS provides a catalog named ABS_CATALOG.
Most businesses can operate efficiently using only ABS_CATALOG provided by ABS. However, your business needs may require you to create additional catalogs. Some of those business needs may be:
To create an ABS catalog, follow these steps:
After invoking the utility, the catalog utility program prompts for the following information:
Function ([CREATE],SHOW,MODIFY,DELETE):
Catalog name: PRIVATE_CATALOG
Catalog type ([BRIEF],SLS,FULL_RESTORE):
Catalog owner [current_node::current_username]:
Use faster staging catalog operation ([YES],NO):
Catalog location [ABS$CATALOG]:
Default:
If you do not specify an answer to any of the options (except the catalog name), the catalog object utility selects the defaults (enclosed in square brackets ([])). If ABS$CATALOG points to a search list, the catalog files will be created in the location pointed to by the first element in the search list.
You can specify a different device and directory for the new catalog files. To use such a catalog, add a new search list element to the logical name ABS$CATALOG in ABS$SYSTEM:ABS$SYSTARTUP.COM. Then either restart ABS or, if you do not want to restart ABS at this point, modify the current definition of the logical using the DCL DEFINE command:
$ DEFINE/SYSTEM/EXECUTIVE/NOLOG ABS$CATALOG -
ABS$ROOT:[CATALOG],DISK$BACKUPS:[ABS_CATALOGS]
This allows catalog files to be located in DISK$BACKUPS:[ABS_CATALOGS].
Restriction:
If you create an SLS ABS catalog type, you cannot enable staging.
The BRIEF catalog type stores all information about save requests performed and all files saved. It allows individual file lookups and restores. This is the default catalog type.
The FULL_RESTORE catalog type stores only information about the save requests performed. No information about individual file names is stored in the catalog. A FULL_RESTORE catalog is drastically smaller than a BRIEF catalog, but you cannot restore individual files. Save requests using this catalog type must be of type FULL and specify only a disk name. Staging does not apply to these catalogs.
You can still use the backup agent outside of ABS to do a selective restore.
$ BACKUP MKA500:01MAY20001234567. /SELECT=[MyDir]MyFile.Dat *
restores the file [MyDir]MyFile.Dat from save set 01MAY20001234567 to the current directory.
To view information about saved disks in a FULL_RESTORE catalog, use the ABS REPORT SAVE_LOG command. The report shows you the volume ID and save set name used.
When a save request uses a FULL_RESTORE type of catalog, the following message is displayed in the save request log file:
"Full_Restore catalog type, individual object names will not be logged"
An ABS FULL_RESTORE catalog imposes the following restrictions:
ABS allows you to create ABS catalogs solely for the purpose of restoring data saved using SLS. These catalogs are not maintained in the ABS database and are used only for restore operations, not for save operations.
ABS catalogs that are created for SLS provide the following features:
Restrictions:
An ABS catalog created for SLS restore operations imposes the following restrictions:
To create a catalog for restoring data that was saved using SLS, see the procedure in Creating an ABS Catalog For SLS Restores.
ABS allows you to enable staging for an ABS catalog. A catalog with staging enabled improves the performance of the save operation, because the catalog entry for a saved file is first written to a sequential disk file in ABS$CATALOG. Once the backup operation has completed, a separate process moves the entries from the staging catalog file to the final catalog (the catalog named in the storage class associated with the save request).
The final catalog does not contain the information about the save operation until the staging process has completed. If you request a backup operation and immediately look in the final catalog, the entries may not be available yet. The backup operation and the staging process must complete before the newly saved files can be looked up in the catalog.
You can always modify the staging setting for an existing catalog using the ABS_CATALOG_OBJECTS utility. The use of this feature is highly recommended.
To view a catalog definition, use the ABS_CATALOG_OBJECT utility and select the SHOW function.
To view the contents of a catalog use the ABS LOOKUP command or select the lookup function in the GUI. The DCL commands and GUI operations that search the catalog files require the user to have read access to those catalog files. This is because these operations are executed in the context of the user, and not in the context of the ABS account.
With the default catalog protection, a user must be logged into a system account or the ABS account, or have elevated privileges that grant read access to the files (such as BYPASS, SYSPRV, or READALL).
Examples of such operations are using ABS LOOKUP command and ABS REPORT SAVE_LOG DCL command, or using the LOOKUP option from the GUI.
If you choose to MODIFY a catalog object, you are prompted for the fields in the same manner as for CREATE. To be sure of the values to set, first do a SHOW of the catalog so that you do not inadvertently change fields that you do not want to change.
Restriction:
You may not modify the CATALOG TYPE. To change the catalog type you must first delete and then recreate the catalog.
The DELETE option deletes the catalog object and the catalog files located in the ABS$CATALOG directory. If you do not wish to delete the actual catalog files, copy them to another name or location before executing the DELETE function.
ABS provides a catalog conversion command procedure that improves target catalog update performance by doing a file-to-file conversion. A target catalog is the final catalog where ABS entries reside; a staging catalog is a temporary catalog that increases ABS performance during save operations. This section describes how to convert ABS catalogs. For an additional improvement in catalog update performance, you can also move the target catalogs to a different disk by defining a system-level search list logical for ABS$CATALOG in ABS$SYSTARTUP.COM.
Run the conversion command procedure for each individual catalog.
$ @ABS$SYSTEM:ABS$CONVERT_CATALOG MyCatalog
The command procedure creates a new copy of the catalog files. The new and the old files will reside in the same directory. The command procedure also allows you to move the converted files to a different disk or directory.
$ @ABS$SYSTEM:ABS$CONVERT_CATALOG <catalog_name> <disk:>[<dir>]
You can also improve the target catalog update performance by moving the target catalogs to a different disk.
To do this, follow the procedure in See Moving Target Catalogs to a Different Disk.
Note, however, that ABS always writes the staging catalogs to the first element in the ABS$CATALOG search list; the entries are then moved to the target catalogs, which can reside on a different disk.
With staging enabled, ABS creates a command procedure at the end of a save request. A separate process is created to execute this command procedure, which moves all entries from the staging catalog to the final catalog. If all entries are moved successfully, the command procedure is deleted. If ABS failed to execute the command procedure, you can run it manually. To do this, enter the following command at the system prompt on the node where the save request was executed:
$ @ABS$CATALOG:<catalog_name>_m_n.COM
To determine which file to execute, search your save request log files in ABS$LOG to find the file names for the staging files.
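For example, the DCL SEARCH utility can locate the staging file names in a save request log (the log file name shown is illustrative):
$ SEARCH ABS$LOG:MY_SAVE_REQUEST.LOG "Staging"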
The save request ABS$LOG:<save_request_name>.LOG file will contain the following information:
COORDINATOR: Staging process PID : 00006505
COORDINATOR: Staging catalog : ABS$CATALOG:ABS_CATALOG_4.STG;1
COORDINATOR: Staging procedure : ABS$CATALOG:ABS_CATALOG_4_1.COM;1
COORDINATOR: Staging logfile : ABS$LOG:ABS_CATALOG_4.LOG
The ABS catalog files grow as you continue to execute save requests that use those catalogs. The sizes depend on the number of files saved and the retention period used. For as long as the retention period has not expired, more entries are added to the catalog. Once the retention period is reached, the daily ABS_CLEAN_CATLG_<node_name> batch job removes expired entries from the catalog. So the more files you save, and the longer you want to keep the archived data, the larger the catalog files.
Be sure to consider this information when creating catalogs and assigning retention values to your storage classes. It may be best to create a separate catalog for each storage class if the retention periods differ. For example, you might create a storage class called MONTHLY_SAVE_SC, with a retention period of one month, and a catalog to be used by that storage class; the catalog grows for one month and then maintains its size. For a storage class YEARLY_SAVE_SC, with a retention period of a year, use a different catalog; that catalog grows for one year and then maintains its size. If you have multiple catalogs, it is easier to move catalogs to different disks if their size exceeds the available space, or to do regular maintenance by running the ABS$SYSTEM:ABS$CONVERT_CATALOG command procedure.
ABS can have multiple catalogs. Each catalog comprises three RMS indexed sequential files:
These files must reside in the same directory. Different catalogs can be in different directories or different disk volumes.
The Transaction Log Entry file contains two entries per save request executed. It contains among other data the save set name, the tape's volume ID and the expiration date of the save set. Depending on record compression the average record size on disk is about 300 bytes.
The Archive Object Entry file contains one entry for each file backed up. It contains among other data the device and file name. Depending on record compression and depending on actual filename sizes the average record size on disk is about 300 bytes.
The Archive Object Entry Instance file contains an entry for every time a file is backed up. It does not contain the filename but a back pointer to the record in the AOE. Depending on record compression the average record size on disk is about 200 bytes.
TLE: This file grows in proportion to the number of save requests that are active.
AOE: This file grows in proportion to the number of files that are actively being backed up.
AOE_INSNC: This file can grow very large.
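As a rough sizing sketch (all numbers are assumptions for illustration): if a save request backs up 100,000 files each night and the retention period is 365 days, the AOE file stays near 100,000 x 300 bytes, or about 30 MB, while the AOE_INSNC file grows toward 100,000 x 365 x 200 bytes, or roughly 7.3 GB.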
At larger scales the numbers climb quickly: with 100 volumes of saved data, the AOE_INSNC file can reach 292 GB.
As you can see, catalog files can become quite large. Changing the backup schedule so that fewer files are saved, and using shorter retention periods, helps keep the catalogs smaller. If this cannot be achieved, reserve extra disk space for the ABS catalogs, with room for future expansion.
This chapter starts by describing the management concept of the Media and Device Management Services (MDMS) software and its implementation, followed by a description of the product's internal interfaces.
Media and Device Management Services V3.0A (MDMS) can be used to manage the locations of tape volumes in your IT environment. MDMS identifies all tape volumes by their volume label or ID. Volumes can be located in different places, such as tape drives or onsite locations. Requests can be made to MDMS to move volumes between locations. If automated volume movement is possible, as in a jukebox (tape loader, tape library), MDMS moves volumes without human intervention. MDMS sends out operator messages if human intervention is required.
MDMS allows scheduled moves of volumes between onsite and offsite locations (e.g. vaults).
Multiple nodes in a network can be set up as an MDMS domain. Note that:
MDMS is a client/server application. At a given time only one node in an MDMS domain will be serving user requests and accessing the database. This is the database server. All other MDMS servers (which are not the database server) are clients to the database server. All user requests will be delegated through the local MDMS server to the database server of the domain.
In case of failure of the designated database server, MDMS' automatic failover procedures ensure that any other node in the domain that has the MDMS server running can take over the role of the database server.
MDMS manages all information in its database as objects. See MDMS Object Records and What they Manage lists and describes the MDMS objects.
MDMS tries to reflect the true states of objects in the database. MDMS requests by users may cause a change in the state of objects. For some objects, MDMS can only assume the state, for example, that a volume has been moved offsite. Wherever possible, MDMS tries to verify the state of the object. For example, if MDMS finds a volume in a drive when it should have been in a jukebox slot, it updates the database with the current placement of the volume.
MDMS provides an internal callable interface to the ABS and HSM software. This interface is transparent to the ABS or HSM user; however, some MDMS objects can be selected from ABS and HSM.
MDMS communicates with the OpenVMS OPCOM facility when volumes need to be moved, loaded, unloaded, and for other situations where operator actions are required. Most MDMS commands allow control over whether or not an OPCOM message will be generated and whether or not an operator reply is necessary.
MDMS controls jukeboxes by calling specific callable interfaces. For SCSI-controlled jukeboxes, MDMS uses the MRD/MRU callable interface. For StorageTek jukeboxes, MDMS uses DCSC. You can still access these jukeboxes using the individual control software, but doing so can make objects in the MDMS database out of date.
The Media and Device Management Services Installation and Configuration Guide provides information about establishing the MDMS domain configuration. The information in this chapter goes beyond the initial configuration of MDMS, explaining concepts in more detail than the product installation and configuration guide. This chapter also includes procedures related to changing an existing MDMS configuration.
The major sections in this chapter focus on the MDMS domain and its components, and the devices that MDMS manages.
A sample configuration for MDMS is shown in Appendix E.
If you have MDMS/SLS V2.X installed, you can convert the symbols and database to MDMS V3. Appendix K describes what has changed, how to do the conversion, and how to use MDMS V2.9 clients with an MDMS V3 database server (for a rolling upgrade).
To manage drives and volumes, you must first configure the scope of the MDMS management domain. This includes placing the database in the best location to assure availability, installing and configuring the MDMS process on nodes that serve ABS 3 or HSM 3, and defining node and domain object record attributes. The MDMS domain is defined by the components described in the following sections.
The MDMS database is a collection of OpenVMS RMS files that store the records describing the objects you manage.
If you are familiar with the structure of OpenVMS RMS files, you can tune and maintain them over the life of the database. You can find File Definition Language (FDL) files in the MDMS$ROOT:[SYSTEM] directory for each of the database files. Refer to the OpenVMS Record Management System documentation for more information on tuning RMS files and using the supplied FDL files.
MDMS keeps track of all objects by recording their current state in the database. In the event of a catastrophic system failure, you would start recovery operations by rebuilding the system, and then by restoring the important data files in your enterprise. Before restoring those data files, you would have to first restore the MDMS database files.
Another scenario would be the failure of the storage system on which the MDMS files reside. In the event of a complete disk or system failure, you would have to restore the contents of the disk device containing the MDMS database.
The procedures in this section describe ways to create backup copies of the MDMS database. These procedures use the MDMS$SYSTEM:MDMS$COPY_DB_FILES.COM command procedure, which copies the database files with the CONVERT/SHARE command.
The first procedure describes how you can make backup copies of just the MDMS database files using the OpenVMS Backup Utility. This procedure does not account for other files on the device.
The second procedure shows how to process the MDMS database files for an image backup. The image backup could be part of a periodic full backup and subsequent incrementals. This procedure also describes how to use the files in case you restore them.
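As a sketch of the underlying copy step (the database file name and target directory here are placeholders, not actual product file names), a shared-access copy of one RMS database file looks like this:
$ ! Copy one database file while the server keeps it open for shared access
$ CONVERT/SHARE MDMS$ROOT:[DATABASE]MDMS$VOLUME_DB.DAT -
$_ BACKUP_DISK:[MDMS_SAVE]MDMS$VOLUME_DB.DAT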
In the event the disk device on which you keep the MDMS database runs out of space, you have the option of moving the MDMS database, or moving other files off the device. The procedure described in this section explains the actions you would have to perform to move the MDMS database. Use this procedure first as a gauge to decide whether moving the MDMS database would be easier or more difficult than moving the other files. Secondarily, use this procedure to relocate the MDMS database to another disk device.
This section describes the MDMS software process, including server availability, interprocess communication, and start up and shut down operations.
Each node in an MDMS domain has one MDMS server process running. Within an MDMS domain only one server will be serving the database to other MDMS servers. This node is designated as the MDMS Database Server, while the others become MDMS clients. Of the servers listed as database servers, the first one to start up tries to open the database. If that node can successfully open the database, it is established as the database server. Other MDMS servers will then forward user requests to the node that has just become the database server.
Subsequently, if the database server fails because of a hardware failure or a software-induced shutdown, the clients compete among themselves to become the database server. Whichever client is the first to successfully open the database becomes the new database server. The other clients will then forward user requests to the new database server. User requests issued on the node that is the database server will be processed on that node immediately.
During installation you create the MDMS user account as shown in the following UAF listing. This account is used by MDMS for every operation it performs.
Username: MDMS$SERVER Owner: SYSTEM MANAGER
Account: SYSTEM UIC: [1,4] ([SYSTEM])
CLI: DCL Tables:
Default: SYS$SYSROOT:[SYSMGR]
LGICMD: SYS$LOGIN:LOGIN
Flags: DisForce_Pwd_Change DisPwdHis
Primary days: Mon Tue Wed Thu Fri Sat Sun
Secondary days:
No access restrictions
Expiration: (none) Pwdminimum: 14 Login Fails: 0
Pwdlifetime: 30 00:00 Pwdchange: 1-JUL-1998 12:19
Maxjobs: 0 Fillm: 500 Bytlm: 100000
Maxacctjobs: 0 Shrfillm: 0 Pbytlm: 0
Maxdetach: 0 BIOlm: 10000 JTquota: 4096
Prclm: 10 DIOlm: 300 WSdef: 5000
Prio: 4 ASTlm: 300 WSquo: 10000
Queprio: 0 TQElm: 300 WSextent: 30000
CPU: (none) Enqlm: 2500 Pgflquo: 300000
Authorized Privileges:
DIAGNOSE NETMBX PHY_IO READALL SHARE SYSNAM SYSPRV TMPMBX WORLD
Default Privileges:
DIAGNOSE NETMBX PHY_IO READALL SHARE SYSNAM SYSPRV TMPMBX WORLD
MDMS creates the SYS$STARTUP:MDMS$SYSTARTUP.COM command procedure on the initial installation. This file includes logical assignments that MDMS uses when the node starts up. The installation process also offers an opportunity to make initial assignments to the logicals.
If you install MDMS once for shared access in an OpenVMS Cluster environment, this file is shared by all members. If you install MDMS on individual nodes within an OpenVMS Cluster environment, this file is installed on each node.
In addition to creating node object records and setting domain and node attributes, you must define logicals in the MDMS start up file. These are all critical tasks to configure the MDMS domain.
This section provides brief descriptions of most of the logical assignments in MDMS$SYSTARTUP.COM. More detailed descriptions follow as indicated.
Of all the nodes in the MDMS domain, you select those which can act as a database server. Only one node at a time can be the database server. Other nodes operating at the same time communicate with the node acting as the database server. In the event the server node fails, another node operating in the domain can become the database server if it is listed in the MDMS$DATABASE_SERVERS logical.
For instance, in an OpenVMS Cluster environment, you can identify all nodes as a potential server node. If the domain includes an OpenVMS Cluster environment and some number of nodes remote from it, you could identify a remote node as a database server if the MDMS database is on a disk served by the Distributed File System software (DECdfs). However, if you do not want remote nodes to function as a database server, do not enter their names in the list for this assignment.
The names you use must be the full network name specification for the transports used. If a node uses both DECnet and TCP/IP, full network names for both should be defined in the node object record.
The log file location logical defines the location of the log files. For each server running, MDMS uses a log file in this location. The log file name includes the name of the cluster node it logs.
For example, the log file name for a node with a cluster node name NODE_A would be:
The MDMS node object record characterizes the function of a node in the MDMS domain and describes how the node communicates with other nodes in the domain.
To participate in an MDMS domain, a node object record has to be entered into the MDMS database. This node object has four attributes to describe its connections in a network:
When an MDMS server starts up, it has only its network node name(s) to identify itself in the MDMS database. Therefore, if a node has a network node name but that name is not defined in the node object records of the database, the node will be rejected as not being fully enabled. For example, a node has a TCP/IP name and TCP/IP is running, but the node object record shows the TCP/IP full name as blank.
There is one situation where an MDMS server is allowed to function even if it does not have a node object record defined or the node object record does not list all network names. This is in the case of the node being an MDMS database server. Without this exception, no node entries can be created in the database. As long as a database server is not fully enabled in the database it will not start any network listeners.
This section describes how to designate an MDMS node as a database server, and how to enable and disable the node.
When you install MDMS, you must decide which nodes will participate as potential database servers. To be a database server, the node must be able to access the database disk device.
Typically, in an OpenVMS Cluster environment, all nodes would have access to the database disk device, and would therefore be identified as potential database servers.
Set the database server attribute for each node identified as a potential database server. For nodes in the domain that are not going to act as a database server, negate the database server attribute.
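For example, assuming node records named NODE_A and NODE_B, and that the database server attribute maps to a /[NO]DATABASE_SERVER qualifier pair (an assumption based on the attribute name):
$ MDMS SET NODE NODE_A /DATABASE_SERVER ! NODE_A may serve the database
$ MDMS SET NODE NODE_B /NODATABASE_SERVER ! NODE_B never serves the database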
There are several reasons for disabling an MDMS node.
Disable the node from the command line or the GUI and restart MDMS.
When you are ready to return the node to service, enable the node.
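A minimal sketch, assuming a node record named NODE_B and a /[NO]DISABLED qualifier pair for the disabled and enabled attributes:
$ MDMS SET NODE NODE_B /DISABLED ! exclude NODE_B from MDMS operations
$ ! ... restart MDMS and perform maintenance ...
$ MDMS SET NODE NODE_B /NODISABLED ! return NODE_B to service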
Nodes in the MDMS domain have two network transport options: one for DECnet, the other for TCP/IP. When you configure a node into the MDMS domain, you can specify either or both these transport options by assigning them to the transport attribute. If you specify both, MDMS will attempt interprocessor communications on the first transport value listed. MDMS will then try the second transport value if communication fails on the first.
If you are using the DECnet-Plus network transport, define the full DECnet-Plus node name in the decnet fullname attribute. If you are using an earlier version of DECnet, leave the DECnet-Plus fullname attribute blank.
If you are using the TCP/IP network transport, enter the node's full TCP/IP name in the
TCPIP fullname attribute. You can also specify the receive ports used by MDMS to listen for incoming requests. By default, MDMS uses the port range of 2501 through 2510. If you want to specify a different port or range of ports, append that specification to the TCPIP fullname. For example:
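A plausible form, assuming a /TCPIP_FULLNAME qualifier and the hypothetical name NODE_A.SITE.COM:
$ MDMS SET NODE NODE_A /TCPIP_FULLNAME="NODE_A.SITE.COM:2555-2560"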
Describe the function and purpose of the node with the description attribute. Use the location attribute to identify the MDMS location where the node resides.
List the OPCOM classes of operators with terminals connected to this node who will receive OPCOM messages. Operators who enable those classes will receive OPCOM messages pertaining to devices connected to the node.
For more information about operator communication, see the Managing Operations section.
MDMS provides the group object record to define a group of nodes that share common drives or jukeboxes. Typically, the group object record represents all nodes in an OpenVMS Cluster environment, when drives in the environment are accessible from all nodes.
Some configurations involve sharing a device between nodes of different OpenVMS Cluster environments. You could create a group that includes all nodes that have access to the device.
When you create a group to identify shared access to a drive or jukebox, assign the group name as an attribute of the drive or jukebox. When you set the group attribute of the drive or jukebox object record, MDMS clears the node attribute.
The following command examples create functionally equivalent drive object records.
$!These commands create a drive connected to a Group object
$MDMS CREATE GROUP CLUSTER_A /NODES=(NODE_1,NODE_2,NODE_3)
$MDMS CREATE DRIVE NODE$MUA501/GROUPS=CLUSTER_A
$!
$!This command creates a drive connected to NODE_1, NODE_2, and NODE_3
$MDMS CREATE DRIVE NODE$MUA501/NODES=(NODE_1,NODE_2,NODE_3)
Figure 10-2, Groups in the MDMS Domain, models how clusters of nodes are organized into groups and how devices are shared between groups.
The domain object record describes global attributes for the domain and includes the description attribute, where you can enter an open text description of the MDMS domain. Additional domain object attributes define configuration parameters, access rights options, and default volume management parameters. See The MDMS Domain.
Include all operator classes to which OPCOM messages should go as a comma separated list value of the OPCOM classes attribute. MDMS uses the domain OPCOM classes when nodes do not have their classes defined.
For more information about operator communication, see the Managing Operations section.
If you want to change the request identifier for the next request, use the request id attribute.
This section briefly describes the attributes of the domain object record that implement rights controls for MDMS users. Refer to the appendix on MDMS Rights and Privileges for a description of the MDMS rights implementation.
If you use MDMS to support ABS, you can set the ABS rights attribute to allow any user with any ABS right to perform certain actions with MDMS. This feature provides a short cut to managing rights by enabling ABS users and managers access to just the features they need. Negating this attribute means users with any ABS rights have no additional MDMS rights.
MDMS defines default low level rights for the application rights attribute according to what ABS and HSM minimally require to use MDMS.
If you want to grant all users certain MDMS rights without having to modify their UAF records, you can assign those low level rights to the default rights attribute. Any user without specific MDMS rights in their UAF file will have the rights assigned to the default rights identifier.
Use the operator rights attribute to identify all low level rights granted to any operator who has been granted the MDMS_OPERATOR right in their UAF.
Use the SYSPRV attribute to allow any process with SYSPRV enabled the rights to perform any and all operations with MDMS.
Use the user rights attribute to identify all low level rights granted to any user who has been granted the MDMS_USER right in their UAF.
The MDMS domain includes attributes used as the foundation for volume management. Some of these attributes provide defaults for volume management and movement activities; others define particular behavior for all volume management operations. The values you assign to these attributes will, in part, dictate how your volume service will function.
This section addresses issues that involve installing additional MDMS nodes into an existing domain, or removing nodes from an operational MDMS domain.
Once you configure the MDMS domain, you might have the opportunity to add a node to the existing configuration. This section describes the procedure for adding a node to an existing MDMS domain.
MDMS manages the use of drives for the benefit of its clients, ABS and HSM. You must configure MDMS to recognize the drives and the locations that contain them. You must also configure MDMS to recognize any jukebox that contains managed drives.
You will create drive, location, and possibly jukebox object records in the MDMS database. The attribute values you give them will determine how MDMS manages them. The meanings of some object record attributes are straightforward. This section describes others because they are more important for configuring operations.
Before you begin configuring drives for operations, you need to determine the following aspects of drive management:
You must give each drive a name that is unique within the MDMS domain. The drive object record can be named with the OpenVMS device name, if desired, just as long as the name is not duplicated elsewhere.
Use the description attribute to store a free text description of anything useful to your management of the drive. MDMS stores this information, but takes no action with it.
The device attribute must contain the OpenVMS allocation class and device name for the drive. If the drive is accessed from nodes other than the one from which the command was entered, you must specify nodes or groups in the /NODE or /GROUP attributes in the drive record. Do not specify nodes or groups in the drive name or the device attribute.
If the drive resides in a jukebox, you must specify the name of the jukebox with the jukebox attribute. Identify the position of the drive in the jukebox by setting the drive number attribute. Drives start at position 0.
Additionally, the jukebox that contains the drives must also be managed by MDMS.
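For instance, the following sketch creates a record for the first drive (drive 0) in a managed jukebox; the object names are hypothetical, and the /JUKEBOX and /DRIVE_NUMBER qualifiers are assumed from the attribute names above:
$ MDMS CREATE DRIVE JUKE_1_DRIVE_0 /DEVICE=$1$MUA510: -
$_ /JUKEBOX=JUKE_1 /DRIVE_NUMBER=0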
MDMS allows you to dedicate a drive solely to MDMS operations, or share the drive with other users and applications. Specify your preference with the shared attribute.
You need to decide which systems in your data center are going to access the drives you manage.
Use the groups attribute if you created group object records to represent nodes in an OpenVMS Cluster environment or nodes that share a common device.
Use the nodes attribute if you have no reason to refer to any collection of nodes as a single entity, and you plan to manage nodes, and the objects that refer to them, individually.
The last decision is whether the drive serves locally connected systems, or remote systems using the RDF software. The access attribute allows you to specify local, remote (RDF) or both.
Specify the kinds of volumes that can be used in the drive by listing the associated media type name in the media types attribute. You can force the drive to not write volumes of particular media types. Identify those media types in the read only attribute.
If the drive has a mechanism for holding multiple volumes and can feed the volumes sequentially to the drive, but does not allow random access (or you choose not to use the random access feature), you can designate the drive as a stacker by setting the stacker attribute.
Set the disabled attribute when you have to exclude the managed drive from operations by MDMS. If the drive is the only one of its kind (for example if it accepts volumes of a particular media type that no other drives accept), make sure you have another drive that can take load requests. Return the drive to operation by setting the enabled attribute.
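For example, assuming a drive record named TL89X_1 and a /[NO]DISABLED qualifier pair for the disabled and enabled attributes:
$ MDMS SET DRIVE TL89X_1 /DISABLED ! take the drive out of service
$ MDMS SET DRIVE TL89X_1 /NODISABLED ! return the drive to operation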
MDMS manages Media Robot Driver (MRD) controlled jukeboxes and DCSC controlled jukeboxes. MRD is software that controls SCSI-2 compliant medium changers. DCSC is software that controls large jukeboxes manufactured by StorageTek, Inc. This section first describes the MDMS attributes used for describing all jukeboxes by function. Subsequent descriptions explain attributes that characterize MRD jukeboxes and DCSC jukeboxes respectively.
Assign unique names to jukeboxes you manage in the MDMS domain. When you create the jukebox object record, supply a name that describes the jukebox.
Set the control attribute to MRD if the jukebox operates under MRD control. Otherwise, set the control to DCSC.
Use the description attribute to store a free text description of the drive. You can describe its role in the data center operation or other useful information. MDMS stores this information for you, but takes no actions with it.
You can dedicate a jukebox solely to MDMS operations, or you can allow other applications and users access to the jukebox device. Specify your preference with the shared attribute.
You need to decide which systems in the data center are going to access the jukebox.
Use the groups attribute if you created group object records to represent nodes in an OpenVMS Cluster environment or nodes that share a common device.
Use the nodes attribute if you have no reason to refer to any collection of nodes as a single entity, and you plan to manage nodes, and the objects that refer to them, individually.
Disable the jukebox to exclude it from operations. Make sure that applications using MDMS will either use other managed jukeboxes, or make no request of a jukebox you disable. Enable the jukebox after you complete any configuration changes. Drives within a disabled jukebox cannot be allocated.
Set the library attribute to the library identifier of the particular silo the jukebox object represents. MDMS supplies 1 as the default value. You will have to set this value according to the number of silos in the configuration and the sequence in which they are configured.
Specify the number of slots for the jukebox. Alternatively, if the jukebox supports magazines, specify the topology for the jukebox (see Magazines and Jukebox Topology).
The robot attribute must contain the OpenVMS device name of the jukebox medium changer (also known as the robotic device).
If the jukebox is accessed from nodes other than the one from which the command was entered, you must specify nodes or groups in the /NODE or /GROUP attributes in the jukebox record. Do not specify nodes or groups in the jukebox name or the robot attribute.
The jukebox object record state attribute shows the state of managed MDMS jukeboxes. MDMS sets one of three values for this attribute: Available, In use, and Unavailable.
If you decide that your operations benefit from the management of magazines (groups of volumes moved through your operation with a single name), you must set the jukebox object record to enable it. Set the usage attribute to magazine and define the jukebox topology with the topology attribute. See Magazines for a sample overview of how the 11 and 7 slot bin packs can be used as a magazine.
Setting the usage attribute to nomagazine means that you will move volumes into and out of the jukebox independently (using separate commands for each volume, regardless of whether they are placed into a physical magazine or not).
Some jukeboxes have their slot range subdivided into towers, faces, and levels. Together, the configuration of towers, faces, levels, and slots constitutes the topology; the example topology below comprises three towers. In the list of topology characteristics, you should identify every tower in the configuration. For each tower in the configuration, you must in turn identify:
You must manually open the jukebox when moving magazines into and out of the jukebox. Once in the jukebox, volumes can be loaded and unloaded only relative to the slots in the magazine they occupy.
When using multiple TL896 jukebox towers, you can treat the 11 slot bin packs as magazines. The following command configures the topology of a three-tower TL896 jukebox for use with magazines:
$ MDMS CREATE JUKEBOX JUKE_1 -
$_ /TOPOLOGY=(TOWERS=(0,1,2), FACES=(8,8,8), -
$_ LEVELS=(3,3,2), SLOTS=(11,11,11))
This section describes some of the management issues that involve both drives and jukeboxes.
Drive and jukebox object records both use the automatic load reply attribute to provide an additional level of automation.
When you set the automatic reply attribute to the affirmative, MDMS will poll the drive or jukebox for successful completion of an operator-assisted operation for those operations where polling is possible. For example, MDMS can poll a drive, determine that a volume is in the drive, and cancel the associated OPCOM request to acknowledge a load. Under these circumstances, an operator need not reply to the OPCOM message after completing the load. To use this feature, set the automatic reply attribute to the affirmative. When this attribute is set to the negative, which is the default, an operator must acknowledge each OPCOM request for the drive or jukebox before the request is completed.
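For example, assuming the attribute maps to an /AUTOMATIC_REPLY qualifier on the drive object (a hypothetical qualifier name based on the attribute described above):
$ MDMS SET DRIVE TL89X_1 /AUTOMATIC_REPLY ! poll the device and auto-complete OPCOM requests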
If you need to make backup copies to a drive in a remote location, using the network, then you must install the Remote Device Facility software (RDF). The RDF software must then be configured to work with MDMS.
See the RDF sections later in this chapter for a description of the actions you need to take to configure the RDF software.
When you add another drive to a managed jukebox, just specify the name of the jukebox in which the drive resides, in the drive object record.
You can temporarily remove a drive or jukebox from service. MDMS allows you to disable and enable drive and jukebox devices. This feature supports maintenance or other operations where you want to maintain MDMS support for ABS or HSM, and temporarily remove a drive or jukebox from service.
During the course of management, you might encounter a requirement to change the device names of drives or jukeboxes under MDMS management, to avoid confusion in naming. When you have to change the device names, follow the procedure in Changing the Names of Managed Devices.
MDMS allows you to identify locations in which you store volumes. Create a location object record for each place the operations staff uses to store volumes. These locations are referenced during move operations, load to, or unload from stand-alone drives.
If you need to divide your location space into smaller, named locations, define locations hierarchically. The location attribute of the location object record allows you to name a higher level location. For example, you can create location object records to describe separate rooms in a data center by first creating a location object record for the data center. After that, create object records for each room, specifying the data center name as the value of the location attribute for the room locations.
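A sketch of such a hierarchy, using the location names from the compatibility example below and assuming the parent location is named with a /LOCATION qualifier:
$ MDMS CREATE LOCATION BUILDING_1
$ MDMS CREATE LOCATION FLOOR_2 /LOCATION=BUILDING_1
$ MDMS CREATE LOCATION ROOM_304 /LOCATION=FLOOR_2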
When allocating volumes or drives by location, the volumes and drives do not have to be in the exact location specified; rather, they should be in a compatible location. A location is considered compatible with another if both have a common root higher in the location hierarchy. For example, in Named Locations, locations Room_304 and Floor_2 are considered compatible, as they both have location Building_1 as a common root.
Your operations staff must be informed about the names of these locations as they will appear in OPCOM messages. Use the description attribute of the location object record to describe the location it represents as accurately as possible. Your operations staff can refer to the information in the event they become confused about a location mentioned in an OPCOM message.
You can divide a location into separate spaces to identify locations of specific volumes. Use the spaces attribute to specify the range of spaces in which volumes can be stored. If you do not need that level of detail in the placement of volumes at the location, negate the attribute.
MDMS provides a configuration procedure to guide you through the configuration process. Please refer to Appendix "Sample Configuration of MDMS" for how to use this procedure.
MDMS includes two interfaces: a command line interface (CLI) and a graphic user interface (GUI). This section describes how these interfaces allow you to interact with MDMS.
The CLI is based on the MDMS command. The CLI includes several features that offer flexibility and control in the way in which you use it. This interface provides for interactive operations and allows you to run DCL command procedures for customized operations.
Understanding these features helps you become a more effective command line interface user and DCL programmer.
At a minimum, the command structure includes the MDMS keyword, an operational verb, and an object class name. Optionally, the command can include a specific object name and command qualifiers.
The following example shows the MDMS command structure for most commands:
$MDMS verb object_class [object_name] [/qualifier [,...]]
The Move and Report commands support multiple parameters, as documented in the Archive/Backup System for OpenVMS (ABS) or Hierarchical Storage Management for OpenVMS (HSM) Command Reference Guide.
Some MDMS commands include features for capturing text that can be used to support DCL programming.
The MDMS SHOW VOLUME command includes a /SYMBOLS qualifier to define a set of symbols that store the specified volume's attributes.
Several MDMS commands can involve operator interaction if necessary. These commands include a /REPLY qualifier to capture the operator's reply to the OPCOM message created to satisfy the request.
The allocate commands can return an allocated object name. You can assign a process logical name to pick up this object name by using the /DEFINE=logical_name qualifier.
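For instance, assuming a drive record named TL89X_1 (a hypothetical name):
$ MDMS ALLOCATE DRIVE TL89X_1 /DEFINE=MY_DRIVE
$ SHOW LOGICAL MY_DRIVE ! the logical now translates to the allocated drive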
The interactions between the MDMS process and object records in the database form the basis of MDMS operations. Most command line interface actions involve the object record verbs MDMS CREATE, MDMS SET, MDMS SHOW, and MDMS DELETE. Use the MDMS CREATE verb to create object records that represent the objects you manage. Use MDMS SHOW and MDMS SET to view and change object attributes. The MDMS DELETE command removes object records from the MDMS database.
You do not create all object records with the MDMS CREATE command or with the GUI creation options. MDMS creates some records automatically. During installation MDMS creates the Domain object record, and volume object records can be created in response to an inventory operation.
This section describes how to add, remove, and change attribute list values.
The MDMS CREATE and MDMS SET commands for every object class that has one or more attributes with list values include /ADD and /REMOVE qualifiers. These qualifiers allow you to manipulate the attribute lists.
Use the /ADD qualifier to add a new value to the attribute value list with both the MDMS CREATE/INHERIT and MDMS SET commands.
Use the /REMOVE qualifier to remove an identified value from the attribute value list with both the MDMS CREATE/INHERIT and MDMS SET commands.
To change an entire attribute value list, specify a list of new values with the attribute qualifier.
The following example shows how these qualifiers work.
This command creates a new drive object record, taking attribute values from an existing drive object record. In it, the user adds a new media type name to the /MEDIA_TYPE value list.
$MDMS CREATE DRIVE TL8_4 /INHERIT=TL89X_1 /MEDIA_TYPE=(TK9N) /ADD
After the drive is created, the data center management plan requires the jukebox containing drive TL8_4 to service requests from a different group of nodes. To change the group list values, but nothing else, the user issues the following SET command.
$MDMS SET DRIVE TL8_4 /GROUPS=(FINGRP,DOCGRP)
Later, the nodes belonging to DOCGRP no longer need drive TL8_4. The following command removes DOCGRP from the /GROUPS attribute list.
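Based on the /REMOVE qualifier described above, such a command would presumably take this form:
$ MDMS SET DRIVE TL8_4 /GROUPS=(DOCGRP) /REMOVE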
The MDMS command line interface includes commands for operations in the MDMS domain. These commands initiate actions with managed objects. Qualifiers to these commands tailor the command actions to suit your needs. The following examples show how these qualifiers work:
Many MDMS commands include the /NOWAIT qualifier. These commands start actions that require some time to complete. Commands entered with /NOWAIT are internally queued by MDMS as an asynchronous request. The request remains in the queue until the action succeeds or fails.
To show currently outstanding requests, use the MDMS SHOW REQUESTS command. To cancel a request, use the MDMS CANCEL REQUEST command.
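A short sketch of the pattern, using a hypothetical volume name and request number:
$ MDMS LOAD VOLUME CPQ231 /NOWAIT ! queue the load and return immediately
$ MDMS SHOW REQUESTS ! list outstanding requests
$ MDMS CANCEL REQUEST 42 ! cancel a queued request by its identifier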
MDMS includes a GUI based on Java technology. Through the GUI, you can manage MDMS from any Java enabled system on your network that is connected to an OpenVMS system running MDMS.
Most MDMS operations involve single actions on one or more objects. The basic concept of the GUI supports this management perspective. The interface allows you to select one or more objects and enables management actions through point-and-click operations.
To view object records with the GUI, select the class from the icon bar at the top of the screen. Use the next screen to select the particular records you want to view, then press the Modify or Delete option. The GUI then displays the object record.
In addition to creating, modifying, and deleting object records, the GUI enables management actions. See Operational Actions With the GUI shows the objects and the actions associated with them.
The graphic user interface also provides guides for combined tasks. These guides take you through tasks that involve multiple steps on multiple objects.
This task interface first takes you through the procedures to add a new jukebox and drive to the MDMS domain. The second part of the procedure takes you through all the steps to add volumes to the MDMS domain. You can use just the second part to add volumes at any time.
Use this task interface to remove a jukebox or drive, and volumes, from MDMS management. This procedure guides you through the decisions necessary to keep the MDMS database current after all necessary object records have been deleted. Without this procedure, you could delete object records but leave references to them in the attribute fields of remaining records.
This procedure facilitates moving volumes to an offsite vault location for safe storage. It takes you through the steps to bring volumes from an offsite location, then gather volumes for movement to the offsite location.
Use this procedure when backup operations use volumes in a jukebox and you need to supply free volumes for future backup requests. This procedure allows you to gather allocated volumes from the jukebox, then replace them with free volumes. The procedure also allows you to use the jukebox vision system.
This section describes access rights for MDMS operations. MDMS works with the OpenVMS User Authorization File (UAF), so you need to understand the Authorize Utility and OpenVMS security before changing the default MDMS rights assignments.
MDMS rights control access to operations, not to object records in the database.
Knowing the security implementation will allow you to set up MDMS operation as openly or securely as required.
MDMS controls user action with process rights granted to the user or application through low and high level rights.
The low level rights are named to indicate an action and the object the action targets. For instance, the MDMS_MOVE_OWN right allows the user to conduct a move operation on a volume allocated to that user. The MDMS_LOAD_ALL right allows the user to load any managed volume.
For detailed descriptions of the MDMS low level rights, refer to the ABS or HSM Command Reference Guide.
MDMS associates high level rights with the kind of user that would typically need them. Refer to the ABS or HSM Command Reference Guide for a detailed list of the low level rights associated with each high level right. The remainder of this section describes the high level rights.
The default MDMS_USER right is for any user who wants to use MDMS to manage their own tape volumes. A user with the MDMS_USER right can manage only their own volumes. The default MDMS_USER right does not allow for creating or deleting MDMS object records, or changing the current MDMS configuration.
Use this right for users who perform non-system operations with ABS or HSM.
The default MDMS_APPLICATION right is for the ABS and HSM applications. As MDMS clients using managed volumes and drives, these applications require specific rights.
The ABS or HSM processes include the MDMS_APPLICATION rights identifier, which carries the low level rights associated with it. Do not modify the low level rights values for the domain application rights attribute. Changing the values of this attribute can cause your application to fail.
The high level rights are defined by domain object record attributes with lists of low level rights. The high level rights are convenient names for sets of low level rights.
For MDMS users, grant high and/or low level rights as needed with the Authorize Utility. You can take either of these approaches to granting MDMS rights.
You can ensure that all appropriate low level rights necessary for a class of user are assigned to the corresponding high level right, then grant the high level rights to users.
You can grant any combination of high level and low level rights to any user.
Use the procedure outlined in Reviewing and Setting MDMS Rights to review and set rights that enable or disable access to MDMS operations. CLI command examples appear in this process description, but you can use the GUI to accomplish this procedure as well.
This section describes the basic concepts that relate to creating, modifying, and deleting object records.
Both the CLI and GUI provide the ability to create object records. MDMS imposes rules on the names you give object records. When creating object records, define as many attribute values as you can, or inherit attributes from object records that describe similar objects.
When you create an object record, you give it a name that will be used as long as it exists in the MDMS database. MDMS also accesses the object record when it is an attribute of another object record; for instance a media type object record named as a volume attribute.
MDMS object names may include any digit (0 through 9), any upper case letter (A through Z), and any lower case letter (a through z). Additionally, you can include $ (dollar sign) and _ (underscore).
The MDMS CLI accepts all these characters. However, lower case letters are automatically converted to upper case unless the string containing them is surrounded by the " (double quote) characters. The CLI also allows you to embed spaces in object names if the object name is surrounded by the " characters.
The MDMS GUI accepts all the allowable characters, but will not allow you to create objects that use lower case names, or embed spaces. The GUI will display names that include spaces and lower case characters if they were created with the CLI.
Compaq recommends that you create all object records with names that include no lower case letters or spaces. If you create an object name with lower case letters and then refer to it with an attribute value that uses upper case letters, MDMS may fail the operation.
The following examples illustrate the concepts for creating object names with the CLI.
These commands show the default CLI behavior for naming objects:
$!Volume created with upper case locked
$MDMS CREATE VOLUME CPQ231 /INHERIT=CPQ000 !Standard upper case DCL
$MDMS SHOW VOLUME CPQ231
$!
$!Volume created with lower case letters
$MDMS CREATE VOLUME cpq232 /INHERIT=CPQ000 !Standard lower case DCL
$MDMS SHOW VOLUME CPQ232
$!
$!Volume created with quote-delimited lower case, forcing lower case naming
$MDMS CREATE VOLUME "cpq233" /INHERIT=CPQ000 !Forced lower case DCL
$!
$!This command fails because the default behavior translates to upper case
$MDMS SHOW VOLUME CPQ233
$!
$!Use quote-delimited lower case to examine the object record
$MDMS SHOW VOLUME "cpq233"
This feature allows you to copy the attributes of any specified object record when creating or changing another object record. For instance, if you create drive object records for four drives in a new jukebox, you fill out all the attributes for the first drive object record. Then, use the inherit option to copy the attribute values from the first drive object record when creating the subsequent three drive object records.
If you use the inherit feature, you do not have to accept all the attribute values of the selected object record. You can override any particular attribute value by including the attribute assignment in the command or GUI operation. For CLI users, use the attribute's qualifier with the MDMS CREATE command. For GUI users, set the attribute values you want.
Not all attributes can be inherited. Some object record attributes are protected and contain values that apply only to the specific object the record represents. Check the command reference information to identify object record attributes that can be inherited.
MDMS allows you to specify object record names as attribute values before you create the records. For example, the drive object record has a media types attribute. You can enter media type object record names into that attribute when you create the drive object before you create the media type object records.
The low level rights that enable a user to create objects are MDMS_CREATE_ALL (create any MDMS object record) and MDMS_CREATE_POOL (create volumes in a pool authorized to the user).
Whenever your configuration changes, you will modify object records in the MDMS database. When you identify an object that needs to be changed, you must specify the object record as it is named. If you know an object record exists but it does not display in response to an operation to change it, you could be entering the name incorrectly. The Naming Objects section describes the conventions for naming object records.
Do not change protected attributes if you do not understand the implications of making the particular changes. If you change a protected attribute, you could cause an operation to fail or prevent the recovery of data recorded on managed volumes.
MDMS uses some attributes to store information it needs to manage certain objects. The GUI default behavior prevents you from inadvertently changing these attributes. By pressing the Enable Protected button on the GUI, you can change these attributes. The CLI makes no distinction in how it presents protected attributes when you modify object records. Ultimately, the ability to change protected attributes is allowed by the MDMS_SET_PROTECTED right and implicitly through the MDMS_SET_RIGHTS right.
The low level rights that allow you to modify an object by changing its attribute values are shown below.
When managed objects, such as drives or volumes, become obsolete or fail, you may want to remove them from management. When you remove these objects, you must also delete the object records that describe them to MDMS.
When you remove object records, there are two reviews you must make to ensure the database accurately reflects the management domain: review the remaining object records and change any attributes that reference the deleted object records; review any DCL command procedures and change any command qualifiers that reference deleted object records.
When you delete an object record, review object records in the database for references to those objects. Reviewing Managed Objects for References to Deleted Objects shows which object records to check when you delete a given object record. Use this table also to check command procedures that include the MDMS SET command for the remaining objects.
Remove references to deleted object records from the MDMS database. If you leave a reference to a deleted object record in the MDMS database, an operation with MDMS could fail.
When you delete an object record, review any DCL command procedures for commands that reference those objects. Other than the MDMS CREATE, SET, SHOW, and DELETE commands for a given object record, Reviewing DCL Commands for References to Deleted Objects shows which commands to check. These commands could have references to the deleted object record.
Remove references to deleted object records from DCL commands. If you leave a reference to a deleted object record in a DCL command, an operation with MDMS could fail.
When you install Media and Device Management Services (MDMS) you are asked whether you want to install the RDF software.
During the installation you place the RDF client software on the nodes with disks you want to backup. You place the RDF server software on the systems to which the tape backup devices are connected. This means that when using RDF, you serve the tape backup device to the systems with the client disks.
All of the files for RDF are placed in TTI_RDEV: on your system. There are separate locations for VAX and Alpha.
After installing RDF, you should check the TTI_RDEV:CONFIG_nodename.DAT file to make sure it has correct entries.
Check this file to make sure that all RDF characteristic names are unique to this node.
The following sections describe how to use RDF with MDMS.
RDF software is automatically started along with the MDMS software when you enter the following command:
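Presumably this is the MDMS startup procedure, along the lines of:
$ @SYS$STARTUP:MDMS$STARTUP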
The following privileges are required to execute the RDSHOW procedure: NETMBX, TMPMBX.
In addition, the following privileges are required to show information on remote devices allocated by other processes: SYSPRV, WORLD.
You can run the RDSHOW procedure any time after the MDMS software has been started. RDF software is automatically started at this time.
$ @TTI_RDEV:RDSHOW CLIENT
$ @TTI_RDEV:RDSHOW SERVER node_name
$ @TTI_RDEV:RDSHOW DEVICES
node_name is the node name of any node on which the RDF server software is running.
To show remote devices that you have allocated, enter the following command from the RDF client node:
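Given the options listed earlier, this is presumably:
$ @TTI_RDEV:RDSHOW CLIENT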
RDALLOCATED devices for pid 20200294, user DJ, on node OMAHA::
Local logical Rmt node Remote device
TAPE01 MIAMI:: MIAMI$MUC0
DJ is the user name and OMAHA is the current RDF client node.
The RDSHOW SERVER procedure shows the available devices on a specific SERVER node. To execute this procedure, enter the following command from any RDF client or RDF server node:
$ @TTI_RDEV:RDSHOW SERVER MIAMI
MIAMI is the name of the server node whose devices you want shown.
Available devices on node MIAMI::
Name Status Characteristics/Comments
MIAMI$MSA0 in use msa0
...by pid 20200246, user CATHY (local)
MIAMI$MUA0 in use mua0
...by pid 202001B6, user CATHY, on node OMAHA::
MIAMI$MUB0 -free- mub0
MIAMI$MUC0 in use muc0
...by pid 2020014C, user DJ, on node OMAHA::
This RDSHOW SERVER command shows any available devices on the server node MIAMI, including any device characteristics. In addition, each allocated device shows the process PID, username, and RDF client node name.
The text (local) is shown if the device is locally allocated.
To show all allocated remote devices on an RDF client node, enter the following command from the RDF client node:
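Given the DEVICES option listed earlier, this is presumably:
$ @TTI_RDEV:RDSHOW DEVICES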
Devices RDALLOCATED on node OMAHA::
RDdevice Rmt node Remote device User name PID
RDEVA0: MIAMI:: MIAMI$MUC0 DJ 2020014C
RDEVB0: MIAMI:: MIAMI$MUA0 CATHY 202001B6
This command shows all allocated devices on the RDF client node OMAHA. Use this command to determine which devices are allocated on which nodes.
This section describes network issues that are especially important when working with remote devices.
The Network Control Program (NCP) is used to change various network parameters. RDF (and the rest of your network as a whole) benefits from changing two NCP parameters on all nodes in your network: the pipeline quota and the number of line receive buffers.
The pipeline quota is used to send data packets at an even rate. It can be tuned for specific network configurations. For example, in an Ethernet network, the number of packet buffers represented by the pipeline quota can be calculated approximately as:
buffers = pipeline_quota / 1498
The default pipeline quota is 10000. At this value, only six packets can be sent before acknowledgment of a packet from the receiving node is required. The sending node stops after the sixth packet is sent if an acknowledgment is not received.
The pipeline quota can be increased to 45,000, allowing 30 packets to be sent before a packet is acknowledged (in an Ethernet network). However, performance improvements have not been verified for values higher than 23,000. It is important to know that increasing the value of the pipeline quota improves the performance of RDF, but may negatively impact the performance of other applications running concurrently with RDF.
Similar to the pipeline quota, line receive buffers are used to receive data at a constant rate.
The default setting for the number of line receive buffers is 6.
The number of line receive buffers can be increased to 30, allowing 30 packets to be received at a time. However, performance improvements have not been verified for values greater than 15, and, as stated above, tuning changes may improve RDF performance while negatively impacting other applications running on the system.
As stated in the DECnet-Plus (Phase V) (DECnet/OSI V6.1) Release Notes, a pipeline quota is not used directly. Users may influence packet transmission rates by adjusting the values of the transport characteristics MAXIMUM TRANSPORT CONNECTIONS, MAXIMUM RECEIVE BUFFERS, and MAXIMUM WINDOW. The value of the transmit quota is determined by MAXIMUM RECEIVE BUFFERS divided by the actual TRANSPORT CONNECTIONS.
This will be used for the transmit window, unless MAXIMUM WINDOW is less than this quota. In that case, MAXIMUM WINDOW will be used for the transmitter window.
The DECnet-Plus defaults (MAXIMUM TRANSPORT CONNECTIONS = 200 and MAXIMUM RECEIVE BUFFERS = 4000) produce a MAXIMUM WINDOW of 20. Decreasing MAXIMUM TRANSPORT CONNECTIONS, with a corresponding increase of MAXIMUM WINDOW, may improve RDF performance, but may also negatively impact other applications running on the system.
This section describes how to change the network parameters for DECnet Phase IV and DECnet-Plus.
The pipeline quota is an NCP executor parameter. The line receive buffers setting is an NCP line parameter.
The following procedure shows how to display and change these parameters in the permanent DECnet database. These changes should be made on each node of the network.
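A sketch of that procedure under standard Phase IV NCP syntax; the Ethernet line name QNA-0 is an assumption, so substitute your actual line name:
$ RUN SYS$SYSTEM:NCP
NCP> DEFINE EXECUTOR PIPELINE QUOTA 45000
NCP> DEFINE LINE QNA-0 RECEIVE BUFFERS 30
NCP> LIST EXECUTOR CHARACTERISTICS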
For the changed parameters to take effect, the node must be rebooted or DECnet must be shut down and restarted.
The Network Control Language (NCL) is used to change DECnet-Plus network parameters. The transport parameters MAXIMUM RECEIVE BUFFERS, MAXIMUM TRANSPORT CONNECTIONS and MAXIMUM WINDOW can be adjusted by using NCL's SET OSI TRANSPORT command. For example:
NCL> SET OSI TRANSPORT MAXIMUM RECEIVE BUFFERS = 4000 !default value
NCL> SET OSI TRANSPORT MAXIMUM TRANSPORT CONNECTIONS = 200 !default value
NCL> SET OSI TRANSPORT MAXIMUM WINDOW = 20 !default value
To make the parameter change permanent, add the NCL command(s) to the SYS$MANAGER:NET$OSI_TRANSPORT_STARTUP.NCL file. Refer to the DECnet-Plus (DECnet/OSI) Network Management manual for detailed information.
Changing the default values of line receive buffers and the pipeline quota to the values of 30 and 45000 consumes less than 140 pages of nonpaged dynamic memory.
In addition, you may need to increase the number of large request packets (LRPs) and raise the default value of NETACP BYTLM.
LRPs are used by DECnet to send and receive messages. The number of LRPs is governed by the SYSGEN parameters LRPCOUNT and LRPCOUNTV.
A minimum of 30 free LRPs is recommended during peak times. Show these parameters and the number of free LRPs by entering the following DCL command:
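This is presumably the pool display command:
$ SHOW MEMORY/POOL/FULL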
System Memory Resources on 24-JUN-1991 08:13:57.66
Large Packet (LRP) Lookaside List Packets Bytes
Current Total Size 36 59328
Initial Size (LRPCOUNT) 25 41200
Maximum Size (LRPCOUNTV) 200 329600
Free Space 20 32960
In the LRP lookaside list, this system has a current total size of 36 packets, an initial size (LRPCOUNT) of 25, and 20 free LRPs.
The SYSGEN parameter LRPCOUNT (LRP Count) has been set to 25. The Current Total Size is not the same as the Initial Size. This means that OpenVMS software had to allocate more LRPs. This causes system performance degradation while OpenVMS is expanding the LRP lookaside list.
The LRPCOUNT should have been raised to at least 36 so OpenVMS does not have to allocate more LRPs.
Raise the LRPCOUNT parameter to a minimum of 50. Because the LRPCOUNT parameter is set to only 25, it should be raised on this system even if the current size were also 25.
The 20 free LRPs are below the recommended free space amount of 30. This also indicates that LRPCOUNT should be raised. Raising LRPCOUNT to 50 (when there are currently 36 LRPs) has the effect of adding 14 LRPs. Fourteen plus the 20 free LRPs is more than 30, so the recommended minimum of 30 free LRPs is met after LRPCOUNT is set to 50.
The LRPCOUNTV parameter should be at least four times LRPCOUNT. Raising LRPCOUNT may mean that LRPCOUNTV has to be raised. In this case, LRPCOUNTV does not have to be raised because 200 is exactly four times 50 (the new LRPCOUNT value).
Make changes to LRPCOUNT or LRPCOUNTV in both SYSGEN (for the running system) and MODPARAMS.DAT (so AUTOGEN preserves the change):
Example: Changing LRPCOUNT to 50 in SYSGEN
Username: SYSTEM
Password: (the system password)
$ SET DEFAULT SYS$SYSTEM
$ RUN SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SH LRPCOUNT
Parameter Name Current Default Minimum Maximum
LRPCOUNT 25 4 0 4096
SYSGEN> SET LRPCOUNT 50
SYSGEN> WRITE CURRENT
SYSGEN> SH LRPCOUNT
Parameter Name Current Default Minimum Maximum
LRPCOUNT 50 4 0 4096
After making changes to SYSGEN, reboot your system so the changes take effect.
Example: Changing the LRPCOUNT for AUTOGEN
Add the following line to MODPARAMS.DAT:
MIN_LRPCOUNT = 50 ! ADDED {the date} {your initials}
This ensures that when AUTOGEN runs, LRPCOUNT is not set below 50.
The default value of NETACP BYTLM is 65,535. Including overhead, this is enough for only 25 to 30 line receive buffers. This default BYTLM may not be enough.
Increase the value of NETACP BYTLM to 110,000.
Before starting DECnet, define the logical name NETACP$BUFFER_LIMIT by entering:
$ DEFINE/SYSTEM/NOLOG NETACP$BUFFER_LIMIT 110000
$ @SYS$MANAGER:STARTNET.COM
By default, RDF tries to perform I/O requests as fast as possible. In some cases, this can cause the network to slow down. Reducing the network bandwidth used by RDF allows more of the network to become available to other processes.
The RDF logical names that control this are:
RDEV_WRITE_GROUP_SIZE
RDEV_WRITE_GROUP_DELAY
The default value for these logical names is zero. The following example shows how to define these logical names on the RDF client node:
$ DEFINE/SYSTEM RDEV_WRITE_GROUP_SIZE 30
$ DEFINE/SYSTEM RDEV_WRITE_GROUP_DELAY 1
To further reduce bandwidth, the RDEV_WRITE_GROUP_DELAY logical can be increased to two (2) or three (3).
Remote Device Facility (RDF) can survive network failures of up to 15 minutes. If the network comes back within the allotted 15 minutes, the RDCLIENT continues processing without any interruption or data loss. When a network link drops while RDF is active, RDF waits 10 seconds, creates a new network link, synchronizes I/Os between the RDCLIENT and RDSERVER, and continues processing.
The following example shows how you can test the RDF's ability to survive a network failure. (This example assumes that you have both the RDSERVER and RDCLIENT processes running.)
$ @tti_rdev:rdallocate tti::mua0:
RDF - Remote Device Facility (Version 4.1) - RDALLOCATE Procedure
Copyright (c) 1990, 1996 Touch Technologies, Inc.
Device TTI::TTI$MUA0 ALLOCATED, use TAPE01 to reference it
$ backup/rewind/log/ignore=label sys$library:*.* tape01:test
$ run sys$system:NCP
NCP> show known links
Known Link Volatile Summary as of 13-MAR-1996 14:07:38
Link Node PID Process Remote link Remote user
24593 20.4 (JR) 2040111C MARI_11C_5 8244 CTERM
16790 20.3 (FAST) 20400C3A -rdclient- 16791 tti_rdevSRV
24579 20.6 (CHEERS) 20400113 REMACP 8223 SAMMY
24585 20.6 (CHEERS) 20400113 REMACP 8224 ANDERSON
NCP> disconnect link 16790
.
.
.
Backup pauses momentarily before resuming. Sensing the network disconnect, RDF creates a new -rdclient- link. Verify this by entering the following command:
NCP> show known links
Known Link Volatile Summary as of 13-MAR-1996 16:07:00
Link Node PID Process Remote link Remote user
24593 20.4 (JR) 2040111C MARI_11C_5 8244 CTERM
24579 20.6 (CHEERS) 20400113 REMACP 8223 SAMMY
24585 20.6 (CHEERS) 20400113 REMACP 8224 ANDERSON
24600 20.3 (FAST) 20400C3A -rdclient- 24601 tti_rdevSRV
The RDF Security Access feature allows storage administrators to control which remote devices are allowed to be accessed by RDF client nodes.
You can allow specific RDF client nodes access to all remote devices.
For example, if the server node is MIAMI and access to all remote devices is granted only to RDF client nodes OMAHA and DENVER, then do the following:
$ EDIT TTI_RDEV:CONFIG_MIAMI.DAT
CLIENT/ALLOW=(OMAHA,DENVER)
DEVICE $1$MUA0: MUA0, TK50
DEVICE MSA0: TU80, 1600bpi
OMAHA and DENVER (the specific RDF CLIENT nodes) are allowed access to all remote devices (MUA0, TU80) on the server node MIAMI.
If there is more than one RDF client node being allowed access, separate the node names by commas.
You can allow specific RDF client nodes access to a specific remote device.
If the server node is MIAMI and access to MUA0 is granted only to RDF client nodes OMAHA and DENVER, then do the following:
$ EDIT TTI_RDEV:CONFIG_MIAMI.DAT
DEVICE $1$MUA0: MUA0, TK50/ALLOW=(OMAHA,DENVER)
DEVICE MSA0: TU80, 1600bpi
OMAHA and DENVER (the specific RDF client nodes) are allowed access only to device MUA0. In this situation, OMAHA is not allowed to access device TU80.
You can deny access from specific RDF client nodes to all remote devices. For example, if the server node is MIAMI and you want to deny access to all remote devices from RDF client nodes OMAHA and DENVER, do the following:
$ EDIT TTI_RDEV:CONFIG_MIAMI.DAT
CLIENT/DENY=(OMAHA,DENVER)
DEVICE $1$MUA0: MUA0, TK50
DEVICE MSA0: TU80, 1600bpi
OMAHA and DENVER are the specific RDF client nodes denied access to all the remote devices (MUA0, TU80) on the server node MIAMI.
You can deny specific client nodes access to a specific remote device.
If the server node is MIAMI and you want to deny access to MUA0 from RDF client nodes OMAHA and DENVER, do the following:
$ EDIT TTI_RDEV:CONFIG_MIAMI.DAT
DEVICE $1$MUA0: MUA0, TK50/DENY=(OMAHA,DENVER)
DEVICE MSA0: TU80, 1600bpi
OMAHA and DENVER RDF client nodes are denied access to device MUA0 on the server node MIAMI.
One of the features of RDF is the RDserver Inactivity Timer. This feature gives system managers more control over rdallocated devices.
The purpose of the RDserver Inactivity Timer is to rddeallocate any rdallocated device if NO I/O activity to the rdallocated device has occurred within a predetermined length of time. When the RDserver Inactivity Timer expires, the server process drops the link to the client node and deallocates the physical device on the server node. On the client side, the client process deallocates the RDEVn0 device.
The default value for the RDserver Inactivity Timer is 3 hours.
The RDserver Inactivity Timer default value can be overridden by defining a systemwide logical name on the RDserver node prior to rdallocating on the rdclient node. The logical name is RDEV_SERVER_INACTIVITY_TIMEOUT.
To manually set the timeout value:
$ DEFINE/SYSTEM RDEV_SERVER_INACTIVITY_TIMEOUT seconds
For example, to set the RDserver Inactivity Timer to 10 hours, you would execute the following command on the RDserver node:
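$ DEFINE/SYSTEM RDEV_SERVER_INACTIVITY_TIMEOUT 36000 ! 10 hours = 36,000 seconds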
MDMS manages volume availability with the concept of a life cycle. The primary purpose of the life cycle is to ensure that volumes are only written when appropriate, and by authorized users. By setting a variety of attributes across multiple objects, you control how long a volume, once written, remains safe. You also set the time and interval for a volume to stay at an offsite location for safe keeping, then return for re-use once the interval passes.
This section describes the volume life cycle, relating object attributes, commands and life cycle states. This section also describes how to match volumes with drives by creating media type object records.
The volume life cycle determines when volumes can be written, and controls how long they remain safe from being overwritten. See MDMS Volume State Transitions for descriptions of operations on volumes within the life cycle.
Each row describes an operation with current and new volume states, commands and GUI actions that cause volumes to change states, and if applicable, the volume attributes that MDMS uses to cause volumes to change states. Descriptions following the table explain important aspects of each operation.
This section describes the transitions between volume states. These processes enable you to secure volumes from unauthorized use by MDMS client applications, or make them available to meet continuing needs. Additionally, in some circumstances, you might have to manually force a volume transition to meet an operational need.
Understanding how these volume transitions occur automatically under MDMS control, or take place manually will help you manage your volumes effectively.
You have more than one option for creating volume object records. You can create them explicitly with the MDMS CREATE VOLUME command: individually, or for a range of volume identifiers.
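For example, the following sketch creates volume object records explicitly; the volume identifiers are hypothetical:
$ MDMS CREATE VOLUME VOL001 ! a single volume
$ MDMS CREATE VOLUME VOL002-VOL010 ! a range of volume identifiers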
You can create the volumes implicitly as the result of an inventory operation on a jukebox. If an inventory operation finds a volume that is not currently managed, a possible response (as you determine) is to create a volume object record to represent it.
You can also create volume object records for large numbers of volumes by opening the jukebox, loading the volumes into the jukebox slots, then running an inventory operation.
Finally, it is possible to perform scratch loads on standalone or stacker drives using the MDMS LOAD DRIVE /CREATE command. If the loaded volume does not exist in the database, MDMS creates it.
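For example, the following sketch performs a scratch load; the drive name is illustrative:
$ MDMS LOAD DRIVE $1$MUA0: /CREATE ! creates a volume record if the loaded volume is not in the database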
In summary, you create volumes either explicitly with the MDMS CREATE VOLUME command, or implicitly through the inventory or load operations.
Use the MDMS initialize feature to make sure that MDMS recognizes volumes as initialized. Unless you acquire preinitialized volumes, you must explicitly initialize them with MDMS before you can use them. If your operations require it, you can initialize volumes that have just been released from allocation.
When you initialize a volume or create a volume object record for a preinitialized volume, MDMS records the date in the initialized date attribute of the volume object record.
Typically, applications request the allocation of volumes. Only in rare circumstances will you have to allocate a volume to a user other than ABS or HSM. However, if you use command procedures for customized operations that require the use of managed media, you should be familiar with the options for volume allocation. Refer to the ABS or HSM Command Reference Guide for more information on the MDMS ALLOCATE command.
Once an application allocates a volume, MDMS allows read and write access to that volume only by that application. MDMS sets volume object record attributes to control transitions between volume states. Those attributes include:
The application requesting the volume can direct MDMS to set additional attributes for controlling how long it keeps the volume and how it releases it. These attributes include:
MDMS allows no other user or application to load or unload a volume with the state attribute value set to ALLOCATED, unless the user has MDMS_LOAD_ALL rights. This volume state allows you to protect your data. Set the amount of time a volume remains allocated according to your data retention requirements.
During this time, you can choose to move the volume to an offsite location.
When a volume's scratch date passes, MDMS automatically frees the volume from allocation.
If the application or user negates the volume object record scratch date attribute, the volume remains allocated permanently.
Use this feature when you need to retain the data on the volume indefinitely.
After the data retention time has passed, you have the option of making the volume immediately available, or you can elect to hold the volume in a TRANSITION state. To force a volume to bypass the TRANSITION state, negate the volume object record transition time attribute.
You can release a volume from transition with the DCL command MDMS SET VOLUME /RELEASE. Conversely, you can re-allocate a volume from either the FREE or TRANSITION states with the DCL command MDMS SET VOLUME /RETAIN.
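For example, assuming a hypothetical volume VOL001:
$ MDMS SET VOLUME VOL001 /RELEASE ! release the volume from TRANSITION to FREE
$ MDMS SET VOLUME VOL001 /RETAIN ! re-allocate the volume from FREE or TRANSITION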
Once MDMS sets a volume's state to FREE, it can be allocated for use by an application once again.
You can make a volume unavailable if you need to prevent ongoing processing of the volume by MDMS. MDMS retains the state from which you set the UNAVAILABLE state. When you decide to return the volume for processing, the volume state attribute returns to its previous value.
The ability to make a volume unavailable is a manual feature of MDMS.
MDMS matches volumes with drives capable of loading them by providing the logical media type object. The media type object record includes attributes whose values describe the attributes of a type of volume.
The domain object record names the default media types that any volume object record will take if none is specified.
Create a media type object record to describe each type of volume. Drive object records include an attribute list of media types the drive can load, read, and write.
Volume object records for uninitialized volumes include a list of candidate media types. Volume object records for initialized volumes include a single attribute value that names a media type. To allocate a drive for a volume, the volume's media type must be listed in the drive object record's media type field, or its read-only media-type field for read-only operations.
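As a sketch, using the TLZ09M media type shown elsewhere in this chapter (the /MEDIA_TYPES qualifier spelling is an assumption; check the MDMS command reference):
$ MDMS CREATE MEDIA_TYPE TLZ09M ! describe one type of volume
$ MDMS SET DRIVE drive_name /MEDIA_TYPES=TLZ09M ! assumption: list the types the drive can load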
Use magazines when your operations allow you to move and manage groups of volumes for single users. Create a magazine object record, then move volumes into the magazine (or similar carrier) with MDMS. All the volumes can now be moved between locations and jukeboxes by moving the magazine to which they belong.
The jukeboxes must support the use of magazines; that is, they must use carriers that can hold multiple volumes at once. If you choose to manage the physical movement of volumes with magazines, then you may set the usage attribute to MAGAZINE for jukebox object records of jukeboxes that use them. You may also define the topology attribute for any jukebox used for magazine-based operations.
If your jukebox does not have ports, and requires you to use physical magazines, you do not have to use the MDMS magazine object record. The jukebox can still access volumes by slot number. Single volume operations can still be conducted by using the move operation on individual volumes, or on a range of volumes.
MDMS provides a feature that allows you to define a series of OpenVMS DCL symbols that describe the attributes of a given volume. By using the /SYMBOLS qualifier with the MDMS SHOW VOLUME command, you can define symbols for all the volume object record attribute values. Use this feature interactively, or in DCL command procedures, when you need to gather information about volumes for subsequent processing.
Refer to the ABS or HSM Command Reference Guide description of the MDMS SHOW VOLUME command.
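For example, assuming a hypothetical volume VOL001 (the resulting symbol names are defined by MDMS):
$ MDMS SHOW VOLUME VOL001 /SYMBOLS ! define DCL symbols for the volume's attributes
$ SHOW SYMBOL * ! inspect the symbols just defined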
MDMS manages volumes and devices as autonomously as possible. However, it is sometimes necessary, and perhaps required, that your operations staff be involved with moving volumes or loading volumes in drives. When MDMS cannot conduct an automatic operation, it sends a message through the OpenVMS OPCOM system to an operator terminal to request assistance.
Understanding this information will help you set up effective and efficient operations with MDMS.
This section describes how to set up operator communication between MDMS and the OpenVMS OPCOM facility. Follow the steps in See Setting Up Operator Communication to set up operator communication.
Set the domain object record OPCOM attribute with the default OPCOM classes for any node in the MDMS management domain.
Each MDMS node has a corresponding node object record. An attribute of the node object record is a list of OPCOM classes through which operator communication takes place. Choose one or more OPCOM classes for operator communication to support operations with this node.
Identify the operator terminals closest to MDMS locations, drives and jukeboxes. In that way, you can direct the operational communication between the nodes and terminals whose operators can respond to it.
Make sure that the terminals are configured to receive OPCOM messages from those classes. Use the OpenVMS REPLY/ENABLE command to set the OPCOM class that corresponds to those set for the node or domain.
$ REPLY/ENABLE=(opcom_class[,...])
Where opcom_class specifications are those chosen for MDMS communication.
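For example, to receive messages for the TAPES class used in the domain record example later in this chapter:
$ REPLY/ENABLE=(TAPES)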
Several commands include an assist feature where you can either require or forego operator involvement. Other MDMS features allow you to communicate with particular OPCOM classes, making sure that specific operators get messages. You can configure jukebox drives for automatic loading, and stand alone drives for operator supported loading. See Operator Management Features for a list of operator communication features and your options for using them.
Once configured, MDMS serves ABS and HSM with uninterrupted access to devices and volumes for writing data. Once volumes are allocated, MDMS catalogs them to keep them safe, and makes them available when needed to restore data.
To service ABS and HSM, you must supply volumes for MDMS to make available, enable MDMS to manage the allocation of devices and volumes, and meet client needs for volume retention and rotation.
To create and maintain a supply of volumes, you must regularly add volumes to MDMS management, and set volume object record attributes to allow MDMS to meet ABS and HSM needs.
To prepare volumes for use by MDMS, you must create volume object records for them and initialize them if needed. MDMS provides different mechanisms for creating volume object records: the create, load, and inventory operations. When you create volume object records, you should consider these factors:
If you create volume object records with the use of a vision equipped jukebox, you must command MDMS to use the jukebox vision system and identify the slots in which the new volumes reside. These two operational parameters must be supplied to either the create or inventory operation.
For command driven operations, these two commands are functionally equivalent.
$ MDMS INVENTORY JUKEBOX jukebox_name /VISION/SLOTS=slot_range /CREATE
$ MDMS CREATE VOLUME /JUKEBOX=jukebox_name /VISION/SLOTS=slot_range
If you create volume object records with the use of a jukebox that does not have a vision system, you must supply the range of volume names as they are labelled and as they occupy the slot range.
If you create volume object records for volumes that reside in a location other than the default location (as defined in the domain object record), you must identify the placement of the volumes and the location in the onsite or offsite attribute. Additionally, you must specify the volume name or range of volume names.
If you create volume object records for volumes that reside in the default onsite location, you need not specify the placement or onsite location. However, you must specify the volume name or range of volume names.
If you acquire preinitialized volumes for MDMS management, and you want to bypass the MDMS initialization feature, you must specify a single media type attribute value for the volume.
Select the format to meet the needs of your MDMS client application. For HSM, use the BACKUP format. For ABS, use BACKUP or RMUBACKUP.
Use a record length that best satisfies your performance requirements. Set the volume protection using standard OpenVMS file protection syntax. Assign the volume to a pool if you use pools to manage the consumption of volumes among multiple users.
Static volume attributes rarely, if ever, need to be changed. MDMS provides them to store information that you can use to better manage your volumes.
The description attribute stores up to 255 characters for you to describe the volume, its use, history, or any other information you need.
The brand attribute identifies the volume manufacturer.
Use the record length attribute to store the length of records written to the volume, when that information is needed.
If you use a stand alone drive, enable MDMS operator communication on a terminal near the operator who services the drive. MDMS signals the operator to load and unload the drive as needed.
You must have a ready supply of volumes to satisfy load requests. If your application requires specific volumes, they must be available, and the operator must load the specific volumes requested.
To enable an operator to service a stand alone drive during MDMS operation, perform the actions listed in See Configuring MDMS to Service a Stand Alone Drive.
Stock the location where the drive resides with free volumes.
For all subsequent MDMS actions involving the drive, use the assist feature.
MDMS incorporates many features that take advantage of the mechanical features of automated tape libraries and other medium changers. Use these features to support lights-out operation, and effectively manage the use of volumes.
Jukeboxes that use built-in vision systems to scan volume labels provide the greatest advantage. If the jukebox does not have a vision system, MDMS has to get volume names by other means. For some operations, the operator provides volume names individually or by range. For other operations, MDMS mounts the volume and reads the recorded label.
The inventory operation registers the contents of a jukebox correctly in the MDMS database. You can use this operation to update the contents of a jukebox whenever you know, or have reason to suspect, that the contents of a jukebox have changed without MDMS involvement.
When you need to update the database in response to unknown changes in the contents of the jukebox, use the inventory operation against the entire jukebox. If you know the range of slots subject to change, then constrain the inventory operation to just those slots.
If you inventory a jukebox that does not have a vision system, MDMS loads and mounts each volume, to read the volume's recorded label.
When you inventory a subset of slots in the jukebox, use the option to ignore missing volumes.
If you need to manually adjust the MDMS database to reflect the contents of a jukebox, use the nophysical option for the MDMS move operation. This allows you to perform a logical move to update the MDMS database.
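As a sketch, assuming a hypothetical volume VOL001 (the destination syntax should be checked against the MDMS command reference):
$ MDMS MOVE VOLUME VOL001 jukebox_name /NOPHYSICAL ! logical move; updates the database only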
If you manage a jukebox, you can use the inventory operation to add volumes to MDMS management. The inventory operation includes the create, preinitialized, media types, and inherit qualifiers to support such operations.
Take the steps in See How to Create Volume Object Records with INVENTORY to use a vision jukebox to create volume object records.
To assist with accounting for volume use by data center clients, MDMS provides features that allow you to divide the volumes you manage by creating volume pools and assigning volumes to them.
Use MDMS to specify volume pools. Set the volume pool options in ABS or HSM to specify that volumes be allocated from those pools for users as needed. See Pools and Volumes identifies the pools for designated groups of users. Note that "No Pool" is for use by all users.
The pool object record includes two attributes to assign pools to users: authorized users, and default users.
Set the authorized users list to include all users, by node or group name, who are allowed to allocate volumes from the pool.
Set the default users list to include all users, by node or group name, for whom the pool will be the default pool. Unless another pool is specified during allocation, volumes will be allocated from the default pool for users in the default users list.
Because volume pools are characterized in part by node or group names, anytime you add or remove nodes or groups, you must review and adjust the volume pool attributes as necessary.
After you create a volume pool object record, you can associate managed volumes with it. Select the range of volumes you want to associate with the pool and set the pool attribute of the volumes to the name of the pool.
This can be done during creation or at any time the volume is under MDMS management.
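For example, assuming a hypothetical pool and volume range:
$ MDMS SET VOLUME VOL001-VOL010 /POOL=PAYROLL_POOL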
To change access to volume pools, modify the membership of the authorized users list attribute.
If you are using the command line to change user access to volume pools, use the /ADD and /REMOVE command qualifiers to modify the current list contents. Use the /NOAUTHORIZED_USERS qualifier to erase the entire user list for the volume pool.
If you are using the GUI to change user access to volume pools, just edit the contents of the authorized users field.
You can also authorize users with the /DEFAULT_USERS attribute, which means that the users are authorized, and that this pool is the pool to which allocation requests for volumes are applied if no pool is specified in the allocation request. You should ensure that any particular user has a default users entry in only one pool.
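For example, assuming a hypothetical pool and users, and using the /ADD and /REMOVE qualifiers described above:
$ MDMS SET POOL PAYROLL_POOL /AUTHORIZED_USERS=(BOSTON::SMITH) /ADD
$ MDMS SET POOL PAYROLL_POOL /AUTHORIZED_USERS=(BOSTON::JONES) /REMOVE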
You can delete volume pools. However, deleting a volume pool may require some additional clean up to maintain the MDMS database integrity. Some volume records could still have a pool attribute that names the pool to be deleted, and some DCL command procedures could still reference the pool.
If volume records naming the pool exist after deleting the pool object record, find them and change the value of the pool attribute.
The MDMS CREATE VOLUME and MDMS LOAD DRIVE commands in DCL command procedures can specify the deleted pool. Change references to the delete pool object record, if they exist, to prevent the command procedures from failing.
You might want to remove volumes from management for a variety of reasons:
To temporarily remove a volume from management, set the volume state attribute to UNAVAILABLE. Any volume object record with the state set to UNAVAILABLE remains under MDMS management, but is not processed through the life cycle. These volumes will not be set to the TRANSITION or FREE state. However, these volumes can be moved and their location maintained.
To permanently remove a volume from management, delete the volume object record describing it.
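As a sketch, assuming a hypothetical volume VOL001 (the exact qualifier for setting the state is an assumption; check the MDMS command reference):
$ MDMS SET VOLUME VOL001 /STATE=UNAVAILABLE ! assumption: temporarily withhold the volume
$ MDMS DELETE VOLUME VOL001 ! permanently remove the volume record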
Volume rotation involves moving volumes to an off-site location for safekeeping with a schedule that meets your needs for data retention and retrieval. After a period of time, you can retrieve volumes for re-use, if you need them. You can rotate volumes individually, or you can rotate groups of volumes that belong to magazines.
The first thing you have to do for a volume rotation plan is create location object records for the on-site and off-site locations. Make sure these location object records include a suitable description of the actual locations. You can optionally specify hierarchical locations and/or a range of spaces, if you want to manage volumes by actual space locations.
You can define as many different locations as your management plan requires.
Once you have object records that describe the locations, choose those that will be the domain defaults (defined as attributes of the domain object record). The default locations will be used when you create volumes or magazines and do not specify onsite and/or offsite location names. You can define only one onsite location and one offsite location as the domain default at any one time.
Manage the volume rotation schedule with the values of the offsite and onsite attributes of the volumes or magazines you manage. You set these values. In addition to setting these attribute values, you must check the schedule periodically to select and move the volumes or magazines.
See Sequence of Volume Rotation Events shows the sequence of volume rotation events and identifies the commands and GUI actions you issue.
MDMS starts three scheduled activities at 1AM, by default, to do the following:
These three activities are controlled by a logical name, run as separately named jobs, generate log files, and notify users when volumes are deallocated, as described in the sections below.
The start time for scheduled activities is controlled by the logical:
MDMS$SCHEDULED_ACTIVITIES_START_HOUR
By default, the scheduled activities start at 1AM, which is defined as:
$ DEFINE/SYSTEM/NOLOG MDMS$SCHEDULED_ACTIVITIES_START_HOUR 1
You can change when the scheduled activities start by changing this logical in SYS$STARTUP:MDMS$SYSTARTUP.COM. The hour must be an integer between 0 and 23.
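For example, to start the scheduled activities at 4AM instead:
$ DEFINE/SYSTEM/NOLOG MDMS$SCHEDULED_ACTIVITIES_START_HOUR 4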
When these scheduled activity jobs start up, they have the following names:
If any volumes are deallocated, the users in the Mail attribute of the Domain object will receive notification by VMS mail.
Operators will receive Opcom requests to move the volumes or magazines.
These scheduled activities generate log files. These log files are located in MDMS$LOGFILE_LOCATION and are named:
These log files do not show which volumes or magazines were acted upon. They show the command that was executed and whether it was successful or not.
If the Opcom message is not replied to by the time the next scheduled activity starts, the activity is canceled and a new activity is scheduled. For example, if nobody replied to the message from Saturday at 1AM, then on Sunday MDMS cancels the request and generates a new request. The log file for Saturday night would look like this:
$ SET VERIFY
$ SET ON
$ MDMS MOVE VOL */SCHEDULE
%MDMS-E-CANCELED, request canceled by user
MDMS$SERVER job terminated at 25-APR-1999 01:01:30.48
Nothing is lost because the database did not change, but this new request could require more volumes or magazines to be moved.
The following shows an example that completed successfully after deallocating and releasing the volumes:
$ SET VERIFY
$ SET ON
$ MDMS DEALLOCATE VOLUME /SCHEDULE/VOLSET
MDMS$SERVER job terminated at 25-APR-1999 01:03:31.66
To notify users when the volumes are deallocated, place the user names in the Mail attribute of the Domain object. For example:
$ MDMS show domain
Description: Smith's Special Domain
Mail: SYSTEM,OPERATOR1,SMITH
Offsite Location: JOHNNY_OFFSITE_TAPE_STORAGE
Onsite Location: OFFICE_65
Def. Media Type: TLZ09M
Deallocate State: TRANSITION
Opcom Class: TAPES
Request ID: 496778
Protection: S:RW,O:RW,G:R,W
DB Server Node: DEBBY
DB Server Date: 26-APR-1999 14:20:08
Max Scratch Time: NONE
Scratch Time: 365 00:00:00
Transition Time: 1 00:00:00
Network Timeout: NONE
$
In the above example, users SYSTEM, OPERATOR1, and SMITH will receive VMS mail when any volumes are deallocated during scheduled activities or when someone issues the following command:
$ MDMS DEALLOCATE VOLUME /SCHEDULE/VOLSET
If you delete all users in the Mail attribute, nobody will receive mail when volumes are deallocated by the scheduled activities or the DEALLOCATE VOLUME /SCHEDULE command.
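To set the list from DCL, you might use a command such as the following (the qualifier spelling is an assumption; check the MDMS command reference):
$ MDMS SET DOMAIN /MAIL=(SYSTEM,OPERATOR1,SMITH)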
MDMS GUI users have access to features that guide them through complex tasks. These features conduct a dialog with users, asking them about their particular configuration and needs, and then provide the appropriate object screens with information about setting specific attribute values.
The features support tasks that accomplish the following:
The procedures outlined in this section include command examples with recommended qualifier settings shown. If you choose to perform these tasks with the command line interface, use the MDMS command reference for complete command details.
This task offers the complete set of steps for configuring a drive or jukebox into an MDMS domain and adding new volumes used by those drives. This task can be performed to configure a new drive or jukebox that can use managed volumes.
This task can also be performed to add new volumes into management that can use managed drives and jukeboxes.
Verify that the drive is on-line and available.
If you are connecting the jukebox or drive to a set of nodes which do not already share access to a common device, then create a group object record.
If you are configuring a new jukebox into management, then create a jukebox object record.
If the drive you are configuring uses a new type of volume, then create a media type object record.
If you need to identify a new place for volume storage near the drive, then create a location object record.
Create the drive object record for the drive you are configuring into MDMS management.
Enable the drive (and if you just added a jukebox, enable it too).
If you are adding new volumes into MDMS management, then continue with See .
If you have added a new media type to complement a new type of drive, and you plan to use managed volumes, set the volumes to use the new media type.
If the volumes you are processing are of a type you do not presently manage, complete the actions in this step. Otherwise, continue with See .
Create a media type object record.
If you are using a jukebox with a vision system to create volume object records, then continue with See . Otherwise, continue with See to create volume records.
If you use magazines in your operation, then continue with this step. Otherwise, continue with See .
If you do not have a managed magazine that is compatible with the jukebox, then create a magazine object record.
Place the volumes in the magazine.
$ MDMS MOVE MAGAZINE magazine_name jukebox_name /START_SLOT=n
Place the volumes in the jukebox. If you are not using all the slots in the jukebox, note the slots you are using for this operation. Inventory the jukebox, or just the slots that contain the new volumes.
If you are processing preinitialized volumes, use the /PREINITIALIZED qualifier, then your volumes are ready for use.
Initialize the volumes in the jukebox if they were not created as preinitialized. After you initialize volumes, you are done with this procedure.
Create volume object records for the volumes you are going to manage.
If you are processing preinitialized volumes, use the /PREINITIALIZED qualifier, then your volumes are ready for use.
Initialize the volumes. This operation will direct the operator when to load and unload the volumes from the drive.
This task describes the complete set of decisions and actions you could take to remove a drive from management. That is, when you have to remove the last drives of a particular kind, you take with them all associated volumes, then update any remaining MDMS object records that reference the object records you delete. Any other task of removing just a drive (one of many to remain) or removing and discarding volumes involves a subset of the activities described in this procedure.
If there is a volume in the drive you are about to remove from management, then unload the volume from the drive.
Delete the drive from management.
If you have media type object records to service only the drive you just deleted, then complete the actions in this step. Otherwise, continue with See .
Delete the media type object record.
If volumes remaining in management reference the media type, then set the volume attribute value for those volumes to reference a different media type value. Use the following command for uninitialized volumes:
Use the following command for initialized volumes:
If the drives you have deleted belonged to a jukebox, then complete the actions in this step. Otherwise, continue with See .
If the jukebox still contains volumes, move the volumes (or magazines, if you manage the jukebox with magazines) from the jukebox to a location that you plan to keep under MDMS management.
If a particular location served the drives or jukebox, and you no longer have a need to manage it, then delete the location.
Move all volumes, the records of which you are going to delete, to a managed location.
If the volumes to be deleted exclusively use a particular media type, and that media type has a record in the MDMS database, then take the actions in this step. Otherwise, continue with See .
Delete the media type object record.
If drives remaining under MDMS management reference the media type you just deleted, then update the drives' media type list accordingly.
If the volumes to be deleted are the only volumes to belong to a volume pool, and there is no longer a need for the pool, then delete the volume pool.
If the volumes to be deleted exclusively used certain managed magazines, then delete the magazines.
This procedure describes how to gather and rotate volumes from the onsite location to an offsite location. Use this procedure in accordance with your data center site rotation schedule to move backup copies of data (or data destined for archival) to an offsite location. Additionally, this procedure processes volumes from the offsite location into the onsite location.
This procedure describes the steps you take to move allocated volumes from a jukebox and replace them with scratch volumes. This procedure is aimed at supporting backup operations, not operations that involve the use of managed media for hierarchical storage management.
Report on the volumes to remove from the jukebox.
If you manage the jukebox on a volume basis, perform this step with each volume; otherwise proceed with See with instructions for magazine management.
Identify the magazines to which the volumes belong, then move the magazines from the jukebox.
If you manage the jukebox on a volume basis, perform this step; otherwise proceed with See for magazine management.
Move free volumes to the magazine, and move the magazine to the jukebox.
Preparing For Disaster Recovery
In the event of a disaster, it is essential to know how to get your system up and running as quickly as possible. So that you are prepared for a disaster situation, this section contains the following information:
To ease the recovery process, you should configure your system as recommended in the Archive Backup System for OpenVMS Installation Guide.
The information in See Preparing for Disaster Recovery describes the preparations you need to make for a disaster situation.
See Disaster Recovery Tasks describes the tasks you must perform to make sure you are prepared in the event of a disaster situation.
See Special Save Request shows an illustrated view of the special save requests that you need to create to ensure quick recovery from a disaster situation.
To recover ABS, follow the procedure in See Recovering ABS.
To recover any OpenVMS client nodes, all ABS client nodes must have access to a tape device or disk device that is compatible with the volume that contains the save set for that node. The recovery procedure is the same as described in See Recovering ABS From A Disaster Situation.
When creating a save or restore request, you can specify the exact time that you want the request to begin. See Start Time Formats lists the valid formats that you can use and defines each format listed.
When creating an ABS save request, you can specify an explicit interval at which to repeat the save request. Modifying the explicit interval option does not change the next scheduled start time for the save request. If you are using the INT_QUEUE_MANAGER or EXT_QUEUE_MANAGER scheduler interface option, this interval is ignored. For EXT_SCHEDULER and DECSCHEDULER, refer to the description of interval specification for the third-party scheduler product being used.
The explicit interval is passed as a string to the scheduler interface being used. It is not used for the default scheduler interface option INT_QUEUE_MANAGER.
Archive Backup System for OpenVMS (ABS) provides the following cleanup utilities:
The information presented in this appendix describes how to start, stop, and change the default behavior of ABS cleanup utilities.
ABS provides the Database Cleanup Utility so that one-time-only save or restore requests that have successfully completed and are not scheduled to run again will be removed from ABS policy database. Review the following descriptions to understand the criteria that the Database Cleanup Utility uses:
The Database Cleanup Utility is started when you run the file ABS$STARTUP.COM. It runs as an OpenVMS batch job and resubmits itself every night at midnight. You can also start the database cleanup utility by executing command procedure ABS$SYSTEM:ABS$START_DB_CLEANUP.COM anytime.
If you wish to change the start time, modify the start_time symbol in command procedure ABS$SYSTEM:ABS$START_DB_CLEANUP.COM.
The default behavior of the Database Cleanup Utility is to remove any successfully executed one-time-only job records from ABS policy database after 72 hours. However, if you wish to change the cleanup delay, modify the symbol cleanup_delay in the command procedure ABS$SYSTEM:ABS$START_DB_CLEANUP.COM.
ABS database cleanup utility creates a new log file named ABS$LOG:ABS_CLEAN_DB_UTIL.LOG each time the cleanup utility is started. This log file contains the information about the records that are removed and any associated error messages.
Recommendation:
For maintenance purposes, periodically check this log file and purge the older versions.
To shut down the database cleanup utility job and prevent future scheduling, use one of the methods in See Shutting Down the Database Cleanup Utility.
ABS provides a Catalog Cleanup Utility that removes the references to expired data objects from ABS catalog. This feature helps maintain the size of the catalogs. The Catalog Cleanup Utility searches ABS catalogs for any entries that have expired on or before yesterday's date, and then it removes those entries from the catalogs.
ABS creates an OpenVMS batch job for the Catalog Cleanup Utility on each node that is running ABS. If multiple nodes access and use the same catalogs, you may want to change the default behavior of the Catalog Cleanup Utility. Multiple nodes may access the same catalog if the following conditions are present:
The catalog cleanup utility is started automatically in SYS$STARTUP:ABS$STARTUP.COM. Command procedure ABS$SYSTEM:ABS$START_CATALOG_CLEANUP.COM is called in the startup procedure to create an OpenVMS batch job which is scheduled to run at noon. The job resubmits itself for noon every day.
If you wish to change the start time, modify the start_time symbol in command procedure ABS$SYSTEM:ABS$START_CATALOG_CLEANUP.COM. The next time it runs, the catalog cleanup utility will resubmit itself with the new start time.
The Catalog Cleanup Utility creates the following log files:
The ABS catalog cleanup utility creates a new log file named ABS$LOG:ABS_CATALOG_DB_UTIL.LOG each time the cleanup utility is started. This log file contains the information about the records that are removed and any associated error messages.
Recommendation:
For maintenance purposes, periodically check this log file and purge the older versions.
The file SYS$MANAGER:ABS$SHUTDOWN.COM will shut down the Catalog Cleanup Utility. This is the recommended method of shutting down ABS and any of the utilities that ABS invokes.
If you need to shut down the catalog cleanup utility without shutting down the rest of the ABS software, deassign the ABS_CATALOG_CLEANUP logical name:
$ SHOW LOG/FULL ABS_CATALOG_CLEANUP
   "ABS_CATALOG_CLEANUP" [exec] = "1" (LNM$SYSTEM_TABLE)
$ DEASSIGN/SYSTEM/EXECUTIVE ABS_CATALOG_CLEANUP
$ SHOW LOG/FULL ABS_CATALOG_CLEANUP
%SHOW-S-NOTRAN, no translation for logical name ABS_CATALOG_CLEANUP
Once this logical name is deassigned, ABS performs an orderly shutdown of the Catalog Cleanup Utility. Running SYS$MANAGER:ABS$SHUTDOWN.COM deletes the Catalog Cleanup Utility batch job.
If you have previously shutdown the Catalog Cleanup Utility by deassigning the logical name ABS_CATALOG_CLEANUP, you must redefine the logical in order for the Catalog Cleanup Utility to perform a cleanup operation. Failing to do so would allow the Catalog Cleanup Utility to continue to run, but the cleanup operation would not take place.
$ DEFINE/SYSTEM ABS_CATALOG_CLEANUP 1
There are two pieces that are required for the Catalog Cleanup Utility to run correctly:
Archive Backup System for OpenVMS (ABS) provides a set of backup schedules that ease the burden of restore operations. These schedules are illustrated in See Log-n Backup Schedules.
The worksheet provided in See Storage Policy Worksheet is designed to help you configure your ABS storage policies. To reuse this worksheet, make a copy of the worksheet and record your entries on the copy.
Reference:
For detailed information about storage policies, refer to Chapter 7
, Creating Storage Policies
.
The worksheet provided in See Environment Policy Worksheet is designed to help you configure your environment policies. To reuse this worksheet, make a copy of the worksheet and record your configuration on the copy.
Reference:
For detailed information about environment policies, refer to Chapter 8
, Creating Environment Policies
.
The worksheet provided in See Save Request Worksheet is designed to help you configure your save requests. To reuse the worksheet, make copies of the worksheet and record your configuration on the copies.
Reference:
For detailed information about save requests, refer to Chapter 9
, Creating Save Requests
.
Two logical names have been defined to provide additional module tracing information. Typically, these logical names will not be useful to the customer, but they may assist the ABS engineering team in tracking problems.
The following logical names are provided in ABS:
Enables tracing for all ABS components. When this logical name is set to "TRUE", it will provide information about the modules executed by the policy engine (client and server).
Should you encounter problems when saving or restoring data using ABS for an NT client system, ABS provides a way to help you troubleshoot the problem. Assign a system variable on the NT client system that, in turn, creates log files about the NT client system during ABS backup operations. These log files will assist you during the troubleshooting process.
To assign the system variable, use the procedure in See Assigning a System Variable for NT Troubleshooting.
If you are supporting NT or UNIX clients, to ensure successful save and restore operations, set the quotas to the following values on the ABS OpenVMS server node:
UCX> SET PROTOCOL TCP /QUOTA=(SEND:50000,RECEIVE:50000)
ABS stores data on tape based on ANSI Standard X3.27-1987, File Structure and Labeling of Magnetic Tapes for Information Exchange. This standard requires that the block length (number of bytes per block for a file) be stored in the header section and the block count (number of blocks in a file) be stored in the end of file section. Together these fields determine the maximum number of bytes that the file contains on tape. So, in theory the following formula is implemented:
block length * block count = number of bytes
These fields on tape are stored in an ASCII format with the block length being five digits, and the block count being six digits. This allows for a maximum save request disk size of 99999 * 999999 = 99,998,900,001 bytes (approximately 99 gigabytes (GB)).
ABS uses a default block length of 10240 bytes/block when it stores data to tape. As a result, the maximum disk size by default is 10240 * 999999 = 10,239,989,760 bytes (approximately 10 GB). If the actual number of bytes exceeds this amount, then ABS$UBS will raise the following assertion and the save request will fail:
assert error: expression = section_block_count <= 999999
The value of the block length is specified to the underlying gtar backup engine as a blocking factor. The blocking factor is defined as a multiple of 512 bytes. The default block length passed to gtar is "-b20". To determine an appropriate blocking factor or block length for a specific situation, follow these steps:
For example, if the disk size is approximately 30,000,000,000 bytes (30 GB), use the following formula:
30,000,000,000 / 999999 / 512 = 58.59 or 59
This results in a blocking factor of "-b59".
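As a check, a blocking factor of 59 gives 59 * 512 = 30,208 bytes per block, and 30,208 * 999999 = 30,207,969,792 bytes, which covers the 30 GB disk.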
You can modify the default block length from the GUI for an NT or UNIX save or restore request on the Agent Qualifiers window (see Section or Section ). Specify this value in the Agent Qualifiers window.
Restriction:
ABS will not produce the correct results if the value exceeds "-b63". If the disk is large enough to exceed this amount, create more than one save request for that particular disk.
To modify the blocking factor, use the procedure described in See Modifying the Blocking Factor.
Click Backup Agent-Specific Qualifiers and enter the value of -bnn
Restore requirement:
When restoring data from a save request where the blocking factor has been modified, you must specify the same blocking factor that was specified on the save request. Otherwise, the restore request will fail due to an invalid block size on the tape. As a default, ABS uses 10240.
If you want to back up multiple types of ABS clients to the same volume set, use the same storage policy for those save requests. A single storage policy always uses the same volume set, whether the data comes from an OpenVMS, UNIX, or NT client.
To save data from multiple types of ABS clients to the same volume set, create save requests similar to the following examples:
$ ABS CREATE SAVE/NAME=VMS_SYSTEM/STORAGE=SYSTEM_BACKUPS -
_$/START=00:00 DISK$USER1: /OBJECT_TYPE=VMS_FILES
$ ABS CREATE SAVE/NAME=UNIX_SYSTEM/STORAGE=SYSTEM_BACKUPS -
_$/START=01:00 /abs /OBJECT_TYPE=UNIX_FILES
$ ABS CREATE SAVE/NAME=NT_SYSTEM/STORAGE=SYSTEM_BACKUPS -
_$/START=02:00 c:\ /OBJECT_TYPE=NT_FILES
By using the same storage policy, each save request uses the same volume set.
ABS generates a set of log files that contain information about ABS operations. Two of the log files shown in See ABS Log Files are policy engine log files. These log files contain information about ABS transactions and remain open to record ABS transactions. The third log file is a save or restore request log file that is generated when you execute a save or restore request using either the GUI or DCL.
See ABS Log Files describes the log file location, log file name, and contents of the specific log file.
ABS$POLICY_<node_name>.LOG
Audit information about ABS operations that includes a sequence number and an ABS command.
<request_name>.LOG
Log information about the execution of a save or restore request. A new log file is created each time a save or restore request is executed.
Recommendation:
Do not delete these log files. They contain important information that will assist ABS Engineering with any troubleshooting process. However, if the log files are consuming too much disk space, you can delete previous versions of the log files and keep only the most recent log files:
$ PURGE ABS$LOG:filename.log/KEEP=5 ! keeps the latest five versions
$ DELETE ABS$LOG:filename.log/BEFORE=date ! deletes any version before the specified date
A new logical name, ABS$COORD_ALPHA_STACKSIZE, has been added to ABS that can be used to increase the stack size on Alpha systems. ABS now sets the default stack size to 65536 (8 * 8192). This corrected several ACCVIO and CMA-F-EXCCOP errors, especially at the end of a tape.
Additional error messages have been added and may show up in ABS$LOG:ABS$POLICY_ENGINE_<node_name>.LOG and ABS$LOG:ABS$POLICY_<node_name>.LOG files. These messages have been added primarily to aid engineering and the support organizations with diagnosing problems.
After upgrading ABS, it may appear that ABS has incorrectly recreated ABS database. This can happen if your OpenVMS system was rebooted and ABS was not restarted before you started the upgrade installation procedure, in which case the ABS logical names may not be defined. The upgrade procedure requires certain logicals to be present on the OpenVMS system.
If you are upgrading ABS, make sure ABS has been running before you start the upgrade installation procedure. Follow these steps:
If you receive the error "ABS_NET_CONN_ACCEPT_FAILED, network accept connection request failed" during a save or restore request, you may set a logical name to help eliminate the problem.
$ DEFINE/SYSTEM ABS$MAX_IO_ACCESS_WAIT n
Where n is the number of 5-second increments for the server to wait for connections to be established. The default value is 4.
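For example, to wait up to 60 seconds (twelve 5-second increments):
$ DEFINE/SYSTEM ABS$MAX_IO_ACCESS_WAIT 12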
There are AUDIT flags in the ABS$SYSTEM:ABS$POLICY_ENGINE_CONFIG.DAT file which enable additional tracing for ABS (for example, ABS$AUDIT_SHOW_EXEC_ENV).
Do NOT change the values of these flags unless specifically told to by ABS support or engineering. These symbols will produce a potentially large amount of data to the ABS policy engine log files, or create additional log files.
If you see delays in allocating, loading, mounting, or dismounting volumes for use by ABS, it is helpful to enable an OPCOM TAPE operator on your terminal window. MDMS sends helpful OPCOM messages to the TAPE operator when it is having difficulty executing the request.
If you report a problem to your COMPAQ support organization, the following information should be included.
This appendix presents Archive Backup System for OpenVMS (ABS) error messages and provides descriptions and User Actions for each.
Explanation: The requested access is denied. The user does not have the proper access controls set to be granted access to the object.
User Action: Ask the owner of the object or the storage administrator to grant access to the object.
Explanation: The coordinator received a service request with no data mover designated as the sender.
User Action: ABS internal error. Submit an SPR.
Explanation: An attempt to dismount a volume failed because the archive file system associated with the storage policy does not support dismount operations. The backup agent information erroneously interpreted an output message as entering the DISMOUNT state.
User Action: ABS internal error. Submit an SPR.
Explanation: An attempt to extend a save operation failed because the archive file system associated with the storage policy does not support extend operations. The backup agent information erroneously interpreted an output message as entering the NEW_VOLUME state.
User Action: ABS internal error. Submit an SPR.
Explanation: The backup agent moving data was aborted.
User Action: Check the log file for more specific information associated with the error, correct the indicated error, and retry the save or restore request.
Explanation: The backup subprocess was stopped externally to ABS and the scheduler in use.
User Action: Retry the data movement operation.
Explanation: An explicitly named backup agent does not exist, or the backup agent information used during a save operation was deleted.
User Action: Specify an existing backup agent.
Explanation: An archive object entry instance was not found.
User Action: ABS internal error. Submit an SPR.
Explanation: An ABS internal archive object entry list is corrupt.
User Action: ABS internal error. Submit an SPR.
Explanation: An archive object entry was not found in the catalog.
User Action: Use a wildcard specification in the catalog reporting facility to verify the archive object name and specify the correct data object name. If using the wildcard specification does not find the correct data object, no valid save operations of the object have been performed.
Explanation: The archive object entry is already on ABS internal AOE return list.
User Action: ABS internal error. Submit an SPR.
Explanation: AOE Show Context is not NULL.
User Action: ABS internal error. Submit an SPR.
Explanation: Common AOE information specified does not match the catalog.
User Action: ABS internal error. Submit an SPR.
Explanation: An unexpected exception occurred in the API.
User Action: If you are using the GUI, submit an SPR. If you are using the API, validate the parameters you specified to the API. If error still occurs, submit an SPR.
Explanation: An attempt to access archive resources such as tape drives, tape pools, or disk directories failed.
User Action: Check the transaction log file, repair the error, and retry the operation. Note that you may need to check the MDMS log files for access failures to MDMS archive resources.
Explanation: The specified storage policy was not found in ABS policy database.
User Action: Use a wildcard show operation to determine a valid storage policy name and specify the correct storage policy name.
Explanation: An attempt to release archive resources such as tape drives, media sets or disk directories failed.
User Action: Check the transaction log file, repair the error, and retry the operation. Note that you may need to check the MDMS log files for deaccess failures to MDMS archive resources.
Explanation: An attempt to extend a media set failed.
User Action: Check the transaction log file, repair the error, and retry the operation. Note that you may need to check the MDMS log files for deaccess failures to MDMS archive resources.
Explanation: A save request matching the specified name, name and version, or UID was not found. Specify a valid save request name, name and version, or UID.
User Action: Use a wildcard show operation to determine a valid save request name; specify the correct save request name.
Explanation: The specified archive transaction was not found in ABS policy database.
User Action: ABS internal error. Submit an SPR.
Explanation: The backup agent state machine entered a bad state.
User Action: ABS internal error. Submit an SPR.
Explanation: An attempt to translate a platform-specific error failed.
User Action: ABS internal error. Submit an SPR.
Explanation: The specified backup agent does not support the specified movement type.
User Action: The backup agent information is incorrect. Submit an SPR.
Explanation: The transaction added to a session specified a storage policy other than the session's storage policy.
User Action: ABS internal error. Submit an SPR.
Explanation: An attempt to close the catalog failed.
User Action: ABS internal error. Submit an SPR.
Explanation: There is a syntax error on the ABS command line.
User Action: Correct the syntax and retry the command.
Explanation: The transaction coordinator encountered an error, but is retrying the operation.
Explanation: An unexpected exception occurred in the coordinator.
User Action: ABS internal error. Submit an SPR.
Explanation: No transaction UID was found on the coordinator command line.
User Action: Specify a valid transaction UID on the coordinator command line.
Explanation: An attempt to create an ABS environment policy failed.
User Action: Additional information should follow this error. Evaluate the additional information to determine the problem. Correct the problem and retry the command.
Explanation: An attempt to create an ABS storage policy has failed.
User Action: Additional information should follow this error. Evaluate the additional information to determine the problem. Correct the problem and retry the command.
Explanation: An attempt to create a subprocess to house the backup agent failed.
User Action: Check the transaction log file, correct the indicated error, and retry the operation.
Explanation: An unexpected exception occurred in the data mover.
User Action: ABS internal error. Submit an SPR.
Explanation: ABS policy engine's attempt to access ABS policy database failed.
User Action: Check to see whether the ABS account can access ABS policy database that is pointed to by the logical ABSDATABASE. If it can, ABS policy database is corrupt. Restore ABS policy database.
Explanation: An attempt to delete a record from ABS database failed.
User Action: Additional information should follow the error message. Evaluate the additional information to determine the specific problem with ABS policy database. Correct the problem and retry the command.
Explanation: An attempt to delete a record from persistent store failed.
User Action: Additional information should follow this error message. Evaluate the additional information to determine the specific problem with ABS policy database. Correct the problem and retry the operation.
Explanation: An attempt to insert a record from persistent store failed.
User Action: Additional information should follow this error message. Evaluate the additional information to determine the specific problem with ABS policy database. Correct the problem and retry the operation.
Explanation: An attempt to select a record from persistent store failed.
User Action: Additional information should follow this error message. Evaluate the additional information to determine the specific problem with ABS policy database. Correct the problem and retry the operation.
Explanation: A transaction could not be started on ABS policy database.
User Action: Additional information should follow this error message. Evaluate the additional information to determine the specific problem with ABS policy database. Correct the problem and retry the operation.
Explanation: An attempt to update a record in persistent store failed.
User Action: Additional information should follow this error message. Evaluate the additional information to determine the specific problem with ABS policy database. Correct the problem and retry the operation.
Explanation: An attempt to delete a backup agent subprocess failed.
User Action: Check the transaction log file for more information and correct any errors. The transaction should have completed.
Explanation: An attempt to create an on-disk directory for a Files-11 storage policy failed.
User Action: Make sure the specified primary archive location has a valid Files-11 format. If it does, check the transaction log file for more specific information, correct the indicated error, and retry the operation.
Explanation: ABS found an unexpected relative volume number (RVN) mounted on a drive.
User Action: ABS internal error. Submit an SPR.
Explanation: ABS could not find a drive with the specified relative volume number (RVN) mounted on it.
User Action: ABS internal error. Submit an SPR.
Explanation: An attempt to create a storage policy failed because the storage policy already exists.
User Action: Specify a different storage policy name, or modify the existing storage policy to meet your needs.
Explanation: An attempt to create a save request failed because the save request already exists.
User Action: Specify a different save request name, or modify the existing save request to meet your needs.
Explanation: An attempt to create a request environment policy failed because the request environment policy already exists.
User Action: Specify a different request environment policy name, or modify the existing request environment policy to meet your needs.
Explanation: An attempt to create a restore request failed because the restore request already exists.
User Action: Specify a different restore request name, or modify the existing restore request to meet your needs.
Explanation: The specified element was not found in the list.
User Action: ABS internal error. Submit an SPR.
Explanation: The specified environment policy was not found in ABS policy database.
User Action: Use a wildcard show operation to determine a valid request environment policy name. Specify the valid request environment policy name.
Explanation: The check for media set consolidation interval failed.
User Action: ABS internal error. Submit an SPR.
Explanation: One step in the archive extend operation has completed.
User Action: ABS internal error. Submit an SPR.
Explanation: A failure occurred.
User Action: ABS internal error. Submit an SPR.
Explanation: A fatal error was detected in the data mover.
User Action: Additional information should follow this error message. Evaluate the additional information to determine the specific problem with ABS policy database. Correct the problem and retry the operation.
Explanation: SYS$FIND_HELD service failed.
User Action: ABS internal error. Submit an SPR.
Explanation: SYS$GETJPI(W) call failed.
User Action: ABS internal error. Submit an SPR.
Explanation: An attempt to get cluster node names failed.
User Action: ABS internal error. Submit an SPR.
Explanation: An attempt to get the current process ID failed.
User Action: ABS internal error. Submit an SPR.
Explanation: SYS$GETUAI service failed.
User Action: ABS internal error. Submit an SPR.
ABS_GET_UIC_FAILED, Failed to get current process UIC
Explanation: An attempt to get the current UIC failed.
User Action: ABS internal error. Submit an SPR.
Explanation: SYS$IDTOASC service failed.
User Action: ABS internal error. Submit an SPR.
Explanation: An attempt to activate ABS$USSSHR failed.
User Action: Additional information should follow this error. Evaluate the additional information, correct any indicated errors, and retry the operation.
Explanation: Some archive resources such as tape drives, media sets, or disk directories were allocated, but not all requested resources could be allocated.
User Action: None. The operation continues with the limited resources.
Explanation: An internal error occurred.
User Action: ABS internal error. Submit an SPR.
Explanation: A value being parsed from the backup agent output is too large to be contained in a 32-bit integer.
User Action: ABS internal error. Submit an SPR. The backup agent information probably directs ABS to parse an incorrect item.
Explanation: Invalid catalog access was specified.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid access identifier parameter. The access identifier parameter exceeds the maximum length.
User Action: Specify an access identifier parameter of valid length.
Explanation: Invalid access rights list parameter was specified.
User Action: Specify a valid access rights list parameter.
Explanation: Invalid access rights string was specified.
User Action: Specify a valid access rights string.
Explanation: Invalid backup agent ID data was specified.
User Action: The Agent_ID_Data parameter exceeds the maximum valid length. Specify an Agent_ID_Data parameter of valid length.
Explanation: Invalid agent file system root parameter was specified.
User Action: Correct the agent file system root parameter.
Explanation: Invalid backup agent indicator was specified.
User Action: Specify a valid backup agent indicator.
Explanation: The specified backup agent information is invalid.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: Specified backup agent name is too long.
User Action: The backup agent name is either too long or contains invalid characters. Specify a valid backup agent name.
Explanation: The backup agent file system root is invalid.
User Action: The agent_filesystem_root is too long. Specify a shorter agent_filesystem_root.
Explanation: Invalid backup agent selection criteria.
User Action: Specify a valid backup agent selection criteria.
Explanation: The UID string in the backup agent information is invalid.
User Action: If you have made changes to the backup agent information, restore the original backup agent information from ABS distribution kit. If you have not changed the backup agent information, submit an SPR.
Explanation: An invalid archive attributes structure or value was specified.
User Action: Specify a valid archive attributes structure or value.
Explanation: An invalid storage policy name was specified.
User Action: The storage policy name is either too long or contains invalid characters. Specify a valid storage policy name.
Explanation: An invalid archive date format was specified.
User Action: Specify a valid archive date format.
Explanation: An invalid archive interval value was specified.
User Action: Specify a valid archive interval value.
Explanation: Invalid archive object location structure.
User Action: Use ABS_Set_archv_object_location routine to set up a valid archive object location.
Explanation: An invalid archive requirements structure was specified.
User Action: Use ABS add_archive_reqmnts utility routine to set up a correct archive requirements structure.
Explanation: Invalid object archived status.
User Action: Archive object status is too long. Specify a shorter string.
Explanation: Catalog date archived is invalid.
User Action: Specify a valid date archived parameter.
Explanation: The transaction UID, which is specified only in internal routines, is invalid.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid backup date format.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid catalog was specified.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid catalog type.
User Action: ABS internal error. Submit an SPR.
Explanation: A nonexistent scratch collection was specified.
User Action: Specify a valid scratch collection when you create or modify the storage policy.
Explanation: An invalid compound object set structure or value was specified.
User Action: Specify a valid compound object set structure or value. Use the ABS_Set_compound_object_set utility routine to set up a valid structure.
Explanation: Catalog ConnectionId argument is invalid.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid consolidation interval criteria structure or value was specified.
User Action: Specify a valid consolidation interval criteria structure or value.
Explanation: Invalid creation date format.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid movement type structure or value was specified.
User Action: Specify a valid movement type structure or value.
Explanation: Unable to format a binary date to an ASCII date format.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid date match criteria was specified.
User Action: Specify a valid date match criteria.
Explanation: Specified an invalid default flag.
User Action: Specify a valid default flag on ABS_SET_COMPOUND_OBJECT_SET utility routine.
Explanation: An invalid archive destination indicator structure or value was specified.
User Action: Specify a valid archive destination indicator structure or value.
Explanation: Invalid device name.
User Action: ABS internal error. Submit an SPR.
Explanation: Missing the diagnostic block parameter.
User Action: Specify a valid diagnostic block address.
Explanation: An invalid disk name was specified in the include specification.
User Action: Specify a valid disk name on the include specification.
Explanation: An invalid drive name list structure or value was specified.
User Action: Specify a valid drive name list structure or value.
Explanation: An invalid drive name structure or value was specified.
User Action: Specify a valid MDMS drive name.
Explanation: An invalid request environment policy name was specified.
User Action: The specified request environment policy name is either too long or contains invalid characters. Specify a valid request environment policy name.
Explanation: An invalid object name was specified.
User Action: Specify a valid object name.
Explanation: An invalid epilogue structure or value was specified.
User Action: Specify a valid epilogue structure or value.
Explanation: Specified an invalid request environment policy name.
User Action: The specified request environment policy does not exist. Perform a wildcard show operation to find the valid request environment policy names. Specify a valid request environment policy.
Explanation: Invalid expiration date format.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid explicit interval.
User Action: Specify a valid explicit interval.
Explanation: Invalid archive file system filename.
User Action: Check the archive file system filename. If it contains valid characters, submit an SPR.
Explanation: An invalid archive file system structure or value was specified.
User Action: Specify a valid archive file system structure or value.
Explanation: Invalid parameter for tag formatting.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid idle retain flag structure or value was specified.
User Action: Specify a valid idle retain flag structure or value.
Explanation: Invalid instance characteristics.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid listing option was specified.
User Action: Specify a valid listing option.
Explanation: An invalid log file handle in an open log file was specified.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid logging option was specified.
User Action: Specify a valid logging option.
Explanation: A nonexistent volume set name was specified.
User Action: Specify a valid volume set name.
Explanation: Invalid method parameter.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: Invalid method string parameter.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: Invalid revision date format.
User Action: ABS internal error. Submit an SPR.
Explanation: New volume protocol for backup agent is invalid.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: An invalid node list structure or value was specified.
User Action: Specify a valid node list structure or value.
Explanation: An invalid node name was specified.
User Action: Specify a valid node name.
Explanation: Invalid object name.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid UID for the object was specified.
User Action: Specify a valid object UID.
Explanation: An object version was specified without an object name.
User Action: You must specify an object name when you specify an object version.
Explanation: An invalid options list parameter was specified.
User Action: Specify a valid options list parameter.
Explanation: An invalid original disposition structure or value was specified.
User Action: Specify a valid original disposition structure or value.
Explanation: An invalid output specification pointer was specified.
User Action: Specify a valid output specification pointer.
Explanation: An invalid owner name was specified.
User Action: The owner parameter is too long. Specify a valid owner.
Explanation: An invalid parameter mask was specified.
User Action: Specify a valid parameter mask to set on the set call.
Explanation: Invalid parameter for tag parsing.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid privileges parameter was specified, or the parameter exceeded the maximum length.
User Action: Specify a privileges parameter of valid length.
Explanation: An invalid prologue structure or value was specified.
User Action: Specify a valid prologue structure or value.
Explanation: Invalid reason parameter.
User Action: Specify a valid notification reason.
Explanation: The backup agent information is incorrect.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: Invalid requestor logical name.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid requirements list head pointer structure or value was specified.
User Action: Specify a valid requirements list head pointer structure or value.
Explanation: An invalid request name was specified.
User Action: The specified request name is either too long or contains invalid characters. Specify a valid request name.
Explanation: An invalid restart interval structure or value was specified.
User Action: Specify a valid restart interval structure or value.
Explanation: Invalid restore information parameter.
User Action: Specify a valid alternate restore information structure. Normally, the alternate restore information is not specified, but is retrieved from the catalog.
Explanation: Invalid incremental restore level.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid retention criteria structure was specified.
User Action: You must specify a retention criteria parameter.
Explanation: An invalid retention criteria was specified.
User Action: Specify a valid retention criteria.
Explanation: An invalid retention indicator was specified.
User Action: Specify a valid retention indicator.
Explanation: Invalid return block parameter.
User Action: Specify a valid return_block address on a show call.
Explanation: An invalid root archive location was specified.
User Action: The specified primary archive location is too long. Specify a valid primary archive location.
Explanation: Invalid ReferenceObjectSetName.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid reference object set UID.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid schedule_info structure value was specified.
User Action: Use ABS_Set_schedule_info utility routine to create a valid schedule_info structure.
Explanation: An invalid template segment was specified.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid selection criteria structure.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid agent service information.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid simple object set was specified.
User Action: Use ABS add_simple_object_set to add at least one simple object set to the compound object set.
Explanation: Invalid size attributes structure.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid special day parameter was specified.
User Action: The special day parameter is too long. Specify a valid special day class name.
Explanation: Invalid save set name.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid save set size format.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid save set UID.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid staging option was specified.
User Action: Specify a valid staging option.
Explanation: Backup agent startup protocol is invalid.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: The start time string is too long.
User Action: The start time string is either too long or contains invalid characters. Specify a valid start time string.
Explanation: Backup agent status return protocol is invalid.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: An invalid status value was passed to ABS GetMessageText routine.
User Action: Specify a valid ABS status_t value to ABS GetMessageText routine.
Explanation: Invalid system logical name.
User Action: ABS internal error. Submit an SPR.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: Invalid tag format in tag template.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: Invalid tag template.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: An invalid template list was specified.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid severity in tag template.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: An invalid state was specified in tag template.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: An invalid time format was specified.
User Action: Specify a valid time on the API call or from the graphical user interface (GUI).
Explanation: An invalid parameter was passed to an internal ABS routine.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid user name parameter was specified, or the user name parameter exceeds the maximum length.
User Action: Specify a user name parameter of valid length.
Explanation: Specified an invalid user_profile structure.
User Action: Specify a valid user profile structure.
Explanation: An invalid wait flag value was specified.
User Action: Specify either TRUE or FALSE.
Explanation: Agent work request protocol is invalid.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: Invalid transaction UID.
User Action: An attempt was made to execute the save or restore request with a deleted transaction. Delete the job from the OpenVMS Queue Manager or the scheduler database being used and, if necessary, re-create the request.
Explanation: Transaction severity is invalid.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid archive transaction status.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid transaction summary string parameter.
User Action: Correct the summary string parameter.
Explanation: Invalid transaction type.
User Action: ABS internal error. Submit an SPR.
Explanation: LIB$ADD_TIMES service failed.
User Action: ABS internal error. Submit an SPR.
Explanation: LIB$EDIV service failed.
User Action: ABS internal error. Submit an SPR.
Explanation: LIB$SUB_TIMES service failed.
User Action: ABS internal error. Submit an SPR.
Explanation: An attempt to execute an ABS lookup command failed.
User Action: Additional information should follow the error message. Evaluate the additional information to determine the problem. Correct the problem and retry the command.
Explanation: Coordinator could not locate a thread.
User Action: ABS internal error. Submit an SPR.
Explanation: Error while creating or reading mailbox.
User Action: ABS internal error. Submit an SPR.
Explanation: Object name and UID mismatch.
User Action: An attempt was made to specify both the name and UID of an object. Specify either the name only, the name and version, or the UID only.
Explanation: Owner access to catalog denied.
User Action: Access requested to the catalog does not match the authorized access. Contact the Storage Administrator for access to the specified catalog.
Explanation: Not authorized to access scratch collection.
User Action: Contact the Storage Administrator for access to the specified scratch collection.
Explanation: No backup agent found to handle specified object type.
User Action: No backup agent was found to save or restore the specified object type. Use the pull-down menu on the GUI to determine valid object types. Specify one of the valid object types.
Explanation: The AOE list is corrupt.
User Action: ABS internal error. Submit an SPR.
Explanation: AOE selection criteria argument does not match show context selection criteria.
User Action: ABS internal error. Submit an SPR.
Explanation: AOE show context is NULL.
User Action: ABS internal error. Submit an SPR.
Explanation: A catalog name was not specified on ABS_OpenCatalog.
User Action: Specify a valid catalog name.
Explanation: A compound object set was not specified.
User Action: A compound object set must be specified on the utility routine ABS add_simple_object_set. Use ABS_set_compound_object_set to create a compound object set.
Explanation: ABS cannot dismount a Files-11 disk. An attempt was made to perform a full restore operation to a mounted disk.
User Action: To perform the full restore operation, you must dismount the disk.
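For example, if the restore target were the disk DKA300: (a hypothetical device name), the dismount might be sketched as follows; add /CLUSTER if the disk is mounted clusterwide:

    $ DISMOUNT/NOUNLOAD DKA300:   ! Release the disk but leave the volume loaded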
Explanation: No more entries match AOE selection criteria.
User Action: Do not issue another ABS_ShowObjectEntry.
Explanation: No more agent information is available.
User Action: ABS internal error. Submit an SPR.
Explanation: No more entries match transaction log entry selection criteria.
User Action: Do not issue another ABS_ShowLogEntry.
Explanation: The specified node name was not found in the node list.
User Action: An attempt was made to start a catalog server on a node that is not authorized for the catalog. Start the server on an authorized node, or modify the authorized node list for the catalog.
Explanation: The user process does not have the required privileges to perform the requested function.
User Action: Consult your system manager or storage administrator.
Explanation: No selection criteria found.
User Action: Specify at least one selection criteria for catalog lookup. See Chapter 13, Looking Up Saved Data, for the list of valid selection criteria.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: No transaction log entry return list in transaction log entry show context.
User Action: ABS internal error. Submit an SPR.
Explanation: The transaction log entry selection criteria argument does not match show context selection criteria.
User Action: ABS internal error. Submit an SPR.
Explanation: The transaction log entry show context is NULL.
User Action: ABS internal error. Submit an SPR.
Explanation: A node name that is not in the local cluster was specified.
User Action: Specify a valid node name within the local cluster.
Explanation: Volume labels are not supported by the archive file system (AFS).
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: Input line does not match parse data.
User Action: ABS internal error. Submit an SPR.
Explanation: Virtual memory could not be allocated.
User Action: Check the page file quota for the account executing the failing job. Increase the page file quota. If the problem persists, submit an SPR.
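For example, the quota can be examined and raised with the OpenVMS AUTHORIZE utility. In this sketch the account name ABS and the new quota value are assumptions; substitute the account that runs the failing job and a value appropriate for your system:

    $ RUN SYS$SYSTEM:AUTHORIZE
    UAF> SHOW ABS                       ! Note the current Pgflquo value
    UAF> MODIFY ABS/PGFLQUOTA=200000    ! Raise the page file quota (in pagelets)
    UAF> EXIT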
Explanation: No more agent parse data.
User Action: ABS internal error. Submit an SPR.
Explanation: No more single object sets in the transaction.
User Action: ABS internal error. Submit an SPR.
Explanation: Nonexistent archive transaction in the transaction chain.
User Action: ABS internal error. Submit an SPR.
Explanation: A nonexistent catalog name was specified.
User Action: A nonexistent catalog name was specified in a storage policy or on a restore request. Specify a valid catalog name.
Explanation: The specified tag was not a special tag.
User Action: ABS internal error. Submit an SPR.
Explanation: Routine not yet implemented.
User Action: If this message is returned from the GUI, submit an SPR. If this status is returned from the API, check the API documentation for supported functions and parameters.
Explanation: The specified object was not found in ABS policy database.
User Action: ABS internal error. Submit an SPR.
Explanation: Failed to open the catalog.
User Action: Examine the associated error messages for specific reasons why the catalog could not be opened.
Explanation: Failed to open the log file.
User Action: Check the associated error messages for specific reasons why the log file could not be opened.
Explanation: Tag formatting overflowed the buffer.
User Action: The command formatting template in the agent information is too long to fit into the 1024-byte command buffer. Reduce the size of the command formatting template so that it fits within the 1024-byte command buffer.
Explanation: Message information overflows field.
User Action: The information parsed from the backup agent exceeds the size of the available field. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: Error creating pipe.
User Action: Check the associated error messages for specific reasons why the pipe could not be created.
Explanation: Error deleting pipe.
User Action: Check the associated error messages for specific reasons why the pipe could not be deleted.
Explanation: Error while reading or writing pipe.
User Action: Check the associated error messages for specific reasons why the pipe could not be read from or written to.
Explanation: Platform-specific error in diagnostic block.
User Action: This message is provided as additional information to help diagnose a failure. The subsequent message contains VMS or other operating system-specific information.
Explanation: Delete failed because the record is in use.
User Action: Wait until all references in the catalog to this object have expired, then delete the object. If desired, you can change the expiration date of the catalog entries to expire before the original expiration date.
Explanation: Thread released with no archive access.
User Action: ABS internal error. Submit an SPR.
Explanation: The restore request was not found in ABS policy database.
User Action: A restore request matching the specified name, name and version, or UID was not found. Specify a valid restore request name, name and version, or UID.
Explanation: An attempt to execute a restore request failed.
User Action: Examine the restore log in ABS$LOG directory. Determine the reason for failure and correct the problem. Re-run the restore request.
Explanation: Agent retry exhausted.
User Action: The retry count specified in the request environment policy has been exceeded. The operation will not be retried.
Explanation: An attempt to execute a save request failed.
User Action: Examine the save log in ABS$LOG directory. Determine the reason for failure and correct the problem. Re-run the save request.
Explanation: Failed to schedule a request job using the current scheduler interface option.
User Action: Check the associated error messages for specific reasons why the job could not be scheduled. For the scheduler interface options EXT_QUEUE_MANAGER and EXT_SCHEDULER, check the log files ABS$LOG:ABS$EXT_QUEUE_MANAGER*.LOG or ABS$LOG:ABS$EXT_SCHEDULER*.LOG for more information.
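For example, the most recent external queue manager log can be located and examined with standard DCL (this sketch assumes the newest log file corresponds to the failed job):

    $ DIRECTORY/DATE ABS$LOG:ABS$EXT_QUEUE_MANAGER*.LOG
    $ TYPE/PAGE ABS$LOG:ABS$EXT_QUEUE_MANAGER*.LOG;0    ! Version ;0 is the newest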
Explanation: Failed to schedule a request job using the current scheduler interface option.
User Action: Check the associated error messages for specific reasons why the job could not be created. For the scheduler interface options EXT_QUEUE_MANAGER and EXT_SCHEDULER, check the log files ABS$LOG:ABS$EXT_QUEUE_MANAGER*.LOG or ABS$LOG:ABS$EXT_SCHEDULER*.LOG for more information.
Explanation: Failed to delete the request job using the current scheduler interface option.
User Action: Check the associated error messages for specific reasons why the job could not be deleted. For the scheduler interface options EXT_QUEUE_MANAGER and EXT_SCHEDULER, check the log files ABS$LOG:ABS$EXT_QUEUE_MANAGER*.LOG or ABS$LOG:ABS$EXT_SCHEDULER*.LOG for more information.
Explanation: Invalid enumeration in scheduler utility.
User Action: ABS internal error. Submit an SPR.
Explanation: Invalid parameter to scheduler utility.
User Action: ABS internal error. Submit an SPR.
Explanation: An invalid time format was specified for start time or explicit interval.
User Action: Specify a valid OpenVMS date/time.
Explanation: Failed to modify the request job using the current scheduler interface option.
User Action: Check the associated error messages for specific reasons why the job could not be modified. For the scheduler interface options EXT_QUEUE_MANAGER and EXT_SCHEDULER, check the log files ABS$LOG:ABS$EXT_QUEUE_MANAGER*.LOG or ABS$LOG:ABS$EXT_SCHEDULER*.LOG for more information.
Explanation: A job entry was not found for this request.
User Action: ABS internal error. Submit an SPR.
Explanation: Failed to show a request job for the current scheduler interface option.
User Action: Check the associated error messages for specific reasons why the job could not be shown. For the scheduler interface options EXT_QUEUE_MANAGER and EXT_SCHEDULER, check the log files ABS$LOG:ABS$EXT_QUEUE_MANAGER*.LOG or ABS$LOG:ABS$EXT_SCHEDULER*.LOG for more information.
Explanation: Archive session is in progress.
User Action: The archive session is currently waiting for more work requests to come in. No User Action is required.
Explanation: Set device context failed.
User Action: Check the associated error messages for specific reasons why the operation failed. Check ABS account to make sure it has SETPRV and CMKRNL privileges set.
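For example, the privileges can be verified and corrected with the OpenVMS AUTHORIZE utility. The account name ABS in this sketch is an assumption; use the account under which ABS runs on your system:

    $ RUN SYS$SYSTEM:AUTHORIZE
    UAF> SHOW ABS                                 ! Check the authorized privilege list
    UAF> MODIFY ABS/PRIVILEGES=(SETPRV,CMKRNL)    ! Grant the missing privileges
    UAF> EXIT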
Explanation: An attempt to set an ABS environment policy failed.
User Action: Additional information should follow the error message. Evaluate the additional information to determine the problem. Correct the problem and retry the command.
Explanation: An attempt to set an ABS restore request failed.
User Action: Additional information should follow the error message. Evaluate the additional information to determine the problem. Correct the problem and retry the command.
Explanation: An attempt to set an ABS save request failed.
User Action: Additional information should follow the error message. Evaluate the additional information to determine the problem. Correct the problem and retry the command.
Explanation: An attempt to set an ABS storage policy failed.
User Action: Additional information should follow the error message. Evaluate the additional information to determine the problem. Correct the problem and retry the command.
Explanation: An attempt to show a database record (ABS object) failed.
User Action: Additional information should follow the error message. Evaluate the additional information to determine the problem. Correct the problem and retry the command.
Explanation: An attempt to shut down ABS Policy Engine failed.
User Action: Additional information should follow the error message. Evaluate the additional information to determine the problem. Correct the problem and retry the shutdown procedure. Also check the ABS$LOG:ABS$POLICY_ENGINE_T.LOG file; it may contain errors that help determine the problem. If the shutdown still fails, delete the ABS$POLICY process using STOP/ID=pid.
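For example, the forced deletion might be sketched as follows; the PID shown is hypothetical and must be taken from the SHOW SYSTEM display:

    $ SHOW SYSTEM         ! Locate the ABS$POLICY process and note its PID
    $ STOP/ID=2040012C    ! Delete the process by its PID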
Explanation: Agent ID data argument was not specified.
User Action: ABS internal error. Submit an SPR.
Explanation: Archive object location structure was not specified.
User Action: To create an object entry, you must specify the archive object location.
Explanation: ArchiveTransactionUID was not specified.
User Action: To create a log entry, you must specify the archive transaction UID.
Explanation: Device name was not specified.
User Action: You must specify the device name on ABS_set_aoe_selection_criteria. Specify a wildcard to look at all devices.
Explanation: The device name was not specified in the correct format.
User Action: Specify either physical_disk_name, system_logical_name, or requestor_logical_name.
Explanation: The archive file system file name was not specified.
User Action: The save set name (archive file system file name) must be specified in ABS_Set_saveset_info routine.
Explanation: Object name was not specified.
User Action: You must specify the object name to be found in the catalog in ABS_Set_aoe_selection_criteria call.
Explanation: Object identification was not specified.
User Action: The object identification must be specified in ABS_CreateObjectEntry call.
Explanation: Archive object location was not specified.
User Action: You must specify the object location on ABS_CreateObjectEntry call.
Explanation: Pathname was not specified.
User Action: You must specify the location of the saveset (archive file system pathname) on ABS_Set_saveset_info call.
Explanation: The return block pointer argument was not specified.
User Action: The return block pointer was NULL on either ABS_ShowLogEntry or ABS_ShowObjectEntry.
Explanation: Selection criteria argument was not specified.
User Action: You must specify the selection criteria parameter on either ABS_ShowLogEntry or ABS_ShowObjectEntry call.
Explanation: The save set information was not specified.
User Action: You must specify the save set information parameter in ABS_CreateLogEntry call. Use ABS_Set_saveset_info utility routine to create this structure.
Explanation: The save set UID was not specified.
User Action: The save set UID must be specified in ABS_Set_saveset_info call.
Explanation: The save set UID specified in object location does not match log entry.
User Action: The save set UID in the object location must match the save set UID in the current log entry for the active catalog connection. Specify the same save set UID in the object location as specified on ABS_CreateObjectEntry, or issue another ABS_CreateLogEntry.
Explanation: String copy overflowed output buffer.
User Action: ABS internal error. Submit an SPR.
Explanation: Error deleting subprocess.
User Action: Check the associated error messages for specific reasons why the subprocess could not be deleted.
Explanation: Normal successful operation was completed.
Explanation: Failed to allocate device.
User Action: Check the associated error messages for specific reasons why the device could not be allocated.
Explanation: SYS$ASSIGN failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$BINTIM service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$CLOSE service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$CONNECT service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: Failed to deallocate device.
User Action: Check the associated error messages for specific reasons why the device could not be deallocated.
Explanation: SYS$DELETE service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$DISCONNECT service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$DISMOU service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$GETDVI service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$GET service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$GET service failed on an archive object entry.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$GET service failed on a transaction log entry.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$MOUNT service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$OPEN service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$PARSE service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$PUT service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$PUT service failed on an archive object entry.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$PUT service failed on a transaction log entry.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$QIOW with IO$_REWIND failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$QIOW with IO$_SKIPFILE failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: Failed to send a message to the operator.
User Action: Check the associated error messages for specific reasons why the operator did not receive a message.
Explanation: An unexpected error occurred in the graphical user interface (GUI).
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$UPDATE service failed.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$UPDATE service failed on an archive object entry.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: SYS$UPDATE service failed on a transaction log entry.
User Action: Check the associated error messages for specific reasons why the operation failed.
Explanation: ABS internal transaction log list is corrupt.
User Action: ABS internal error. Submit an SPR.
Explanation: A transaction log entry was not found in the catalog.
User Action: Validate the selection criteria specified on ABS_Set_tle_selection_criteria.
Explanation: The transaction log entry is already on ABS internal transaction log entry return list.
User Action: ABS internal error. Submit an SPR.
Explanation: The transaction log entry show context is not NULL.
User Action: ABS internal error. Submit an SPR.
Explanation: Too many values found in tag template.
User Action: The backup agent information is incorrect. If you have modified the template information, restore the original template information from ABS distribution kit. If you have not modified the template information, submit an SPR.
Explanation: An attempt to synchronize on an ABS save or restore request failed.
User Action: Execute an ABS SHOW of the save or restore request. If it does not exist, create it.
Explanation: The specified user name does not exist in the current cluster.
User Action: Specify a valid user name within the current cluster.
Explanation: XmText write has failed, check widget ID.
User Action: ABS internal error. Submit an SPR.
Explanation: Transaction completed with failure status.
User Action: Check the transaction log file for specific reasons for the failure.
Explanation: Transaction completed with qualified success.
User Action: Check the transaction log file for specific warnings or nonfatal errors.
This Appendix presents Media and Device Management Services for OpenVMS Version 3 (MDMS) error messages and provides descriptions and User Actions for each.
ABORT request aborted by operator
Explanation: The request issued an OPCOM message that has been aborted by an operator. This message can also occur if no terminals are enabled for the relevant OPCOM classes on the node.
User Action: Either enable an OPCOM terminal, contact the operator, and retry the request, or take no action.
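For example, on a terminal from which an operator will respond, the tape operator class might be enabled with a sketch like the following (TAPES is the class typically used for media requests; your site may enable different classes):

    $ REPLY/ENABLE=(TAPES)    ! Receive and answer OPCOM tape requests on this terminal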
Explanation: The MDMS software caused an access violation. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
ALLOCDRIVEDEV drive string allocated as device string
Explanation: The named drive was successfully allocated, and the drive may be accessed with DCL commands using the device name shown.
ALLOCDRIVE drive string allocated
Explanation: The named drive was successfully allocated.
ALLOCVOLUME volume string allocated
Explanation: The named volume was successfully allocated.
APIBUGCHECK internal inconsistency in API
Explanation: The MDMS API (MDMS$SHR.EXE) detected an inconsistency. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
APIUNEXP unexpected error in API string line number
Explanation: The shareable image MDMS$SHR detected an internal inconsistency.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
BINDVOLUME volume string bound to set string
Explanation: The specified volume (or volume set) was successfully bound to the end of the named volume set.
BUGCHECK, internal inconsistency
Explanation: The server software detected an inconsistency. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis. Restart the server.
CANCELLED, request cancelled by user
Explanation: The request was cancelled by a user issuing a cancel request command.
User Action: None, or retry command.
CONFLITEMS, conflicting item codes specified
Explanation: The command cannot be completed because there are conflicting item codes in the command. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
CREATVOLUME, volume string created
Explanation: The named volume was successfully created.
DBLOCACC, local access to database
Explanation: This node has the database files open locally.
DBRECERR, error string record for string:
Explanation: The search for a database server received an error from a remote server.
User Action: Check the logfile on the remote server for more information. Check the logical name MDMS$DATABASE_SERVERS for correct entries of database server node.
DBREMACC, access to remote database server on node string
Explanation: This node has access to a remote database server.
DBREP, Database server on node string reports:
Explanation: The remote database server has reported an error condition. The next line contains additional information.
User Action: Depends on the additional information.
DCLARGLSOVR DCL extended status format, argument list overflow
Explanation: During formatting of the extended status, the number of arguments exceeded the allowable limit.
User Action: This is an internal error. Contact Compaq.
DCLBUGCHECK internal inconsistency in DCL
Explanation: You should never see this error. There is an internal error in the DCL.
User Action: This is an internal error. Contact Compaq.
DCSCERROR error accessing jukebox with DCSC
Explanation: MDMS encountered an error when performing a jukebox operation. An accompanying message gives more detail.
User Action: Examine the accompanying message and perform corrective actions to the hardware, the volume or the database, and optionally retry the operation.
Explanation: This is a more detailed DCSC error message which accompanies DCSCERROR.
User Action: Check the DCSC error message file.
DECNETLISEXIT, DECnet listener exited
Explanation: The DECnet listener has exited due to an internal error condition or because the user has disabled the DECNET transport for this node. The DECnet listener is the server's routine to receive requests via DECnet (Phase IV) and DECnet-Plus (Phase V).
User Action: The DECnet listener should be automatically restarted unless the DECNET transport has been disabled for this node. Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis if the transport has not been disabled by the user.
DECNETLISRUN, listening on DECnet node string object string
Explanation: The server has successfully started a DECnet listener. Requests can now be sent to the server via DECnet.
DEVNAMICM device name item code missing
Explanation: During the allocation of a drive, a drive's drive name was not returned by the server. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
DRIVEEXISTS specified drive already exists
Explanation: The specified drive already exists and cannot be created.
User Action: Use a set command to modify the drive, or create a new drive with a different name.
DRVACCERR error accessing drive
Explanation: MDMS could not access the drive.
User Action: Verify the VMS device name, node names, and/or group names specified in the drive record; fix if necessary. Verify MDMS is running on a remote node. Check the status of the drive, correct, and retry.
DRVALRALLOC drive is already allocated
Explanation: An attempt was made to allocate a drive that was already allocated.
User Action: Wait for the drive to become deallocated, or if the drive is allocated to you, use it.
Explanation: The specified drive is empty.
User Action: Check status of drive, correct and retry.
DRVINITERR error initializing drive on platform
Explanation: MDMS could not initialize a volume in a drive.
User Action: There was a system error initializing the volume. Check the log file.
DRVINUSE drive is currently in use
Explanation: The specified drive is already in use.
User Action: Wait for the drive to free up and re-enter command, or try to use another drive.
DRVLOADED drive is already loaded
Explanation: A drive unload appeared to succeed, but the specified volume was still detected in the drive.
User Action: Check the drive and check for duplicate volume labels, or determine whether the volume was reloaded.
DRVLOADING drive is currently being loaded or unloaded
Explanation: The operation cannot be performed because the drive is being loaded or unloaded.
User Action: Wait for the drive to become available, or use another drive. If the drive is stuck in the loading or unloading state, check for an outstanding request on the drive and cancel it. If all else fails, manually adjust the drive state.
DRVNOTALLOC drive is not allocated
Explanation: The specified drive could not be allocated.
User Action: Check again if the drive is allocated. If it is, wait until it is deallocated. Otherwise there was some other reason the drive could not be allocated. Check the log file.
DRVNOTALLUSER drive is not allocated to user
Explanation: You cannot perform the operation on the drive because the drive is not allocated to you.
User Action: In some cases you may be able to perform the operation by specifying a user name; try that to see if it works, or defer the operation.
DRVNOTAVAIL drive is not available on system
Explanation: The specified drive was found on the system, but is not available for use.
User Action: Check the status of the drive and correct.
DRVNOTDEALLOC drive was not deallocated
Explanation: MDMS could not deallocate a drive.
User Action: Either the drive was not allocated or there was a system error deallocating the drive. Check the log file.
DRVNOTFOUND drive not found on system
Explanation: The specified drive cannot be found on the system.
User Action: Check that the OpenVMS device name, node names, and/or group names are correct for the drive. Verify MDMS is running on a remote node. Re-enter the command when corrected.
DRVNOTSPEC drive not specified or allocated to volume
Explanation: When loading a volume a drive was not specified, and no drive has been allocated to the volume.
User Action: Retry the operation and specify a drive name.
Explanation: The specified drive is remote on a node where it is defined to be local.
User Action: Check that the OpenVMS device name, node names and/or group names are correct for the drive. Verify MDMS is running on a remote node. Re-enter command when corrected.
DRVSINUSE all drives are currently in use
Explanation: All of the drives matching the selection criteria are currently in use.
User Action: Wait for a drive to free up and re-enter command.
Explanation: A general MDMS error occurred.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
EXECOMFAIL execute command failed, see log file for more explanation
Explanation: While trying to execute a command during scheduled activities, a call to a system service failed.
User Action: Check the log file for the failure code from the system service call.
FAILALLOCDRV failed to allocate drive
Explanation: Failed to allocate drive.
User Action: The previous message is the error that caused the failure.
FAILCONSVRD, failed connection to server via DECnet
Explanation: The DECnet (Phase IV) connection to an MDMS server either failed or could not be established. See additional message lines and/or check the server's logfile.
User Action: Depends on additional information.
FAILCONSVRT, failed connection to server via TCP/IP
Explanation: The TCP/IP connection to an MDMS server either failed or could not be established. See additional message lines and/or check the server's logfile.
User Action: Depends on additional information.
FAILCONSVR, failed connection to server
Explanation: The connection to an MDMS server either failed or could not be established. See additional message lines and/or check the server's logfile.
User Action: Depends on additional information.
FAILDEALLOCDRV failed to deallocate drive
Explanation: Failed to deallocate drive.
User Action: The previous message is the error that caused the failure.
FAILEDMNTVOL failed to mount volume
Explanation: MDMS was unable to mount the volume.
User Action: The preceding message contains the error that prevented the volume from being mounted.
FAILICRES failed item code restrictions
Explanation: The command cannot be completed because there are conflicting item codes in the command. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
FAILINIEXTSTAT failed to initialize extended status buffer
Explanation: The API could not initialize the extended status buffer. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
Explanation: The MDMS server encountered a fatal error during the processing of a request.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
FILOPNERR, file string could not be opened
Explanation: An MDMS database file could not be opened.
User Action: Check the server's logfile for more information.
FIRSTVOLUME specified volume is first in set
Explanation: The specified volume is the first volume in a volume set.
User Action: You cannot deallocate or unbind the first volume in a volume set. However, you can unbind the second volume and then deallocate the first, or unbind and deallocate the entire volume set.
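Assuming the MDMS DCL syntax and a hypothetical two-volume set ABC001/ABC002, the second alternative might be sketched as:

    $ MDMS UNBIND VOLUME ABC002        ! Remove the second volume from the set
    $ MDMS DEALLOCATE VOLUME ABC001    ! The first volume can now be deallocated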
FUNCFAILED, Function string failed with:
Explanation: An internal call to a system function has failed. The lines that appear after this error message identify the function called and the failure status.
User Action: Depends on information that appears following this message.
ILLEGALOP illegal move operation
Explanation: You attempted to move a volume within a DCSC jukebox, and this is not supported.
INCOMPATOPT incompatible options specified
Explanation: You entered a command with incompatible options.
User Action: Examine the command documentation and re-enter with allowed combinations of options.
INCOMPATVOL volume is incompatible with volumes in set
Explanation: You cannot bind the volume to the volume set because some of the volume's attributes are incompatible with the volumes in the volume set.
User Action: Check that the new volume's media type, onsite location and offsite location are compatible with those in the volume set. Adjust attributes and retry, or use another volume with compatible attributes.
INSCMDPRIV insufficient privilege to execute request
Explanation: You do not have sufficient privileges to enter the request.
User Action: Contact your system administrator and request additional privileges, or give yourself privileges and retry.
INSOPTPRIV insufficient privilege for request option
Explanation: You do not have sufficient privileges to enter a privileged option of this request.
User Action: Contact your system administrator and request additional privileges, or give yourself privileges and retry. Alternatively, retry without using the privileged option.
INSSHOWPRIV some volumes not shown due to insufficient privilege
Explanation: Not all volumes were shown because of restricted privilege.
User Action: None if you just want to see volumes you own. You need MDMS_SHOW_ALL privilege to see all volumes.
INSSVRPRV insufficient server privileges
Explanation: The MDMS server is running with insufficient privileges to perform system functions.
User Action: Refer to the Installation Guide to determine the required privileges. Contact your system administrator to add these privileges to the MDMS$SERVER account.
INTBUFOVR, internal buffer overflow
Explanation: The MDMS software detected an internal buffer overflow. This an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis. Restart the server.
INTINVMSG, internal invalid message
Explanation: An invalid message was received by a server. This could be due to a network problem, a remote non-MDMS process sending messages in error, or an internal error.
User Action: If the problem persists and no non-MDMS process can be identified, provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
INVABSTIME invalid absolute time
Explanation: The item list contained an invalid absolute date and time. The time cannot be earlier than 1-Jan-1970 00:00:00 and cannot be greater than 7-Feb-2106 06:28:15.
User Action: Check that the time is between these two times.
INVALIDRANGE invalid volume range specified
Explanation: The volume range specified is invalid.
User Action: A volume range may contain up to 1000 volumes, where the first 3 characters must be alphabetic and the last 3 may be alphanumeric. Only the numeric portions may vary in the range. Examples are ABC000-ABC999, or ABCD01-ABCD99.
INVDBSVRLIS, invalid database server search list
Explanation: The logical name MDMS$DATABASE_SERVERS contains invalid network node names or is not defined.
User Action: Correct the node name(s) in the logical name MDMS$DATABASE_SERVERS in file MDMS$SYSTARTUP.COM. Redefine the logical name in the current system. Then start the server.
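A hedged illustration of redefining the logical on a running system (the node names are assumptions; substitute your own database server nodes):
$ DEFINE/SYSTEM/NOLOG MDMS$DATABASE_SERVERS "DBNOD1,DBNOD2"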
INVDELSTATE object is in invalid state for delete
Explanation: The specified object cannot be deleted because its state indicates it is being used.
User Action: Defer deletion until the object is no longer being used, or otherwise change its state and retry.
INVDELTATIME invalid delta time
Explanation: The item list contained an invalid delta time.
User Action: Check that the item list has a correct delta time.
INVDFULLNAM, invalid DECnet full name
Explanation: A node full name for a DECnet-Plus (Phase V) node specification has an invalid syntax.
User Action: Correct the node name and retry.
INVEXTSTS invalid extended status item desc/buffer
Explanation: The error cannot be reported in the extended status item descriptor. This error can be caused by one of the following: Not being able to read any one of the item descriptors in the item list
Not being able to write to the buffer in the extended status item descriptor
Not being able to write to the return length in the extended status item descriptor
Not being able to initialize the extended status buffer
User Action: Check for any of the above errors in your program and fix the error.
INVITCODE invalid item code for this function
Explanation: The item list had an invalid item code. The problem could be one of the following: Item codes do not meet the restrictions for that function.
An item code cannot be used in this function.
User Action: Refer to the API specification to find out which item codes are restricted for each function and which item codes are allowed for each function.
INVITDESC invalid item descriptor, index number
Explanation: The item descriptor is in error. The previous message gives the error. Included is the index of the item descriptor in the item list.
User Action: Use the index number and the previous message to determine which item descriptor is in error and why.
INVITLILENGTH invalid item list buffer length
Explanation: The item list buffer length is zero. The item list buffer length cannot be zero for any item code.
User Action: Refer to the API specification to determine the correct buffer length for the item code being used.
INVMEDIATYPE media type is invalid or not supported by volume
Explanation: The specified volume supports multiple media types where a single media type is required, or the volume does not support the specified media type.
User Action: Re-enter the command specifying a single media type that is already supported by the volume.
INVMSG, invalid message via string
Explanation: An invalid message was received by the MDMS software. This could be due to a network problem, a non-MDMS process sending messages in error, or an internal error.
User Action: If the problem persists and no non-MDMS process can be identified, provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
INVNODNAM, invalid node name specification
Explanation: A node name for a DECnet (Phase IV) node specification has an invalid syntax.
User Action: Correct the node name and retry.
INVPORTS, invalid port number specification
Explanation: The MDMS server did not start up because the logical name MDMS$TCPIP_SND_PORTS in file MDMS$SYSTARTUP.COM specifies an illegal port number range. A legal port number range is of the form "low_port_number-high_port_number".
User Action: Correct the port number range for the logical name MDMS$TCPIP_SND_PORTS in file MDMS$SYSTARTUP.COM. Then start the server.
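For example, assuming the range 2048-2058 is free on your system (the numbers are illustrative only):
$ DEFINE/SYSTEM/NOLOG MDMS$TCPIP_SND_PORTS "2048-2058"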
INVPOSITION invalid jukebox position
Explanation: The position specified is invalid.
User Action: Position is only valid for jukeboxes with a topology defined. Check that the position is within the topology ranges, correct and retry. Example: /POSITION=(1,2,1)
INVSELECT invalid selection criteria
Explanation: The selection criteria specified on an allocate command are invalid.
User Action: Check the command with the documentation and re-enter with a valid combination of selection criteria.
INVSLOTRANGE invalid slot range
Explanation: The slot range was invalid. It must be of the form 1-100 or 1,100-200,300-400. The only characters allowed are: , (comma), - (dash), and numbers (0-9).
User Action: Check that you are using the correct form.
INVSRCDEST invalid source or destination for move
Explanation: Either the source or destination of a move operation was invalid (does not exist).
User Action: If the destination is invalid, enter a correct destination and retry. If a source is invalid, either create the source or correct the current placement of the affected volumes or magazines.
INVTFULLNAM, invalid TCP/IP full name
Explanation: A node full name for a TCP/IP node specification has an invalid syntax.
User Action: Correct the node name and retry.
INVTOPOLOGY invalid jukebox topology
Explanation: The specified topology for a jukebox is invalid.
User Action: Check the topology definition; the towers must be sequentially increasing from 0, and there must be a face, level and slot definition for each tower. Example:
/TOPOLOGY=(TOWER=(0,1,2), FACES=(8,8,8), LEVELS=(2,3,2), SLOTS=(13,13,13))
INVVOLPLACE invalid volume placement for operation
Explanation: The volume has an invalid placement for a load operation.
User Action: Re-enter the command and use the move option.
INVVOLSTATE volume in invalid state for operation
Explanation: The operation cannot be performed on the volume because the volume state does not allow it.
User Action: Defer the operation until the volume changes state. If the volume is stuck in a transient state (e.g. moving), check for an outstanding request and cancel it. If all else fails, manually change the state.
JUKEBOXEXISTS specified jukebox already exists
Explanation: The specified jukebox already exists and cannot be created.
User Action: Use a set command to modify the jukebox, or create a new jukebox with a different name.
JUKENOTINIT jukebox could not be initialized
Explanation: An operation on a jukebox failed because the jukebox could not be initialized.
User Action: Check the control, robot name, node name and group name of the jukebox, and correct as needed. Check access path to jukebox (HSJ etc.), correct as needed. Verify MDMS is running on a remote node. Then retry operation.
JUKETIMEOUT timeout waiting for jukebox to become available
Explanation: MDMS timed out waiting for a jukebox to become available. The timeout value is 10 minutes.
User Action: If the jukebox is in heavy use, try again later. Otherwise, check requests for a hung request - cancel it. Set the jukebox state to available if all else fails.
JUKEUNAVAIL jukebox is currently unavailable
Explanation: The jukebox is disabled.
User Action: Re-enable the jukebox.
LOCATIONEXISTS specified location already exists
Explanation: The specified location already exists and cannot be created.
User Action: Use a set command to modify the location, or create a new location with a different name.
LOGRESET, Log file string by string on node string
Explanation: The server logfile has been closed and a new version has been created by a user.
MAGAZINEEXISTS specified magazine already exists
Explanation: The specified magazine already exists and cannot be created.
User Action: Use a set command to modify the magazine, or create a new magazine with a different name.
MBLISEXIT, mailbox listener exited
Explanation: The mailbox listener has exited due to an internal error condition. The mailbox listener is the server's routine to receive local user requests through mailbox MDMS$MAILBOX.
User Action: The mailbox listener should be automatically restarted. Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
MBLISRUN, listening on mailbox string logical string
Explanation: The server has successfully started the mailbox listener. MDMS commands can now be entered on this node.
MEDIATYPEEXISTS specified media type already exists
Explanation: The specified media type already exists and cannot be created.
User Action: Use a set command to modify the media type, or create a new media type with a different name.
MOVEINCOMPL move is incomplete
Explanation: When moving volumes into and out of a jukebox, some of the volumes were not moved.
User Action: Check that there are enough empty slots in the jukebox when moving in and retry. On a move out, examine the cause of the failure and retry.
MRDERROR error accessing jukebox with MRD
Explanation: MDMS encountered an error when performing a jukebox operation. An accompanying message gives more detail.
User Action: Examine the accompanying message and perform corrective actions to the hardware, the volume or the database, and optionally retry the operation.
Explanation: This is a more detailed MRD error message which accompanies MRDERROR.
User Action: Check the MRU error message file.
NOBINDSELF cannot bind a volume to itself
Explanation: A volume cannot be bound to itself.
User Action: Use another volume.
NOCHANGES no attributes were changed in the database
Explanation: Your set command did not change any attributes in the database because the attributes you entered were already set to those values.
User Action: Double-check your command, and re-enter if necessary. Otherwise the database is already set to what you entered.
NOCHECK drive not accessible, check not performed
Explanation: The specified drive could not be physically accessed and the label check was not performed. The displayed attributes are taken from the database.
User Action: Verify the VMS device name, node name or group name in the drive object. Check availability on system.
Verify MDMS is running on a remote node. Determine the reason the drive was not accessible, fix it and retry.
NODEEXISTS specified node already exists
Explanation: The specified node already exists and cannot be created.
User Action: Use a set command to modify the node, or create a new node with a different name.
NODENOPRIV, node is not privileged to access database server
Explanation: A remote server access failed because the user making the DECnet(Phase IV) connection is not MDMS$SERVER or the remote port number is not less than 1024.
User Action: Verify with the DCL command SHOW PROCESS that the remote MDMS server is running under a username of MDMS$SERVER, and/or verify that the logical name MDMS$TCPIP_SND_PORTS on the remote server node specifies a port number range between 0-1023.
NODENOTENA, node not in database or not fully enabled
Explanation: The server was not allowed to start up because there is no such node object in the database or its node object in the database does not specify all network full names correctly.
User Action: For a node running DECnet (Phase IV) the node name has to match logical name SYS$NODE on that node.
For a node running DECnet-Plus (Phase V) the node's DECNET_PLUS_FULLNAME has to match the logical name SYS$NODE_FULLNAME on that node. For a node running TCP/IP the node's TCPIP_FULLNAME has to match the full name combined from logical names *INET_HOST and *INET_DOMAIN.
NODENOTINDB, no node object with string name string in database
Explanation: The current server could not find a node object in the database with a matching DECnet (Phase IV) or DECnet-Plus (Phase V) or TCP/IP node full name.
User Action: Use SHOW SERVER/NODES=(...) to see the exact naming of the server's network names. Correct the entry in the database and restart the server.
NODRIVES no drives match selection criteria
Explanation: When allocating a drive, none of the drives match the specified selection criteria.
User Action: Check spelling and re-enter command with valid selection criteria.
NODRVACC, access to drive disallowed
Explanation: You attempted to allocate, load or unload a drive from a node that is not allowed to access it.
User Action: The access field in the drive object allows local, remote or all access, and your attempted access did not conform to the attribute. Use another drive.
NODRVSAVAIL no drives are currently available
Explanation: All of the drives matching the selection criteria are currently in use or otherwise unavailable.
User Action: Check to see if any of the drives are disabled or inaccessible. Re-enter command when corrected.
NOJUKEACC, access to jukebox disallowed
Explanation: You attempted to use a jukebox from a node that is not allowed to access it.
User Action: The access field in the jukebox object allows local, remote or all access, and your attempted access did not conform to the attribute. Use another jukebox.
NOJUKESPEC jukebox required on vision option
Explanation: The jukebox option is missing on a create volume request with the vision option.
User Action: Re-enter the request and specify a jukebox name and slot range.
NOMAGAZINES no magazines match selection criteria
Explanation: On a move magazine request using the schedule option, no magazines were scheduled to be moved.
NOMAGSMOVED no magazines were moved
Explanation: No magazines were moved for a move magazine operation. An accompanying message gives a reason.
User Action: Check the accompanying message, correct and retry.
NOMEDIATYPE no media type specified when required
Explanation: An allocation for a volume based on node, group or location also requires the media type to be specified.
User Action: Re-enter the command with a media type specification.
Explanation: The MDMS server failed to allocate enough memory for an operation.
User Action: Shut down the MDMS server and restart. Contact Compaq.
NOOBJECTS no such objects currently exist
Explanation: On a show command, there are no such objects currently defined.
NOPARAM required parameter missing
Explanation: A required input parameter to a request or an API function was missing.
User Action: Re-enter the command with the missing parameter, or refer to the API specification for required parameters for each function.
NORANGESUPP, slot or space ranges not supported with volset option
Explanation: On a set volume, you entered the volset option and specified either a slot range or space range.
User Action: If you want to assign slots or spaces to volumes directly, do not use the volset option.
NORECVPORTS, no available receive port numbers for incoming connections
Explanation: The MDMS server could not start the TCP/IP listener because none of the receive ports specified with this node's TCPIP_FULLNAME are currently available.
User Action: Use a suitable network utility to find a free range of TCP/IP ports which can be used by the MDMS server.
Use the MDMS SET NODE command to specify the new range with the /TCPIP_FULLNAME qualifier, then restart the server.
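A minimal sketch of such a command (the node name, domain and port range are illustrative assumptions, and the exact qualifier syntax may differ by version):
$ MDMS SET NODE MYNODE /TCPIP_FULLNAME="MYNODE.SITE.COM:2200-2210"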
NOREMCONNECT, unable to connect to remote node
Explanation: The server could not establish a connection to a remote node. See the server's logfile for more information.
User Action: Depends on information in the logfile.
NOREQUESTS no such requests currently exist
Explanation: No requests exist on the system.
NORESEFN, not enough event flags
Explanation: The server ran out of event flags. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis. Restart the server.
NOSCRATCH scratch loads not supported for jukebox drives
Explanation: You attempted a load drive command for a jukebox drive.
User Action: Scratch loads are not supported for jukebox drives. You must use the load volume command to load volumes in jukebox drives.
NOSENDPORTS, no available send port numbers for outgoing connection
Explanation: The server could not make an outgoing TCP/IP connection because none of the send ports specified for the range in logical name MDMS$TCPIP_SND_PORTS are currently available.
User Action: Use a suitable network utility to find a free range of TCP/IP ports which can be used by the MDMS server.
Change the logical name MDMS$TCPIP_SND_PORTS in file MDMS$SYSTARTUP.COM. Then restart the server.
NOSLOT not enough slots defined for operation
Explanation: The command cannot be completed because there are not enough slots specified in the command, or because there are not enough empty slots in the jukebox.
User Action: If the jukebox is full, move some other volumes out of the jukebox and retry. If there are not enough slots specified in the command, re-enter with a larger slot range.
Explanation: An uninitialized status has been reported. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
NOSUCHDEST specified destination does not exist
Explanation: In a move command, the specified destination does not exist.
User Action: Check spelling or create the destination as needed.
NOSUCHDRIVE specified drive does not exist
Explanation: The specified drive does not exist.
User Action: Check spelling or create drive as needed.
NOSUCHGROUP specified group does not exist
Explanation: The specified group does not exist.
User Action: Check spelling or create group as needed.
NOSUCHINHERIT specified inherited object does not exist
Explanation: On a create of an object, the object specified for inherit does not exist.
User Action: Check spelling or create the inherited object as needed.
NOSUCHJUKEBOX specified jukebox does not exist
Explanation: The specified jukebox does not exist.
User Action: Check spelling or create jukebox as needed.
NOSUCHLOCATION specified location does not exist
Explanation: The specified location does not exist.
User Action: Check spelling or create location as needed.
NOSUCHMAGAZINE specified magazine does not exist
Explanation: The specified magazine does not exist.
User Action: Check spelling or create magazine as needed.
NOSUCHMEDIATYPE specified media type does not exist
Explanation: The specified media type does not exist.
User Action: Check spelling or create media type as needed.
NOSUCHNODE specified node does not exist
Explanation: The specified node does not exist.
User Action: Check spelling or create node as needed.
NOSUCHOBJECT specified object does not exist
Explanation: The specified object does not exist.
User Action: Check spelling or create the object as needed.
NOSUCHPOOL specified pool does not exist
Explanation: The specified pool does not exist.
User Action: Check spelling or create pool as needed.
NOSUCHREQUESTID specified request does not exist
Explanation: The specified request does not exist on the system.
User Action: Check the request id again, and re-enter if incorrect.
NOSUCHUSER no such user on system
Explanation: The username specified in the command does not exist.
User Action: Check spelling of the username and re-enter.
NOSUCHVOLUME specified volume(s) do not exist
Explanation: The specified volume or volumes do not exist.
User Action: Check spelling or create volume(s) as needed.
NOSVRACCOUNT, username string does not exist
Explanation: The server cannot startup because the username MDMS$SERVER is not defined in file SYSUAF.DAT.
User Action: Enter the username of MDMS$SERVER (see Installation manual for account details) and then start the server.
NOSVRMB, no server mailbox or server not running
Explanation: The MDMS server is not running on this node or the server is not servicing the mailbox via logical name MDMS$MAILBOX.
User Action: Use the MDMS$STARTUP procedure with parameter RESTART to restart the server. If the problem persists, check the server's logfile and file SYS$MANAGER:MDMS$SERVER.LOG for more information.
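For example, assuming the startup procedure resides in SYS$STARTUP, as is typical for layered product startup files:
$ @SYS$STARTUP:MDMS$STARTUP RESTART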
NOTALLOCUSER volume is not allocated to user
Explanation: You cannot perform the operation on the volume because the volume is not allocated to you.
User Action: Either use another volume, or (in some cases) you may be able to perform the operation specifying a user name.
NOUNALLOCDRV no unallocated drives found for operation
Explanation: On an initialize volume request, MDMS could not locate an unallocated drive for the operation.
User Action: If you had allocated a drive for the operation, deallocate it and retry. If all drives are currently in use, retry the operation later.
NOVOLSMOVED no volumes were moved
Explanation: No volumes were moved for a move volume operation. An accompanying message gives a reason.
User Action: Check the accompanying message, correct and retry.
NOVOLSPROC no volumes were processed
Explanation: In a create, set or delete volume command, no volumes were processed.
User Action: Check the volume identifiers and re-enter command.
NOVOLUMES no volumes match selection criteria
Explanation: When allocating a volume, no volumes match the specified selection criteria.
User Action: Check the selection criteria. Specifically check the relevant volume pool. If free volumes are in a volume pool, the pool name must be specified in the allocation request, or you must be a default user defined in the pool. You can re-enter the command specifying the volume pool as long as you are an authorized user. Also check that newly-created volumes are in the FREE state rather than the UNINITIALIZED state.
OBJECTEXISTS specified object already exists
Explanation: The specified object already exists and cannot be created.
User Action: Use a set command to modify the object, or create a new object with a different name.
OBJNOTEXIST referenced object !AZ does not exist
Explanation: When attempting to allocate a drive or volume, you specified a selection object that does not exist.
User Action: Check spelling of selection criteria objects and retry, or create the object in the database.
PARTIALSUCCESS some volumes in range were not processed
Explanation: On a command using a volume range, some of the volumes in the range were not processed.
User Action: Verify the state of all objects in the range, and issue corrective commands if necessary.
POOLEXISTS specified pool already exists
Explanation: The specified pool already exists and cannot be created.
User Action: Use a set command to modify the pool, or create a new pool with a different name.
QUEUED operation is queued for processing
Explanation: The asynchronous request you entered has been queued for processing.
User Action: You can check on the state of the request by issuing a show requests command.
RDFERROR error allocating or deallocating RDF device
Explanation: During an allocation or deallocation of a drive using RDF, the RDF software returned an error.
User Action: The error following this error is the RDF error return.
SCHEDULECONFL schedule qualifier and novolume qualifier are incompatible
Explanation: The /SCHEDULE and /NOVOLUME qualifiers are incompatible for this command.
User Action: Use the /SCHEDULE and /VOLSET qualifiers for this command.
SCHEDVOLCONFL schedule qualifier and volume parameter are incompatible
Explanation: The /SCHEDULE and the volume parameter are incompatible for this command.
User Action: Use the /SCHEDULE qualifier and leave the volume parameter blank for this command.
SETLOCALEFAIL an error occurred when accessing locale information
Explanation: When executing the SETLOCALE function an error occurred.
User Action: A user should not see this error.
SNDMAILFAIL send mail failed, see log file for more explanation
Explanation: While sending mail during the scheduled activities, a call to the mail utility failed.
User Action: Check the log file for the failure code from the mail utility.
SPAWNCMDBUFOVR spawn command buffer overflow
Explanation: During the mount of a volume, the spawned mount command was too long for the buffer. This is an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
SVRBUGCHECK internal inconsistency in SERVER
Explanation: You should never see this error; it indicates an internal error.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis. Restart the server.
SVRDISCON, server disconnected
Explanation: The server disconnected from the request because of a server problem or a network problem.
User Action: Check the server's logfile and file SYS$MANAGER:MDMS$SERVER.LOG for more information. Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
Explanation: Server exited. Check the server logfile for more information.
User Action: Depends on information in the logfile.
SVRLOGERR, server logged error
Explanation: The server failed to execute the request. Additional information is in the server's logfile.
User Action: Depends on information in the logfile.
SVRRUN, server already running
Explanation: The MDMS server is already running.
User Action: Use the MDMS$SHUTDOWN procedure with parameter RESTART to restart the server.
SVRSTART, Server stringnumber.number-number started
Explanation: The server has started up identifying its version and build number.
SVRTERM, Server terminated abnormally
Explanation: The MDMS server was shut down. This could be caused by a normal user shutdown or it could be caused by an internal error.
User Action: Check the server's logfile for more information. If the logfile indicates an error has caused the server to shut down then provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
SVRUNEXP, unexpected error in SERVER string line number
Explanation: The server software detected an internal inconsistency.
User Action: Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
TCPIPLISEXIT, TCP/IP listener exited
Explanation: The TCP/IP listener has exited due to an internal error condition or because the user has disabled the TCPIP transport for this node. The TCP/IP listener is the server's routine to receive requests via TCP/IP.
User Action: The TCP/IP listener should be automatically restarted unless the TCPIP transport has been disabled for this node. Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis if the transport has not been disabled by the user.
TCPIPLISRUN, listening on TCP/IP node string port string
Explanation: The server has successfully started a TCP/IP listener. Requests can now be sent to the server via TCP/IP.
Explanation: Either entries cannot be added to a list of an MDMS object or existing entries cannot be renamed because the maximum list size would be exceeded.
User Action: Remove other elements from list and try again.
TOOMANYSORTS too many sort qualifiers, use only one
Explanation: You specified more than one field to sort on.
User Action: Specify only one field to sort on.
TOOMANY too many objects generated
Explanation: You attempted to perform an operation that generated too many objects.
User Action: There is a limit of 1000 objects that may be specified in any volume range, slot range or space range.
Re-enter command with a valid range.
UNDEFINEDREFS object contains undefined referenced objects
Explanation: The object being created or modified has references to undefined objects.
User Action: This allows objects to be created in any order, but some operations may not succeed until the objects are defined. Show the object and verify the spelling of all referenced objects or create them if not defined.
UNSUPPORTED1, unsupported function string
Explanation: You attempted to perform an unsupported function.
UNSUPPORTED unsupported function
Explanation: You attempted to perform an unsupported function.
UNSUPRECVER, unsupported version for record string in database string
Explanation: The server has detected unsupported records in a database file. These records will be ignored.
User Action: Consult the documentation about possible conversion procedures provided for this version of MDMS.
USERNOTAUTH user is not authorized for volume pool
Explanation: When allocating a volume, you specified a pool for which you are not authorized.
User Action: Specify a pool for which you are authorized, or add your name to the list of authorized users for the pool.
Make sure the authorized user includes the node name or group name in the pool object.
VISIONCONFL vision option and volume parameter are incompatible
Explanation: You attempted to create volumes with the vision option and the volume parameter. This is not supported.
User Action: The vision option is used to create volumes with the volume identifiers read by the vision system on a jukebox.
Re-enter the command with either the vision option (specifying jukebox and slot range), or with volume identifier(s), but not both.
VOLALRALLOC specified volume is already allocated
Explanation: You attempted to allocate a volume that is already allocated.
User Action: Use another volume.
VOLALRINIT volume is already initialized and contains data
Explanation: When initializing a volume, MDMS detected that the volume is already initialized and contains data.
User Action: If you are sure you still want to initialize the volume, re-enter the command with the overwrite option.
VOLIDICM, volume ID code missing
Explanation: The volume ID is missing in a request.
User Action: Provide the volume ID and retry the request.
VOLINDRV volume is currently in a drive
Explanation: When allocating a volume, the volume is either moving or in a drive, and nopreferred was specified.
User Action: Wait for the volume to be moved or unloaded, or use the preferred option.
VOLINSET volume is already bound to a volume set
Explanation: You cannot bind this volume because it is already in a volume set and is not the first volume in the set.
User Action: Use another volume, or specify the first volume in the volume set.
VOLLOST volume location is unknown
Explanation: The volume's location is unknown.
User Action: Check if the volume's placement is in a magazine, and if so if the magazine is defined. If not, create the magazine. Also check the magazine's placement.
VOLMOVING volume is currently being moved
Explanation: In a move, load or unload command, the specified volume is already being moved.
User Action: Wait for volume to come to a stable placement and retry. If the volume is stuck in the moving placement, check for an outstanding request and cancel it. If all else fails, manually change volume state.
VOLNOTALLOC specified volume is not allocated
Explanation: You attempted to bind or deallocate a volume that is not allocated.
User Action: None for deallocate. For bind, allocate the volume and then bind it to the set, or use another volume.
VOLNOTBOUND volume is not bound to a volume set
Explanation: You attempted to unbind a volume that is not in a volume set.
VOLNOTINJUKE volume is not in a jukebox
Explanation: When loading a volume into a drive, the volume is not in a jukebox.
User Action: Use the move option and retry the load. This will issue OPCOM messages to move the volume into the jukebox.
VOLNOTLOADED the volume is not loaded in a drive
Explanation: On an unload request, the volume is not recorded as loaded in a drive.
User Action: If the volume is not in a drive, none. If it is, issue an unload drive command to unload it.
VOLONOTHDRV volume is currently in another drive
Explanation: When loading a volume, the volume was found in another drive.
User Action: Wait for the volume to be unloaded, or unload the volume and retry.
VOLSALLOC String volumes were successfully allocated
Explanation: When attempting to allocate multiple volumes using the quantity option, some but not all of the requested quantity of volumes were allocated.
User Action: See accompanying message as to why not all volumes were allocated.
VOLUMEEXISTS specified volume(s) already exist
Explanation: The specified volume or volumes already exist and cannot be created.
User Action: Use a set command to modify the volume(s), or create new volume(s) with different names.
VOLWRTLCK volume loaded with hardware write-lock
Explanation: The requested volume was loaded in a drive, but it is hardware write-locked and write access was requested.
User Action: If you need to write to the volume, unload it, physically enable it for write, and re-load it.
WRONGVOLUME wrong volume is loaded in drive
Explanation: On a load volume command, MDMS loaded the wrong volume into the drive.
User Action: Check placement (jukebox, slot etc.) of both the volume in the drive and the requested volume. Modify records if necessary. Unload volume and retry.
If you are migrating from SLS to ABS as your backup product, the information presented here may help you equate SLS to ABS backup functions.
The table Comparing SLS and ABS Backup Attributes lists the attributes in an SLS SBK file and gives the equivalent ABS attribute.
This section is intended for SLS users who are considering a conversion from SLS to ABS.
SLS is a legacy product of Compaq Computer Corporation. Although it is fairly reliable once it is configured, learning to configure SLS is quite a challenge. In addition, when problems do occur, diagnosing the problem and making a fix to the source code is very difficult, and sometimes impossible.
ABS was released in 1995. ABS has the following advantages over SLS:
ABS uses a new version of the Media and Device Management Services (MDMS). A brief overview of MDMS is given in this Appendix with more information elsewhere in this guide. MDMS provides a utility that automatically converts the SLS volume, slot and magazine databases, and the TAPESTART.COM command definitions, to MDMS databases. The MDMS conversion is discussed in the Appendix "Converting SLS/MDMS V2 to V3".
ABS has the ability to read old SLS history sets and restore data from old SLS backups, so conversion to ABS can be performed in stages on different nodes over time - this is known as a rolling upgrade.
ABS policy, which defines what gets backed up and how, is stored in a single policy database on an OpenVMS cluster in your network; this cluster is called the Central Security Domain, or CSD. Through this centralized policy database, ABS lets you control your entire network's backup policy from a single location, or distribute the responsibility for backups and restores to other systems on other OpenVMS nodes. This contrasts with the SLS method of storing SBK files on each node to be backed up, where a minor change in tape drive configuration or policy can take hours or days to propagate through all SBK files on a large network.
ABS Policy is organized into simple policy objects:
ABS provides improved logging and diagnostic capabilities. Notification of job completion and status can be sent via MAIL or OPCOM. ABS log files are easier to read and interpret than SLS log files.
ABS provides backup and restore capabilities for NT and UNIX clients. This allows the workstation disks to be backed up and cataloged with the reliability and availability of OpenVMS.
ABS makes backup scheduling easy by providing "complex" backup schedules, such as Weekly Full with Daily Incremental, and log-based scheduling. These backup schedules minimize tape usage and backup time by doing only occasional Full backups, with Incremental backups making up the bulk of the data movement operations. ABS also provides a Full Restore capability for a disk or other data object, automatically restoring the necessary Full and Incremental backups to retrieve the data.
ABS provides users the ability to back up and archive their own files, if allowed by site management. By setting Access Control Lists (ACLs) on Storage Classes and Execution Environments, you can allow users to save and restore data without the intervention of the system manager.
ABS provides the ability to back up to disk savesets. This is especially useful for optical media, since many optical devices appear as disk devices to the operating system. In addition, disk storage classes can be used for backup operations in which the savesets need to remain online for quick restores.
This section gives you an overview of backup policy as implemented in SLS and ABS, comparing the two products' representation and organization of policy.
Backup policy can be viewed as:
SLS Backup policy is stored in SBK files, which are DCL command procedures. These SBK files are located on the system where the backup is to be run. The SBK files define a variety of DCL symbols, which identify the What, When, Where, Who and How of each backup to be performed.
The table DCL Symbols and ABS Equivalent gives the primary DCL symbols of the SBK file that identify each component of the backup policy. There are numerous other parameters in SBK and ABS policies, but this table gives an overview.
See SBK Symbols in ABS Terminology for a complete description of each SBK symbol and its ABS equivalent.
In ABS, Backup Policy is consolidated in a network-wide Policy Database. The policy is created and modified using the ABS DCL interface or the ABS Graphical User Interface (GUI).
To define a backup policy in SLS, you log onto the system where the backup is to be performed, copy SYSBAK.TEMPLATE to a new SBK file, and then edit the SBK file using a regular text editor. You define each SBK symbol according to what you want the backup policy to do.
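As a hedged illustration, the edited SBK file defines ordinary DCL symbols; the values below are site-specific assumptions:
$ ! Days and start time for save number 1
$ DAYS_1 :== MON,WED,FRI
$ TIME_1 :== 18:00
$ ! Image (Full) backup qualifiers and the disk to save
$ QUALIFIERS :== /IM
$ FILES_1 :== DISK$USER1: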
This manual editing of SBK files tends to be error prone. Each SBK symbol must be defined in a particular syntax, and it is easy to make typing errors. When syntactic errors are made in the SBK symbols, it is often unclear in the SLS log files what the actual problem is, and how it can be fixed.
In addition, in a fairly large network, the management of the SBK files can become quite cumbersome, requiring substantial time and organization on your part.
In ABS, Backup Policy is consolidated in a "Policy Engine", which contains all the backup policy for your network. ABS Backup Policy is defined by creating one or more Storage Classes, one or more Execution Environments, and one or more Save Requests. This is done using either the DCL command interface, or using the Graphical User Interface (GUI). ABS has an easy-to-use GUI for convenience of defining your backup policy.
Storage Classes and Execution Environments can be shared by multiple Save Requests. From the Central Security Domain (where your ABS Policy Engine is installed), you can create Save Requests for disks, files or other data objects on remote nodes, which share the common Storage Classes and Execution Environments.
In SLS, to restore a particular set of files for a user, the Operator or System Administrator must regularly get involved. This is because the SLS History Sets, which store the information needed to perform restores, are not accessible to regular users.
In ABS, Access Control List (ACL) on the Storage Class can be set up to allow access to all users, or only selected users. This allows individual users to restore their own files from these Storage Classes.
When using the SLS Storage Restore command, the user (or Operator) must specify the specific volume, saveset, and version of the file to restore. When using the Storage Restore screen, no choice is given about the version of the file to be restored.
In ABS, a date-oriented selection for a file to restore is provided. Normally, users request the most recent version of a file to be restored, since user restores are usually due to accidental deletion. However, in ABS, the user can specify that the most recent copy before a particular date should be restored. This allows the user to fine-tune the restore operation.
In SLS, an operation called a Full Disk Restore is provided. This is a fairly manual method by which a Full backup and associated Incremental backups can be used to restore a full disk.
To perform a Full Disk Restore in SLS, you must manually select the Full and Incremental backups to be applied, and in what order. Although this provides a great deal of versatility, it can also be error prone. Once an automated backup policy is set up, many customers are not familiar with the specific backups which are done, and determining the correct order of a restore can be difficult.
In ABS, when a Restore Request is created for a Disk, the type of restore can be specified as "Full Restore". This causes ABS to automatically find the most recent Full Backup, and all subsequent incremental backups (of appropriate level in the case of log-based backup schedules), and commence the restore in the correct order.
ABS uses a new Media And Device Management Services (MDMS) component that supports the concept of a domain. An MDMS domain has scope across multiple geographical locations each with their own nodes, jukeboxes, drives and volumes. Communication within the domain utilizes TCP/IP, DECnet Phase IV and/or DECnet-Plus at the user's choice. When upgrading from SLS to ABS, it is first necessary to convert the SLS volume, magazine and slot databases, and the TAPESTART.COM definitions, to MDMS databases. A utility is provided for this purpose and this is described in the Appendix "Converting SLS/MDMS V2 to MDMS V3". MDMS has been designed so that this conversion can be performed as a rolling upgrade, starting with the set of nodes designated as database servers. These MDMS V3 database servers can support V2.x clients running ABS or SLS.
In ABS, the Backup Policy is stored in five different types of policy objects:
These policy objects each have a Name and an Access Control List (although a catalog's access is controlled through its Storage Class). They are created using the ABS DCL command interface or the Graphical User Interface (GUI).
These policy objects (except the catalogs) are stored in a central location, called the Policy Database. The Policy Database resides on a single OpenVMS cluster in your network, called the Central Security Domain (CSD). The CSD should also support the MDMS database servers. The CSD controls all of the Storage Classes and Execution Environments used throughout the network, but Save and Restore Requests can be created from any OpenVMS node in the network.
The ABS policy can be completely controlled from the CSD, or responsibility for creating backups and restores can be distributed to the system managers on other OpenVMS nodes. The Access Control on Storage Classes and Execution Environments determines which OpenVMS nodes in the network are allowed to create Save and Restore Requests referencing these objects.
Catalogs are stored on each node where backups are performed. This substantially reduces the network bandwidth required for doing backups across the network to a centrally located robot or storage facility.
An ABS Storage Class contains information about where backed-up files and other data objects (such as databases or UNIX and NT file systems) are to be stored. As many Storage Classes as necessary can be created in the ABS Policy Database. Multiple Save Requests can share a single Storage Class.
The information in a Storage Class, with each parameter's SBK file equivalent, is given in Storage Class Parameter and SBK File Equivalent.
An ABS Execution Environment (or simply Environment) object stores information about how backups are to be performed. This includes parameters regarding data safety, file interlocking, notification, and so forth. As many Environment objects as necessary can be created in the ABS Policy Database. Multiple Save Requests can share a single Execution Environment.
The information in an ABS Environment, along with each parameter's SBK equivalent, is given in ABS and SBK Equivalent.
An ABS Save Request identifies the data to be backed up, the Storage Class and Environment to be used for the backup(s), and the schedule on which the request is to be executed. As many Save Requests as necessary can be created in the ABS Policy Database, and multiple Save Requests can share any Storage Class or Execution Environment.
The information stored in a Save Request, with each parameter's SBK equivalent, is given in Save Request and SBK Equivalent.
An ABS Restore Request stores information about files, disks, or other data objects to be restored from a Storage Class. As many Restore Requests as necessary can be created in the ABS Policy Database.
The information stored in a Restore Request is given in Restore Request Parameter Information. Because SLS does not provide a formal Restore Request mechanism, no SBK or other SLS equivalents are given in that table.
An ABS Catalog object stores information about what backup operations have been performed and what files or other data objects have been backed up. The ABS Catalog object combines the SLS concepts of a Summary File, a System History Set, and a User History Set.
Catalog objects are accessed through one or more Storage Classes. More than one Storage Class can share a Catalog, or each Storage Class can have a separate catalog. The ABS Catalogs can also be queried using the ABS LOOKUP command (or associated GUI function) and the ABS REPORT SAVE command.
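For example, a user might look up their saved files through a storage class as follows (the file specification is illustrative, and the /STORAGE qualifier is an assumption patterned on the Save and Restore commands shown later in this appendix):
$ ABS LOOKUP [SMITH...]*.TXT /STORAGE=USER_BACKUP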
ABS Catalogs are created using the ABS$SYSTEM:ABS_CATALOG_OBJECT.EXE utility. An ABS Catalog has the following parameters, as specified in ABS Parameter and SLS Equivalent, when it is created:
This section gives you a comparison of SLS and ABS operations. This gives information about how Save Requests are executed in each product, with similarities and differences between the two pointed out.
In SLS, the SBK symbols DAYS_n and TIME_n identify the schedule and start time for the save request to execute. Each DAYS_n symbol can have one of the following forms:
Every night at midnight, a special utility process in SLS is executed, which scans all the SBK files on the system. It determines which SBK files are to be executed that day and at what time. It then submits a Batch job for each SBK, specifying the start time of the batch job to match the TIME_n parameter.
ABS allows the use of different scheduler interfaces to schedule its Save Requests. A Save Request is scheduled using a Start Time and a Scheduling Option. ABS uses the programming interface to the OpenVMS Queue Manager as the default scheduler interface option.
A variety of pre-defined scheduling options are provided:
In addition, ABS provides access to explicit interval schedules. This requires a 3rd party scheduler product which supports complex interval setting.
An important difference between SLS and ABS is that an ABS Save Request can only have a single schedule and start time, while an SLS SBK file can have multiple DAYS_n and TIME_n parameters. However, because the ABS Save Request can be run using a 3rd party scheduler, any number of scheduler jobs can be created to run the Save Request as needed.
Another important difference between ABS and SLS is SLS's ability to specify a list of day names. To provide the same functionality in ABS, a 3rd party scheduler product is required which allows setting specific days for scheduled jobs.
This section discusses the various types of operations performed by SLS and/or ABS, and identifies similarities and differences between the two products.
System Backups are the type of backup which is performed by the system on behalf of the users. Normally, this type of backup backs up entire disks or file systems, which can then be used to restore any particular file or set of files for any particular user.
The System Backup is what is implemented via the SLS SBK file. It has these characteristics:
In ABS, a System Backup is performed by setting up an Execution Environment whose User Profile indicates the ABS account (with particular privileges and access rights). Then, any Save Request which uses that Environment would be considered a "System Backup".
The ABS installation kit provides out-of-the-box policy objects for performing system backups. These are the SYSTEM_BACKUPS Storage Class and the SYSTEM_BACKUPS_ENV Execution Environment. If the parameters of these policy objects are not suitable for your environment, they can be modified using the ABS SET command (or equivalent GUI functions).
An ABS System Backup has the following characteristics:
As shown above, SLS and ABS System Backup operations are very similar in their overall operation. Both use subprocesses to perform the actual backup operations using other utilities, or Backup Agents. Both produce a catalog of the operations performed and the files (or data objects) backed up.
However, SLS and ABS System Backup operations have these important differences:
A "Full" backup operation is one which saves all the information on a disk, including any file system specific information. OpenVMS Backup calls these "Image" backups.
An "Incremental" backup only saves data and directory structure that has changed since either a particular date, or since each file was backed up.
Usually, a backup policy combines these two types of operations. Although a Full backup is desirable because it contains all of the data on a disk or filesystem at the time of the backup, it also uses more tape, is more time consuming, and occupies more catalog space. An Incremental backup uses less tape, less time, and less catalog space, but requires more time during the restore of a full disk.
SLS provides indirect access to Full and Incremental operations. In the SBK file, the QUALIFIERS symbol can be defined to contain the string "/IM" (Image) or "/SINCE=BACKUP" (Incremental) to manually determine whether a particular save operation is a Full or Incremental backup. You must explicitly set up the Full and Incremental schedule using the QUALIFIERS symbol.
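As a sketch, the two styles of operation differ only in the QUALIFIERS symbol of the SBK file (shown here as alternatives):
$ ! For a Full (image) backup:
$ QUALIFIERS :== /IM
$ ! Alternatively, for an Incremental backup:
$ QUALIFIERS :== /SINCE=BACKUP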
In ABS, Full and Incremental operations can be automated using the "complex" schedules:
See the Appendix "Log-n Backup Schedules" for full information on these schedules.
The "complex" scheduling options on a Save Request cause the ABS Coordinator to automatically decide the correct operation to perform each time it is executed. For example, a fully functional, efficient backup schedule for a set of disk can be set up to run each night at 6:00PM with the single ABS Command:
$ ABS Save DISK$USER1:,DISK$USER2:,DISK$USER3:,DISK$USER4: -
_$ /Name=NIGHTLY_BACKUPS/Start="18:00"/Schedule=LOG-2 -
_$ /Storage=SYSTEM_BACKUPS
Then, if one of these disks goes bad, for example DISK$USER3:, you can restore the whole disk to the previous night's backup by issuing the commands:
$ DISMOUNT/NOUNLOAD/CLUSTER DISK$USER3: ! prepare for restore
$ ABS Restore/Full DISK$USER3:/Storage=SYSTEM_BACKUPS
ABS will automatically find the most recent Full save, apply that to the disk, and then apply each subsequent Incremental backup in the correct order to the disk.
Selective backup operations save portions of an entire disk or filesystem. For example, you might want to back up only a particular user's directory, or a particular file or set of files.
A Selective Operation in SLS is identified when neither "/IM" (Image) nor "/SINCE=BACKUP" (Incremental) is found in the QUALIFIERS symbol. This indicates that the files given in FILES_n should be backed up "as is", and not as part of a whole disk or file system.
In ABS, the type of operation is specified on the Save Request. If the operation is specified as "Selective", then sets of files or other data objects can be backed up.
SLS provides the Storage Save command, which allows individual users or the system administrator to back up files "on demand". This type of operation is called a "User Backup" under SLS, because it has the following characteristics:
In ABS, any user with appropriate access levels can issue an ABS Save command and back up their own files. It is access to the Storage Classes and Execution Environments that constrains which tapes and tape drives can be used by individual users.
For example, if you set the Access Control List (ACL) on the SYSTEM_BACKUPS Storage Class to be:
/ACCESS=(USER=*::*,ACCESS="READ+WRITE")
then any user can write into the Storage Class (do backups to it) or Read from the Storage Class (restore files from it).
The ABS installation kit provides an out-of-the-box set of policy objects for user backup operations. These are the Storage Class USER_BACKUP and the Environment USER_BACKUP_ENV.
The User Profile in the Execution Environment determines the context in which the backup operations are performed. Except in special cases, Environments that are accessible to average users will have a User Profile specifying the keyword "<REQUESTER>" as the user under which the backups are to be performed. This causes ABS to capture the user's username, privileges and access rights when they create a Save Request using this Environment, and to use these parameters during the backup operations.
In ABS, all volumes are owned and managed by the ABS account. The primary goal of ABS is data safety. This primary goal precludes allowing individual users to manage their own tapes, since the user may destroy data accidentally, or misuse the tapes. Access to the tapes owned by ABS is allowed by setting the Access Control on Storage Classes.
Using the USER_BACKUP Storage Class and USER_BACKUP_ENV Execution Environment, users can issue their own Save and Restore Requests. These backup operations share a common pool of tapes and a common catalog with other users, but each user can only access their own backed up data on these tapes.
If you want a user to be limited to a particular set of tapes, or to record their backups in a separate catalog, ABS also allows this to be configured, as outlined in the sketch below.
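A minimal sketch of such a configuration, assuming hypothetical object names (SMITH_BACKUPS, SMITH_CATALOG, NODE1::SMITH) and assuming ABS CREATE accepts the qualifiers shown, patterned on the ACL example above:
$ ! Create a dedicated catalog with the catalog utility described earlier
$ RUN ABS$SYSTEM:ABS_CATALOG_OBJECT
$ ! Create a storage class tied to that catalog, accessible to one user
$ ABS CREATE STORAGE_CLASS SMITH_BACKUPS/CATALOG=SMITH_CATALOG -
_$ /ACCESS=(USER=NODE1::SMITH,ACCESS="READ+WRITE")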
This section describes differences in how SLS and ABS handle media and device management.
ABS uses a new Media And Device Management Services (MDMS) component that supports the concept of a domain. An MDMS domain has scope across multiple geographical locations each with their own nodes, jukeboxes, drives and volumes. MDMS utilizes a network-wide database that supports the following types of objects:
Domain - the entire scope of operations covered by a single MDMS database and managed as a single environment, networked together by a choice of protocols
Drive - a device that can read or write data to/from tape volumes, and that may reside in a jukebox
Group - a group of nodes that have some kind of relationship; the group name can be used as a convenience for multiple node names in a variety of contexts (e.g. OpenVMS clusters)
Jukebox - a device that performs random-access loading and unloading of volumes into drives
Location - a physical location that contains nodes, jukeboxes and volumes, which can be configured in a hierarchy and may contain spaces for volume and magazine storage
Magazine - a collection of volumes in a physical magazine that are moved as a group
Media Type - a logical description of the type of media of a volume, including attributes such as density, compaction and length
Node - an OpenVMS system running MDMS and ABS
Pool - a collection of volumes that may be allocated by authorized users
Volume - a piece of tape media that ABS uses to backup and restore customer data
Communication within the domain utilizes TCP/IP, DECnet Phase IV and/or DECnet-Plus at the user's choice. A Java-based GUI is also provided for MDMS operations, and this runs on Alpha VMS and Windows platforms. A comprehensive, consistent DCL syntax and the GUI replace the STORAGE commands and the forms interface provided with SLS; all useful functions can be performed from either interface. All database changes can be applied dynamically without the need to restart MDMS, and are applicable in all parts of the domain.
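As a hedged illustration of the consistent DCL syntax (the volume ID is illustrative, and the qualifier name is an assumption that may differ by version):
$ MDMS SHOW VOLUME ABC001
$ MDMS SET VOLUME ABC001/OFFSITE_LOCATION=VAULT1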
This is substantially different from the media management supplied by SLS. In that product, many definitions are stored in a configuration file called TAPESTART.COM. Not only was there a tendency for this file to vary across nodes, but any change to the configuration required SLS to be restarted. Other definitions, such as volumes, magazines and slots, were kept in bona fide databases, but access to the databases was inconsistent and incomplete. For example, many of a volume's attributes could not be modified using standard commands. In addition, the DCL and forms interfaces, while overlapping, were not complete in their own right; certain operations could only be performed using one of the interfaces, meaning that the user/operator probably had to learn and use both of them.
One other substantial difference with MDMS is that it utilizes no device-specific code; MDMS attempts to perform operations on devices and if there are errors takes corrective action as needed, transparent to the user. One great advantage of this approach is that new devices are automatically supported, rather than having to wait for a software upgrade.
When upgrading from SLS to ABS, it is first necessary to convert the SLS volume, magazine and slot databases, together with the TAPESTART.COM definitions, to MDMS databases. A utility is provided for this purpose. This utility, together with a full description of how the media managers in SLS and MDMS differ, is described in the Appendix "Converting SLS/MDMS V2 to MDMS V3". MDMS has been designed so that the conversion can be performed as a rolling upgrade, starting with the set of nodes designated as database servers. These MDMS V3 database servers can support V2.x clients running ABS or SLS.
There are also a couple of significant differences between the way SLS and ABS handle volume management:
Volume Sets are a collection of tape volumes that are treated as a single entity. The volumes are logically appended to one another, allowing more data to be stored and wasting less tape.
SLS only provides very rudimentary volume set management in the form of the CONTINUE symbol. Any SBK file which uses a consistent value of the CONTINUE symbol will have its savesets appended to a volume set managed by SLS. However, the user must explicitly name the volume sets (i.e. the value of the CONTINUE symbol), and creating new volume sets is problematic.
In ABS, data is written to volume sets automatically. Each Storage Class has one or more volume sets which it manages. The number of volume sets managed by a Storage Class is called the Number of Streams, or Number of Simultaneous Read and Write Operations.
ABS automatically creates new volume sets based upon the Storage Class's Consolidation Criteria, which determine how data is consolidated onto volume sets. There are three criteria that can be used to limit the amount of data written by ABS to a volume set.
Once ABS determines the consolidation criteria has been exceeded on a volume set, it automatically "retires" the volume set. Retiring a volume set allows data to be restored from it, but no more data is written to the volume set. ABS then automatically creates a new volume set for backing up data in the Storage Class.
SLS is very inconsistent in its management of volumes and drives. For example, SLS System Backups do a different style of tape load than SLS User Backups, and the source code is completely different. In addition, the way that tapes are appended to volume sets, the messages indicating problems, and the methods for debugging problems in these areas are completely inconsistent.
ABS does all volume and drive management via MDMS through the ABS Coordinator. All types of data movements, from Selective to Full, Saves and Restores, and all types of Backup Agents are managed by the ABS Coordinator. This means that volume, robot and drive management are completely consistent across all operations.
This section identifies similarities and differences between how SLS and ABS handle cataloging operations. Catalogs (called History Sets in SLS) record what backup operations have taken place, and what files or other data objects have been backed up.
SLS History Sets come in two varieties: System History Sets and User History Sets. System History Sets are updated for system backups (i.e. SBK files), while User History Sets are updated for user backups.
Creating and configuring history sets is troublesome on SLS. The TAPESTART.COM command procedure is used to determine what System History Sets are created, and where they are located. The User History Sets are created on a per user basis, using the ASNUSRBAK.COM command procedure. Configuring User Histories depends upon the user executing the SLS$TAPSYMBOLS.COM procedure in their LOGIN.COM.
Although the information stored in System and User History Sets is similar, the source code to write to them, look up files, and restore files is totally different. This is a problem for maintenance, and prevents consistency in the information available about data backed up by SLS.
System History Sets are "staged", which means that during the backup operation, the history records are written to a sequential temporary file. This improves the performance of the backups. Then, at a later time, the temporary files are loaded into the actual System History Sets, which can then be used to restore files. User History Sets are never staged.
SLS History Sets contain no protection information. The entire history set must be protected against individual users reading and restoring data. This means that at most sites, restoring data from system backups must be done by the Operator or System Administrator to prevent users from restoring data they would not normally have access to.
ABS has only one catalog format, called a Brief catalog. This catalog provides basic information about what backup operations have been done, and what files or other data objects have been backed up. All ABS Catalogs have the same format, and are written by the ABS Coordinator, so information is consistent across all backups.
Creating catalogs in ABS is done by running the ABS$SYSTEM:ABS_CATALOG_OBJECT utility. Using this utility, you can create new catalogs with any name and owner. The utility also allows you to specify whether the catalog supports staging or not. Staging is where the catalog records are written to a temporary sequential file during the backup to improve performance, and then loaded into the actual catalog at a later time.
By default, ABS stores all catalogs in a single location on each system where backups are performed. This location is referenced by the ABS$CATALOG logical name. If you want to place the catalogs in other locations, you can move the catalog files to another directory, and redefine the ABS$CATALOG logical name.
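For example, a minimal sketch of relocating the catalogs, assuming a hypothetical target directory DISK$BACKUP:[ABS_CATALOGS]:
$ CREATE/DIRECTORY DISK$BACKUP:[ABS_CATALOGS]
$ COPY ABS$CATALOG:*.* DISK$BACKUP:[ABS_CATALOGS]
$ DEFINE/SYSTEM ABS$CATALOG DISK$BACKUP:[ABS_CATALOGS]
To make the change permanent, the redefinition would also need to be added wherever ABS$CATALOG is defined at system or ABS startup.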
ABS provides the ability to perform lookups and selective restores using SLS System History Sets, which were written by SLS system backup operations, to locate the tapes and savesets containing backed up data. This ability is an important function for any conversion plan, since data backed up by SLS may be needed even after ABS has been fully deployed.
ABS cannot perform full disk restores or Oracle Rdb database restores using SLS history sets. It can, however, restore VMS files selected using a wildcard file specification.
The steps for restoring data using ABS with SLS History Sets are:
$ Catalog_obj := $ABS$SYSTEM:ABS_CATALOG_OBJECT.EXE
$ Catalog_obj Create <hisnam> SLS ABS NO
where <hisnam> is the name of the SLS history set from TAPESTART.COM. This will also be the name of the ABS catalog.
This chapter identifies the Conversion Process from SLS to ABS. First, the steps involved in converting from SLS to ABS are presented and explained. A Conversion Utility to help with the conversion is then presented.
This section identifies the steps involved in converting a site's backup management from SLS to ABS. These steps are intended as guidelines, since each site has different requirements and needs for its backup management.
The first step in converting from SLS to ABS is to convert the volume, slot and magazine databases, and the media and device portions of TAPESTART.COM to MDMS databases. A command procedure is provided for this purpose and this procedure is documented in the Appendix "Converting SLS/MDMS V2 to MDMS V3". Please note that this version of ABS and all future versions require the accompanying version of MDMS included in the installation kit.
The next step in converting from SLS to ABS is to identify how you use SLS. There are three major uses of SLS: SLS System Backups, SLS User Backups, and SLS Standby Archiving.
ABS provides the same functionality as SLS System Backups and SLS User Backups. However, ABS cannot perform the same function as SLS Standby Archiving. SLS Standby Archiving has the following characteristics:
If you use SLS System Backups (as many sites do), then converting to ABS is fairly simple. If you use SLS User Backups, converting to ABS is slightly more involved, but is still fairly straightforward. If you use SLS Standby Archiving, ABS will not provide equivalent functionality.
This section describes how to convert SLS System Backups (SBK files) into ABS Policy.
At many sites, only a few of the SBK files which reside in SLS$SYSBAK are actually used. The other SBK files are the result of experimentation or false starts at configuring SLS, or are simply obsolete.
In order to simplify the conversion of SLS System Backups, you should first identify the SBK files which you actually use. Usually, an SBK file is in use at your site if it is automatically scheduled by SLS, or if it is invoked manually by you or the Operator.
A simple way of finding the SBK files which are scheduled by SLS is to search the SBK files for the DAYS_1 symbol. Any SBK file which does not define DAYS_1, or defines it as blank, is not scheduled by SLS for automatic execution. These SBK files are prime candidates for obsolete or unused files.
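For example, the DCL SEARCH command will list every SBK file that defines the symbol:
$ ! Show each SBK file line that mentions DAYS_1
$ SEARCH SLS$SYSBAK:*_SBK.COM "DAYS_1"
Files absent from this output, or whose DAYS_1 value is blank in the matching line, are the candidates.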
After identifying the SBK files which are not automatically scheduled, carefully determine which of the files may be invoked manually by you or the Operator.
Once you have identified the obsolete or unused SBK files, you can remove them from SLS$SYSBAK (after backing them up in case of mistakes, of course).
Once you have cleaned up your SLS$SYSBAK directory to only contain those SBK files you actually use, it is time to convert these SBK files to ABS Storage Classes, Execution Environments and Save Requests.
Compaq provides a utility to help in this conversion process. The conversion utility is called SLS_CONVERT.COM, and is included as an installation kit on the ABS Kit. To install the SLS to ABS Conversion utility, issue the command:
$ @SYS$UPDATE:VMSINSTAL SLSTOVABS031 ABS$SYSTEM: ! VAX system
or
$ @SYS$UPDATE:VMSINSTAL SLSTOAABS031 ABS$SYSTEM: ! Alpha system
This command will install the conversion utility into ABS$SYSTEM:SLS_CONVERT.COM, and will create a subdirectory under ABS$ROOT called SLS_CONVERSION. In addition, a logical name, ABS$SLS_CONVERSION, will be defined to point to the work directory for the conversion effort.
The conversion utility creates DCL command procedures which issue the ABS DCL commands equivalent to each SBK file. No changes are made to your ABS Policy Configuration directly. This allows you to experiment with the conversion utility safely, without affecting the execution of your SLS SBK files or inadvertently starting ABS Save Requests.
Once you have installed the conversion utility, you can create ABS command procedures for all of your SBK files by issuing the command:
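$ @ABS$SYSTEM:SLS_CONVERT *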
The asterisk indicates you want to convert all SBK files to ABS DCL Commands. If you only want to convert one SBK file, you can specify the name of the SBK without the _SBK.COM or SLS$SYSBAK on the command line. For example, to convert NIGHTLY_SBK.COM, you would issue the command:
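$ @ABS$SYSTEM:SLS_CONVERT NIGHTLY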
After running the conversion utility, the ABS$SLS_CONVERSION directory will contain one ABS DCL command procedure for each SBK file converted. The contents of these output command files are described below.
You should not execute these command procedures blindly. The conversion utility attempts to duplicate the backup policy reflected in each SBK file, but you should carefully examine each command file produced to ensure that errors were not made, and that the ABS Policy to be created correctly reflects the backup policy you expect.
The things you should check for in the produced command procedures are:
The command procedure for each SBK file processed will contain the ABS DCL Commands to create one Storage Class, one Execution Environment and one or more Save Requests.
The name of the Storage Class created for an SBK file will be the value of the CONTINUE symbol (if defined) followed by the suffix "_SC". If the CONTINUE symbol is not defined, the Storage Class will be named the same as the SBK file with the "_SC" suffix.
The Environment created for an SBK file will be named the same as the SBK file, but with an "_ENV" suffix. When a Save Request specifies a Storage Class, the default Environment used will have the same name as the Storage Class, but with the "_ENV" suffix. Thus, the Environment created should be used by default.
Each SBK File will produce one or more ABS Save Requests. More than one ABS Save Request will be produced from an SBK file if all the following conditions are met:
For example, if an SBK file has QUALIFIERS_1 defined as "/IM", indicating an Image (or Full) backup operation, but QUALIFIERS_2 defined as "/SINCE=BACKUP", indicating an Incremental operation, then ABS will need two separate Save Requests to implement this policy. This is because an ABS Save Request will only do Full, Incremental or Selective operations, not a mix of them.
If all QUALIFIERS_n specify the same movement type and there are eight or fewer FILES_n, then ABS can combine all the operations into a single Save Request.
The Save Requests created will be named the same as the SBK file, but with "_FULL", "_INC" or "_SEL" to indicate the data movement type included in the Save Request. For example, if the SBK file NIGHTLY_SBK.COM defines FILES_1 through FILES_20, and all qualifiers include the "/IM" Image qualifier, then the conversion tool will create three Save Requests, called NIGHTLY_FULL_1 through NIGHTLY_FULL_3. Because ABS has a limit of 8 operations per save request, NIGHTLY_FULL_1 and NIGHTLY_FULL_2 would perform 8 Full backup operations, and NIGHTLY_FULL_3 would perform the last four.
The Conversion Utility shipped with ABS is a very simple utility. It converts each SBK file into the appropriate ABS DCL Commands to create the Storage Classes, Execution Environments and Save Requests necessary to reflect the backup policy in the SBK file.
No attempt is made to consolidate the Storage Classes and Execution Environments, or to overlay the Save Requests for optimal performance.
Before executing the command procedures to create the ABS Policy objects, you should try to consolidate Storage Classes and Execution Environments. Save Requests may be combined if warranted by the intended policy, but in some cases, breaking a Save Request into several is better for reducing nightly backup time, simplifying an overall backup policy, or backing up different objects at different intervals.
Consolidating the Storage Classes is done by comparing their parameters. For each pair of Storage Classes, you can determine whether they can be combined by using the Storage Class Parameter table as a guide. Note that in all cases, you can decide that one or the other parameter value is correct for both, and consolidate based upon that decision.
Consolidating Execution Environments is again done by comparing the parameters of pairs of Environments, and then combining those Environments if your decisions indicate they can serve the same purpose. Use the Execution Environment Parameter table as a guide. One entry from that table: the value will always be ABS from the conversion utility; choose the PRIVILEGES best suited to the intended Environment's use.
As with Storage Classes, only the Administrator at a site can truly determine if two separate Environments can be consolidated based upon the intended use of the Environment.
This section discusses the steps involved in implementing the ABS Policy as produced by the conversion utility and evaluated by the site Administrator.
Executing the Command Procedures
After you have examined the raw output command files from the conversion utility and done whatever consolidation or modification seems appropriate, the command files can simply be executed using the at sign (@) operator at DCL. When each command procedure is invoked, it will create the Storage Class, the Execution Environment, and the Save Requests converted from that SBK file.
Integrating the Prolog and Epilog Commands
There are several features of an SBK file which are not directly supported by ABS. The conversion utility creates a Prolog command file and an Epilog command file which implement some of these other features.
For example, ABS does not support the Offsite Date or Onsite Date in the SBK file directly. However, by issuing the appropriate MDMS SET VOLUME command, this can be implemented. The conversion utility writes these commands into the Prolog or Epilog command files.
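For example, a hypothetical fragment of a generated Epilog file might resemble the following; the volume name, date, and /OFFSITE_DATE qualifier spelling are illustrative assumptions based on the volume's Offsite Date attribute, not an excerpt from an actual generated file:
$ ! Illustrative only: record when volume BEB001 should move offsite
$ MDMS SET VOLUME BEB001/OFFSITE_DATE=01-JAN-2001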
When the conversion utility produces Prolog and Epilog command procedures, they will be created in the ABS$SLS_CONVERSION directory, and will be named the same as the Save Request, but with "_PROLOG" or "_EPILOG" appended. For example, if you convert NIGHTLY_SBK.COM, you will end up with the Prolog and Epilog command files:
ABS$SLS_CONVERSION:NIGHTLY_ABS_PROLOG.COM
ABS$SLS_CONVERSION:NIGHTLY_ABS_EPILOG.COM
If you need the features implemented in the Prolog or Epilog command procedures produced by the conversion utility, you should integrate them into your own site-specific Prolog and Epilog command procedures (if any). Both the Execution Environment and the Save Request may have Prolog and Epilog commands associated with them, which are usually the execution of a site-specific command procedure; invoke the generated procedures from that procedure.
When executing an SBK file, SLS makes various DCL symbols accessible to the prolog and epilog command files you invoke. For example, SLS will define the DCL symbol DO_DISK as the name of the disk being backed up during an SBK execution. The objective is to allow the prolog or epilog commands to produce log messages, or perform other operations based upon the SBK file execution.
ABS provides logical names which provide similar functionality. For example, ABS defines the logical name ABS_OS_OBJECT_SET_1 as the set of files being backed up in the first data movement operation. Thus, it can be used in the place of the FILES_n symbol in an SBK file.
The conversion utility kit provides a command procedure, SLS_SYMBOLS.COM, which attempts to define many of the same DCL symbols as an SBK file does based upon the ABS logical names. For example, it defines the DO_DISK symbol based upon the ABS logical name ABS_OS_OBJECT_SET_1.
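A sketch of the kind of definition such a procedure makes, using the DCL lexical function F$TRNLNM; this is illustrative, not the actual contents of the file:
$ ! Translate the ABS logical name into the familiar SLS-style DCL symbol
$ DO_DISK = F$TRNLNM("ABS_OS_OBJECT_SET_1")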
See the ABS$SLS_CONVERSION:SBK_SYMBOLS.COM command procedure for details on the definition of each SBK symbol. Not all SLS DCL symbols are supported by the command procedure.
It is very important to note that once you have executed the DCL Command procedures produced by the conversion utility, the ABS Save Requests will be executing according to their schedules. This means that you will be doing both SLS and ABS backups if you do not disable the SLS SBK files.
The SLS SBK files can be disabled by changing their DAYS_n and TIME_n symbols to empty values. This causes SLS to no longer schedule the SBK files for execution.
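For example, in a hypothetical NIGHTLY_SBK.COM, the scheduling symbols would be blanked as follows:
$ DAYS_1 :=    ! previously a list of days; now empty, so SLS never schedules this file
$ TIME_1 :=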
Since SLS and ABS use different media management subsystems, it is highly recommended that you do not use both products on the same node. If you do, you will find that the SLS and MDMS volume databases may get out of synchronization, and there may be contention and other unexpected troubles with drives and jukeboxes. If you wish to stage your SLS to ABS conversion across your network, the following approach is recommended:
Define your Central Security Domain as your first set of nodes to convert; these nodes will run the ABS policy engine and the MDMS database server
Perform the MDMS conversion on these nodes - see Appendix "Converting SLS/MDMS V2 to V3".
Perform the ABS conversion on these nodes
On other client nodes still running SLS, modify TAPESTART.COM so that the DB_NODES symbol points to nodes in the ABS/MDMS Central Security Domain
At this point, your volume, magazine and slot databases are being managed by MDMS, but your client systems are still able to use SLS as the backup paradigm. It is recommended that you convert the remainder of your systems to ABS/MDMS as soon as practical, because some of the more unusual features of SLS/MDMS are not supported by the new MDMS database server.
The conversion utility does not convert User Backup policy automatically. It is only intended to make converting SBK files easier or automatic.
To allow a particular user to do their own backups, follow the steps as outlined in Section . Note that there is no automatic way to set up Storage Classes for the entire user population, or a large set of users except by creating a DCL command procedure issuing the correct ABS DCL commands.
After implementing your backup policy in ABS, you should carefully monitor the activities of ABS until you are confident that your policy is being executed as intended.
There are three ways to monitor ABS activity:
ABS has the ability to restore data backed up by SLS. After you have implemented your backup policy using ABS, it may be necessary to restore data which was backed up using SLS prior to the conversion. Please see Section for more information on this capability.
$ @ABS$SYSTEM:SLS_CONVERT <wildcard_SBK_spec> [<match1>] [<match2>...]
This parameter identifies the set of SBK files to be converted by this command. The string given should not include SLS$SYSBAK: or the _SBK.COM suffix. For example, if you want to convert the SBK file SLS$SYSBAK:NIGHTLY*_SBK.COM, you should issue the command:
$ @ABS$SYSTEM:SLS_CONVERT NIGHTLY*
These optional parameters allow you to search the SBK files defined by <wildcard_SBK_spec> and process only those files which contain ALL of the given strings. The strings must all appear on the same line in the SBK file, since the SLS_CONVERT command procedure uses /MATCH=AND on the SEARCH command.
The output of the SLS_CONVERT conversion utility is one DCL command procedure for each SBK file processed. The command procedures will be created in the ABS$SLS_CONVERSION directory.
Each command procedure will be named the same as the SBK file, but substituting "ABS" for "SBK". For example, if the SBK file SYSTEM_DISK_SBK.COM is converted, the output command procedure will be ABS$SLS_CONVERSION:SYSTEM_DISK_ABS.COM.
Although the command procedures can be executed immediately, it is highly recommended that you review their contents before executing them to ensure that the ABS Policy objects which will be created accurately reflect your intended backup policy.
Each output command file will contain the ABS DCL commands to create one Storage Class, one Execution Environment, and one or more Save Requests. The following tables map SLS SBK concepts to their ABS equivalents:
The table "SBK Symbols in ABS Terminology" lists SBK symbols in ABS terminology.
The table "ABS Storage Classes and SLS SBK Equivalent" lists ABS Storage Class object parameters and their SLS SBK equivalents.
The table "ABS Execution Environment Parameter and SLS SBK Equivalent" lists ABS Execution Environment parameters and their SLS SBK equivalents.
The table "ABS Save Request Parameter and SLS SBK Equivalent" lists ABS Save Request parameters and their SLS SBK equivalents.
Differences Between MDMS Version 2 and MDMS Version 3
This Appendix addresses differences between MDMS Version 2 and MDMS Version 3 (V3.0 and later). It describes differences in command syntax, software features replacing the MDMS User, Operator, and Administrator interfaces, and features replacing the TAPESTART.COM command procedure.
For MDMS Version 3.0 and later, the MDMS command set replaces the STORAGE command set. The table "Comparing MDMS Version 2 and Version 3 Commands" compares the STORAGE command set with MDMS commands.
The MDMS Version 2 forms interface provides features that are not found in the command set. This section compares the features of the three forms interfaces with MDMS Version 3 commands.
The command procedure TAPESTART.COM is no longer used. A comparison table shows TAPESTART.COM symbols and the comparable features of MDMS Version 3.
Configuration, which involves the creation or definition of MDMS objects, should take place in the following order: locations, media types, the domain, nodes, jukeboxes, drives, pools, and volumes.
Creating these objects in the above order ensures that the following informational message does not appear:
%MDMS-I-UNDEFINEDREFS, object contains undefined referenced objects
This message appears if an attribute of the object is not defined in the database. The object is created even though the attribute is not defined. The sample configuration consists of the following:
SMITH1 - ACCOUN cluster node
SMITH2 - ACCOUN cluster node
SMITH3 - ACCOUN cluster node
JONES - a client node
$1$MUA560
$1$MUA561
$1$MUA562
$1$MUA563
$1$MUA564
$1$MUA565
The following examples illustrate each step in the order of configuration.
This example lists the MDMS commands to define an offsite and onsite location for this domain.
$ !
$ ! create onsite location
$ !
$ MDMS CREATE LOCATION BLD1_COMPUTER_ROOM -
/DESCRIPTION="Building 1 Computer Room"
$ MDMS SHOW LOCATION BLD1_COMPUTER_ROOM
Location: BLD1_COMPUTER_ROOM
Description: Building 1 Computer Room
Spaces:
In Location:
$ !
$ ! create offsite location
$ !
$ MDMS CREATE LOCATION ANDYS_STORAGE -
/DESCRIPTION="Andy's Offsite Storage, corner of 5th and Main"
$ MDMS SHOW LOCATION ANDYS_STORAGE
Location: ANDYS_STORAGE
Description: Andy's Offsite Storage, corner of 5th and Main
Spaces:
In Location:
This example shows the MDMS command to define the media type used in the TL826.
$ !
$ ! create the media type
$ !
$ MDMS CREATE MEDIA_TYPE TK88K -
/DESCRIPTION="Media type for volumes in TL826 with TK88 drives" -
/COMPACTION ! volumes are written in compaction mode
$ MDMS SHOW MEDIA_TYPE TK88K
Media type: TK88K
Description: Media type for volumes in TL826 with TK88 drives
Density:
Compaction: YES
Capacity: 0
Length: 0
This example shows the MDMS command to set the domain attributes. This command is not run until after the locations and media type are defined because they serve as default attributes for the domain object. Note that the deallocation state (transition) is taken as the default. All of the rights are also taken as defaults.
$ !
$ ! set up defaults in the domain record
$ !
$ MDMS SET DOMAIN -
/DESCRIPTION="Smiths Accounting Domain" - ! domain name
/MEDIA_TYPE=TK88K - ! default media type
/OFFSITE_LOCATION=ANDYS_STORAGE - ! default offsite location
/ONSITE_LOCATION=BLD1_COMPUTER_ROOM - ! default onsite location
/PROTECTION=(S:RW,O:RW,G:RW,W) ! default protection for volumes
$ MDMS SHOW DOMAIN/FULL
Description: Smiths Accounting Domain
Mail: SYSTEM
Offsite Location: ANDYS_STORAGE
Onsite Location: BLD1_COMPUTER_ROOM
Def. Media Type: TK88K
Deallocate State: TRANSITION
Opcom Class: TAPES
Priority: 1536
Request ID: 2576
Protection: S:RW,O:RW,G:RW,W
DB Server Node: SPIELN
DB Server Date: 1-FEB-1999 08:18:20
Max Scratch Time: NONE
Scratch Time: 365 00:00:00
Transition Time: 14 00:00:00
Network Timeout: 0 00:02:00
ABS Rights: NO
SYSPRIV Rights: YES
Application Rights: MDMS_ASSIST
MDMS_LOAD_SCRATCH
MDMS_ALLOCATE_OWN
MDMS_ALLOCATE_POOL
MDMS_BIND_OWN
MDMS_CANCEL_OWN
MDMS_CREATE_POOL
MDMS_DEALLOCATE_OWN
MDMS_DELETE_POOL
MDMS_LOAD_OWN
MDMS_MOVE_OWN
MDMS_SET_OWN
MDMS_SHOW_OWN
MDMS_SHOW_POOL
MDMS_UNBIND_OWN
MDMS_UNLOAD_OWN
Default Rights:
Operator Rights: MDMS_ALLOCATE_ALL
MDMS_ASSIST
MDMS_BIND_ALL
MDMS_CANCEL_ALL
MDMS_DEALLOCATE_ALL
MDMS_INITIALIZE_ALL
MDMS_INVENTORY_ALL
MDMS_LOAD_ALL
MDMS_MOVE_ALL
MDMS_SHOW_ALL
MDMS_SHOW_RIGHTS
MDMS_UNBIND_ALL
MDMS_UNLOAD_ALL
MDMS_CREATE_POOL
MDMS_DELETE_POOL
MDMS_SET_OWN
MDMS_SET_POOL
User Rights: MDMS_ASSIST
MDMS_ALLOCATE_OWN
MDMS_ALLOCATE_POOL
MDMS_BIND_OWN
MDMS_CANCEL_OWN
MDMS_DEALLOCATE_OWN
MDMS_LOAD_OWN
MDMS_SHOW_OWN
MDMS_SHOW_POOL
MDMS_UNBIND_OWN
MDMS_UNLOAD_OWN
This example shows the MDMS commands for defining the three MDMS database nodes of the cluster ACCOUN. This cluster is configured to use DECnet-PLUS.
Note that a node is defined using the DECnet node name as the name of the node.
$ !
$ ! create nodes
$ ! database node
$ MDMS CREATE NODE SMITH1 - ! DECnet node name
/DESCRIPTION="ALPHA node on cluster ACCOUN" -
/DATABASE_SERVER - ! this node is a database server
/DECNET_FULLNAME=SMI:.BLD.SMITH1 - ! DECnet-Plus name
/LOCATION=BLD1_COMPUTER_ROOM -
/TCPIP_FULLNAME=SMITH1.SMI.BLD.COM - ! TCP/IP name
/TRANSPORT=(DECNET,TCPIP) ! TCPIP used by JAVA GUI and JONES
$ MDMS SHOW NODE SMITH1
Node: SMITH1
Description: ALPHA node on cluster ACCOUN
DECnet Fullname: SMI:.BLD.SMITH1
TCP/IP Fullname: SMITH1.SMI.BLD.COM:2501-2510
Disabled: NO
Database Server: YES
Location: BLD1_COMPUTER_ROOM
Opcom Classes: TAPES
Transports: DECNET,TCPIP
$ MDMS CREATE NODE SMITH2 - ! DECnet node name
/DESCRIPTION="ALPHA node on cluster ACCOUN" -
/DATABASE_SERVER - ! this node is a database server
/DECNET_FULLNAME=SMI:.BLD.SMITH2 - ! DECnet-Plus name
/LOCATION=BLD1_COMPUTER_ROOM -
/TCPIP_FULLNAME=SMITH2.SMI.BLD.COM - ! TCP/IP name
/TRANSPORT=(DECNET,TCPIP) ! TCPIP used by JAVA GUI and JONES
$ MDMS SHOW NODE SMITH2
Node: SMITH2
Description: ALPHA node on cluster ACCOUN
DECnet Fullname: SMI:.BLD.SMITH2
TCP/IP Fullname: SMITH2.SMI.BLD.COM:2501-2510
Disabled: NO
Database Server: YES
Location: BLD1_COMPUTER_ROOM
Opcom Classes: TAPES
Transports: DECNET,TCPIP
$ MDMS CREATE NODE SMITH3 - ! DECnet node name
/DESCRIPTION="VAX node on cluster ACCOUN" -
/DATABASE_SERVER - ! this node is a database server
/DECNET_FULLNAME=SMI:.BLD.SMITH3 - ! DECnet-Plus name
/LOCATION=BLD1_COMPUTER_ROOM -
/TCPIP_FULLNAME=CROP.SMI.BLD.COM - ! TCP/IP name
/TRANSPORT=(DECNET,TCPIP) ! TCPIP used by JAVA GUI and JONES
$ MDMS SHOW NODE SMITH3
Node: SMITH3
Description: VAX node on cluster ACCOUN
DECnet Fullname: SMI:.BLD.SMITH3
TCP/IP Fullname: CROP.SMI.BLD.COM:2501-2510
Disabled: NO
Database Server: YES
Location: BLD1_COMPUTER_ROOM
Opcom Classes: TAPES
Transports: DECNET,TCPIP
This example shows the MDMS command for creating a client node. TCP/IP is the only transport on this node.
$ !
$ ! client node
$ ! only has TCP/IP
$ MDMS CREATE NODE JONES -
/DESCRIPTION="ALPHA client node, standalone" -
/NODATABASE_SERVER - ! not a database server
/LOCATION=BLD1_COMPUTER_ROOM -
/TCPIP_FULLNAME=JONES.SMI.BLD.COM - ! TCP/IP name
/TRANSPORT=(TCPIP) ! TCPIP is used by JAVA GUI
$ MDMS SHOW NODE JONES
Node: JONES
Description: ALPHA client node, standalone
DECnet Fullname:
TCP/IP Fullname: JONES.SMI.BLD.COM:2501-2510
Disabled: NO
Database Server: NO
Location: BLD1_COMPUTER_ROOM
Opcom Classes: TAPES
Transports: TCPIP
This example shows the MDMS command for creating a jukebox
$ !
$ ! create jukebox
$ !
$ MDMS CREATE JUKEBOX TL826_JUKE -
/DESCRIPTION="TL826 Jukebox in Building 1" -
/ACCESS=ALL - ! local + remote for JONES
/AUTOMATIC_REPLY - ! MDMS automatically replies to OPCOM requests
/CONTROL=MRD - ! controlled by MRD robot control
/NODES=(SMITH1,SMITH2,SMITH3) - ! nodes that can control the robot
/ROBOT=$1$DUA560 - ! the robot device
/SLOT_COUNT=176 ! 176 slots in the library
$ MDMS SHOW JUKEBOX TL826_JUKE
Jukebox: TL826_JUKE
Description: TL826 Jukebox in Building 1
Nodes: SMITH1,SMITH2,SMITH3
Groups:
Location: BLD1_COMPUTER_ROOM
Disabled: NO
Shared: NO
Auto Reply: YES
Access: ALL
State: AVAILABLE
Control: MRD
Robot: $1$DUA560
Slot Count: 176
Usage: NOMAGAZINE
This example shows the MDMS commands for creating the six drives for the jukebox.
This example is a command procedure that uses a counter to create the six drives. The counter approach works here because the drive names and device names differ only by number. You may want to have the drive name the same as the device name. For example:
$ MDMS CREATE DRIVE $1$MUA560/DEVICE=$1$MUA560
This works fine if you do not have two devices in your domain with the same name.
$ COUNT = 0 ! initialize the loop counter
$DRIVE_LOOP:
$ MDMS CREATE DRIVE TL826_D1 -
/DESCRIPTION="Drive 1 in the TL826 JUKEBOX" -
/ACCESS=ALL - ! local + remote for JONES
/AUTOMATIC_REPLY - ! MDMS automatically replies to OPCOM requests
/DEVICE=$1$MUA561 - ! physical device
/DRIVE_NUMBER=1 - ! the drive number according to the robot
/JUKEBOX=TL826_JUKE - ! jukebox the drives are in
/MEDIA_TYPE=TK88K - ! media type to allocate drive and volume for
/NODES=(SMITH1,SMITH2,SMITH3)! nodes that have access to drive
$ MDMS SHOW DRIVE TL826_D1
Drive: TL826_D1
Description: Drive 1 in the TL826 JUKEBOX
Device: $1$MUA561
Nodes: SMITH1,SMITH2,SMITH3
Groups:
Volume:
Disabled: NO
Shared: NO
Available: NO
State: EMPTY
Stacker: NO
Automatic Reply: YES
RW Media Types: TK88K
RO Media Types:
Access: ALL
Jukebox: TL826_JUKE
Drive Number: 1
Allocated: NO
:
:
:
$ MDMS CREATE DRIVE TL826_D5 -
/DESCRIPTION="Drive 5 in the TL826 JUKEBOX" -
/ACCESS=ALL - ! local + remote for JONES
/AUTOMATIC_REPLY - ! MDMS automatically replies to OPCOM requests
/DEVICE=$1$MUA565 - ! physical device
/DRIVE_NUMBER=5 - ! the drive number according to the robot
/JUKEBOX=TL826_JUKE - ! jukebox the drives are in
/MEDIA_TYPE=TK88K - ! media type to allocate drive and volume for
/NODES=(SMITH1,SMITH2,SMITH3)! nodes that have access to drive
$ MDMS SHOW DRIVE TL826_D5
Drive: TL826_D5
Description: Drive 5 in the TL826 JUKEBOX
Device: $1$MUA565
Nodes: SMITH1,SMITH2,SMITH3
Groups:
Volume:
Disabled: NO
Shared: NO
Available: NO
State: EMPTY
Stacker: NO
Automatic Reply: YES
RW Media Types: TK88K
RO Media Types:
Access: ALL
Jukebox: TL826_JUKE
Drive Number: 5
Allocated: NO
$ COUNT = COUNT + 1
$ IF COUNT .LT. 6 THEN GOTO DRIVE_LOOP
This example shows the MDMS commands to define two pools: ABS and HSM. The pools need to have the authorized users defined.
$ !
$ ! create pools
$ !
$ MDMS DELETE POOL ABS
$ MDMS CREATE POOL ABS -
/DESCRIPTION="Pool for ABS" -
/AUTHORIZED=(SMITH1::ABS,SMITH2::ABS,SMITH3::ABS,JONES::ABS)
$ MDMS SHOW POOL ABS
Pool: ABS
Description: Pool for ABS
Authorized Users: SMITH1::ABS,SMITH2::ABS,SMITH3::ABS,JONES::ABS
Default Users:
$ MDMS DELETE POOL HSM
$ MDMS CREATE POOL HSM -
/DESCRIPTION="Pool for HSM" -
/AUTHORIZED=(SMITH1::HSM,SMITH2::HSM,SMITH3::HSM)
$ MDMS SHOW POOL HSM
Pool: HSM
Description: Pool for HSM
Authorized Users: SMITH1::HSM,SMITH2::HSM,SMITH3::HSM
Default Users:
This example shows the MDMS commands to define the 176 volumes in the TL826 using the /VISION qualifier. The volumes have barcode labels on them and have been placed in the jukebox. Notice that the volumes are created in the UNINITIALIZED state. The last command in the example initializes the volumes and changes the state to FREE.
$ !
$ ! create volumes
$ !
$ ! create 120 volumes for ABS
$ ! the media type, offsite location, and onsite location
$ ! values are taken from the DOMAIN object
$ !
$ MDMS CREATE VOLUME -
/DESCRIPTION="Volumes for ABS" -
/JUKEBOX=TL826_JUKE -
/POOL=ABS -
/SLOTS=(0-119) -
/VISION
$ MDMS SHOW VOLUME BEB000
Volume: BEB000
Description: Volumes for ABS
Placement: ONSITE BLD1_COMPUTER_ROOM
Media Types: TK88K Username:
Pool: ABS Owner UIC: NONE
Error Count: 0 Account:
Mount Count: 0 Job Name:
State: UNINITIALIZED Magazine:
Avail State: UNINITIALIZED Jukebox: TL826_JUKE
Previous Vol: Slot: 0
Next Vol: Drive:
Format: NONE Offsite Loc: ANDYS_STORAGE
Protection: S:RW,O:RW,G:RW,W Offsite Date: NONE
Purchase: 1-FEB-1999 08:19:00 Onsite Loc: BLD1_COMPUTER_ROOM
Creation: 1-FEB-1999 08:19:00 Space:
Init: 1-FEB-1999 08:19:00 Onsite Date: NONE
Allocation: NONE Brand:
Scratch: NONE Last Cleaned: 1-FEB-1999 08:19:00
Deallocation: NONE Times Cleaned: 0
Trans Time: 14 00:00:00 Rec Length: 0
Freed: NONE Block Factor: 0
Last Access: NONE
$ !
$ ! create 56 volumes for HSM
$ !
$ MDMS CREATE VOLUME -
/DESCRIPTION="Volumes for HSM" -
/JUKEBOX=TL826_JUKE -
/POOL=HSM -
/SLOTS=(120-175) -
/VISION
$ MDMS SHOW VOL BEB120
Volume: BEB120
Description: Volumes for HSM
Placement: ONSITE BLD1_COMPUTER_ROOM
Media Types: TK88K Username:
Pool: HSM Owner UIC: NONE
Error Count: 0 Account:
Mount Count: 0 Job Name:
State: UNINITIALIZED Magazine:
Avail State: UNINITIALIZED Jukebox: TL826_JUKE
Previous Vol: Slot: 120
Next Vol: Drive:
Format: NONE Offsite Loc: ANDYS_STORAGE
Protection: S:RW,O:RW,G:RW,W Offsite Date: NONE
Purchase: 1-FEB-1999 08:22:16 Onsite Loc: BLD1_COMPUTER_ROOM
Creation: 1-FEB-1999 08:22:16 Space:
Init: 1-FEB-1999 08:22:16 Onsite Date: NONE
Allocation: NONE Brand:
Scratch: NONE Last Cleaned: 1-FEB-1999 08:22:16
Deallocation: NONE Times Cleaned: 0
Trans Time: 14 00:00:00 Rec Length: 0
Freed: NONE Block Factor: 0
Last Access: NONE
$ !
$ ! initialize all of the volumes
$ !
$ MDMS INITIALIZE VOLUME -
/JUKEBOX=TL826_JUKE -
/SLOTS=(0-175)
$ MDMS SHOW VOL BEB000
Volume: BEB000
Description: Volumes for ABS
Placement: ONSITE BLD1_COMPUTER_ROOM
Media Types: TK88K Username:
Pool: ABS Owner UIC: NONE
Error Count: 0 Account:
Mount Count: 0 Job Name:
State: FREE Magazine:
Avail State: FREE Jukebox: TL826_JUKE
Previous Vol: Slot: 0
Next Vol: Drive:
Format: NONE Offsite Loc: ANDYS_STORAGE
Protection: S:RW,O:RW,G:RW,W Offsite Date: NONE
Purchase: 1-FEB-1999 08:19:00 Onsite Loc: BLD1_COMPUTER_ROOM
Creation: 1-FEB-1999 08:19:00 Space:
Init: 1-FEB-1999 08:19:00 Onsite Date: NONE
Allocation: NONE Brand:
Scratch: NONE Last Cleaned: 1-FEB-1999 08:19:00
Deallocation: NONE Times Cleaned: 0
Trans Time: 14 00:00:00 Rec Length: 0
Freed: NONE Block Factor: 0
Last Access: NONE
This appendix discusses the main operational differences in the new version of MDMS from previous versions. In some cases, there are conceptual differences in approach, while others are more changes of the 'nuts and bolts' kind. This appendix is designed to acquaint you with the changes, including why some of them were made, in order to make the upgrade as smooth as possible. It will also enable you to use the new features to optimize your configuration and usage of the products.
The media manager used for previous versions of ABS and HSM was embedded within the SLS product. The MDMS portion of SLS was implemented in the same requester (SLS$TAPMGRRQ), database (SLS$TAPMGRDB) and OPCOM (SLS$OPCOM) processes used for SLS.
The STORAGE DCL interface contained both SLS and MDMS commands, as did the forms interface and the configuration file TAPESTART.COM. All media management status and error messages used the SLS prefix. All in all, it was quite difficult to determine where MDMS left off and SLS began. In addition, SLS contained many restrictions in its design that inhibited optimal use of ABS and HSM in a modern environment.
Compaq reviewed the SLS/MDMS design and the many requests for enhancements and decided to completely redesign the media manager for ABS and HSM. The result is MDMS V3 (V3.0 and later), which is included as the preferred media manager for both ABS and HSM V3.0 and later. The main functional differences between MDMS V3 and previous versions include:
The following sections will guide you through the changes one by one.
The previous SLS/MDMS contained several "interfaces" that you used to configure and run operations: the STORAGE DCL interface, the forms interface, the TAPESTART.COM configuration file, and various utilities.
While these interfaces together provided a fully functional product, their inconsistent syntax and coverage made them hard to use.
With MDMS V3, a radical new approach was taken. Two interfaces were chosen for implementation, each of which is fully functional:
The DCL interface was designed with a consistent syntax which is easier to remember. It is also functionally complete, so that all MDMS operations can be initiated without manipulating files or forms. This interface can be used by batch jobs and command procedures, as well as by users.
The GUI, based on Java technology, is provided for those users who prefer graphical interfaces. Like the DCL interface, it is functionally complete, and all operations can be initiated from it (with necessary exceptions).
In addition, it contains a number of wizards that can be used to guide you through complex operations such as configuration and volume rotation. The GUI is usable on both OpenVMS Alpha (V7.1 and later) systems and Windows-based PC systems.
There are also a limited number of logical names used for tailoring the functionality of the product and for initial startup (when the database is not available). The forms interface, TAPESTART and the utilities have been eliminated. When you install MDMS V3, you will be asked about converting TAPESTART and the old databases to the new format. This is discussed in the Appendix of the Guide to Operations.
Both the DCL interface and the GUI take a forgiving approach to creating, modifying and deleting objects, in that they allow you to perform the operation even if it creates an inconsistency in the database.
Both the DCL interface and the GUI require privileges to execute commands. These privileges apply to all commands, including defining objects and attributes that used to reside in TAPESTART.
With MDMS V3, privileges are obtained by defining MDMS rights in users' UAF definitions. There are three high-level rights, one each for an MDMS user, application and operator. There are also a large set of low-level rights, several for each command, that relate to high level rights by a mapping defined in the domain object.
In addition, there is a guru right which allows any command; the OpenVMS privilege SYSPRV can optionally be used instead of the guru right. This mechanism replaces the six SLS/MDMS V2 rights defined in TAPESTART and the OPER privilege.
A full description of rights can be found in the Appendix of the ABS/HSM Command Reference Guide.
There was no real concept of a domain with SLS/MDMS V2. The scope of operations within SLS varied according to what was being considered.
For example, attributes defined in TAPESTART were applicable to all nodes using that version of the file - normally from one node to a cluster. By contrast, volumes, magazines and pools had scope across clusters and were administered by a single database process running somewhere in the environment.
MDMS V3 formally defines a domain object, which contains default attribute values that can be applied to any object which does not have them specifically defined. MDMS formally supports a single domain, which supports a single database. All objects (jukeboxes, drives, volumes, nodes, magazines etc.) are defined within the domain.
This introduces some level of incompatibility with the previous version, especially regarding parameters stored in TAPESTART. Since TAPESTART could potentially be different on every node, default parameters like MAXSCRATCH could potentially have different values on each node (although there seemed to be no particularly good reason for this). MDMS V3 has taken the approach of defining default attribute values at the domain level, but also allowing you to override some of these at a specific object level (for example, OPCOM classes for nodes). In other cases, values such as LOC and VAULT defined in TAPESTART are now separate objects in their own right.
After installing MDMS V3, you will need to perform conversions on each TAPESTART that you have in your domain. If your TAPESTART files on every node were compatible (not necessarily identical, but not conflicting) this conversion will be automatic. However, if there were conflicts, these are flagged in a separate conversion log file, and need to be manually resolved. For example, if there are two drives called $1$MUA500 on different nodes, then one or both need to be renamed for use in the new MDMS.
It is possible to support multiple domains with MDMS V3, but when you do this you need to ensure that no objects span more than one domain.
Each domain contains its own database, which has no relationship to any database in another domain.
For example, your company may have two autonomous groups which have their own computer resources, labs and personnel. It is reasonable for each group to operate within their own domain, but realize that nodes, jukeboxes and volumes cannot be shared among the two groups. If there is a need to share certain resources (e.g. jukeboxes) it is also possible to utilize a single domain, and separate certain resources in other ways.
The drive object in MDMS is similar in concept to a drive in SLS/MDMS V2. However, the naming convention for drives in MDMS V3 is different.
In V2, drives were named after the OpenVMS device name, optionally qualified by a node.
In MDMS V3, drives are named like most other objects - they may be any name up to 31 characters in length, but they must be unique within the domain. This allows you to give drives names like DRIVE_1 rather than $1$MUA510 if you wish, and specify the OpenVMS device name with the DEVICE_NAME attribute. It is also equally valid to name the drive after the OpenVMS device name as long as it is unique within the domain.
Nodes for drives are specified by the NODES or GROUPS attributes. You should specify all nodes or groups that have direct access to the drive.
Do not specify a node or group name in the drive name or OpenVMS device name.
Consider two drives named $1$MUA500, one on cluster BOSTON and the other on cluster HUSTON, where you wish to use a single MDMS domain. Here is how you might set up the drives:
$ MDMS CREATE DRIVE BOS_MUA500/DEVICE=$1$MUA500/GROUP=BOSTON
$ MDMS CREATE DRIVE HUS_MUA500/DEVICE=$1$MUA500/GROUP=HUSTON
The new ACCESS attribute can limit use of the drive to local or remote access. Local access is defined as access by any of the nodes in the NODES attribute, or any of the nodes defined in the group object defined in the GROUP attributes. Remote access is any other node. By default, both local and remote access are allowed.
With MDMS V3, drives may be defined as jukebox controlled, stacker controlled, or stand-alone, as follows (a sketch follows the list):
A drive is jukebox controlled when it resides in a jukebox, and you wish random-access loads/unloads of any volume in the jukebox. Define a jukebox name, a control mechanism (MRD or DCSC), and a drive number for an MRD jukebox. The drive number is the number MRD uses to refer to the drive, and starts from zero.
A drive may be defined as a stacker when it resides in a jukebox and you wish sequential loading of volumes, or if the drive supports a stacker loading system. In this case, do not define a jukebox name, but set the STACKER attribute.
If the drive is stand-alone (loadable only by an operator), do not define a jukebox and clear the STACKER attribute.
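A sketch of the three styles, using hypothetical drive names and devices. The /DEVICE, /JUKEBOX and /DRIVE_NUMBER qualifiers appear in the examples in this appendix; the /STACKER qualifier spelling is an assumption based on the STACKER attribute:
$ ! Jukebox-controlled: resides in jukebox LIB_1, addressed by the robot as drive 0
$ MDMS CREATE DRIVE LIB1_D0/DEVICE=$1$MUA100/JUKEBOX=LIB_1/DRIVE_NUMBER=0
$ ! Stacker-controlled: no jukebox name, sequential loading
$ MDMS CREATE DRIVE STACK_1/DEVICE=$1$MUA200/STACKER
$ ! Stand-alone: no jukebox name, STACKER attribute clear
$ MDMS CREATE DRIVE LONE_1/DEVICE=$1$MUA300/NOSTACKER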
Set the AUTOMATIC_REPLY attribute if you wish OPCOM requests on the drive to be completed without an operator reply. This enables a polling scheme which will automatically cancel the request when the requested condition has been satisfied.
In previous SLS/MDMS versions, jukeboxes were differentiated as libraries, loaders and ACS devices, each with their own commands and functions. With MDMS V3, all automatic loading devices are brought together under the concept of a jukebox object.
Jukeboxes are named like other objects, with a unique name of up to 31 characters. Each jukebox may be controlled by one of two subsystems: MRD or DCSC.
The new ACCESS attribute can limit use of the jukebox to local or remote access. Local access is defined as access by any of the nodes in the NODES attribute, or any of the nodes defined in the group object defined in the GROUP attributes. Remote access is any other node. By default, both local and remote access is allowed.
For MRD jukeboxes, the robot name is the name of the device that MRD accesses for jukebox control, and is equivalent to the device name listed first in the old TAPE_JUKEBOXES definition in TAPESTART, but without the node name. As with drives, nodes for the jukebox must be specified using the NODES or GROUPS attributes.
Jukeboxes now have a LOCATION attribute, which is used in OPCOM messages related to moving volumes into and out of the jukebox. When moving volumes into a jukebox, you may first be prompted to move them to the jukebox location (if they are not already in that location). Likewise, when moving volumes out of the jukebox they will first be moved to the jukebox location. The reason for this is practical; it is more efficient to move all the volumes from wherever they were to the jukebox location, then move all the volumes to the final destination.
One of the more important aspects of jukeboxes is whether you will be using the jukebox with magazines. As described in the magazine section below, MDMS V3 treats magazines as a set of volumes within a physical magazine that share a common placement and move schedule. Unlike SLS/MDMS V2, it is not necessary to relate volumes to magazines just because they reside in a physical magazine, although you can. It is equally valid for volumes to be moved directly and individually in and out of jukeboxes, regardless of whether they reside in a magazine within the jukebox.
This is the preferred method when it is expected that the volumes will be moved independently in and out of the jukebox.
If you decide to formally use magazines, you should set the jukebox usage to magazine. In addition, if the jukebox can potentially hold multiple magazines at once (for example, a TL820-style jukebox), you can optionally define a topology field that represents the physical topology of the jukebox (i.e. towers, faces, levels and slots). If you define a topology field, OPCOM messages relating to moving magazines in and out of the jukebox will contain a magazine position in the jukebox, rather than a start slot for the magazine. Use of topology and position is optional, but makes it easier for operators to identify the appropriate magazine to move.
Importing and exporting volumes (or magazines) into and out of a jukebox has been replaced by a common MOVE command, that specifies a destination parameter. Depending on whether the destination is a jukebox, a location or a magazine, the direction of movement is determined. Unlike previous versions, you can move multiple volumes in a single command, and the OPCOM messages contain all the volumes to move that have a common source and destination location. If the jukebox supports ports or caps, all available ports and caps will be used. The move is flexible in that you can stuff volumes into the ports/caps in any order when importing, and all ports will be used on export. All port/cap oriented jukeboxes support automatic reply on OPCOM messages meaning that the messages do not have to be acknowledged for the move to complete.
The concept of locations has been greatly expanded from SLS/MDMS V2, where a copy of TAPESTART had a single "onsite" location defined in the LOC symbol, and a single "offsite" location defined in the "VAULT" symbol.
With MDMS V3, locations are now separate objects with the usual object name of up to 31 characters. Locations can be arranged in a hierarchy, allowing locations to be within other locations. For example, you can define BOSTON_CAMPUS as a location, with BUILDING_1, BUILDING_2 located in BOSTON_CAMPUS, and ROOM_100, ROOM_200 located within BUILDING_1. Locations that have common roots are regarded as compatible locations, which are used for allocating drives and volumes. For example, when allocating a volume currently located in ROOM_200 but specifying a location of BUILDING_1, these two locations are considered compatible. However, if BUILDING_2 was specified, they are not considered compatible since ROOM_200 is in BUILDING_1.
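A sketch of such a hierarchy, with hypothetical location names; the /LOCATION qualifier used here to nest one location within another is an assumption based on the "In Location" attribute shown in the SHOW LOCATION output earlier in this appendix:
$ MDMS CREATE LOCATION BOSTON_CAMPUS
$ MDMS CREATE LOCATION BUILDING_1/LOCATION=BOSTON_CAMPUS
$ MDMS CREATE LOCATION ROOM_100/LOCATION=BUILDING_1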
Locations are not officially designated as ONSITE or OFFSITE, as they could be both in some circumstances. However, each volume and magazine have offsite and onsite location attributes that should be set to valid location objects. This allows for any number of onsite or offsite locations to be defined across the domain.
You can optionally associate "spaces" with locations: spaces are subdivisions within a location in which volumes or magazines can be stored. The term "space" replaces the term "slot" in SLS/MDMS V2 as that term was overloaded. In MDMS V3, "slot" is reserved for a numeric slot number in a jukebox or magazine, whereas a space can consist of up to 8 alphanumeric characters.
In SLS/MDMS V2, media type, density, length and capacity were attributes of drives and volumes, defined both in TAPESTART and in volume records. With MDMS V3, media types are objects that contain the attributes of density, compaction, length, and capacity; drives and volumes reference media types only; the other attributes are defined within the media type object.
If you formerly had media types defined in TAPESTART with different attributes, you need to define multiple media types with MDMS V3. For example, consider the following TAPESTART definitions:
MTYPE_1 := TK85K
DENS_1 :=
DRIVES_1 := $1$MUA510:, $1$MUA520:
MTYPE_2 := TK85K
DENS_2 := COMP
DRIVES_2 := $1$MUA510:, $1$MUA520:
This definition contains two media type definitions, but with the same name. In MDMS V3, you need to define two distinct media types and allow both drives to support both media types. The equivalent commands in MDMS V3 would be:
$ MDMS CREATE MEDIA_TYPE TK85K_N /NOCOMPACTION
$ MDMS CREATE MEDIA_TYPE TK85K_C /COMPACTION
$ MDMS CREATE DRIVE $1$MUA510:/MEDIA_TYPES=(TK85K_N,TK85K_C)
$ MDMS CREATE DRIVE $1$MUA520:/MEDIA_TYPES=(TK85K_N,TK85K_C)
As discussed in the jukebox section, the concept of a magazine is defined as a set of volumes sharing common placement and move schedules, rather than simply being volumes loaded in a physical magazine. With the previous SLS/MDMS V2, all volumes physically located in magazines had to be bound to slots in the magazine, for both DLT-loader jukeboxes and TL820-style bin-packs (if moved as a whole).
When converting from SLS/MDMS V2 to MDMS V3, the automatic conversion utility will take existing magazine definitions and create magazines for MDMS V3. It is recommended that you continue to use magazines in this manner until you feel comfortable eliminating them. If you do eliminate them, you remove the dependency of moving all volumes in the magazine as a whole. For TL820 style jukeboxes, volumes will move via the ports.
For DLT-loader style jukeboxes, OPCOM requests will refer to individual volumes for movement. In this case, the operator should remove the magazine from the jukebox, remove or insert volumes into it and reload the magazine into the jukebox.
If you utilize magazines with TL820-style jukeboxes, movement of magazines into the jukebox can optionally be performed using jukebox positions (i.e. the magazine should be placed in tower n, face n, level n) instead of a start slot. For this to be supported, the jukebox should be specified with a topology as explained in the jukebox section. For single-magazine jukeboxes like the TZ887, the magazine can only be placed in one position (i.e. start slot 0).
Like individual volumes, magazines can be set up for automatic movement to/from an offsite location by specifying an offsite/onsite location and date for the magazine. All volumes in the magazine will be moved. An automatic procedure is executed daily at a time specified by logical name
MDMS$SCHEDULED_ACTIVITIES_START_HOUR, or at 01:00 by default. However, MDMS V3 also allows these movements to be initiated manually using a /SCHEDULE qualifier as follows:
$ MDMS MOVE MAGAZINE */SCHEDULE=OFFSITE ! Scheduled moves to offsite
$ MDMS MOVE MAGAZINE */SCHEDULE=ONSITE ! Scheduled moves to onsite
$ MDMS MOVE MAGAZINE */SCHEDULE ! All scheduled moves
A node is an OpenVMS computer system capable of running MDMS V3, and a node object must be created for each node running ABS or HSM in the domain. Each node object has a node name, which must be the same as the DECnet Phase IV name of the system (i.e. SYS$NODE) if the node runs DECnet, otherwise it can be any unique name up to 31 characters in length.
If you wish the node to support either or both DECnet-Plus (Phase V) or TCP/IP, then you need to define the appropriate fullnames for the node as attributes of the node. Do not specify the fullnames as the node name. For example, the following command specifies a node capable of supporting all three network protocols:
$ MDMS CREATE NODE BOSTON -
$_ /DECNET_FULLNAME=CAP:BOSTON.AYO.CAP.COM -
$_ /TCPIP_FULLNAME=BOSTON.AYO.CAP.COM
A node can be designated as supporting a database server or not. A node supporting a database server must have direct access to the database files in the domain (DFS/NFS access is not recommended). The first node you install MDMS V3 on should be designated as a database server.
Subsequent nodes may or may not be designated as database servers. Only one node at a time actually performs as the database server, but if that node fails or is shut down, another designated database server node will take over.
MDMS V3 introduces the group object as a convenient mechanism for describing a group of nodes that have something in common. In a typical environment, you may wish to designate a cluster alias as a group, with the constituent nodes defined as attributes. However, the group concept may be applied to other groups of nodes rather than just those in a cluster. You may define as many groups as you wish, and individual nodes may be defined in any number of groups. However, you may not specify groups within groups, only nodes.
You would typically define groups as a set of nodes that have direct access to drives and jukeboxes, then simply relate the group to the drive or jukebox using the GROUPS attribute. Other uses for groups may be for the definition of users. For example, user SMITH may be the same person for both the BOSTON and HUSTON clusters, so you might define a group containing constituent nodes from the BOSTON and HUSTON clusters. You might then utilize this group as part of an authorized user for a volume pool.
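A sketch with hypothetical node names; the /GROUP qualifier on the drive follows the BOS_MUA500 example earlier in this appendix, and the /NODES qualifier spelling is an assumption based on the group's node list:
$ ! Define the BOSTON cluster members as a group
$ MDMS CREATE GROUP BOSTON/NODES=(BOS001,BOS002,BOS003)
$ ! Relate the group to a drive rather than listing each node
$ MDMS CREATE DRIVE BOS_MUA500/DEVICE=$1$MUA500/GROUP=BOSTON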
Pools retain the same purpose for MDMS V3 as for SLS/MDMS V2. They are used to validate users for allocating free volumes. Pool authorization used to be defined through the old forms interface. With MDMS V3, pool authorization is through the pool object. A pool object needs to be created for each pool in the domain.
Pool objects have two main attributes: authorized users and default users. Both sets of users must be in the form NODE::USERNAME or GROUP::USERNAME, and a pool can support up to 1024 characters of authorized and default users. An authorized user is simply a user that is allowed to allocate free volumes from the pool. A default user also allows that user to allocate free volumes from the pool, but in addition it specifies that the pool is to be used when a user does not specify a pool on allocation. As such, each default user should be specified in only one pool, whereas users can be authorized for any number of pools.
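For example, the following command sketches the creation of a pool object (the pool name and users are illustrative, and the qualifier names are assumptions):
$ MDMS CREATE POOL PAYROLL_POOL -
$_ /AUTHORIZED_USERS=(BOSTON::SMITH,HUSTON::JONES) -
$_ /DEFAULT_USERS=(BOSTON::SMITH)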
The volume object is the most critical object for both MDMS V3 and SLS/MDMS V2. Nearly all of the attributes from V2 have been retained, although a few have been renamed. When converting from SLS/MDMS V2.X to MDMS V3, all volumes in the old volume database are created in the new MDMS V3 database. Support for the following attributes has been changed or is unsupported:
You can create volumes in the MDMS V3 database in one of three ways:
Once a volume is created and initial attributes are set, it is not normally necessary to use the SET VOLUME commands to change attributes. Rather, the attributes are automatically modified as a result of some action on the volume, such as ALLOCATE or LOAD. However, in some cases, the volume database and physical reality may get out of synchronization and in these cases you can use SET VOLUME to correct the database.
Note that several fields in the volume object are designated as "protected". These fields are used by MDMS to control the volume's operations within MDMS. You need a special privilege to change protected fields, and in the GUI you need to "Enable Protected" to make these fields writable. When changing a protected field you should ensure that its new value is consistent with other attributes. For example, if manually setting the volume's placement to jukebox, you should ensure that a jukebox name is defined.
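For example, if a volume was physically placed in a jukebox outside of MDMS control, the database might be corrected as follows (the volume and jukebox names are illustrative, and the qualifier names are assumptions):
$ MDMS SET VOLUME VOL001/PLACEMENT=JUKEBOX/JUKEBOX=JUKE_1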
Two key attributes in the volume object are "state" and "placement". The volume states are:
The placement attribute is new for MDMS V3, and describes a volume's current placement: in a drive, jukebox, magazine, or onsite or offsite location. The placement may also be "moving", meaning that the volume is changing placements but the change has not completed. No load, unload, or move commands may be issued to a volume that is moving. While a volume is moving, it is sometimes necessary for an operator to determine where it is moving: for example, from a jukebox to an onsite location and space. The operator can issue a SHOW VOLUME command for moving volumes that shows exactly where the volume is supposed to be moved.
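For example (VOL001 is an illustrative volume name):
$ MDMS SHOW VOLUME VOL001 ! Placement shows the destination of a moving volume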
The new MDMS V3 CREATE VOLUME command replaces the old "Add Volume" storage command. Note that most attributes are supported for both the create volume and set volume commands for consistency purposes.
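For example, the following command sketches the creation of a single volume (the volume name is illustrative, the qualifier names are assumptions, and TK89 is the media type used elsewhere in this guide):
$ MDMS CREATE VOLUME VOL001/MEDIA_TYPE=TK89/POOL=PAYROLL_POOL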
Volumes can be set up for automatic movement to/from an offsite location by specifying an offsite/onsite location and date, similar to SLS/MDMS V2. Similarly, volumes can be set up for automatic recycling using the scratch date (to go from the allocated to the transition state) and the free date (to go from the transition to the free state). An automatic procedure is executed daily at a time specified by logical name MDMS$SCHEDULED_ACTIVITIES_START_HOUR, or at 01:00 by default. However, MDMS V3 also allows these movements/state changes to be initiated manually using a /SCHEDULE qualifier as follows:
$ MDMS MOVE VOLUME */SCHEDULE=OFFSITE ! Scheduled moves to offsite
$ MDMS MOVE VOLUME */SCHEDULE=ONSITE ! Scheduled moves to onsite
$ MDMS MOVE VOLUME */SCHEDULE ! All scheduled moves
$ MDMS DEALLOCATE VOLUME /SCHEDULE ! All scheduled deallocations
MDMS V3 continues to support the ABS volume set objects (those objects whose volume IDs begin with "&+"). These are normally hidden objects, but they may be displayed in SHOW VOLUME and REPORT VOLUME commands with the ABS_VOLSET option.
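For example (the exact qualifier form is an assumption):
$ MDMS SHOW VOLUME */ABS_VOLSET ! Include the hidden ABS volume set objects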
In all other respects, the MDMS V3 volume object is equivalent to the SLS/MDMS V2 volume object.
In MDMS V3, support for remote devices is handled through the Remote Device Facility (RDF) in the same manner that was supported for SLS/MDMS V2. DECnet support on both the client and target nodes is required when using RDF.
This section describes how to convert the SLS/MDMS V2.X symbols and database to Media and Device Management Services Version 3 (MDMS). The conversion is automated as much as possible; however, you will need to make some corrections or add attributes to objects that were not present in SLS/MDMS V2.X.
Before doing the conversion, you should read Chapter 16 - MDMS Configuration in this Guide to Operations to become familiar with configuration requirements.
All phases of the conversion process should be done on the first database node on which you installed MDMS V3. During this process you will go through all phases of the conversion:
When you install on any other node that does not use the same TAPESTART.COM as the database node, you only do the conversion of TAPESTART.COM.
To execute the conversion command procedure, type in the following command:
$ @MDMS$SYSTEM:MDMS$CONVERT_V2_TO_V3
The command procedure will introduce itself and then ask which parts of SLS/MDMS V2.x you would like to convert.
During the conversion, the conversion program will allow you to start and stop the MDMS server. The MDMS server needs to be running when converting TAPESTART.COM and the database authorization file. The MDMS server should not be running during the conversion of the other database files.
During the conversion of TAPESTART.COM, the conversion program generates the following file:
$ MDMS$SYSTEM:MDMS$LOAD_DB_nodename.COM
This file contains the MDMS commands to create the objects in the database. You can choose whether or not to execute this command procedure during the conversion.
The conversion of the database files is done by reading the SLS/MDMS V2.x database files and creating objects in the MDMS V3 database files.
You must have the SLS/MDMS V2.x DB server shut down during the conversion process. Use the following command to shut down the SLS/MDMS V2.x DB server:
Because of the differences between SLS/MDMS V2.x and MDMS V3, there will be conflicts during the conversion. Instead of stopping the conversion program and asking you about each conflict, the conversion program generates the following file during each conversion:
$ MDMS$SYSTEM:MDMS$LOAD_DB_CONFLICTS_nodename.COM
where nodename is the name of the node on which you ran the conversion. This file is not meant to be executed; it is there for you to look at to see which commands executed and caused a change in the database. A change is flagged because there was already an object in the database or because the command changed an attribute of the object.
For example, you might have had two media types of the same name, where one specified compressed media and the other specified noncompressed media. This would cause a conflict, because MDMS V3 does not allow two media types with the same name but different attributes. In the conflict file you would see the command that tried to create the same media type. You will have to create a new media type.
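For example, the compressed variant might be given its own media type name (the name is illustrative, and the /COMPACTION qualifier is an assumption):
$ MDMS CREATE MEDIA_TYPE TK85K_COMP/COMPACTION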
See Symbols in TAPESTART.COM for the symbols in the TAPESTART.COM file and the conflicts they may cause.
At the completion of the conversion of the database files, you will see a message that notes items that were referenced in an object but not defined in the database. For example, the conversion program found a pool named in a volume record that was not defined as a pool object.
Because of the differences between SLS/MDMS V2.x and MDMS V3, you should go through the objects, check their attributes, and make sure that the objects have the attributes that you want. See Things to Look for After the Conversion for the attributes of objects that you may want to check after the conversion.
This section describes how older versions of SLS/MDMS can coexist with the new version of MDMS for the purpose of upgrading your MDMS domain. You may have versions of ABS, HSM, or SLS that use SLS/MDMS V2 and cannot be upgraded or replaced immediately. MDMS V3 provides limited support for older SLS/MDMS clients to make upgrading your MDMS domain to the new version as smooth as possible. This limited support allows a rolling upgrade of all SLS/MDMS V2 nodes to MDMS V3. Also, ABS and HSM Version 3.0 and later have been modified to support either SLS/MDMS V2 or MDMS V3 to make it easy to switch over to the new MDMS. The upgrade procedure is complete as soon as all nodes in your domain are running MDMS V3 exclusively.
The major difference between SLS/MDMS V2 and MDMS V3 is the way information about objects and configuration is stored. To support the old version, the new server can be set up to accept requests for the DECnet object SLS$DB, which served the database before. Any database request sent to SLS$DB will be executed and the data returned in a form compatible with old database client requests. This allows SLS/MDMS V2 database clients to continue to send their database requests to the new server without any change.
The SLS$DB function in the new MDMS serves and shares information for the following objects to a V2 database client:
The new MDMS server keeps all its information in a per-object database. The MDMS V3 installation process propagates definitions of the objects from the old database to the new V3 database. However, any changes made after the installation of V3 have to be carefully entered by the user in both the old and new databases. Operational problems are possible if the databases diverge. Therefore, it is recommended that you complete the upgrade process as quickly as possible.
Upgrading your SLS/MDMS V2 domain starts with the nodes that have been defined as database servers in the symbol DB_NODES in file TAPESTART.COM. Refer to the Installation Guide for details on how to perform the following steps.
If you had to change any of the logical name settings above, you have to restart the server using '@SYS$STARTUP:MDMS$STARTUP RESTART'. You can type the server's log file to verify that the DECnet listener for object SLS$DB has been successfully started.
This prevents a SLS/MDMS V2 server from starting the old database server process SLS$TAPMGRDB.
Use a "STORAGE VOLUME" command to test that you can access the new MDMS V3 database.
Note that no change is necessary for nodes running SLS/MDMS V2 as a database client. For any old SLS/MDMS client in your domain, you have to add its node object to the MDMS V3 database; in V3, all nodes of an MDMS domain have to be registered (see the MDMS CREATE NODE command). These clients can connect to a new MDMS database server as soon as the new server is up and running and the node has been added to the new database.
A node with either local tape drives or local jukeboxes that are accessed from new MDMS V3 servers needs to have MDMS V3 installed and running.
A node with either local tape drives or local jukeboxes that are accessed from old SLS/MDMS V2 servers needs to have SLS/MDMS V2 running.
If access is required from both old and new servers, then both versions need to be started on that node. In all cases, however, DB_NODES needs to be empty in every copy of TAPESTART.COM.
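For example, the DB_NODES definition in TAPESTART.COM would be left blank, as in the following sketch:
$ DB_NODES :=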
MDMS V3 allows you to convert the MDMS V3 volume database back to the SLS/MDMS V2 TAPEMAST.DAT file. Any changes you made under MDMS V3 to pool and magazine objects need to be entered manually into the V2 database. Any changes you made under MDMS V3 to drive, jukebox, or media type objects need to be updated in the file TAPESTART.COM.
The following steps need to be performed to revert back to a SLS/MDMS V2 only domain:
This section describes how to convert the MDMS V3 volume database back to a SLS/MDMS V2.X volume database.
If, for some reason, you need to convert back to SLS/MDMS V2.X, a conversion command procedure is provided. This conversion procedure does not convert anything other than the volume database. If you have added new objects, you will have to add these to TAPESTART.COM or to the following SLS/MDMS V2.X database files:
To execute the conversion command procedure, type in the following command:
$ @MDMS$SYSTEM:MDMS$CONVERT_V3_TO_V2
After introductory information, this command procedure will ask you questions to complete the conversion.
Using ABS to Backup Oracle Databases
This appendix describes how to use ABS to backup Oracle databases using the Oracle Server Manager. When doing a backup, you can do either a closed database backup or an open database backup. This appendix contains an example database and examples of the ABS environment policies, ABS storage policies, and ABS save requests used to back up the example database. The examples use a jukebox that has three tape drives. Be sure to read the Performing Operating System Backup and Recovery section in the Oracle Backup and Recovery Guide.
Before describing how to use ABS to backup the Oracle database, let us first look at the example database so we know the names of the files to back up.
The following shows the tablespaces and datafiles:
SQL> select t.name "Tablespace", f.name "Datafile"
from v$tablespace t, v$datafile f where T.ts# = f.ts#
order by t.name;
Tablespace Datafile
------------------------------------------------------------------
SYSTEM DISK$ALPHA:[ORACLE.DB_PAYROLL]ORA_SYSTEM.DBS
TBS1 DISK$ORACLE1:[PAYROLL_TBS1]PAYROLL_TBS1.DF
TBS2 DISK$ORACLE2:[PAYROLL_TBS2]PAYROLL_TBS2.DF
TBS3 DISK$ORACLE3:[PAYROLL_TBS3]PAYROLL_TBS3.DF
TBS4 DISK$ORACLE4:[PAYROLL_TBS4]PAYROLL_TBS4.DF
TBS5 DISK$ORACLE5:[PAYROLL_TBS5]PAYROLL_TBS5.DF
TBS6 DISK$ORACLE6:[PAYROLL_TBS6]PAYROLL_TBS6.DF
7 rows selected.
The following shows the online redo log files:
SQL> select member from v$logfile;
MEMBER
-------------------------------------------------
DISK$ALPHA:[ORACLE.DB_PAYROLL]ORA_LOG1.RDO
DISK$ALPHA:[ORACLE.DB_PAYROLL]ORA_LOG2.RDO
The following shows the control files:
SQL> select value from v$parameter where name = 'control_files';
VALUE
------------------------------
ora_control1, ora_control2
All of the files used to create the database are located in the following directory, which is also backed up:
When backing up a closed database, the database must be shut down. After shutting down the database, all of the database files can be backed up before starting up the database again. To accomplish this task, the following example creates an environment policy that has a prologue file that shuts down the database. The environment policy also has an epilogue file that restarts the database. This example has three save requests. Each save request will execute on a separate tape drive.
The following sections show the environment policy, storage policy, and save requests for the closed database backup.
The first thing to create is an environment policy, ORA_CLOSED_BACKUP_ENV. The environment has a prologue file and an epilogue file, which are run only once for each save request. If the prologue and epilogue files were instead specified on a save request, they would be run for each save object; that would not work here, so the prologue and epilogue files are put in the environment policy. These prologue and epilogue files have IF statements that execute different code for each save request.
The following shows the creation of the ORA_CLOSED_BACKUP_ENV environment:
$ ABS CREATE ENVIRONMENT ORA_CLOSED_BACKUP_ENV -
/PROLOGUE=@DISK$ALPHA:[ORACLE.COM]ORA_CLOSED_BACKUP_PROLOG -
/EPILOGUE=@DISK$ALPHA:[ORACLE.COM]ORA_CLOSED_BACKUP_EPILOG
$ ABS SHOW ENV ORA_CLOSED_BACKUP_ENV/FULL
Execution Environment
Name - ORA_CLOSED_BACKUP_ENV
Version - 1
UID - 883D0028-226A-11D4-AD9F-474F4749524C
Data Safety Options - CRC_VERIFICATION
Listing Option - NO_LISTING
Span Filesystem Options - NO FILESYSTEM SPAN
Symbolic Links Option - LINKS_ONLY
Compression Options - None
Original Object Action - NO_CHANGE
User Profile
Node - CURLEY
Cluster -
User - ABS
Privs - SETPRV,TMPMBX,OPER,NETMBX
Access Right - ORA_DBA
Owner - CURLEY::DBA
Access Right - CURLEY::DBA
Access Granted - READ, WRITE, SET, SHOW, DELETE, CONTROL
Notification Method - NO_NOTIFICATION
Notification Receiver - TAPES
Notification When - FATAL
Notification Type - BRIEF
Notification Method - MAIL
Notification List - DBA
Notification Reason - COMPLETE
Notification Type - BRIEF
Locking Options - None
Number of Drives - 1
Retry Count - 3
Retry Interval - 15
Prologue Command - @DISK$ALPHA:[ORACLE.COM]ORA_CLOSED_BACKUP_PROLOG
Epilogue Command - @DISK$ALPHA:[ORACLE.COM]ORA_CLOSED_BACKUP_EPILOG
The prologue and epilogue files have logic in them that executes only when the save request is ORA_CLOSED_BACKUP1. Because the environment is executed for each save request, the database should not be shut down or started up for each save request. The shutdown and startup of the database happen only when the save request is named ORA_CLOSED_BACKUP1. The logic also starts the other two save requests.
The following is the prologue file, ORA_CLOSED_BACKUP_PROLOG.COM:
$!
$! THIS COMMAND PROCEDURE SHUTS DOWN THE ORACLE DATABASE
$! SO THAT A CLOSED DATABASE BACKUP CAN BE COMPLETED
$!
$! IT THEN STARTS ALL OF THE BACKUPS OF THE DATABASE AND
$! THEN SYNCHRONIZES ON THEM TO WAIT UNTIL THEY ARE FINISHED
$!
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .NES. "ORA_CLOSED_BACKUP1")
$ THEN
$ EXIT
$ ENDIF
$ @DISK$ALPHA:[ORACLE.DB_PAYROLL]ORAUSER_PAYROLL J3944
$ SVRMGRL @SYS$INPUT
SET ECHO ON
CONNECT INTERNAL AS SYSDBA
SHUTDOWN IMMEDIATE
EXIT
$!
$! START THE BACKUP OF DATABASE
$ ABS SET SAVE/START ORA_CLOSED_BACKUP2
$ ABS SET SAVE/START ORA_CLOSED_BACKUP3
$ EXIT
The epilogue file, ORA_CLOSED_BACKUP_EPILOG.COM, also has logic defined to wait for the other two save requests to finish before the database is started. The following is the epilogue file, ORA_CLOSED_BACKUP_EPILOG.COM:
$!
$! THIS COMMAND PROCEDURE STARTS UP THE ORACLE DATABASE
$! AT THE COMPLETION OF THE ORACLE DATABASE BACKUP
$!
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .NES. "ORA_CLOSED_BACKUP1")
$ THEN
$ EXIT
$ ENDIF
$!
$! NOW SYNCHRONIZE TO MAKE SURE ALL BACKUPS ARE DONE BEFORE RESTARTING
$! THE DATABASE
$!
$ ABS SYNCHRONIZE ORA_CLOSED_BACKUP2
$ ABS SYNCHRONIZE ORA_CLOSED_BACKUP3
$!
$ @DISK$ALPHA:[ORACLE.DB_PAYROLL]ORAUSER_PAYROLL J3944
$ SVRMGRL
SET ECHO ON
CONNECT INTERNAL AS SYSDBA
STARTUP
EXIT
$ EXIT
The storage policy, ORA_CLOSED_BACKUP_SP, supports the use of three drives at one time. The following shows the creation of the storage policy:
$ ABS CREATE STORAGE_CLASS ORA_CLOSED_BACKUP_SP -
/MAXIMUM_SAVES=3 -
/TYPE_OF_MEDIA=TK89
$ ABS SHOW STORAGE_CLASS ORA_CLOSED_BACKUP_SP/FULL
Storage Class
Name - ORA_CLOSED_BACKUP_SP
Version - 1
UID - 889E98D3-226A-11D4-ADE2-474F4749524C
Execution Node Name - CURLEY
Archive File System
Primary Archive Location -
Staging Location -
Primary Archive Type - SLS/MDMS
Owner - CURLEY::DBA
Access Right - CURLEY::DBA
Access Granted - READ, WRITE, SET, SHOW, DELETE, CONTROL
Tape Pool - None
Volume Set Name -
Retention Period - 365
Consolidation Criteria
Count - 0
Size - 0
Interval - 7
Catalog Name - ABS_CATALOG
Maximum Saves - 3
Media Management Info
Media Location - None
Type of Media - TK89
Drive List - None
Since this example is using three tape drives, there are also three save requests to allow multiple-stream operation. All three save requests back up approximately the same amount of data to keep all three drives in use. The first save request, ORA_CLOSED_BACKUP1, is scheduled daily and backs up all of the following files:
The ORA_CLOSED_BACKUP1 save request is scheduled each day in this example to start at 23:00. When it starts, the environment policy runs its prologue which shuts down the database and starts the other two save requests. The other two save requests are scheduled on demand.
The other two save requests, ORA_CLOSED_BACKUP2 and ORA_CLOSED_BACKUP3, are scheduled on demand and back up two tablespaces each.
The following shows the creation of the three save requests and then the scheduling of ORA_CLOSED_BACKUP1 at 23:00.
$!
$! CREATE SAVE REQUESTS
$!
$ ABS SAVE /NOSTART DISK$ALPHA:[ORACLE.DB_PAYROLL]ORA_SYSTEM.DBS -
/NAME=ORA_CLOSED_BACKUP1 -
/ENVIRONMENT=ORA_CLOSED_BACKUP_ENV -
/SCHEDULE_OPTION=DAILY -
/STORAGE_CLASS=ORA_CLOSED_BACKUP_SP
$ ABS SET SAVE ORA_CLOSED_BACKUP1 -
DISK$ALPHA:[ORACLE.DB_PAYROLL]ORA_LOG*.RDO/ADD, -
DISK$ALPHA:[ORACLE.DB_PAYROLL]*.ARC/ADD, -
DISK$ALPHA:[ORACLE.DB_PAYROLL]*.ORA/ADD
$ ABS SET SAVE ORA_CLOSED_BACKUP1 -
DISK$ORACLE1:[PAYROLL_TBS1]PAYROLL_TBS1.DF/ADD, -
DISK$ORACLE2:[PAYROLL_TBS2]PAYROLL_TBS2.DF/ADD
$!
$ ABS SAVE /NOSTART DISK$ORACLE3:[PAYROLL_TBS3]PAYROLL_TBS3.DF -
/NAME=ORA_CLOSED_BACKUP2 -
/SCHEDULE_OPTION=ON_DEMAND -
/ENVIRONMENT=ORA_CLOSED_BACKUP_ENV -
/STORAGE_CLASS=ORA_CLOSED_BACKUP_SP
$ ABS SET SAVE ORA_CLOSED_BACKUP2 -
DISK$ORACLE4:[PAYROLL_TBS4]PAYROLL_TBS4.DF/ADD
$!
$ ABS SAVE /NOSTART DISK$ORACLE5:[PAYROLL_TBS5]PAYROLL_TBS5.DF -
/NAME=ORA_CLOSED_BACKUP3 -
/SCHEDULE_OPTION=ON_DEMAND -
/ENVIRONMENT=ORA_CLOSED_BACKUP_ENV -
/STORAGE_CLASS=ORA_CLOSED_BACKUP_SP
$ ABS SET SAVE ORA_CLOSED_BACKUP3 -
DISK$ORACLE6:[PAYROLL_TBS6]PAYROLL_TBS6.DF/ADD
$!
$! NOW TO START IT
$!
$ ABS SET SAVE ORA_CLOSED_BACKUP1/START="23:00"
Backing up an open database allows users to have normal access to all online tablespaces during the backup. During the backup of an online tablespace, the Oracle Server Manager must be notified at the beginning and at the end of the backup. To accomplish this task, the following example creates an environment policy that has a prologue file that notifies the Oracle Server Manager that a backup of a tablespace is about to begin. The environment policy also has an epilogue file that notifies the Oracle Server Manager that the backup of the tablespace has ended.
This example has the following save requests:
The first thing to create is an environment policy, ORA_OPEN_BACKUP_ENV. The environment has a prologue file and an epilogue file, which are run only once for each save request. These prologue and epilogue files have IF statements that execute different code for each save request.
The following shows the creation of the ORA_OPEN_BACKUP_ENV environment:
$!
$! CREATE ENVIRONMENT POLICY
$!
$ ABS CREATE ENVIRONMENT ORA_OPEN_BACKUP_ENV -
/PROLOGUE=@DISK$ALPHA:[ORACLE.COM]ORA_OPEN_BACKUP_PROLOG -
/EPILOGUE=@DISK$ALPHA:[ORACLE.COM]ORA_OPEN_BACKUP_EPILOG
$ ABS SHOW ENV ORA_OPEN_BACKUP_ENV/FULL
Execution Environment
Name - ORA_OPEN_BACKUP_ENV
Version - 1
UID - F253D0EC-2A42-11D4-942B-474F4749524C
Data Safety Options - CRC_VERIFICATION
Listing Option - NO_LISTING
Span Filesystem Options - NO FILESYSTEM SPAN
Symbolic Links Option - LINKS_ONLY
Compression Options - None
Original Object Action - NO_CHANGE
User Profile
Node - CURLEY
Cluster -
User - ABS
Privs - SETPRV,TMPMBX,OPER,NETMBX
Access Right - ORA_DBA
Owner - CURLEY::DBA
Access Right - CURLEY::DBA
Access Granted - READ, WRITE, SET, SHOW, DELETE, CONTROL
Notification Method - NO_NOTIFICATION
Notification Receiver - TAPES
Notification When - FATAL
Notification Type - BRIEF
Locking Options - None
Number of Drives - 1
Retry Count - 3
Retry Interval - 15
Prologue Command - @DISK$ALPHA:[ORACLE.COM]ORA_OPEN_BACKUP_PROLOG
Epilogue Command - @DISK$ALPHA:[ORACLE.COM]ORA_OPEN_BACKUP_EPILOG
The prologue and epilogue files have logic in them that executes different code for each save request. In the save requests that back up tablespaces, the prologue file runs SQL code that notifies the Oracle Server Manager that a backup of the tablespace is about to begin, and the epilogue file runs SQL code that notifies the Oracle Server Manager that the backup of the tablespace has ended. For the save request ORA_OPEN_BACKUP, the logic deletes the old copy of the control file and starts all of the other save requests except ORA_OPEN_BACKUP_RDO.
The following is the prologue file, ORA_OPEN_BACKUP_PROLOG.COM:
$!
$! THIS COMMAND PROCEDURE STARTS UP ALL OF THE OTHER SAVE REQUESTS
$! (EXCEPT ORA_OPEN_BACKUP_RDO) IF THIS SAVE REQUEST IS
$! ORA_OPEN_BACKUP, AND THEN PLACES TABLESPACES TBS1 AND TBS2
$! INTO BACKUP MODE
$!
$! FOR THE OTHER SAVE REQUESTS, IT PLACES THE CORRESPONDING
$! TABLESPACE INTO BACKUP MODE, OR ARCHIVES THE REDO LOG FILES
$! AND COPIES THE CONTROL FILE FOR ORA_OPEN_BACKUP_RDO
$!
$ @DISK$ALPHA:[ORACLE.DB_APITEST]ORAUSER_APITEST J3944
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP")
$ THEN
$ !
$ ! DELETE THE COPY OF THE CONTROL FILE, WE WILL MAKE ANOTHER COPY
$ ! AFTER WE GET THROUGH
$ !
$ IF(F$SEARCH("DISK$ALPHA:[ORACLE.DB_APITEST]COPY_OF_CONTROL_FILE.CON") .NES. "")
$ THEN
$ DELETE/NOCONFIRM DISK$ALPHA:[ORACLE.DB_APITEST]COPY_OF_CONTROL_FILE.CON;*
$ ENDIF
$ !
$ ! START OTHER SAVE REQUESTS
$ !
$ ABS SET SAVE/START ORA_OPEN_BACKUP_SYS
$ ABS SET SAVE/START ORA_OPEN_BACKUP_TBS3
$ ABS SET SAVE/START ORA_OPEN_BACKUP_TBS4
$ ABS SET SAVE/START ORA_OPEN_BACKUP_TBS5
$ ABS SET SAVE/START ORA_OPEN_BACKUP_TBS6
$ !
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_PROLOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS1 BEGIN BACKUP;
ALTER TABLESPACE TBS2 BEGIN BACKUP;
EXIT
$ EXIT
$ ENDIF
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_SYS")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_SYSTEM_PROLOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE SYSTEM BEGIN BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ !
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_TBS3")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_TBS3_PROLOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS3 BEGIN BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_TBS4")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_TBS4_PROLOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS4 BEGIN BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_TBS5")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_TBS5_PROLOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS5 BEGIN BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_TBS6")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_TBS6_PROLOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS6 BEGIN BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_RDO")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_RDO_PROLOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER SYSTEM ARCHIVE LOG CURRENT;
ALTER SYSTEM SWITCH LOGFILE;
ALTER DATABASE BACKUP CONTROLFILE TO
'DISK$ALPHA:[ORACLE.DB_APITEST]COPY_OF_CONTROL_FILE.CON';
EXIT
$ EXIT
$ ENDIF
$ !
$ EXIT
The epilogue file, ORA_OPEN_BACKUP_EPILOG.COM, also has logic defined to execute different code depending on the save request. All save requests that back up tablespaces execute code that notifies the Oracle Server Manager that the backup of the tablespace has ended. If the save request is ORA_OPEN_BACKUP, it waits on all of the jobs to complete and then starts the ORA_OPEN_BACKUP_RDO save request to archive the redo log files and make a copy of the control file to back up.
The following is the epilogue file, ORA_OPEN_BACKUP_EPILOG.COM:
$!
$! THIS COMMAND PROCEDURE ENDS THE TABLESPACE BACKUPS FOR
$! AN OPEN DATABASE BACKUP
$!
$! IF THIS SAVE REQUEST IS ORA_OPEN_BACKUP, IT ENDS THE BACKUP
$! OF TABLESPACES TBS1 AND TBS2, SYNCHRONIZES ON THE OTHER SAVE
$! REQUESTS TO WAIT UNTIL THEY ARE FINISHED, AND THEN STARTS
$! ORA_OPEN_BACKUP_RDO TO BACK UP THE ARCHIVED REDO LOG FILES
$! AND THE COPY OF THE CONTROL FILE
$!
$ @DISK$ALPHA:[ORACLE.DB_APITEST]ORAUSER_APITEST J3944
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_EPILOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS1 END BACKUP;
ALTER TABLESPACE TBS2 END BACKUP;
EXIT
$ !
$ ! NOW SYNCHRONIZE TO MAKE SURE ALL TABLESPACE BACKUPS ARE DONE
$ ! BEFORE BACKING UP THE ARCHIVED REDO LOG FILES AND THE COPY
$ ! OF THE CONTROL FILE
$ !
$ ABS SYNCHRONIZE ORA_OPEN_BACKUP_SYS
$ ABS SYNCHRONIZE ORA_OPEN_BACKUP_TBS3
$ ABS SYNCHRONIZE ORA_OPEN_BACKUP_TBS4
$ ABS SYNCHRONIZE ORA_OPEN_BACKUP_TBS5
$ ABS SYNCHRONIZE ORA_OPEN_BACKUP_TBS6
$ !
$ ! START THE SAVE REQUEST THAT ARCHIVES THE REDO LOG FILES AND
$ ! BACKS UP THE ARCHIVED LOG FILES AND THE CONTROL FILE COPY
$ !
$ ABS SET SAVE/START ORA_OPEN_BACKUP_RDO
$ EXIT
$ ENDIF
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_SYS")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_SYSTEM_EPILOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE SYSTEM END BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_TBS3")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_TBS3_EPILOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS3 END BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_TBS4")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_TBS4_EPILOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS4 END BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_TBS5")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_TBS5_EPILOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS5 END BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ IF ( F$TRNLNM("ABS_SAVE_REQUEST_NAME") .EQS. "ORA_OPEN_BACKUP_TBS6")
$ THEN
$ SVRMGRL @SYS$INPUT
SET ECHO ON
SPOOL DISK$ALPHA:[ORACLE.LOG]ORA_OPEN_BACKUP_TBS6_EPILOG.LOG
CONNECT INTERNAL AS SYSDBA
ALTER TABLESPACE TBS6 END BACKUP;
EXIT
$ EXIT
$ ENDIF
$ !
$ ! NO EPILOGUE ACTION IS REQUIRED FOR ORA_OPEN_BACKUP_RDO
$ !
$ EXIT
The storage policy, ORA_OPEN_BACKUP_SP, supports the use of three drives at one time. The following shows the creation of the storage policy:
$!
$! CREATE STORAGE POLICY
$!
$ ABS CREATE STORAGE_CLASS ORA_OPEN_BACKUP_SP -
/MAXIMUM_SAVES=3 -
/TYPE_OF_MEDIA=TK89
$ ABS SHOW STORAGE_CLASS ORA_OPEN_BACKUP_SP/FULL
Storage Class
Name - ORA_OPEN_BACKUP_SP
Version - 1
UID - F28D2E91-2A42-11D4-9453-474F4749524C
Execution Node Name - CURLEY
Archive File System
Primary Archive Location -
Staging Location -
Primary Archive Type - SLS/MDMS
Owner - CURLEY::DBA
Access Right - CURLEY::DBA
Access Granted - READ, WRITE, SET, SHOW, DELETE, CONTROL
Tape Pool - None
Volume Set Name -
Retention Period - 365
Consolidation Criteria
Count - 0
Size - 0
Interval - 7 00:00:00
Catalog Name - ABS_CATALOG
Maximum Saves - 3
Media Management Info
Media Location - None
Type of Media - TK89
Drive List - None
This section shows the commands necessary to create the ABS save requests for an open database backup. These save requests are implemented for a jukebox that has three tape drives. The controlling save request, ORA_OPEN_BACKUP, is the only save request that is scheduled; the rest have a scheduling policy of ON_DEMAND. The ORA_OPEN_BACKUP save request backs up tablespaces TBS1 and TBS2. This keeps one tape drive busy while the other two back up the rest of the database. After all tablespaces are backed up, the ORA_OPEN_BACKUP save request starts the ORA_OPEN_BACKUP_RDO save request to archive the redo log files and then back up the archived log files and the control file.
The following shows the creation of the save requests and the scheduling of ORA_OPEN_BACKUP at 23:00.
$!
$! CREATE SAVE REQUESTS
$!
$ ABS SAVE /NOSTART DISK$ALPHA:[ORACLE.DB_APITEST]ORA_SYSTEM.DBS -
/AGENT_QUALIFIER="/IGNORE=(INTERLOCK,NOBACKUP)" -
/NAME=ORA_OPEN_BACKUP_SYS -
/ENVIRONMENT=ORA_OPEN_BACKUP_ENV -
/SCHEDULE_OPTION=ON_DEMAND -
/STORAGE_CLASS=ORA_OPEN_BACKUP_SP
$!
$ ABS SAVE /NOSTART DISK$ORACLE3:[APITEST_TBS3]APITEST_TBS3.DF -
/AGENT_QUALIFIER="/IGNORE=(INTERLOCK,NOBACKUP)" -
/NAME=ORA_OPEN_BACKUP_TBS3 -
/SCHEDULE_OPTION=ON_DEMAND -
/ENVIRONMENT=ORA_OPEN_BACKUP_ENV -
/STORAGE_CLASS=ORA_OPEN_BACKUP_SP
$!
$ ABS SAVE /NOSTART DISK$ORACLE4:[APITEST_TBS4]APITEST_TBS4.DF -
/AGENT_QUALIFIER="/IGNORE=(INTERLOCK,NOBACKUP)" -
/NAME=ORA_OPEN_BACKUP_TBS4 -
/SCHEDULE_OPTION=ON_DEMAND -
/ENVIRONMENT=ORA_OPEN_BACKUP_ENV -
/STORAGE_CLASS=ORA_OPEN_BACKUP_SP
$!
$ ABS SAVE /NOSTART DISK$ORACLE5:[APITEST_TBS5]APITEST_TBS5.DF -
/AGENT_QUALIFIER="/IGNORE=(INTERLOCK,NOBACKUP)" -
/NAME=ORA_OPEN_BACKUP_TBS5 -
/SCHEDULE_OPTION=ON_DEMAND -
/ENVIRONMENT=ORA_OPEN_BACKUP_ENV -
/STORAGE_CLASS=ORA_OPEN_BACKUP_SP
$!
$ ABS SAVE /NOSTART DISK$ORACLE6:[APITEST_TBS6]APITEST_TBS6.DF -
/AGENT_QUALIFIER="/IGNORE=(INTERLOCK,NOBACKUP)" -
/NAME=ORA_OPEN_BACKUP_TBS6 -
/SCHEDULE_OPTION=ON_DEMAND -
/ENVIRONMENT=ORA_OPEN_BACKUP_ENV -
/STORAGE_CLASS=ORA_OPEN_BACKUP_SP
$!
$ ABS SAVE /NOSTART DISK$ORACLE1:[APITEST_TBS1]APITEST_TBS1.DF -
/NAME=ORA_OPEN_BACKUP -
/AGENT_QUALIFIER="/IGNORE=(INTERLOCK,NOBACKUP)" -
/SCHEDULE_OPTION=DAILY -
/ENVIRONMENT=ORA_OPEN_BACKUP_ENV -
/STORAGE_CLASS=ORA_OPEN_BACKUP_SP
$ ABS SET SAVE ORA_OPEN_BACKUP -
DISK$ORACLE2:[APITEST_TBS2]APITEST_TBS2.DF/ADD
$!
$ ABS SAVE /NOSTART DISK$ALPHA:[ORACLE.DB_APITEST]*.ARC -
/NAME=ORA_OPEN_BACKUP_RDO -
/SCHEDULE_OPTION=ON_DEMAND -
/ENVIRONMENT=ORA_OPEN_BACKUP_ENV -
/STORAGE_CLASS=ORA_OPEN_BACKUP_SP
$ ABS SET SAVE ORA_OPEN_BACKUP_RDO -
DISK$ALPHA:[ORACLE.DB_APITEST]COPY_OF_CONTROL_FILE.CON/ADD
$!
$! NOW TO START IT
$!
$ ABS SET SAVE ORA_OPEN_BACKUP/START="23:00"
This glossary contains terms defined for the Archive Backup System for OpenVMS (ABS). It also contains terms associated with the following products when related to ABS:
A data-entry format for specifying the date or time of day. The format for absolute time is [dd-mmm-yyyy[:]][hh:mm:ss.cc]. You can specify a date and time, or use the keywords TODAY, TOMORROW, or YESTERDAY.
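For example, 15-APR-2000:14:30:00.00 is an absolute time that specifies both a date and a time of day.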
The MDMS server process that is currently active. The active server process responds to requests issued from an MDMS client process.
To reserve something for private use. In MDMS software, a user is able to allocate volumes or drives.
The state of a drive or volume when a process is granted exclusive use of that drive or volume. The drive or volume remains allocated until the process gives up the allocation.
One of four volume states. Volumes that are reserved for exclusive use by a user (such as ABS) are placed in the allocated state. Allocated volumes are available only to the user name (such as ABS) assigned to that volume.
The abbreviation for the American National Standards Institute, an organization that publishes computer industry standards.
A magnetic tape that complies with the ANSI standards for label, data, and record formats. The format of VMS ANSI-labeled magnetic tape volumes is based on Level 3 of the ANSI standard for magnetic tape labels and file structure.
A repository of data that consists of
The abbreviation for the American Standard Code for Information Interchange.
This code is a set of 8-bit binary numbers representing the alphabet, punctuation, numerals, and other special symbols used in text representation and communications protocols.
To make duplicate copies of one or more files, usually onto different media than the original media. This provides the availability to restore the original data if it is lost or corrupted.
The client or utility that performs the actual save or restore operation. Examples are the VMS BACKUP Utility and the RMU Backup Utility.
The backup engine moves data to and from the storage policy. Examples of backup engines: VMS BACKUP, RMU BACKUP, and UBS.
Standard OpenVMS BACKUP format. The BACKUP format is the recording format used by the VMS Backup utility to back up data to save sets.
A node or OpenVMS Cluster system that has control over creating save requests. A backup management domain is usually controlled by a single storage administrator.
The act of logically binding volumes into a magazine. This makes the volumes a logical unit that cannot be separated unless an UNBIND operation is done on the volumes.
The number of records in a physical tape block. The length of a physical block written to magnetic tape is determined by multiplying the record length by the blocking factor. For example, if a record length of 132 and a blocking factor of 20 are specified, the length of each physical block written to tape will be 2640 bytes (or characters).
The blocking factor is only used when MDMS software is writing an EBCDIC tape.
Contains records of data movement operations. Each time a save request is initiated, the history of the data movement operation is recorded in an associated ABS catalog.
central security domain: The node or OpenVMS Cluster system where the ABS policy server is installed. This domain controls all ABS policy objects, particularly storage and environment policies.
Client nodes send database requests to the server node.
combination time: A data-entry format for specifying date and time. Combination time consists of an absolute time value plus or minus a delta time value.
An instruction, generally an English word, entered by the user at a terminal. The command requests the software to perform a pre- defined function.
The acronym for cyclic redundancy check. It is a verification process used to ensure data is correct.
The number of days (in VMS time format) between the creation of new volume sets.
A data object specification, such as an OpenVMS file name or an Rdb/VMS database file name.
A save or restore request initiated through either the DCL command interface or the ABS graphical user interface.
The day on which an allocated volume is scheduled to go into the transition state or the free state.
A value or operation automatically included in a command or field unless the user specifies differently.
The number of bits per inch (bpi) on magnetic tape. Typical values are 6250 bpi and 1600 bpi.
One of four volume states. Volumes that are either damaged, lost, or temporarily removed from the MDMS volume database for cleaning are placed in the down state.
Extended Binary Coded Decimal Interchange Code. EBCDIC is an unlabeled IBM recording format. Volumes in EBCDIC format do not have records in the MDMS volume database.
ABS policy object that defines the environment in which data ABS save and restore requests occur.
The date and time at which an archived data is no longer considered useful. The archived data can be deleted and its space removed.
The volume state that allows volumes to be selected by users or other software applications.
A shared physical or logical boundary between computing system components. Interfaces are used for sending and/or accepting information and control between programs, machines, and people.
The act of automatically updating the MDMS database. MDMS can mount each volume located in a magazine and update the MDMS volume database through this process.
A jukebox component that enables an operator to manually insert and retrieve volumes. The I/O station consists of an I/O station door on the outside of the jukebox and an I/O station slot on the inside. See also I/O station door and I/O station slot.
An actual door on the outside of the jukebox that can be opened and closed. Behind the I/O station door is the I/O station slot.
The process which makes a volume physically available to the computer system, such as for read or write operations.
Any file into which status and error messages are written to reflect the progress of a process.
The active server node to which all MDMS database requests are sent to be serviced. In a high-availability configuration, when the active server node fails, another node (see MDMS standby server process) in the OpenVMS Cluster system becomes the active server node.
The MDMS software is an OpenVMS software service that enables you to implement media and device management for your storage management operations. MDMS provides services to SLS, ABS, and HSM.
Any MDMS server process that is not currently active. The standby server process waits and becomes active if the active server process fails.
A physical container that holds from 5 to 11 volumes. The magazine contains a set of logically bound volumes that reside in the MDMS database.
The MDMS database that contains the magazine name and the volume names associated with that magazine.
A mass storage unit. Media is referred to in this document as a volume. Volumes provide a physical surface on which data is stored. Examples of physical volumes are magnetic tape, tape cartridge, and optical cartridge.
Storage in which file headers are accessible through the operating system, but accessing data requires extra intervention.
Nearline storage employs a robotic device to move volumes between drives and volume storage locations. Nearline storage is less costly for each megabyte of data stored. Access times for data in nearline storage may vary. Access to data may be nearly instantaneous when a volume containing the data is already loaded in a drive. The time required for a robotic device to move to the most distant storage location, retrieve a volume, load it into a drive, and position the volume determines the maximum access time.
The devices of nearline storage technology include, but are not limited to, automated tape libraries and optical jukeboxes.
Storage in which neither the file headers nor the data is accessible by the operating system and requires extra intervention.
Offline storage requires some type of intervention to move volumes between drives and the volumes' storage location. Offline storage is the least costly for each megabyte of data stored. Access times for data in offline storage vary for the same reasons as described for nearline storage. For archive data stored in a remote vault, access time can take more than a day.
The devices of offline storage technology include, but are not limited to, standalone tape drives, optical disk drives, and sequential stack loader tape drives.
Storage in which file headers and data can be accessed through the operating system. Online storage is the most costly for each megabyte of data stored.
As a trade off, online storage also offers the highest access performance. Online storage devices offer continuous service. The devices of online storage technology include disk storage and electronic (RAM) storage that uses disk I/O channels.
OpenVMS Operator Communication Manager. An online communication tool that provides a method for users or batch jobs to request assistance from the operator, and allows the operator to send messages to interactive users.
The level of privilege required by a system operator to suspend an MDMS operation and to perform a variety of maintenance procedures on volumes, as well as archive files and saved system files.
The decisions and methods in which you implement your ABS policy. This includes when and how often you back up or archive data from online to nearline or offline storage.
The component in ABS that makes intelligent decisions based upon the implementation of your ABS policy.
The method in which ABS enables you to implement your ABS policy. ABS provides the following policy objects:
ABS server component. Placement of this component determines the central security domain (CSD).
A set of volumes in the free state. Those volumes can be allocated by users who have access to the volume pool. The storage administrator creates and authorizes user access to pools.
A set of related data treated as a unit of information. For example, each volume that is added to the MDMS volume database has a record created that contains information about the volume.
The unique arrangement of data on a volume according to a predetermined standard. Examples of recording format are BACKUP, EBCDIC, and ANSI.
The method by which the contents of a file or disk are recovered from a volume or volumes that contain the saved data. ABS software restores data by querying the ABS catalog for the file or disk name specified in the restore request; it then locates the BACKUP save sets on one or more volumes, extracts the data from those save sets, and places the information onto a Files-11 structured disk where the restored data can be accessed by a user.
A request to restore data from the archives to either its original location or an alternate location. Restore requests are initiated either through the DCL command interface or ABS graphical user interface.
The requester profile is the profile of the user who is creating the save or restore request. This profile is captured at the time the request is created.
A tape or optical drive that provides automatic loading of volumes, such as a TF867 or a TL820.
The method by which copies of files are made on magnetic or optical volumes for later recovery or for transfer to another site.
For BACKUP formatted volumes, an ABS save operation creates BACKUP save sets on magnetic tape volume, a system disk, or optical volume.
A file created by the VMS Backup Utility on a volume. When the VMS Backup Utility saves data, it creates a file in BACKUP format called a save set on the specified output volume. A single BACKUP save set can contain numerous files. Only BACKUP can interpret save sets and restore the data stored in the save set.
A vertical storage space for storing a volume. The storage racks and cabinets used in data centers contain multi-row slots that are labeled to easily locate stored volumes.
One or more privileged users responsible for installing, configuring, and maintaining ABS software. This user has enhanced ABS authorization rights and privileges and controls the central security domain (CSD) by creating and maintaining ABS storage and environment policies.
The level of privilege required to install the software and add user names to the system.
An ABS system backup typically saves the system disk, which is also known as a full disk backup. The system backup can direct ABS software to perform automatic save operations on a predetermined schedule.
Volumes in the transition state are in the process of being deallocated, but are not yet fully deallocated. The transition state provides a grace period during which a volume can be reallocated to the original owner if necessary.
User identification code. The pair of numbers assigned to users, files, pools, global sections, common event flag clusters, and mailboxes. The UIC determines the owner of a file or ABS policy object. UIC-based protection determines the type of access available to the object for its owner, members of the same UIC group, system accounts, and other (world) users.
A save request created by an individual user (not the system) when they would like to make copies of a file or set of files for later recovery or for transfer to another site.
The set of information about a user that defines the user's right to access data or the user's right to access an ABS policy object. For ABS on OpenVMS, this includes the following information:
An OpenVMS Operating System utility that performs save and restore operations on files, directories, and disks using the BACKUP recording format.
A physical piece of media (volume) that is known logically to the MDMS volume database. A volume can be a single magnetic tape or disk, or as in the case of an optical cartridge, can refer to one side of double-sided media. A volume is assigned a logical name, known as the volume label.
The volume's internal identification used to verify that the correct volume has been selected. The volume label should be the same as the volume ID.
One or more volumes logically connected in a sequence to form a single volume set. A volume set can contain one or more save sets. ABS adds volumes to a volume set until the storage policy's consolidation criteria have been met or exceeded.
A volume status flag. In MDMS software, volumes are placed in one of the following states:
A nonnumeric or nonalphanumeric character such as an asterisk (*) or percent sign (%) that is used in a file specification to indicate "ALL" for a given field or portion of a field. Wildcard characters can replace all or part of the file name, file type, directory name, or version number.