Hierarchical Storage
Management for OpenVMS
This manual contains information and guidelines for the operation of HSM and Media, Device and Management Services (MDMS).
© Hewlett-Packard Development Company, L.P. 2003.
Confidential computer software. Valid license from hp and/or its subsidiaries required for possession, use, or copying.
Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's commercial license.
Neither hp nor any of its subsidiaries shall be liable for technical or editorial errors or omissions contained herein. The information in this document is provided "as is" without warranty of any kind and is subject to change without notice. The warranties for hp products are set forth in the express limited warranty statements accompanying such products. Nothing herein should be construed as constituting an additional warranty.
1.1 Storage Management in the OpenVMS Environment 1-1
1.1.2 Device Capacity, Cost, and Performance 1-1
1.1.3 Storage Management Planning 1-2
1.2 Storage Management with HSM 1-3
1.2.1 File Headers and Location 1-3
1.2.2 Controlling File Movement 1-3
1.3 HSM Storage Management Concepts 1-4
1.4.1 Starting the Shelving Process 1-4
1.4.2 File Selection for Explicit Shelving 1-5
1.4.3 File Selection for Implicit Shelving 1-5
1.4.4 Modifying File Attributes of a Shelved File 1-6
1.5 The Unshelving Process 1-6
1.5.1 Starting the Unshelving Process 1-6
1.5.2 Process Default Unshelving Action 1-6
1.5.3 The Results of Unshelving a File 1-7
1.5.4 Handling Duplicate Requests to Unshelve a File 1-7
1.6 The Preshelving Process 1-7
1.7 The Unpreshelving Process 1-7
1.8 File Headers and Access Security 1-8
1.9 HSM File State Diagram 1-8
1.10.1 HSM Operations with Cache 1-9
1.10.2 Cache in the Shelving and Preshelving Processes 1-9
1.10.3 Unshelving from Cache 1-9
1.10.4 Exceeding Cache Capacity 1-10
1.12 HSM Archive Repacking 1-10
1.13.1 HSM Basic Mode Functions 1-11
1.13.2 HSM Plus Mode Functions 1-11
1.14 Media Types for HSM Basic Mode 1-12
2.3.1 Using Multiple Shelf Copies 2-5
2.3.2 Defining Shelf Copies 2-6
2.3.2.1 Archive Lists and Restore Archive Lists 2-6
2.3.2.2 Primary and Secondary Archive Classes 2-7
2.3.2.3 Multiple Shelf Copies 2-7
2.3.6 Number of Updates for Retention 2-8
2.4 HSM Basic Mode Archive Class 2-9
2.5 HSM Plus Mode Archive Class 2-10
2.6.1 Sharing and Dedicating Devices 2-10
2.6.3 Devices and Archive Classes 2-12
2.6.4 Magazine Loaders for HSM Basic Mode 2-13
2.6.5 Compatible Media for HSM Basic Mode 2-14
2.6.6 Automated Loaders and HSM Plus Mode 2-15
2.7.2 Shelving Operations 2-16
2.7.5 Files Excluded from Shelving 2-16
2.8.1 Advantages and Disadvantages of Using a Cache 2-17
2.8.3.1 Timing of Shelf Copies 2-18
2.8.3.4 Cache Flush Interval 2-18
2.8.3.5 Cache Flush Delay 2-18
2.8.3.6 Delete and Modify File Action 2-19
2.8.4 Optimizing Cache Usage 2-19
2.8.5 Using Magneto-Optical Devices 2-19
2.9.2.1 Scheduled Trigger 2-21
2.9.2.2 User Disk Quota Exceeded Trigger 2-21
2.9.2.3 High Water Mark Trigger 2-22
2.9.2.4 Volume Full Trigger 2-22
2.9.3 File Selection Criteria 2-22
2.9.5 Make Space Requests and Policy 2-24
2.10.2 Execution Timing and Interval 2-25
3.1 Configuring a Customized HSM Environment 3-1
3.1.1 Customizing the HSM Facility 3-1
3.1.2 Creating Shelf Definitions 3-1
3.1.3 Enabling and Disabling a Shelf Definition 3-2
3.1.4 Modifying Archive Classes 3-2
3.1.5 Creating Device Definitions 3-3
3.1.6 Modifying Device Definitions 3-3
3.1.7 Enabling and Disabling a Volume Definition 3-3
3.1.9 Enabling and Disabling a Policy Definition 3-4
3.1.10 Scheduling Policy Executions 3-4
3.2 Implementing Shelving Policies 3-5
3.2.1 Determining the Disk Volumes 3-5
3.2.2 Creating Volume Definitions 3-5
3.2.3 Determining File Selection Criteria 3-6
3.2.4 Creating Policy Definitions 3-6
3.2.5 Using Expiration Dates 3-7
4.1 What the User Sees in an HSM Environment 4-1
4.1.1 Identifying Shelved Data Using the DIRECTORY Command 4-1
4.1.1.2 DIRECTORY/FULL for Unpopulated Index Files 4-2
4.1.1.3 DIRECTORY/FULL for Populated Indexed Files 4-3
4.1.1.4 DIRECTORY/SHELVED_STATE 4-4
4.1.3 Decreasing Volume Full and Disk Quota Exceeded Errors 4-5
4.2 Controlling Shelving and Unshelving 4-6
4.2.1 Automatic Shelving Operations 4-6
4.2.2 User-Controlled Shelving Operations 4-6
4.4 Working with Remote Files 4-8
4.5 Resolving Duplicate Operations on the Same File 4-8
5.4 Restoring Files to a Different Disk 5-4
5.5 Protecting System Files from Shelving 5-4
5.5.1 Critical HSM Product Files 5-4
5.5.2 OpenVMS System Files and System Disks 5-5
5.6 DFS, NFS and PATHWORKS Access Support 5-6
5.6.4 Logical Names for NFS and PATHWORKS Access 5-8
5.7 Ensuring Data Safety with HSM 5-8
5.7.1 Access Control Lists for Shelved Files 5-8
5.7.2 Handling Contiguous and Placed Files 5-9
5.8 Using Backup Strategies with HSM 5-9
5.8.1 Backing up Critical HSM Files 5-9
5.8.1.1 Defining a Backup Strategy 5-9
5.8.1.2 Using OpenVMS BACKUP to Save the Files 5-9
5.8.1.3 Maintaining a Manual Copy of the Files 5-10
5.8.2 Backing Up Shelved Data 5-10
5.8.2.1 Considerations for OpenVMS BACKUP and Shelving 5-10
5.8.2.2 Using Multiple HSM Archive Classes for Backup 5-10
5.8.2.3 Storing HSM Archive Classes Offsite 5-11
5.8.3 Backing Up Data Stored in an Online Cache 5-11
5.8.3.1 Flushing the Cache 5-11
5.9 Finding Lost User Data 5-11
5.10.1 Recovering Data Shelved Through HSM 5-12
5.10.2 Recovering Critical HSM Files 5-12
5.10.3 Recovering Boot-Up Files 5-13
5.10.4 Reshelving an Archive Class 5-13
5.11 Maintaining Shelving Policies 5-13
5.11.1 The HSM Policy Model 5-14
5.11.1.1 Concepts of HSM Policy 5-14
5.11.1.2 Policy Governs the Shelving Process 5-15
5.11.1.3 The Balance to Achieve When Implementing Policy 5-16
5.11.2 HSM Policy Situations and Resolutions 5-16
5.11.2.1 Situation: Volume Occupancy Full Event 5-16
5.11.2.2 Situation: Shelving Goal Not Reached 5-17
5.11.2.3 Situation: Frequent Reactive Shelving Requests 5-18
5.11.2.4 Situation: Application and User Performance Impeded 5-19
5.11.3 Ranking Policy Execution 5-20
5.12 Managing HSM Catalogs 5-21
5.13 Repacking Archive Classes 5-23
5.13.1 Repack Performance 5-25
5.14 Replacing and Creating Archive Classes 5-25
5.15 Replacing a Lost or Damaged Shelf Volume 5-25
5.16 Catalog Analysis and Repair 5-26
5.17 Consolidated Backup with HSM 5-28
5.17.7 Other Recommendations 5-32
5.18 Determining Cache Usage 5-32
5.19 Maintaining File Headers 5-32
5.19.1 Determining File Header Limit 5-32
5.19.2 Specifying a Volume's File Headers 5-33
5.19.3 Extending the Index File 5-33
5.19.4 Maintaining the Number of File Headers 5-33
5.20.1 Accessing the Logs 5-34
5.20.2 Shelf Handler Log Entries 5-34
5.22 Converting from Basic Mode to Plus Mode 5-37
5.22.1 Shutting Down the Shelf Handler 5-38
5.22.2 Disabling the Shelving Facility 5-38
5.22.3 Entering Information for MDMS 5-38
5.22.4 Changing from Basic Mode to Plus Mode 5-38
5.22.5 Restarting the Shelf Handler 5-38
5.22.6 Using the Same Archive Classes 5-39
6.1 Enabling the Operator Interface 6-1
6.2 Loading and Unloading Single Tapes for HSM Basic Mode 6-1
6.2.1 Load Volume, No Reply Needed 6-2
6.2.4 Volume Initialization Confirmation 6-2
6.2.5 Unload Label Request 6-3
6.3 Responding to BACKUP Requests for HSM Basic Mode 6-3
6.4 Working with Magazine Loaders for HSM Basic Mode 6-3
6.5 Working with Automated Loaders for HSM Plus Mode 6-4
6.5.1 Providing the Correct Magazine 6-4
6.5.2 Providing the Correct Volume for a TL820 6-5
6.7 Drive Selection and Reservation Messages for Both Modes 6-6
7.1 Introduction to Troubleshooting 7-1
7.2.2 Checking the Event Logs 7-3
7.2.7 SMU SET and SHOW Commands 7-4
7.2.8 MDMS Tools for HSM Plus Mode 7-5
7.4.2 The Shelf Handler Does Not Start Up 7-6
7.4.3 Policy Execution Process Does Not Start Up 7-8
7.4.4 HSM Does Not Shut Down 7-8
7.4.5 Shelving and SMU Commands Do Not Work 7-9
7.6 Shelving on System Disks 7-10
7.6.1 HSM Plus Mode (MDMS) Problems 7-11
7.7 HSM VMScluster Problems 7-12
7.10 Magneto-Optical Device Problems 7-15
7.11 Offline Device Problems 7-16
7.12 Magazine and Robotic Loader Problems 7-17
7.16 HSM System File Problems 7-22
7.17.1 OpenVMS Limit on File Headers 7-23
7.17.2 Attempting to Cancel Execution of a Shelved File 7-24
9.3 User Interface Restrictions 9-4
9.4 Graphical User Interface 9-4
9.4.6 Showing and Modifying Objects 9-9
9.4.8 Viewing Relationships Between Objects 9-10
9.4.9 Performing Operations on Objects 9-11
9.4.10 Showing Current Operations 9-12
9.4.11 Reporting on Volumes 9-13
9.4.12 Viewing MDMS Audit and Event Logging 9-15
9.5 Access Rights for MDMS Operations 9-16
9.5.1 Description of MDMS Rights 9-17
9.5.1.2 High Level Rights 9-17
9.5.2 Granting MDMS Rights 9-17
9.6 Creating, Modifying, and Deleting Object Records 9-19
9.6.1 Creating Object Records 9-19
9.6.1.2 Differences Between the CLI and GUI for Naming Object Records 9-20
9.6.2 Inheritance on Creation 9-20
9.6.3 Referring to Non-Existent Objects 9-20
9.6.4 Rights for Creating Objects 9-21
9.6.5 Modifying Object Records 9-21
9.6.6 Protected Attributes 9-21
9.6.7 Rights for Modifying Objects 9-21
9.6.8 Deleting Object Records 9-21
9.6.9 Reviewing Managed Objects for References to Deleted Objects 9-22
9.6.10 Reviewing DCL Command Procedures for References to Deleted Objects 9-22
10.1 MDMS Domain Configuration 10-1
10.2.2 Application Rights 10-2
10.2.7 Maximum Scratch Time 10-3
10.3.10 Read-Only Media Types 10-6
10.3.14 Allocate Drive (DCL Only) 10-7
10.3.15 Deallocate Drive (DCL Only) 10-8
10.5.18 Inventory Jukebox 10-12
10.7.1 Jukebox, Start Slot and Position 10-14
10.7.2 Onsite and Offsite Locations and Dates 10-15
10.9.4 Transports and Full Names 10-17
10.10.1 Authorized Users 10-18
10.11.1 Allocation Fields - Account, Username, UIC and Job 10-20
10.11.2 Allocation and Movement Dates 10-20
10.11.7 Previous and Next Volumes 10-22
10.11.8 Placement - Jukebox, Magazine, Locations, Drive 10-23
10.11.9 Formats - Brand, Format, Block Factor, Record Size 10-23
10.11.12 Allocate Volume 10-24
10.11.13 Allocate Volume(s) by Selection Criteria 10-25
11.1 The MDMS Management Domain 11-1
11.1.1.1 Database Performance 11-3
11.1.1.3 Moving the MDMS Database 11-5
11.1.2.1 Server Availability 11-6
11.1.2.2 The MDMS Account 11-6
11.1.3 The MDMS Start Up File 11-7
11.1.3.1 MDMS$DATABASE_SERVERS - Identifies Domain Database Servers 11-7
11.1.3.2 MDMS$LOGFILE_LOCATION 11-8
11.1.3.3 MDMS Shut Down and Start Up 11-8
11.1.4 Managing an MDMS Node 11-9
11.1.4.1 Defining a Node's Network Connection 11-9
11.1.4.2 Defining How the Node Functions in the Domain 11-9
11.1.4.3 Enabling Interprocess Communication 11-10
11.1.4.4 Describing the Node 11-10
11.1.4.5 Communicating with Operators 11-10
11.1.5 Managing Groups of MDMS Nodes 11-10
11.1.6 Managing the MDMS Domain 11-11
11.1.6.1 Domain Configuration Parameters 11-12
11.1.6.2 Domain Options for Controlling Rights to Use MDMS 11-12
11.1.6.3 Domain Default Volume Management Parameters 11-12
11.1.7 MDMS Domain Configuration Issues 11-13
11.1.7.1 Adding a Node to an Existing Configuration 11-13
11.1.7.2 Removing a Node from an Existing Configuration 11-14
11.2 Configuring MDMS Drives, Jukeboxes and Locations 11-14
11.2.1 Configuring MDMS Drives 11-14
11.2.1.1 How to Describe an MDMS Drive 11-14
11.2.1.2 How to Control Access to an MDMS Drive 11-15
11.2.1.3 How to Configure an MDMS Drive for Operations 11-15
11.2.1.4 Determining Drive State 11-15
11.2.1.5 Adding and Removing Managed Drives 11-15
11.2.1.6 Configuring MDMS Jukeboxes 11-16
11.2.1.7 How to Describe an MDMS Jukebox 11-16
11.2.1.8 How to Control Access to an MDMS Jukebox 11-16
11.2.1.9 How to Configure an MDMS Jukebox for Operations 11-16
11.2.1.10 Attribute for DCSC Jukeboxes 11-16
11.2.1.11 Attributes for MRD Jukeboxes 11-16
11.2.1.12 Determining Jukebox State 11-17
11.2.1.13 Magazines and Jukebox Topology 11-17
11.2.2 Summary of Drive and Jukebox Issues 11-18
11.2.2.1 Enabling MDMS to Automatically Respond to Drive and Jukebox Requests 11-18
11.2.2.2 Creating a Remote Drive and Jukebox Connection 11-19
11.2.2.3 How to Add a Drive to a Managed Jukebox 11-19
11.2.2.4 Temporarily Taking a Managed Device From Service 11-19
11.2.2.5 Changing the Names of Managed Devices 11-19
12.1.2 Volume States by Manual and Automatic Operations 12-2
12.1.2.1 Creating Volume Object Records 12-3
12.1.2.2 Initializing a Volume 12-3
12.1.2.3 Allocating a Volume 12-3
12.1.2.4 Holding a Volume 12-4
12.1.2.5 Freeing a Volume 12-4
12.1.2.6 Making a Volume Unavailable 12-4
12.1.3 Matching Volumes with Drives 12-4
12.1.4 Magazines for Volumes 12-5
12.1.5 Symbols for Volume Attributes 12-5
12.2.1 Setting Up Operator Communication 12-6
12.2.1.1 Set OPCOM Classes by Node 12-6
12.2.1.2 Identify Operator Terminals 12-6
12.2.1.3 Enable Terminals for Communication 12-6
12.2.2 Activities Requiring Operator Support 12-7
12.3 Serving Clients of Managed Media 12-8
12.3.1 Maintaining a Supply of Volumes 12-8
12.3.1.1 Preparing Managed Volumes 12-8
12.3.2 Servicing a Stand Alone Drive 12-9
12.3.3 Servicing Jukeboxes 12-9
12.3.3.1 Inventory Operations 12-10
12.3.4 Managing Volume Pools 12-11
12.3.4.1 Volume Pool Authorization 12-12
12.3.4.2 Adding Volumes to a Volume Pool 12-12
12.3.4.3 Removing Volumes from a Volume Pool 12-12
12.3.4.4 Changing User Access to a Volume Pool 12-12
12.3.4.5 Deleting Volume Pools 12-12
12.3.5 Taking Volumes Out of Service 12-13
12.3.5.1 Temporary Volume Removal 12-13
12.3.5.2 Permanent Volume Removal 12-13
12.4 Rotating Volumes from Site to Site 12-13
12.4.1 Required Preparations for Volume Rotation 12-13
12.4.2 Sequence of Volume Rotation Events 12-13
12.5 Scheduled Activities 12-15
12.5.1 Logical Names Controlling Scheduled Activities 12-15
12.5.2 Job Names of Scheduled Activities 12-15
13.1 Creating Jukeboxes, Drives, and Volumes 13-1
13.2 Deleting Jukeboxes, Drives, and Volumes 13-4
14.1 The RDF Installation 14-1
14.3.1 Starting Up and Shutting Down RDF Software 14-2
14.3.2 The RDSHOW Procedure 14-2
14.3.4 Showing Your Allocated Remote Devices 14-2
14.3.5 Showing Available Remote Devices on the Server Node 14-3
14.3.6 Showing All Remote Devices Allocated on the RDF Client Node 14-3
14.4 Monitoring and Tuning Network Performance 14-3
14.4.2 DECnet-Plus (Phase V) 14-4
14.4.3 Changing Network Parameters 14-4
14.4.4 Changing Network Parameters for DECnet (Phase IV) 14-5
14.4.5 Changing Network Parameters for DECnet-Plus (Phase V) 14-6
14.4.6 Resource Considerations 14-6
14.4.7 Controlling RDF's Effect on the Network 14-8
14.4.8 Surviving Network Failures 14-8
14.5 Controlling Access to RDF Resources 14-9
14.5.1 Allow Specific RDF Clients Access to All Remote Devices 14-9
14.5.2 Allow Specific RDF Clients Access to a Specific Remote Device 14-9
14.5.3 Deny Specific RDF Clients Access to All Remote Devices 14-10
14.5.4 Deny Specific RDF Clients Access to a Specific Remote Device 14-10
B.1.1 Configuration Step 1 Example - Defining Locations B-2
B.1.2 Configuration Step 2 Example - Defining Media Type B-2
B.1.3 Configuration Step 3 Example - Defining Domain Attributes B-2
B.1.4 Configuration Step 4 Example - Defining MDMS Database Nodes B-4
B.1.5 Configuration Step 5 Example - Defining a Client Node B-5
B.1.6 Configuration Step 6 Example - Creating a Jukebox B-5
B.1.7 Configuration Step 7 Example - Defining a Drive B-6
B.1.8 Configuration Step 8 Example - Defining Pools B-7
B.1.9 Configuration Step 9 Example - Defining Volumes using the /VISION qualifier B-7
D.1 Converting SLS/MDMS V2.X Symbols and Database D-1
D.1.1 Executing the Conversion Command Procedure D-1
D.1.2 Resolving Conflicts During the Conversion D-2
D.2 Things to Look for After the Conversion D-5
D.3 Using SLS/MDMS V2.x Clients With the MDMS V4 Database D-9
D.3.1 Limited Support for SLS/MDMS V2 during Rolling Upgrade D-9
D.3.2 Upgrading the Domain to MDMS V4 D-9
D.3.3 Reverting to SLS/MDMS V2 D-10
D.4 Convert from MDMS Version 3 to a V2.X Volume Database D-11
This document contains information about Hierarchical Storage Management for OpenVMS (HSM) and Media, Device and Management Services (MDMS) software. Use this document to define, configure, operate, and maintain your HSM and MDMS environment. Installation information appears in a separate Installation and Configuration Guide listed in the related documents table. Command information for both HSM and MDMS appears in the HSM Command Reference Guide, also listed in the related documents table.
The audience for this document includes people who apply HSM for OpenVMS (HSM) to solve storage management problems in their organizations. Users of this document should have some knowledge of the following:
This document is organized in the following manner and includes the following information:
Chapter 1 Introduction to HSM
Chapter 2 Understanding HSM Concepts
Chapter 3 Customizing the HSM Environment
Chapter 4 Using HSM
Chapter 5 Managing the HSM Environment
Chapter 6 Operator Activities in the HSM Environment
Chapter 7 Solving Problems with HSM
Chapter 8 Introduction to Media, Device and Management Services (MDMS)
Chapter 9 MDMS Menu Operations
Chapter 10 Media Management
Chapter 11 MDMS Configuration
Chapter 12 MDMS Management Operations
Chapter 13 MDMS Tasks
Chapter 14 Remote Device Facility
Appendix A HSM-specific status messages and error messages
Appendix B A sample configuration of MDMS
Appendix C MDMS-specific status messages and error messages
The following documents are related to this documentation set or are mentioned in this manual. The lower case x in the part number indicates a variable revision letter.
HSM Hard Copy Documentation Kit (consists of the above HSM documents and a cover letter)
The following related products are mentioned in this documentation.
The following conventions are used in this document.
If you encounter a problem while using HSM, report it to hp through your usual support channels. Review the Software Product Description (SPD) and Warranty Addendum for an explanation of warranty. If you encounter a problem during the warranty period, report the problem as indicated previously or follow alternate instructions provided by hp for reporting SPD nonconformance problems.
This chapter provides an introduction to the general concepts of storage management in the OpenVMS environment and defines the role of hp's Hierarchical Storage Management (HSM) for OpenVMS software. Throughout the rest of this book, the term HSM refers to Hierarchical Storage Management for OpenVMS.
Storage management is the means by which you control the devices on which the frequently accessed (active) data on your system is kept. To be useful, active data must be available for use and remain unchanged (persistent) in the event of unexpected events, such as disasters.
Typically, data exists in one of three categories:
On most systems, 80 percent of the I/O requests access only 20 percent of stored data. The remaining 80 percent of your data occupies expensive media (magnetic disks), but is used infrequently.
There are many different devices on which your data can be stored, and the selection of which device best meets your storage needs depends on three factors:
The relationship among these three factors is illustrated in Figure 1-1. In general, high-performance devices have a lower capacity and higher cost than high-capacity devices. High-capacity devices trade performance for the ability to store large amounts of data.
Your storage management plan should allow you to cost effectively place your data on those devices best suited to meet your cost and access requirements. This plan should include placing your active data on the most responsive devices in your system, placing your dormant data on less responsive devices, and placing your inactive data on the highest capacity devices. File activity and associated data storage are summarized in Table 1-1.
HSM software is an extension of the OpenVMS file system that allows you to manage your dormant data efficiently. It moves dormant data from primary storage (where your active data is usually kept) to shelf storage. This frees space in primary storage for reuse, while the dormant data remains available on lower cost media. The movement of dormant data to shelf storage is called shelving.
To meet your storage management requirements, HSM:
Data managed by HSM resides in one of the following states:
While a file is shelved, the file's header information is maintained in primary storage. When you display the header of a shelved file, the allocated file size is shown as zero blocks, indicating that the data contents are located in shelf storage.
The directory information and file headers for your shelved data are maintained in directories on your primary storage devices. The data itself is located in shelf storage. When access is requested for the shelved data, HSM automatically returns it to primary storage.
Information on your files can always be found in your active directories, even though the actual data resides in shelf storage.
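For example, you can identify a shelved file from DCL with the DIRECTORY command (the device and file names below are hypothetical, and the qualifier combination is illustrative; see the HSM Command Reference Guide for the full syntax):

```
$ ! A shelved file keeps its header online but shows an
$ ! allocated size of zero blocks.
$ DIRECTORY/SIZE=ALL/SHELVED_STATE DISK$USER:[SMITH]REPORT.DAT
```

The zero-block allocation indicates that the file's data contents reside in shelf storage while the header and directory entry remain on the online disk.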
You can control shelving in the following ways:
To implement shelving control, you use HSM policies. For additional information about HSM policies, see HSM Policies.
There are several key storage management concepts required to properly understand and use HSM. These concepts include:
These concepts are described in detail in Chapter 2.
An HSM shelf is a logical software object that relates the data in a set of online disk volumes, on which shelving is enabled, to a set of archive classes that contain the shelved data for those volumes.
An archive class is a logical software object that represents a single copy of shelved data. Identical copies are written to one or more archive classes when a file is shelved. For each shelf, you can specify the number of archive classes (data copies) to maintain to ensure the reliability of the data. Because shelved data is not backed up automatically, multiple shelf copies provide the only means of recovery if the primary copy of the shelf data is lost or destroyed. hp recommends at least two archive classes for each shelf.
An HSM policy is a defined set of parameters that controls when shelving begins and ends.
HSM implements data management through HSM policies that specify responses to events. HSM policies contain HSM-specific commands to shelve or unshelve data in response to a scheduled or situational trigger event. Trigger events, used in conjunction with appropriately designed file selection criteria, work to provide enough online disk space to satisfy users' needs. For detailed information about HSM policies, see File Selection Criteria.
The shelving process moves files from primary storage to shelf storage. The header information for files that have been shelved is still visible to users through the OpenVMS DIRECTORY command, even though the files' data contents are not stored online. You can modify these file headers without unshelving the files.
Your control over the start of the shelving process is either explicit or implicit.
Explicit shelving is a process that starts in response to the DCL SHELVE command. You can issue the SHELVE command directly to the OpenVMS operating system, or you can execute it in an OpenVMS command procedure.
Implicit shelving is a process that occurs in response to one of the following triggers:
The DCL SHELVE command accepts file specifications, including wildcards, for files to process. Qualifiers to this command allow flexibility in selecting files for explicit shelving. Refer to HSM Command Reference Guide for complete information about using the SHELVE command.
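For example (the device, directory, and file names below are hypothetical, and the /LOG qualifier follows the usual DCL convention; refer to the HSM Command Reference Guide for the complete qualifier list):

```
$ ! Explicitly shelve all versions of the .LOG files in a directory
$ SHELVE DISK$USER:[SMITH]*.LOG;*
$ ! Shelve a single file, reporting the operation as it completes
$ SHELVE/LOG DISK$USER:[SMITH]REPORT.DAT
```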
File selection for implicit shelving is specified through HSM policy. Once you understand the file selection process, you can use Shelf Management Utility (SMU) commands to specify file selection criteria and achieve efficient use of primary storage.
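As a sketch of what this looks like at the DCL level (the policy name shown is hypothetical; the SMU SHOW POLICY command is described in the HSM Command Reference Guide):

```
$ ! Display the file selection criteria currently defined
$ ! for a named shelving policy
$ SMU SHOW POLICY ARCHIVE_POLICY
```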
When an application or user creates a file or extends the file, the operation may not complete because the disk volume is full or the user has exceeded the disk quota.
If shelving is enabled on the volume, this situation generates a make space request to HSM to free up enough disk space to satisfy the request. If responding to make space requests is enabled, HSM executes the defined policy for the volume and shelves enough files to free up the requested space. While shelving files, HSM sends an informational message to notify the user that the file access may take much longer than usual due to the shelving activity.
Table 1-2 lists the stages of file selection for implicit shelving.
After a file has been shelved, its header remains on the disk. You still see the file in directories, and you may view and modify the file's attributes without having to access the data in shelf storage. Any modifications you make to the shelved file's header will be in effect when the file is unshelved.
The unshelving process moves files from shelf storage to primary storage. Once the file has been unshelved, you can access it normally.
Your control over the start of the unshelving process is either explicit or implicit.
Explicit unshelving is a process that starts in response to the DCL UNSHELVE command.
You can issue the UNSHELVE command directly to the OpenVMS operating system, or you can execute it in an OpenVMS command procedure. The UNSHELVE command accepts one or more file specifications, including wildcard file specifications.
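For example (file specifications are hypothetical):

```
$ ! Explicitly return one file to primary storage
$ UNSHELVE DISK$USER:[SMITH]REPORT.DAT
$ ! Wildcard file specifications are accepted
$ UNSHELVE *.TXT;*
```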
Implicit unshelving is a process that HSM starts in response to a file fault. A file fault is a high-priority request that occurs when a shelved file is accessed for a read, write, extend, or truncate operation.
Table 1-3 shows the process for unshelving a file.
For each user process, you can specify a default unshelving action that controls implicit unshelving initiated by DCL commands and applications. By default, access to a shelved file causes a file fault.
However, you can instead specify that an error be returned on such access by issuing the SET PROCESS/NOAUTO_UNSHELVE command. This is especially useful for operations such as wildcard searches, when you do not need to unshelve files to examine them for a matching string.
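For example, a procedure that only scans file contents might disable implicit unshelving for the process (the SEARCH example is illustrative, and the positive /AUTO_UNSHELVE form for restoring the default is an assumption based on DCL conventions):

```
$ ! Shelved files now return an error instead of triggering a file fault
$ SET PROCESS/NOAUTO_UNSHELVE
$ SEARCH DISK$USER:[SMITH...]*.LOG "FATAL"
$ ! Assumed positive form; restores the default file-fault behavior
$ SET PROCESS/AUTO_UNSHELVE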
When a file is unshelved, its data contents are moved to the location defined by its current directory entry in the (online) file header. If you renamed the file header while the file was shelved, the file is unshelved to its new location or under its new name. After a file has been unshelved from nearline/offline media, the copy remains on the nearline/offline media. Once unshelved, the file can be shelved again. If the file has been modified, a new shelf copy is made and the old copy is invalidated. If the file has not been modified since it was originally shelved, the previously shelved copy remains valid and a new copy is not made.
Subsequent requests to unshelve a given file while the file is undergoing the unshelving process are treated as duplicate requests. HSM signals that both requests have completed after the first request (the one that initiated the unshelving process) completes.
The preshelving process is a variation of the shelving process. It is similar to the shelving process in that it copies the file's data to shelf storage. It differs from the shelving process in that it allows the file to remain online and accessible even though a shelf copy is made.
A request to preshelve a file that has already been shelved or preshelved succeeds immediately. After a file is preshelved, it can still be accessed normally. If the online file is modified, the shelf copy is invalidated, and any subsequent shelve or preshelve operation causes the file to be shelved again. If the preshelved file is not modified, a subsequent shelve operation simply truncates the file's data, which removes the data from primary storage.
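By analogy with the SHELVE command described earlier, preshelving can be requested explicitly as shown below (the PRESHELVE command form and file name are assumptions; verify the syntax in the HSM Command Reference Guide):

```
$ ! Copy the file's data to shelf storage while the file
$ ! remains online and fully accessible
$ PRESHELVE DISK$USER:[SMITH]BIG_DATA.DAT
```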
When a shelved file is unshelved, it goes into the preshelved state. That is, the file's HSM shelf data is still valid. If the file is later shelved without being modified, no additional data copies are made and the existing shelf data is used.
However, if the file is modified, its shelf data becomes obsolete. This process is called unpreshelving, and it occurs automatically if an application writes to the file. It can also be explicitly requested with the UNPRESHELVE DCL command. When a file is unpreshelved, its HSM shelf data is marked invalid and may be subject to deletion during repack, according to the updates parameter set on the associated shelf. In addition, if the shelf data is in a cache with the /NOHOLD qualifier, the cache copy of the file and its associated catalog entry are immediately deleted.
If a file has been unpreshelved for any reason, a subsequent shelve or preshelve operation will cause a new copy of the data to be made. An unpreshelved file is effectively identical to a file that has never been shelved.
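The explicit form of the operation described above looks like this (the file name is hypothetical):

```
$ ! Mark the file's shelf data invalid; a later shelve or
$ ! preshelve will make a fresh copy
$ UNPRESHELVE DISK$USER:[SMITH]BIG_DATA.DAT
```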
When a file is shelved, a copy of its header is kept with the data and the original header remains in primary storage (on the disk). The header that remains in primary storage is the valid file header.
HSM maintains file access security even when the contents of the file are not present on the online disk volume, because the online file header contains file owner, protection flags, and access control lists. If you change the file protection or ownership while a file is shelved, the user who shelved the file may not be allowed to unshelve it.
Figure 1-2 illustrates the various HSM states in which a file can reside, the locations of the file's directory, header, and data, and the operations that transition a file from one state to another.
Cache is shelf storage made up of one or more online or nearline storage devices. These devices can include magnetic and magneto-optical disks. You can use any number of devices for the cache. The cache temporarily stages shelved data between its primary online storage location and the nearline/offline media used for shelf storage. Cache is fully described in Cache Usage.
Using a cache greatly improves shelving performance, because the time needed to complete the operation is only as long as it takes to copy a file to another disk. The cache then can be flushed to a nearline or offline device at a later time when the shelving operation will have less impact on system performance.
Magneto-optical (MO) devices make an ideal repository for shelved data because they cost less than magnetic disks but still provide excellent response time. HSM supports MO devices as cache devices, rather than nearline devices, because the OpenVMS system sees them as system-mounted, Files-11 devices. This means you can define an MO device as a temporary cache or as a permanent (nonflushing) cache that functions as shelf storage.
Because cache is an alternate location for temporarily storing shelved files, the shelving and preshelving processes differ only slightly when cache is enabled.
The file selection process does not function differently when cache is used. Table 1-4 describes both the shelving and preshelving processes in which cache is used.
The time taken to unshelve a file from cache is almost the same as that for copying the file from one disk to another.
Files that exceed the capacity of the cache are moved directly to the nearline/offline media. You can limit the amount of storage the cache can use on each online volume you designate as a cache, or you can use the entire volume for the cache.
Flushing the cache is the process used to reclaim cache space. Any of the following events can start the cache flushing process:
Depending on how you defined the cache, the following events occur when the cache is flushed:
HSM catalogs contain the information HSM needs to locate and unshelve all shelved files. There is one default catalog, used for maintaining global HSM information, and a number of shelf catalogs that are related to specific shelves and volumes. If an HSM catalog suffers an unrecoverable loss, the associated shelved data may be lost. For this reason, HSM catalogs are an essential part of the HSM environment.
For information on setting up shelf catalogs, see Shelf Catalog. For information on protecting HSM catalogs from loss or corruption, see Protecting System Files from Shelving.
HSM provides the capability to repack shelf media on a per-archive class basis (optionally with selected volumes) by copying valid shelf data to new media in the same or different archive classes; deleted and obsolete files are not copied. The old media can then be reused. In addition, the catalog entries of deleted and obsolete files are deleted. The system administrator can specify a delay in deleting shelf data after an online delete, and also the number of updates a file undergoes before a shelf copy is considered obsolete. Refer to Repacking Archive Classes for more detailed information.
HSM software operates in one of two modes:
Except for the media, device and management configuration and support, both modes operate identically.
You choose a mode of operation when you install the HSM for OpenVMS software. However, you can change modes after you make the initial decision. The following restrictions apply to changing modes after installation:
HSM Basic mode provides the following functionality and features:
HSM Plus mode provides the following functionality and features:
All other functions, including HSM policies and cache, are provided in both modes.
HSM Basic mode automatically determines the media type based on the specific device(s) you define for use. Table 1-6 shows how media types map to devices for HSM Basic mode. Check the HSM Software Product Description (SPD 46.38.xx) for the latest list of supported devices.
With these device types and media types, HSM Basic mode provides formal support and identification of the device and media types. In addition, HSM Basic mode checks that devices and media are compatible to support operations within an archive class. HSM Basic mode does not formally support other devices and media types, but they might work under the following circumstances:
Generally, a nonmagazine loader third-party tape drive with any media type may work as an `unknown' device and media type.
HSM supports the nearline and offline devices listed in the HSM Software Product Description (SPD 46.38.xx). hp is continually testing new devices and adding them to the list. If you have a question about a particular device, contact hp customer support.
HSM provides shelving support for most online disk devices within a cluster. However, HSM does not support the following types of online disk devices:
In addition, HSM does not support shelving and unshelving of local disks that are not connected to a shelf server. If you want to use shelving and unshelving with local disks, hp recommends you make the local disks accessible to the cluster using MSCP protocols.
HSM provides limited support for remote operations. For HSM Version 3.2A, this support includes:
HSM does not support the following kinds of remote operations:
HSM Basic mode does not support the use of remote nearline or offline tape devices, unless they are configured to appear as local devices. HSM Plus mode supports remote devices (devices that are not directly connected to the cluster) through the Remote Device Facility (RDF) portion of MDMS. For HSM Plus mode to recognize a remote device, you must have defined the remote device correctly through MDMS and you must use the /REMOTE qualifier on the SMU SET DEVICE command. For more information, see the section on "Working with RDF-served Devices" in HSM Plus Mode in the Getting Started with HSM Chapter of the HSM Installation Guide.
Before running HSM in your production environment, you need to understand various definitions and concepts. For each concept, HSM provides a configuration option that you use to manage the HSM environment. This chapter presents an explanation of the HSM concepts and configuration options, structured around the following managed entities in the system:
This chapter also defines the relationships among the managed entities, and provides guidelines for their definition to create an optimal HSM environment. Once you understand the configuration options, you can proceed with the required configuration tasks, as described in the Getting Started with HSM Chapter of the HSM Installation Guide.
For additional information and guidelines for migrating to a more specialized environment that best meets your system requirements, see Chapter 3.
The HSM environment consists of the definitions you create and the relationships that exist among the definitions. The definitions described in the following sections are maintained in definition databases. The HSM environment is shown in the accompanying figure.
The HSM facility entity allows you to control HSM functions across the entire cluster. You can control the following functions at the facility level:
You can specify whether HSM operates in Basic or Plus mode.
When deciding whether to operate in Basic or Plus mode, consider the following:
You can specify whether shelving or unshelving operations are enabled across the cluster as a whole. This includes operations initiated as a result of policy triggers, cache flush operations, and manually initiated HSM commands.
The shelving parameter controls shelving, preshelving and cache flush operations. The unshelving parameter controls unshelving and automatically-generated file faults.
Under normal circumstances, you should enable both shelving and unshelving across your cluster. This allows HSM to maintain desired disk usage through automatic policy operations and also allows users access to shelved data at all times.
You may need to disable HSM operations at certain times if they conflict with other activities (such as backups) and there are limited offline devices available. For example, if backups are performed nightly at midnight, you could set up a policy to disable shelving at that time.
When necessary, you can disable shelving, usually without causing disk usage to exceed the defined goals. However, if you disable unshelving, your users and applications may experience errors accessing shelved files. You should disable unshelving only if you do not anticipate needing access to shelved data.
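For example, suspending and restoring shelving around a nightly backup window might be sketched as follows. The qualifier spellings are illustrative assumptions; verify them against the SMU SET FACILITY documentation in the HSM Command Reference Guide:

```
$ ! Before the midnight backup: suspend shelving, preshelving, and cache flushing
$ SMU SET FACILITY /DISABLE=SHELVE
$ ! Unshelving stays enabled, so users can still access shelved data
$
$ ! After the backup completes: restore normal operation
$ SMU SET FACILITY /ENABLE=SHELVE
```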
A shelf server is a single HSM node in a cluster that performs all operations to nearline and offline devices on behalf of all nodes in the cluster. It also coordinates clusterwide operations such as checkpointing archive classes and resetting event logs.
If the facility option Catalog_Server is enabled, all cache operations and catalog updates are also performed by the shelf server; by default, cache operations are performed by the requesting client node for performance reasons. Nearline and offline requests are passed from the other (client) nodes to the shelf server for processing. The shelf server consolidates requests from all nodes and optimizes operations to minimize tape loading and positioning, as well as to support dedicated device access.
Although many nodes can be authorized for shelf server operation, only one HSM node functions as the shelf server at any given time. This way, if the current shelf server node fails, operations are immediately transferred and recovered by another authorized shelf server node. You can specify up to 10 specific nodes to be authorized for shelf server operation. By default, all nodes in the cluster are authorized. The current shelf server node can be displayed using an SMU SHOW FACILITY command.
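Restricting shelf server operation to specific nodes and displaying the current server might look like the following sketch. The node names are hypothetical, and the /SERVER qualifier spelling should be confirmed in the HSM Command Reference Guide:

```
$ ! Authorize only the two large systems as shelf servers
$ SMU SET FACILITY /SERVER=(BIGVAX, BIGAXP)
$ ! Display the facility settings, including the node currently acting as shelf server
$ SMU SHOW FACILITY
```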
When deciding whether to authorize a node as a shelf server, consider the following:
Using the default authorization of all nodes is acceptable if the above conditions are met and all your nodes have similar capabilities.
If you operate a cluster with a few large systems and many satellite workstations, restricting shelf server operations to the large systems provides much better performance for all cluster users. Defining specific shelf servers is highly recommended in this case.
HSM gives you the option of directing all HSM operations and all catalog updates through the shelf server by enabling the Catalog_Server option. With this option, all cache operations and catalog updates are performed by the shelf server node in a similar manner to tape operations.
There are two main reasons you may want to enable this feature:
The downside of enabling the catalog server option is that caching speed is somewhat reduced due to extra intracluster communications, and possible delays in shelf server response time.
HSM provides four event log files that enable you to monitor and tune the HSM environment, as well as to detect errors in HSM operation:
Event logging can be enabled and disabled within the following categories:
hp recommends that you enable all logging at all times to keep track of all activity. This is especially important when you have to report a problem.
A shelf is a named entity that relates a set of online disk volumes, on which shelving is enabled, to a set of archive classes that contains the shelved file data for those disk volumes. For each shelf, you can control the following:
You can define any number of shelves, but any specific online disk volume can be associated with only one shelf.
HSM provides a default shelf, named HSM$DEFAULT_SHELF, to which all volumes are associated if no other associations are defined.
If your data reliability requirements are the same across all disk volumes, you can simply use the default shelf and specify the desired number of copies to use on that shelf. All volumes acquire the data reliability specified by the default shelf.
If your data reliability requirements differ from volume to volume, you can define multiple shelves, each of which can contain different numbers of copies for data reliability purposes. You can then relate each volume to the shelf that has the appropriate number of copies.
hp recommends that you specify at least two copies for each volume.
If you have a very large number of online disk volumes, hp recommends that you define multiple shelves, each with a separate catalog. This prevents any particular catalog from becoming so large that catalog access performance degrades. hp recommends that you associate between 10 and 50 online disk volumes with each shelf, depending on the amount of shelving you plan to do.
The shelf entity does not define the volumes associated with the shelf. Instead, you associate individual volume entities (see Volume) with the shelf. You can associate a particular volume with exactly one shelf. If you do not define volumes explicitly, all volumes implicitly use the default shelf.
This section explains why you need multiple shelf copies and how to define them.
One of the most important decisions that you need to make concerns the number of copies of shelved file data that you need for data safety purposes.
Shelved data is not typically included in the normal backup regimen, because the OpenVMS BACKUP utility (and layered products like Storage Library System software that use BACKUP) work in the following way:
In other words, after a file is shelved, it is likely that its data will not be backed up again. A typical backup strategy recycles the backup tapes when a certain number of more recent copies have been made. This cycle may be anywhere from a few days to several years.
However, there eventually will come a time when all of the backup tapes contain only the headers of shelved files.
Unless the tapes are never recycled, the shelved file data on the backup media will eventually be lost. The easiest way to enhance the reliability of shelved file data is therefore to make duplicate copies of the data using multiple shelf copies.
Shelf copies are defined using a concept called an archive class.
An archive class is a named entity that represents a single copy of shelf data. Identical copies of the data are written to each archive class when a file is shelved.
For each shelf, you can specify the archive classes to be used for shelf copies for all volumes associated with the shelf.
The minimum recommended number of copies (archive classes) for each shelf is two.
Archive classes are represented by both an archive name and an archive identifier. Archive identifiers are used in Shelf Management Utility (SMU) commands for ease of use. HSM Basic mode supports 36 archive classes named HSM$ARCHIVE01 to HSM$ARCHIVE36, with associated archive identifiers of 1 to 36 respectively. HSM Plus mode supports up to 9999 archive classes, named HSM$ARCHIVE01 through HSM$ARCHIVE9999, with associated archive identifiers of 1 to 9999.
For each shelf, you must specify two lists of archive identifiers:
The archive and restore archive lists are defined using the SMU SET SHELF command with the /ARCHIVE and /RESTORE qualifiers. See HSM Command Reference Guide for a complete description of the shelf management utility and its commands.
Restore archive classes are used for unshelving files in the order specified in the restore archive list. The first attempt to restore a file's data is made from the first archive class specified in the restore list. If this fails, an attempt is made from the next archive class, and so on. Although only 10 archive classes are supported for shelf copies, up to 36 are supported for restore, because the restore list must contain a complete list of all archive classes that have ever been used for shelving on the shelf. This enables files to be restored not only from the current list of shelf archive classes, but also from all previously-defined shelf archive classes. In this way, you can add or change archive classes for a shelf by:
Changing the archive classes in the archive list, which affects subsequent shelving operations only
Adding new archive classes to the restore list, while keeping the existing definitions in place, so that files shelved under those definitions can still be restored

Archive classes are also related to media types and devices, as discussed in Device. When a shelf is first created, the archive classes specified in the archive list are copied to the restore list if the restore list is not specified. Thereafter, the two lists must be maintained separately.
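For example, replacing archive class 2 with archive class 3 for future shelving, while keeping class 2 restorable, might be sketched as follows. The /ARCHIVE and /RESTORE qualifiers are named in the text above; the shelf name and archive identifiers here are illustrative:

```
$ ! Shelve new data to archive classes 1 and 3 only
$ SMU SET SHELF HSM$DEFAULT_SHELF /ARCHIVE=(1,3)
$ ! Keep class 2 in the restore list so previously shelved files remain accessible
$ SMU SET SHELF HSM$DEFAULT_SHELF /RESTORE=(1,3,2)
```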
When defining your restore archive list, it is useful to think of the first archive class in the restore list as a primary archive class and all the others as secondary archive classes. For shelving operations, all of the archive classes in the archive list receive the same number of operations, because HSM copies data to all archive classes at the time of shelving. Unshelving is different, however: in most cases, HSM only needs to read from the primary archive class to restore the data. These concepts are useful when deciding how to relate your archive classes to media types and devices, as described in Devices and Archive Classes.
You need to determine the appropriate number of shelf copies for your shelved file data, depending on the importance of the data being shelved.
hp recommends a minimum of two shelf copies of all data, because media can be lost or destroyed. If the data is especially critical, you can make additional copies, some of which might be taken offsite and stored in a vault. HSM provides a mechanism called checkpointing to synchronize your shelved data media and backup media so that they can be removed to an offline location together (see HSM Command Reference Guide).
The accompanying figure illustrates the relationship between volumes and archive classes. Each disk volume has an associated archive class and restore archive class, as shown in the archive and restore archive lists. In this example, as with most cases, the archive and restore lists are identical.
You can control the same operations for a shelf as you can for the facility, except that the operations defined for the shelf affect only the volumes associated with the shelf.
This gives you a finer level of shelving control, which might be useful if certain classes of volumes are not regularly accessed at certain times, and you want to disable shelving activity. However, as with the facility control, it is expected that shelving and unshelving operations usually are enabled.
The shelf catalog contains information regarding the location of nearline and offline data for all volumes associated with the shelf. hp recommends that you define a separate catalog for each shelf, but it is possible for several shelves to share a catalog, or for all shelves to use the default catalog.
Defining a separate catalog for each shelf has the following advantages:
As a guideline, hp recommends that each shelf be associated with between 10 and 50 volumes, and that each shelf has its own catalog. A shelf catalog needs to be protected with a similar level of protection as the default catalog, namely:
It is also recommended that the catalog for a shelf be placed on a disk volume other than one associated with the shelf itself. In very large environments, it might be appropriate to dedicate one or more shadowed disk sets for HSM catalogs, and to disable shelving on those disks. When you define a new catalog for a shelf, or a new shelf for a volume, HSM automatically splits all associated shelving data from the old catalog and merges it into the new catalog. See Managing HSM Catalogs for more information on this process.
You can specify a delete save option for shelved files that have been deleted. This option allows the specification of a delta time which keeps a file's shelved data in the HSM subsystem for this period after the file is deleted. The actual purging of deleted files (after the specified delay) is performed by the REPACK function.
This option allows the specification of a number of updates to a shelved file that will be kept in the HSM subsystem.
This option applies to files that have been updated in place, not new versions of files that have been created after an update. New versions are controlled by online disk maintenance outside the scope of HSM. The actual purging of obsolete shelf data is performed by the REPACK function.
As previously discussed, HSM Basic mode supports 36 archive classes named HSM$ARCHIVE01 through HSM$ARCHIVE36, with archive identifiers of 1 to 36 respectively. You must configure archive classes by using the SMU SET ARCHIVE command to identify the archive class name. Once you have defined the archive class, you can then associate archive classes with shelves and devices using appropriate commands. From these associations, HSM Basic mode determines the appropriate media type for the archive class.
There is a separate set of tape volumes with specific labels associated with each archive class for HSM Basic mode. HSM allows limited maintenance on archive classes by allowing you to modify the shelving volume label attribute. The volume labels must be in the proper format for each archive class, as listed in Table 2-1.
Table 2-1 HSM Basic Mode Archive Class Identifier/Label Reference
For each of the 36 archive classes, the first three characters of the volume label are fixed and represent the archive class. The last three characters of the volume label (shown in Table 2-1 as xxx) represent a sequence number in the range 001 to Z99, allowing up to 3600 tape volumes per archive class. At any one time, there is one shelving volume for each archive class. This volume represents the volume on which the next shelve (write) operation is to be performed.
In the case of an error, you can explicitly change the shelving volume label for the archive class. However, if you do so, the specified volume label must adhere to the convention shown in the table, otherwise HSM cannot use it.
Manually setting the shelving volume label is not recommended. By default, HSM uses the first shelving volume label for an archive class (for example HSA001), then increments the labels automatically (HSA002, HSA003, and so forth) as the volumes become full. If you want to remove the current shelving volume and go to the next one, use the CHECKPOINT command rather than resetting the label manually.
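For example, rather than resetting the label by hand, a checkpoint on archive class 1 closes out the current shelving volume and advances to the next label in the sequence. The exact command form is a sketch; confirm it in the HSM Command Reference Guide:

```
$ ! Advance archive class 1 from its current shelving volume (e.g. HSA001)
$ ! to the next volume in the sequence (HSA002)
$ SMU CHECKPOINT 1
```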
As previously discussed, HSM Plus mode supports up to 9999 archive classes named HSM$ARCHIVE01 through HSM$ARCHIVE9999, with archive identifiers of 1 to 9999 respectively.
You must configure archive classes by using the SMU SET ARCHIVE command to identify the archive class, media type, and optionally density. When specifying media type and density, they must exactly match the corresponding media type and density defined in the MDMS TAPESTART.COM file.
Once you have defined the archive class, you can then associate archive classes with shelves and devices using appropriate commands.
Unlike HSM Basic mode, HSM Plus mode does not require special naming conventions for volumes, because MDMS chooses the volumes for HSM Plus mode to use.
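A Plus mode archive class definition might therefore be sketched as follows, where each media type must exactly match a media type defined in the MDMS TAPESTART.COM file. The media type names, density value, and qualifier spellings here are assumptions for illustration:

```
$ ! Define archive class 1 with a media type known to MDMS
$ SMU SET ARCHIVE 1 /MEDIA_TYPE=TK85K
$ ! Define archive class 2 with an explicit density
$ SMU SET ARCHIVE 2 /MEDIA_TYPE=9TRACK /DENSITY=6250
```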
When setting up your HSM environment, you need to consider which nearline and offline devices you want to use. When setting up a device for HSM, you can control:
HSM provides a default device record that has the following attributes:
These defaults are applied if you specify a device for HSM without identifying these attributes. Once the device is defined, you can modify the attributes for that device. You also can modify the default device record attributes if you find that you are typically using a different set of attributes for your devices.
For HSM use, you can specify a nearline or offline device to be used for dedicated or shared usage.
When a device is dedicated, HSM does not release it to other applications and keeps the current volume mounted until the drive is needed for another HSM volume.
When a device is shared, HSM releases the device, and dismounts and unloads the associated media within one minute of inactivity on the device. The media is unloaded for security reasons.
When thinking about devices, you should consider the trade-offs involved in dedicating devices to HSM.
Dedicated devices have the following advantages:
It is possible to operate in a mixed mode, whereby the device is sometimes shared and sometimes dedicated. For example, you can set up a scheduled policy with a script that toggles between the two modes at specified times. A useful application of this would be to dedicate devices to HSM during normal working hours and at policy execution time, but switch to shared devices during the backup cycle.
For each device, you can specify which operations are enabled. The choices are shelving and unshelving. By default, both operations are enabled when a device is specified.
When operating in Plus mode it is recommended that all devices are defined for both shelving and unshelving as MDMS, not HSM, actually chooses the optimal device. Restricting operations sometimes leads to conflicts between HSM and MDMS.
When you are using multiple devices in Basic mode, you can optimize operations by specifying that only shelving or only unshelving is enabled on the device. This will effectively guide those operations to the enabled device rather than allowing many load/unload operations as the requests come in. For example, if you are using two devices, you might specify that one is used for shelving and the other is used for unshelving. A special override allows unshelving on a shelving device if the currently mounted media contains the requested file, which is common if the file is unshelved shortly after it is shelved.
If you specify only a single device for HSM, it must support both operations for correct usage.
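In Basic mode with two drives, the shelving/unshelving split described above might be configured as follows. The device names are hypothetical and the /ENABLE qualifier spelling should be confirmed in the HSM Command Reference Guide:

```
$ ! Direct shelve (write) operations to the first drive
$ SMU SET DEVICE $1$MUA0: /ENABLE=SHELVE
$ ! Direct unshelve (read) operations to the second drive
$ SMU SET DEVICE $1$MUA1: /ENABLE=UNSHELVE
```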
When setting up a device for HSM use, you define a media type by relating the device to one or more archive classes whose media type and density are compatible with the device.
This does not mean that all devices used with an archive class have to be identical. For example, a TK50 device might be specified for shelving and a TK70 device for unshelving. This is valid because a TK70 can read a cartridge written by a TK50, but not vice versa.
However, if you do use compatible but not identical media types, you must control the operations on the devices so the tapes are always written in a compatible format. The media must be written in the format readable by both device types (in this case TK50), and all media must be in the same format for a specified archive class.
Nearline and offline devices are associated with archive classes that relate to shelves. When specifying archive classes for shelf copies, you must consider the media type on which you want these copies to reside. Each archive class uses exactly one media type, so that all data written to a specific archive class uses compatible media. Be aware that multiple archive classes can use the same media type.
You establish the relationship between archive classes, devices, and media type by using the SMU SET DEVICE command and specifying an archive list. Remember that for HSM Plus mode, you also use the media type definitions in the MDMS TAPESTART.COM file to encapsulate the media type and drives relationship. Regardless of how archive classes and shelves relate, the relationship between archive classes and devices is not one-to-one. This means that:
The accompanying figure shows the archive class/media type/device relationship for three archive classes and the associated TA90 and TK50 tape devices. As shown in the figure, the two TA90 devices can each archive data belonging to their common archive classes, but the TK50 device can only operate with a single archive class.
Ideally, an HSM configuration uses identical media types for all archive classes, allowing the maximum sharing of devices, because each device could support all archive classes. However, this is not always possible or desirable. For example, you may want to define a primary archive class that uses a robot-controlled nearline device and some secondary archive classes that use human-operated 9-track magnetic tape devices.
When selecting the devices associated with an archive class, you should consider such aspects as:
A robot-controlled nearline device is recommended for primary archive classes, because users will be able to access shelved files without human intervention, on a 24-hour basis. The need for such devices is less on secondary archive classes, especially if an online cache is used (see Cache Usage).
HSM Basic mode supports certain tape magazine loaders as nearline devices that can be associated with archive classes. A magazine is a stacker containing one or more tape volumes that can be loaded into a single drive. The following magazine loaders are fully supported with random-access loading and unloading of tape volumes:
HSM Basic mode supports multiple magazines, with multiple volumes per magazine. In addition, volumes for multiple archive classes may reside in a single magazine. However, there are a few restrictions that must be observed for HSM:
At initialization time, and when a new magazine is loaded, HSM performs an inventory on the magazine. Each volume in the magazine is loaded and mounted, and its label is noted. This information is stored in a device database, which has multiple magazine entries. This operation takes 20 to 30 minutes, during which time the drive cannot be used.
hp highly recommends that volumes are not shuffled around in a magazine or moved to different magazines after initial configuration, because this will cause HSM to perform another inventory on the magazine. If the shelf handler discovers an inventory error, it loads all volumes and retakes inventory on the magazine. A new magazine entry is entered into the database.
In addition, all existing magazine entries containing any of the volumes are then invalidated.
Under ideal circumstances, inventory on any magazine should have to be done only once, regardless of system crashes and other disruptions.
Once inventory is taken, the shelf handler uses random-access load and unload commands to load the appropriate volumes into the drive. The device database is updated on all load and unload operations, so that the state of the drive and magazine is known at all times, even after system disruptions.
If an inventory detects an illegal configuration with duplicate tape labels, the shelf handler prints an OPCOM message to the operator and will not use the magazine.
When defining a device as a magazine loader, it is necessary to specify a robot name to be associated with the device. The robot name depends on the controller to which the tape device is connected, as follows:
The robot name should include the allocation class if there is one.
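Defining a magazine loader together with its robot name might be sketched as follows. The drive and robot device names are hypothetical and depend on your controller, as described above:

```
$ ! Define a loader drive, naming the robot device that controls the magazine
$ SMU SET DEVICE $1$MKA500: /ROBOT_NAME=$1$GKA500:
```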
HSM Basic mode makes a first-level attempt to ensure that tape device configurations and loading are directed to compatible media. For this level, HSM ensures that the media type is physically capable of being loaded into the specified device, and that the media can support the operation. HSM also verifies that media contained in magazine loaders are not requested for nonloader drives and vice versa.
Table 2-2 lists the compatible media types HSM supports. HSM also supports unknown media types, but does not check them for compatibility. It is therefore possible to specify different types of tape devices with "Unknown" media type into an impractical configuration. If using such drives and media, you must ensure that the configuration is practical.
HSM Plus mode supports automated loaders according to the MDMS functionality and requirements. In general, MDMS recognizes automated loaders and the volumes contained therein only by process of how you configure the information in TAPESTART.COM and through the STORAGE commands. For more information, see the Getting Started with HSM Chapter of the HSM Installation and Configuration Guide.
HSM allows you to customize HSM activity on a per-volume basis. By default, there is only one HSM volume entity, HSM$DEFAULT_VOLUME, which is used as the basis for HSM activity for all volumes in the cluster. You can add any number of specific volumes, each relating to a single online disk volume. Any disk volumes not associated with a specific volume entry are implicitly associated with the default volume.
The default volume is preconfigured with a default set of attributes. You can modify any or all of the attributes on the default volume, which are then applied to all volumes associated with the default volume. The attributes of the default volume also are used as a template for specific volume entities.
With the volume entity, you can specify the following attributes:
The shelf attribute relates the disk volume definition to a single shelf definition. The shelf must be set up before you associate a volume with it. For information on setting up the shelf, see The Shelf. By default, all volumes use the default shelf HSM$DEFAULT_SHELF.
HSM provides volume definition options that allow you to control shelving operations on the online disk volume for which the volume definition applies. If no volume definition is found, HSM uses the HSM$DEFAULT_VOLUME definition.
The following operations can be enabled on a per-volume basis:
By default, implicit operations (high water mark, occupancy, and quota) are disabled and explicit operations (shelve and unshelve) are enabled on the volume.
The volume policy parameters identify the policy definitions used to shelve files when a critical need for space on the disk is encountered. This policy implementation reacts to critical situations in which additional primary storage space is needed.
A reactive policy is implemented with a disk volume definition. Reactive policy determines how to react to high water mark, volume occupancy exceeded, and user disk quota exceeded events. In these instances, some event takes place that requires primary storage space be made available.
HSM takes action to make the space available only when the event takes place. A reactive policy execution can be disabled by specifying that no policy is desired for the specified event.
You can specify a percentage of the volume's capacity to be used as a trigger for running the occupancy policy on the volume. See Policy for more details.
There are two types of files that you should give special attention to when considering their disposition in an HSM environment:
These files have special attributes when they are created that may not be possible to recreate when the files are shelved and later unshelved.
Files that are marked contiguous must occupy contiguous logical block numbers on the disk. When such a file is shelved, its storage is released. During unshelving operations, this type of file must be restored contiguously. If this is not possible because the available space on the disk is fragmented, the unshelve operation fails. To avoid this problem, you should specify that files marked contiguous are ineligible for shelving. By default, files marked contiguous are not shelvable.
Placed files are assigned specific logical block numbers on the disk volume when they are created. When such a file is shelved and later restored, it is virtually guaranteed that it cannot be restored to its originally assigned logical blocks. If the file must reside on those assigned logical blocks, it should not be shelved. One way to prevent such shelving is to disable shelving of all placed files on the volume. Another way is to mark the file as not shelvable using an OpenVMS command.
By default, HSM allows shelving on placed files. To prevent this behavior, you need to specifically disable shelving of placed files for the volume.
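On a per-file basis, an individual placed file can be marked as not shelvable, assuming the SET FILE/[NO]SHELVABLE qualifier described in the OpenVMS DCL Dictionary; the file name below is illustrative:

$ ! Prevent HSM from shelving this placed file
$ SET FILE /NOSHELVABLE PLACED_DATA.DAT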
The cache is storage comprised of one or more online disk storage devices or magneto-optical devices. You can use cache volumes for one of two purposes:
By using a cache, you gain speed for shelving operations by dedicating additional online storage for the HSM system. With online cache, a shelving operation can complete in the time it takes for the files to be copied to another disk.
The archive/backup system is not needed immediately. However, you lose online storage capacity otherwise dedicated to applications and users. This is the trade-off to consider when using online cache. If your system includes some older, slower online drives, then online cache provides multilevel hierarchical storage management.
All cache devices must be system-mounted and accessible to all nodes in the cluster except when the Catalog Server facility option is enabled. In this case, the cache devices need only be system-mounted and accessible to all designated shelf server nodes.
Another major advantage to using online cache is that flush operations to nearline/offline storage can be performed at regular intervals. These flush operations are optimized to reduce the amount of tape reloading and positioning compared to individual shelve operations directly to tape. This is especially true when multiple archive classes are specified and the archive classes share devices.
You can specify the following attributes for each online disk volume supporting the cache:
You can specify that data copies to the shelf archive classes be performed at one of two times:
By default, the shelf copies are made when the cache is flushed, and this is the recommended mode of operation when using the cache as a staging area. With this configuration, operations to and from the cache are fast, taking about as much time as a normal disk copy.
If you are using the cache as a permanent shelf instead of a staging area (for example, using a magneto-optical device), there is no cache flushing, so any shelf copies need to be made at shelving time. When the shelf copies are made at shelving time, the shelving process is not complete until all shelf copies of a file have been made to the shelf archive classes.
You can specify the maximum amount of space on the online volume to be used for HSM caching. HSM never exceeds this amount. If shelving a file would exceed this amount, it is diverted to another cache device that can hold the data, or the file is copied directly to the shelf archive classes.
To allow an unlimited amount of space on a disk to be used for caching, you can enter a block size of zero, which defaults to the device capacity. This is useful when using magneto-optical devices as a permanent shelf.
If you do not specify a block size, HSM uses a default value of 50,000 blocks.
You can specify that a cache flush be triggered when a specified percentage of the cache block size is exceeded. In this way, you should never get into a situation where the block size is exceeded. By default, cache flushing begins when 80 percent of the block size is used.
In addition to high water mark cache flushing, you also can flush the cache at regular intervals. This allows you to restrict all nearline or offline shelving operations to occur at a specific time of day, ideally at times other than during the backup cycle. By default, the cache is flushed every 6 hours.
In conjunction with the flush interval, you can specify a delay to start the first cache flush. Thereafter, the delay is used in conjunction with the interval to flush at regularly timed intervals.
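A staging cache combining these attributes might be sketched as follows; the device name and values are illustrative, and the /INTERVAL time format is an assumption (see the SMU SET CACHE command in the HSM Command Reference Guide):

$ ! Staging cache of up to 50,000 blocks: flush every 6 hours, or
$ ! sooner if 80 percent of the cache block size is in use
$ SMU SET CACHE DISK$CACHE1: /BLOCK=50000 /HIGHWATER_MARK=80 /INTERVAL=06:00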
You can specify how the cache reacts when an online file that is shelved to the cache is deleted, or if it is unshelved and modified. You can choose that the file remains in the cache when these events occur, or is deleted together with its associated catalog entries. The former action is safer in that the cache copy can be used to recover the file data if it is erroneously deleted or modified. However, it also means that extraneous copies of obsolete data are retained in the cache, which may eventually be flushed to tape. When migrated to tape, shelf options such as delete save time and number of updates can be used to purge any obsolete data during a repack operation.
The following guidelines on configuring the cache will provide optimal HSM performance for all users on the cluster:
By using a cache effectively, you are using HSM in the most efficient way and providing the best overall service to the system users.
Magneto-optical (MO) devices make an ideal repository for shelved file data, because their cost is significantly lower than magnetic disks but their response time is good. HSM supports magneto-optical as cache devices only; they cannot be defined like tape devices to support archive classes.
To configure a magneto-optical device, you should define a label and mount the volume as a normal Files-11 disk. The volume label should not be an HSM label in the HSxxxx format, but should be of the system administrator's choosing. If you are using a magneto-optical robot loader with multiple platters, each platter that you want HSM to use should:
You can define the magneto-optical devices as either a cache staging area, or as a permanent shelf for fast response time using the /BACKUP attribute of the SET CACHE command. For more information and an example, see the SMU SET CACHE command in HSM Command Reference Guide.
HSM policy is at the center of the shelving process. The policy options you define establish the conditions that start the shelving process and determine the amount of primary storage available when shelving operations end.
HSM policies are implemented through the available file selection options. These options allow you to define how HSM will implement storage management on your system. The HSM policy file selection options which may be set are:
Figure 2-4 shows the general sequence of HSM policy operations. Once a reactive or preventive policy is established, system operations continue normally until a trigger event occurs. The trigger event activates HSM policy, and files are shelved in accordance with the file selection criteria until the policy goal is reached.
The trigger is an event that causes the shelving process to begin moving files to shelf storage. These events activate HSM policies that fall into two general categories, based on the kind of trigger used:
When you install HSM, you get a set of default policy definitions. You can obtain the most value from HSM by modifying the default preventive and reactive policies according to the exact types and usage of data in your installation and the specific archive storage devices that are installed.
A scheduled trigger is generated according to a schedule definition. You define a schedule that specifies a time interval on which HSM initiates the shelving process. This trigger, used with appropriate file selection criteria, makes sure enough online capacity is available to meet a steady demand for storage space.
The user disk quota exceeded trigger is an event that occurs when a process requests additional online storage space that would force it to exceed the allowable permanent disk quota. This trigger uses the quota policy defined for the volume and, used in conjunction with appropriately designed file selection criteria, provides enough online disk space to satisfy the request. The shelving process initiated by this trigger shelves files owned by the owner of the file being created or extended. This trigger is independent of the owner of the process that extends the file; only the file ownership is significant.
For example, if user A creates a file, and user B extends the file beyond user A's disk file quota, user A's files will be shelved.
The high water mark trigger is an event that occurs when the amount of online disk storage space used exceeds a defined percentage of capacity. The HSM system regularly polls all online disk devices and compares the used storage with a defined value. This trigger, used with appropriately designed file selection criteria, ensures enough online capacity is available to meet a steady demand of storage space. This trigger uses the occupancy policies defined for the volume.
The volume full trigger is an event that occurs when the file system encounters a request for more space than is currently available on the disk volume. This trigger, used in conjunction with an appropriately designed file selection criteria, provides enough online disk space to satisfy the request. This trigger uses the occupancy policies defined for the volume. The shelving policy implemented with the volume full trigger shelves any files on the disk volume that meet the defined file selection criteria.
The file selection criteria determine the best files to be shelved in response to the need for shelving. You define the file selection criteria depending on your need to create and access data.
Examples of file selection criteria include:
The first consideration for defining file selection criteria involves selecting files that have been accessed or that have expired within a certain time frame. There are four file dates from which to choose:
OpenVMS does not support a last access date as such. However, you can set up policies using an effective last access date by:
Using the expiration date coupled with volume retention time is the recommended and default configuration for HSM policies. This ensures that files are shelved only if they have not been accessed for read or write operations within a certain time frame. Use of the other date fields, while supported, may result in some frequently-accessed files being shelved.
For more information, see Using Expiration Dates.
Candidate file ordering is then achieved by using one of the following algorithms which use the specified date:
The least recently used policy selects files based on the selected date option and the last time the date changed. It creates a listing of files ranked from the greatest time since last accessed to the smallest time since last accessed.
The space time working set policy selects files based on a combination of the file size and the LRU ranking. STWS is the product of the file size and the time since last access. Candidates are ordered from the greatest to the least ranking value returned for all files; larger files tend to be ranked higher than smaller files. For example, a 1000-block file not accessed for 30 days (STWS 30,000) ranks above a 100-block file not accessed for 90 days (STWS 9000).
The script is a DCL command file containing SHELVE, PRESHELVE, or UNSHELVE commands. Other DCL commands also may be included.
Each HSM policy supports both a primary and a secondary policy definition. The primary policy definition is always executed. If the volume's low water mark is reached after the primary policy execution completes, the secondary policy definition is not executed. If the low water mark is not reached after the primary policy execution completes, the secondary policy definition may be executed. This second execution occurs only when one or both policy definitions is a user-defined script.
Refer to the SMU SET POLICY command description in HSM Command Reference Guide for a detailed description of primary and secondary policy.
When using the predefined file selection algorithms STWS and LRU, you can specifically exclude files that may be selected based on a relative or absolute date. For example, you may want to always exclude files that have been accessed within the last 60 days. There are three fields from which you can choose to exclude files:
Specifying a relative elapsed time is mutually exclusive with defining absolute before and/or since times. The time fields apply only to the predefined STWS and LRU algorithms; they do not apply to script files.
A script file is a user-written command procedure that can be executed instead of the pre-defined algorithms supplied with HSM. When the script file is executed, parameter P1 contains the name of the volume on which the policy was triggered. This can be used to perform custom shelving operations on the specified volume. When a script is defined, the file selection criteria, file exclusion criteria and goal defined for the policy are not applied. The script file executes to completion exactly as written in all cases.
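A minimal policy script might look like the following sketch; HSM supplies the triggering volume name in P1 as described above, and the file specification is purely illustrative:

$ ! Hypothetical policy script: preshelve all listing files on the
$ ! volume that triggered the policy (volume name passed in P1)
$ PRESHELVE 'P1'[000000...]*.LIS;*
$ EXIT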
The goal is the condition that causes the shelving process to stop. There are two ways to reach the shelving goal:
The low water mark is checked at the completion of, but not during, a script execution. The secondary policy is run if the primary policy did not reach the low water mark.
When an application or user creates or extends a file, the operation may not complete because the disk volume is full or the user has exceeded his disk quota. If shelving is enabled on the volume, this situation generates a make space request to HSM to free up enough disk space to satisfy the request. If responding to make space requests is enabled, HSM executes the defined policy for the volume and shelves enough files to free up the requested space. While shelving files, HSM sends an informational message to notify the user that the file access may take much longer than usual due to the shelving activity. After the requested disk space is made available, the create or extend operation continues normally. If for any reason the make space operation fails, the user's original request to create or extend a file fails with one of the following two error messages:
%SYSTEM-E-DEVICEFUL, device full - allocation failure
%SYSTEM-E-EXDISKQUOTA, exceeded disk quota
Because make space operations may take a significant amount of time, and because you may prefer certain applications to receive an immediate error rather than wait for the request to complete, you can disable make space requests on a per-policy, per-volume, or per-process basis.
Make space requests start a policy execution for the volume. The user process that requested the make space allocation is allowed to continue as soon as the amount of space allocation that was requested is satisfied. However, in anticipation of future make space requests, the policy continues executing until a defined low water mark is reached. Make space requests cannot free up space below the defined low water mark.
If the make space operation is triggered by a user disk quota exceeded condition, the files are selected based on the owner of the file being created or extended, rather than the user of the requesting process.
The cause of a make space request determines the scope of online disk storage that is involved with file selection as follows:
To prevent storage problems, you set up scheduled execution for preventive policies at regular intervals. HSM provides the capability to schedule policy execution with the following attributes:
When you schedule a policy execution, you specify the online volumes on which to apply the policy. When setting up a schedule, a separate entry is created for the policy execution for each volume. The volume selection should be based on the goal of maintaining volume capacity between the low water mark and the high water mark at all times. Thus, you need to schedule policies to execute more often on those volumes on which files are frequently created or modified and less often on those volumes on which files are infrequently created or modified.
Policies can be scheduled to execute at a certain time of day, and at regular intervals. hp recommends you run nightly scheduled policy runs at an hour that does not conflict with high system activity or system backups. Ideally, the frequency of policy runs should coincide with the rate of new data creation on the specified volumes. The preventive policy should be run prior to the volume reaching its high water mark capacity, so that all shelving operations can be controlled to occur at certain times of day. This not only reduces overhead of reactive policy execution during the period of high system activity, but also minimizes the use of nearline/offline resources for HSM purposes.
You can specify the node on which you want the policy to run. Although policies can run on any node that has access to the online volume, cache devices, and nearline/offline devices, it is more efficient if it runs on a shelf server node. If the shelf server node changes, you can use HSM's requeue feature to requeue any and all policy entries to run on an alternative shelf server node.
HSM uses four logical names that point to devices and directories that hold important files for HSM operations. The logical names are needed because different levels of data reliability are required to ensure proper HSM operation, and for the security of user data. The four logical names are:
The first three logical names must be defined at installation, or later, as system-wide logical names affecting all processes. Moreover, the definitions must be the same on all nodes in the cluster. The logical name HSM$REPACK is optional.
HSM$CATALOG The HSM$CATALOG logical name points to the location of the default HSM catalog. The catalog contains the information needed to locate a shelved file's data in the cache or the shelf. HSM supports multiple catalogs, which can be specified on a per-shelf basis.
HSM catalogs are considered critical files and should be stored on devices and in a directory that has the maximum protection for loss. In particular:
The size of the catalog file depends on the number of files you intend to shelve on the system. Approximately 1.25 blocks are used for each copy of a file in the cache or the shelf; for example, shelving 100,000 files consumes roughly 125,000 blocks of catalog space. When a cache copy is flushed to the shelf, the cache catalog entry is deleted. However, copies to the nearline/offline shelf remain permanently in the catalog. For information on backing up the catalog, see Managing HSM Catalogs.
The files stored in the location referenced by HSM$MANAGER are important in HSM operations, but can usually be recovered. These files include:
Loss of these files results in a temporarily unusable HSM system until SMU commands are entered to restore the environment. However, as long as the catalog is available, user data can be recovered. Although the files in HSM$MANAGER are not as critical as those in HSM$CATALOG, the same protection mechanisms are recommended, if possible. At a minimum, a backup of the current SMU database should always be available. The size of the files in HSM$MANAGER is relatively fixed, but depends on the number of nodes in the cluster. You should allocate 5000 blocks plus 2049 blocks for each node in the cluster; for example, a four-node cluster needs 5000 + (4 x 2049) = 13,196 blocks.
HSM uses the HSM$LOG location for storing event logs. These logs are written during HSM operation, but their content is designed for the use of the system administrator to monitor HSM activity. As such, their existence is not critical. The size of the event log files can grow rather large if not maintained. However, once the logs have been analyzed by the system administrator, they can be RESET and then deleted.
HSM uses the optional HSM$REPACK logical name to point to a staging area used while repacking archive classes. If the logical name is not defined, the repack function uses HSM$MANAGER instead. Repack needs a staging area in order to repack files into multi-file savesets. The staging area must be at least 100,000 blocks for repack to function. The staging area is cleaned up after a repack operation.
Repack can be a time-consuming process if the catalogs are huge. It can be performed in two phases, facilitated by the following qualifiers:
If the /REPORT option is specified, repack performs only the analysis phase and not the actual repacking. This feature is extremely useful for a system manager to:
If used with the /SAVE option, the resultant candidates file is saved and can be used in subsequent repacks if the system manager wants the entire repack, as analyzed, to proceed.
Since repacks can take several hours or days to complete, it is useful to be able to continue a repack that was interrupted for any reason. The /RESTART qualifier supports this, used along with /SAVE, which preserves the current candidates file. The repack can then be restarted from where it left off, without further analysis and without repacking files or volumes that have already been repacked.
This chapter provides a task-oriented description for changing the HSM environment to better suit your operating environment. It contains the following sections:
For a complete example of a custom configuration for HSM Basic mode or PLUS mode, see the Appendix in the HSM Installation Guide.
This section describes the various definitions used to customize an HSM environment and the operations enabled and disabled by each command.
Commands submitted to the HSM facility control operations across the entire cluster.
Create shelf definitions that include the archive classes for shelving and unshelving data.
There are three options for enabling and disabling shelving operations that use a particular shelf. The following table lists the options that may be used with the SET SHELF /ENABLE or SET SHELF/DISABLE command.
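For example, shelving through a shelf could be suspended while other operations continue; the option keyword shown is an assumption to be verified against the SET SHELF description in the HSM Command Reference Guide:

$ ! Disable shelving operations that use the default shelf
$ SMU SET SHELF HSM$DEFAULT_SHELF /DISABLE=SHELVE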
HSM provides multiple archive classes for you to use. You cannot modify the archive class names. You can, however, determine the devices to which an archive class is written and reassign volumes, which allows you to move archive classes to offsite storage.
In HSM Plus mode, you can modify the media type and density only if the archive class has not been used and no devices or shelves reference the archive class. You can add or remove volume pools as desired.
Create device definitions to identify the devices you will use for shelving operations. Also decide whether to dedicate the devices solely to HSM or to share them with other applications.
The device definitions let HSM know which devices to use for a given archive class and whether to dedicate or share the devices.
The volume definition allows you to enable and disable specific reactive policy operations or control operations on the entire volume.
Shelving operations initiated by the user disk quota exceeded event
HSM allows you to define temporary caches or permanent caches. If you want to use magneto-optical devices with HSM, you must define them as a cache.
Define a magneto-optical device as a permanent cache:
$ SMU SET CACHE/BLOCK=0/BACKUP/NOINTERVAL/HIGHWATER_MARK=100
You can enable or disable specific policy definitions.
Disabling a policy definition affects both primary and secondary policy as follows:
After installing HSM, you can consider, then implement, your own policies. This section provides a series of tasks implementing both preventive and reactive policies. The guidelines expressed in this section include the commands, definitions, and values that apply to each aspect of creating and implementing policy.
See HSM Command Reference Guide for a complete description of the commands used in this section.
Determine the disk volumes on which you want to manage storage capacity. The following example commands are used to perform this task.
Determine names of online disk volumes and the amount of capacity used
Determine user disk quotas and shelving options in user processes
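Standard DCL commands can gather this information; as a sketch (the disk name is illustrative):

$ ! List disk devices with their free block counts
$ SHOW DEVICE D
$ ! Report the current process's disk quota on a particular volume
$ SHOW QUOTA /DISK=DISK$USER1: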
Create volume definitions for the disk volumes. Use the SMU SET VOLUME command to create a volume definition and consider the capabilities offered by the volume definitions.
Determine how files should be selected for shelving on a regular basis. The following list gives you some planning considerations:
Create policy definitions that specify the file selection criteria anticipated to be most useful. Use the SMU SET POLICY command to create a policy definition, considering the capabilities offered.
If you plan on using a file's expiration date as an event for file selection, you must make sure the OpenVMS file system is processing expiration dates. Follow the procedure in Procedure for Setting File Expiration Dates to establish file expiration dates for the files on the disk volumes.
You must be allowed to enable the system privilege SYSPRV or have write access to the disk volume index file to perform this procedure.
To set file expiration dates, follow the procedure in Procedure for Setting File Expiration Dates. For more information about the OpenVMS command SET VOLUME/RETENTION, see the OpenVMS DCL Dictionary.
Once you set volume retention on a volume, and define a policy using expiration date as a file selection criteria, the expiration dates on files on the volume must be initialized. HSM automatically initializes expiration dates on all files on the volume that do not already have an expiration date upon the first running of the policy on the volume. The expiration date is set to the current date and time, plus the maximum retention time as specified in the SET VOLUME/RETENTION command.
After the expiration date has been initialized, the OpenVMS file system automatically updates the expiration date upon read or write access to the file, at a frequency based on the minimum and maximum retention times.
The following command sets the minimum retention period to 15 days and the maximum to 20 days:
$ SET VOLUME DUA0: /RETENTION=(15-0:0, 20-0:0)
The following command sets the minimum retention period to 3 days and calculates the maximum. Twice the minimum is 6 days; the minimum plus 7 days is 10. Thus, the value for the maximum is 6 days because that is the smaller value:
$ SET VOLUME DUA1: /RETENTION=(3)
If you are not already using expiration dates, the following settings for retention times are suggested:
Use the SMU SET SCHEDULE command to create the schedule definitions that apply the policy definitions to the volume definitions.
Specify the time at which the schedule should first be implemented and the interval thereafter at which the policy will be applied to the volume
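A schedule entry might be sketched as follows; the policy name, times, and qualifier spellings are assumptions to be checked against the SMU SET SCHEDULE description in the HSM Command Reference Guide:

$ ! Apply NIGHTLY_POLICY to DISK$USER1 at 2:00 AM and every
$ ! 24 hours thereafter
$ SMU SET SCHEDULE DISK$USER1: NIGHTLY_POLICY /AFTER=02:00 /EVERY=24:00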
If the storage administrator has defined policies that control file shelving and unshelving, you (as a typical user) may not be aware that HSM is on the system; shelving and unshelving files may be almost transparent to you. Alternatively, you may work in an environment where the storage administrator lets you do more of your own data management, in which case you will know HSM is installed. Either way, there are a few specific ways you can tell that HSM is on the system:
As described in Chapter 1, HSM shelves file data but retains the file header information in online storage. You can use the DCL DIRECTORY command, with specific qualifiers, to determine if a file is shelved.
To find out which, if any, files have been shelved, use one of the following qualifiers on the DCL DIRECTORY command:
The DIRECTORY/FULL command lists all available information about a file as contained in the file header.
$ DIR/FULL
Directory SYS$SYSDEVICE:[COLORADO]
CONFIG_LOG.TXT;1 File ID: (3346,2,0)
Size: 56/0 Owner: [COLORADO]
Created: 08-Jan-2003 12:04:56.85
Revised: 08-Jan-2003 14:24:01.41 (7)
Expires: <None specified>
Backup: <No backup recorded>
Effective: <None specified>
Recording: <None specified>
File organization: Sequential
Shelved state: Shelved
File attributes: Allocation: 0, Extend: 0, Global buffer
count: 0
Version limit: 3
Record format: Variable length, maximum 137 bytes
Record attributes: Carriage return carriage control
RMS attributes: None
Journaling enabled: None
File protection: System:RWED, Owner:RWED, Group:RE, World:R
DECW$SM.LOG;2 File ID: (3270,13,0)
Size: 5/6 Owner: [COLORADO]
Created: 08-Jan-2003 08:16:14.08
Revised: 08-Jan-2003 14:24:01.47 (3)
Expires: <None specified>
Backup: <No backup recorded>
Effective: <None specified>
Recording: <None specified>
File organization: Sequential
Shelved state: Online
File attributes: Allocation: 6, Extend: 0, Global buffer
count: 0
Version limit: 3, Not shelvable
Record format: VFC, 2 byte header
Record attributes: Print file carriage control
RMS attributes: None
Journaling enabled: None
File protection: System:RWED, Owner:RWED, Group:RE, World:
Access Cntrl List: None
If you shelve an empty (unpopulated) indexed file and then do a DIRECTORY/FULL on the file, the file size will look different. In Example 4-1, notice that the file size before shelving is 3/3 and after shelving it is 0/0. When you see this, do not be alarmed; no data has been lost. This is a normal representation of an unpopulated indexed file.
$ CREATE/FDL=HSM$CATALOG.FDL EMPTY_INDEXED.DAT
$ DIRECTORY/FULL EMPTY_INDEXED.DAT
Directory DISK$USER1:[SHELVING_FILES]
Example 4-1 (Cont.) Shelve an empty (unpopulated) indexed file
EMPTY_INDEXED.DAT;1 File ID: (645,26,0)
Size: 3/3 Owner: [SYSTEM]
Created: 08-Jan-2003 14:18:13.79
Revised: 08-Jan-2003 14:18:13.93 (1)
Expires: <None specified>
Backup: <No backup recorded>
Effective: <None specified>
Recording: <None specified>
File organization: Indexed, Prolog: 3, Using 5 keys
Shelved state: Online
File attributes: Allocation: 3, Extend: 0, Maximum bucket size: 2
Global buffer count: 0, Version limit: 3
Contiguous best try
Record format: Variable length, maximum 484 bytes, longest 0 bytes
Record attributes: None
RMS attributes: None
Journaling enabled: None
File protection: System:R, Owner:RWED, Group:, World:
Access Cntrl List: None
Total of 1 file, 3/3 blocks.
$ SHELVE EMPTY_INDEXED.DAT
$ DIRECTORY/FULL EMPTY_INDEXED.DAT
Directory DISK$USER1:[SHELVING_FILES]
EMPTY_INDEXED.DAT;1 File ID: (645,26,0)
Size: 0/0 Owner: [SYSTEM]
Created: 08-Jan-2003 14:18:13.79
Revised: 08-Jan-2003 14:18:13.93 (5)
Expires: <None specified>
Backup: <No backup recorded>
Effective: <None specified>
Recording: <None specified>
File organization: Indexed, further information shelved
Shelved state: Shelved
File attributes: Allocation: 0, Extend: 0, Maximum bucket size: 2
Global buffer count: 0, Version limit: 3
Contiguous best try
Record format: Variable length, maximum 484 bytes, longest 0 bytes
Record attributes: None
RMS attributes: None
Journaling enabled: None
File protection: System:R, Owner:RWED, Group:, World:
Total of 1 file, 0/0 blocks.
When you shelve a populated indexed file and do a DIRECTORY/FULL on it afterwards, the file size will look different. In Example 4-2, notice that the file size went from 84/84 to 84/0: the used size is still displayed, but the allocated blocks are released. This is normal.
$ COPY HSM$CATALOG:HSM$CATALOG.SYS POPULATED_INDEXED.DAT
$ DIRECTORY/FULL POPULATED_INDEXED.DAT
Directory DISK$USER1:[SHELVING_FILES]
POPULATED_INDEXED.DAT;1 File ID: (691,51007,0)
Size: 84/84 Owner: [SYSTEM]
Created: 08-Jan-2003 14:30:47.15
Revised: 08-Jan-2003 14:30:47.31 (1)
Expires: <None specified>
Backup: <No backup recorded>
Effective: <None specified>
Recording: <None specified>
File organization: Indexed, Prolog: 3, Using 5 keys
Shelved state: Online
File attributes: Allocation: 84, Extend: 0, Maximum bucket size: 2
Global buffer count: 0, Version limit: 3
Record format: Variable length, maximum 484 bytes, longest 0 bytes
Record attributes: None
RMS attributes: None
Journaling enabled: None
File protection: System:RWED, Owner:RWED, Group:RE, World:
Access Cntrl List: None
Total of 1 file, 84/84 blocks.
$ SHELVE POPULATED_INDEXED.DAT;1
$ DIRECTORY/FULL POPULATED_INDEXED.DAT
Directory DISK$USER1:[SHELVING_FILES]
POPULATED_INDEXED.DAT;1 File ID: (691,51007,0)
Size: 84/0 Owner: [SYSTEM]
Created: 08-Jan-2003 14:30:47.15
Revised: 08-Jan-2003 14:30:47.31 (5)
Expires: <None specified>
Backup: <No backup recorded>
Effective: <None specified>
Recording: <None specified>
File organization: Indexed, further information shelved
Shelved state: Shelved
File attributes: Allocation: 0, Extend: 0, Maximum bucket size: 2
Global buffer count: 0, Version limit: 3
Record format: Variable length, maximum 484 bytes, longest 0 bytes
Record attributes: None
RMS attributes: None
Journaling enabled: None
File protection: System:RWED, Owner:RWED, Group:RE, World:
Total of 1 file, 84/0 blocks.
The DIRECTORY/SHELVED_STATE command lists the files and a keyword that tells you if the file is online or shelved.
$ DIR/SHELVED
Directory DISK$MYDISK:[IAMUSER]
A1.DAT;1 Shelved
AA.A;1 Shelved
BAD_LOGIN.COM;1 Shelved
BOINK.EXE;1 Shelved
BUILD.DIR;1 Online
CLUSTER_END_031694.COM;1
Shelved
CLUSTER_TEST_030194.COM;2
Shelved
CLUSTER_TEST_030394.COM;1
Shelved
CMA.DIR;1 Online
CODE.DIR;1 Online
COSI.DIR;1 Online
COSI_TEST.DIR;1 Online
...
Z6.DAT;1 Shelved
Z7.DAT;1 Shelved
Z8.DAT;1 Shelved
Z9.DAT;1 Shelved
Total of 153 files.
The DIRECTORY/SIZE command lists the size of the files in the directory. The allocated file size for a shelved file is 0. If you use /SIZE=ALL, OpenVMS displays both the used and allocated blocks for the files (as shown in the example below). If you use /SIZE=ALLOC, OpenVMS displays only the allocated file size for the files.
$ DIR/SIZE=ALL
Directory DISK$MYDISK:[IAMUSER]
A1.DAT;1 1/0
AA.A;1 5/0
BAD_LOGIN.COM;1 6/0
BOINK.EXE;1 10/0
BUILD.DIR;1 4/24
CLUSTER_END_031694.COM;1 2/0
CLUSTER_TEST_030194.COM;2 1/0
CLUSTER_TEST_030394.COM;1 1/0
CMA.DIR;1 1/3
CODE.DIR;1 21/54
COSI.DIR;1 1/54
COSI_TEST.DIR;1 8/9
...
Z6.DAT;1 1/0
Z7.DAT;1 1/0
Z8.DAT;1 1/0
Z9.DAT;1 1/0
Total of 153 files, 42199/42339 blocks.
You use the same DCL commands and application programs to access shelved files as you do for online data files. If you are working on a system that is running HSM, you will notice some differences in file access time. While shelving is occurring, file access may be temporarily slower until the shelving process completes.
When you access a currently shelved file through a read, write, extend, or truncate operation, it may take longer for that file to be accessed than you would expect. You may see a message indicating that unshelving is occurring.
Depending on the storage device used to shelve and unshelve the data, you may experience a large or small increase in access time. The table Typical File Access Time by Storage Device shows how various storage devices relate to file access time in an HSM environment.
For example, one device type in that table shows approximately two times the normal access time for online storage.
These access times depend on the type of storage device used, rather than on the working time of HSM. In other words, if you already use various storage devices to access your data, using HSM will not significantly increase your access time.
Well-defined shelving policies will decrease the number of volume full and user disk quota exceeded conditions on your system. However, if the volume should become full or if you exceed your OpenVMS-defined disk quota, HSM may shelve files according to policies defined by the storage administrator.
When you access a currently shelved file through a read, write, extend, or truncate operation, you might see a message like this:
%HSM-I-UNSHLVPRG, unshelving file $1$DUA0:[MY_DIR]AARDVARKS.TXT
If you attempt to create or extend a file and there is not enough space available to do so, you might see this message:
%HSM-I-SHLVPRG, shelving files to free disk space
You see these messages only if you have enabled /BROADCAST on your terminal.
From your perspective, shelving and unshelving files can be defined to occur automatically or manually. In the case of automatic shelving and unshelving, the storage administrator defines policies that control this behavior and you may not realize shelving and unshelving are occurring. In the case of manual shelving and unshelving, you issue specific commands to shelve and unshelve files.
If the storage administrator defines policies to shelve and unshelve files, you do not need to specifically request files be shelved and unshelved. In this case, the storage administrator decides when data ought to be shelved based on various criteria discussed in Chapter 2.
You may not notice when the files are shelved and may only notice when a file is unshelved if the file access time is significantly longer than expected. You can find out if you have shelved files using the qualifiers discussed above for the DIRECTORY command.
To specifically shelve a file (or files), use the DCL SHELVE command or the DCL PRESHELVE command.
Using the SHELVE command frees disk space by shelving files you do not expect to need soon; it also reduces the chance that files you do intend to use will be shelved automatically.
Using the PRESHELVE command copies the file to shelf storage. The data in the file remains in your work area. Preshelving files allows the system to respond more rapidly when it needs to free up disk space for use.
To stop an explicit shelving operation, type Ctrl/Y. The operation completes on the file currently being shelved, and all files shelved before you entered Ctrl/Y remain shelved. To cancel any remaining pending operations, reenter the command with the /CANCEL qualifier.
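For example, to cancel pending shelving operations (the file specification here is hypothetical; the /CANCEL qualifier is used as described above):
$ SHELVE [MY_DIR]*.DAT /CANCEL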
HSM provides three methods to select files for explicit shelving:
You can include files based on a time span around one of four file dates. The file dates used include the following:
Creation date
Backup date
Modification date
Expiration date
Time values are specified with the /SINCE and /BEFORE qualifiers.
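For example, to shelve files created before a given date (the file specification is hypothetical, and the /CREATED qualifier is assumed to follow the usual DCL convention of selecting which file date the /BEFORE value applies to):
$ SHELVE [MY_DIR]*.DAT /BEFORE=01-JAN-2003 /CREATED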
In addition to specifying file names, file dates, and time spans, you can further limit the files selected for shelving. The additional criterion considers file size and is specified with the /SELECT qualifier. The table File Selection lists three options for applying the /SELECT qualifier.
You have the option of specifying the number of file versions you shelve or preshelve with any manual operation. In most cases, you want to shelve the earlier versions of a file, leaving later versions of the file available for immediate access.
To specify the number of versions to keep in primary storage, use the /KEEP qualifier with the SHELVE or PRESHELVE command.
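For example, to keep the two most recent versions of a file in primary storage and shelve the earlier versions (the file name is hypothetical):
$ SHELVE MYFILE.DAT /KEEP=2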
When you enter the PRESHELVE or SHELVE command, the amount of time taken to complete the operation depends on the following factors:
The number and size of the files to be preshelved or shelved will determine how long the operation takes. More and/or larger files require more time to process than fewer and/or smaller files.
When you implement online cache, the operation requires approximately twice the amount of time taken to perform an OpenVMS COPY operation to copy the files to another disk.
If you use the /NOWAIT qualifier, HSM returns control to the user process in which the PRESHELVE or SHELVE command was entered, and the operation is carried out in the context of the HSM system process.
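For example (the file specification is hypothetical):
$ SHELVE [ARCHIVE...]*.LOG /NOWAIT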
You can cause a shelved file to be returned to primary storage through one of the following methods:
When you access the data of a shelved file through a file fault, you will receive the following message as the file is being routinely unshelved:
$ EDIT AARDVARKS.TXT
%HSM-I-UNSHLVPRG, unshelving file $1$DUA0:[MY_DIR]AARDVARKS.TXT
To cancel an implicit unshelving of a file, enter Ctrl/Y. This immediately stops the operation and leaves the file in the state it was in before you entered the command that caused it to be unshelved.
To stop an explicit unshelving operation, enter Ctrl/Y. The operation completes on the file currently being unshelved, and all files unshelved before you entered Ctrl/Y remain unshelved. To cancel any remaining pending operations, reenter the command with the /CANCEL qualifier.
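For example, to cancel pending unshelving operations (the file specification is hypothetical):
$ UNSHELVE [MY_DIR]*.DAT /CANCEL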
If you have lost data you think was shelved, see your storage administrator. There are several procedures, explained in Finding Lost User Data, that the storage administrator can use to find the lost data.
You can perform all regular DCL command line operations on files residing in a system or VMScluster from a remote node in the same manner as you can for operations on a local node. However, you cannot use the HSM DCL commands (SHELVE, PRESHELVE, and UNSHELVE) on remote files.
Implicit shelving and unshelving operations are possible for remote systems. Unlike local operations, you do not receive the "Unshelving filename" or "Shelving Files To Free Disk Space" status messages for remote operations.
If you cancel an implicit operation on a file from a remote node (only implicit operations are allowed), the operation continues at the HSM system, but the request is canceled without returning the result of the operation to the remote node.
If two users simultaneously enter duplicate commands on the same file, HSM performs the operation for both users as if each had entered the command alone. For example, if both users enter an UNSHELVE command on the same file, HSM unshelves the file once and issues duplicate success messages.
If two users simultaneously enter conflicting commands on the same file, the action taken by HSM depends on the nature of the conflicting commands. A summary of the actions taken by HSM is given in the table How HSM Resolves Conflicting Requests.
In addition to explicitly shelving and unshelving files, you can perform the following file management tasks:
Check with your system manager to determine if the defaults have been changed for your installation.
This chapter provides information on managing and maintaining your systems in an HSM environment. Storage administrators will find this information especially useful. This chapter is divided into two main parts:
When HSM performs shelving operations on online disk volumes, it opens a file on each disk. This file can remain open for extended periods of time. If you need to dismount a disk that supports HSM operations, you may need to disable the HSM operations before the dismount can take place.
For normal online volumes that HSM has accessed, disable all HSM operations on the disk:
$ SMU SET VOLUME device_name /DISABLE=ALL
In addition, if the disk has been defined as an HSM cache device, delete the cache definition or disable the cache:
$ SMU SET CACHE device_name/DELETE
Because the cache disk contains files necessary to support HSM, the disk cannot be dismounted until all the cache files are flushed to the nearline/offline archive classes. Deleting the cache initiates a cache flush, which may take from minutes to hours to complete.
If you need to dismount the disk immediately for any reason (without initiating a cache flush), you should disable the cache instead using the following command:
$ SMU SET CACHE cache_name /DISABLE
Note that if you dismount a cache disk, users will not be able to access shelved file data that remains in the cache.
You should not dismount the disks referenced by the logical names HSM$CATALOG, HSM$MANAGER, or HSM$LOG; doing so seriously disrupts HSM operations. If it is absolutely necessary, follow these procedures:
If you need to dismount a disk containing a shelf catalog, you should move the catalog to another disk using the SET SHELF command prior to dismounting the original disk. For example:
$ SMU SET SHELF shelf_name/CATALOG=new_location
Note that this operation may take tens of minutes to hours to complete. See Section 5.12 for more details on this operation.
Very often, it is necessary to move a directory tree of files from one location to another, most often to a new larger disk. If you use the normal OpenVMS facilities COPY or BACKUP to perform this operation, any shelved files in the source directory will be unshelved prior to copying to the destination. While this is safe, it is usually undesirable because it forces the unshelving of dormant data, which only becomes active due to the COPY or BACKUP being performed.
HSM provides a means to copy shelved files in the shelved state and update the HSM catalog to the new locations. This is achieved by using the SMU COPY command, which accepts a full file specification as input and a disk/directory specification as output; files are not renamed.
If you are "moving" shelved files from one location to another on the same disk, the OpenVMS RENAME command is recommended. SMU COPY should be used to copy shelved files to another disk in the same HSM environment. If you are copying files to be taken to a different system (outside of the current HSM environment), then COPY or BACKUP should be used to unshelve the files prior to the copy.
The SMU COPY command implicitly uses the BACKUP utility which has different semantics to the OpenVMS COPY command, especially when using wildcard directory trees. Therefore, you should review the behavior of BACKUP wildcard operations when using this command. Specifically, the following are examples of correct operation:
$ SMU COPY DISK$USER1:[JONES...]*.*;* DISK$USER15:[JONES...]
$ SMU COPY DISK$PROD1:[ACCOUNTS...]*.*;* DISK$PRODARC:[ARCHIVE.ACCOUNTS...]
$ SMU COPY $1$DKA100:[000000...]*.*;* $15$DKA100:[*...]
The first example moves user JONES' directory tree from one disk to another, preserving all subdirectories from the input disk on the output disk.
The second example moves all files from DISK$PROD1:[ACCOUNTS...] and all subdirectories to a new disk and new subdirectory structure, preserving all subdirectories from DISK$PROD1:[ACCOUNTS] to DISK$PRODARC:[ARCHIVE.ACCOUNTS].
The third example moves all files from $1$DKA100: to $15$DKA100: preserving all subdirectories. Note, however, that the following syntax does not provide the expected results:
$ SMU COPY $1$DKA100:[000000...]*.*;* $15$DKA100:[000000...]
The above example flattens the (sub)directory structure in somewhat unpredictable ways, which is usually not desired. Please avoid this form of the command.
Note also that SMU COPY will not preserve more than seven levels of subdirectory, which is a restriction imposed by BACKUP.
It is often necessary to rename disks on the system, and this has an impact on the ability of HSM to process shelved files. There are two ways to rename disks from an HSM viewpoint:
If you perform the second type of rename you must:
Very often after a disk failure, or other reason, it is desirable to restore files from a backup copy to a different disk than the one from which the backup was originally taken. If the backup copy contains shelved and preshelved files, such a restore will create a discrepancy between the online location of the files, and the location stored in the HSM catalogs.
As such, it is necessary to perform the same recovery operations as for renaming disks, namely:
There are certain critical files that you must not delete or shelve if you are using HSM. These files include:
Considerations regarding the handling of these files are discussed in this section.
The HSM product files listed in Table 5-1 must not be deleted or shelved. During installation, these files are protected from deletion and marked /NOSHELVABLE, but care must be taken to prevent inadvertent deletion or shelving.
hp strongly recommends that the disks on which these files reside be shadowed and backed up on a regular basis (both image and incremental backups).
The HSM shelf catalogs contain the information needed to locate and unshelve all files that have been shelved. The catalog locations are defined in the SMU SHELF database. It is recommended that all catalog names begin with "HSM$" to preclude any possibility that they could be shelved. If a shelf catalog suffers an unrecoverable loss, access to the associated shelved file data can also be lost. For this reason, the shelf catalogs are an essential part of the HSM environment.
You must protect the shelf catalogs from loss or corruption by using one or more of the following procedures:
hp recommends that shelving be disabled on system disks. If shelving is allowed on system disks, critical files may be shelved when a policy is triggered. Serious performance degradation or a deadlock during boot operations may result when these files are accessed. You can disable shelving on system disks with the following command:
SMU> SET VOLUME/DISABLE=ALL SYS$SYSDEVICE:
If shelving is allowed on system disks, care should be taken to avoid shelving system-critical files by using SET FILE/NOSHELVABLE for each such file. The HSM installation process performs this operation on OpenVMS system files but not on layered product files. Certain files on the system disk have the /NOSHELVABLE flag set by default; these flags should not be reset.
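For example, to protect a layered product image from shelving (the file specification here is hypothetical):
$ SET FILE/NOSHELVABLE SYS$COMMON:[SYSEXE]LAYERED_TOOL.EXE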
HSM does not shelve or preshelve the following files:
HSM Version 3.0A supports access to shelved files from client systems where access is through DFS, NFS, and PATHWORKS. At installation, HSM sets up such access by default. However, you may want to review this access and change it as needed, because it can potentially affect all client file accesses.
File faulting (and therefore file events) work as expected, with the exception of Ctrl/Y. Typing Ctrl/Y during a file fault has no effect. The client process waits until the file fault completes and the file fault is not canceled.
In addition, with DFS you can determine the shelved state of a file just as if the disk were local (i.e., DIRECTORY/SHELVED and DIRECTORY/SELECT both work correctly).
The SHELVE and UNSHELVE commands do not work on files on DFS-served devices. The commands do work on the cluster that has local access to the devices, however.
The normal default faulting mechanism (fault on data access) does not work for NFS-served files. The behavior is as if the file were a zero-block sequential file; running "cat" or similar commands, for example, produces no output.
However, at installation time, HSM Version 3.0A enables such access by defining a logical name that causes file faults on an OPEN of a file by the NFS server process. By default, the following logical name is defined:
$ DEFINE/SYSTEM HSM$FAULT_ON_OPEN "NFS$SERVER"
This definition supports access to NFS-served files upon an OPEN of a file. If you do not want NFS access to shelved files, simply de-assign the logical name as follows:
$ DEASSIGN/SYSTEM HSM$FAULT_ON_OPEN
For a permanent change, this command should be placed in:
For NFS-served files, file events (device full and user quota exceeded) occur normally, with the triggering process being the NFS$SERVER process. The quota exceeded event occurs normally because any files extended by the client are charged to the client's proxy account, not to NFS$SERVER.
If the logical name is defined for the NFS$SERVER, the fault occurs on OPEN and appears transparent to the client, with the possible exception of messages such as the following:
% cat /usr/bubble/shelve_test.txt.2
NFS2 server bubble not responding still trying
NFS2 server bubble ok
The first message appears when the open doesn't complete immediately. The second message (ok) occurs when the open completes. The file contents, in the above example, are then displayed.
Typing Ctrl/C during the file fault returns the user to the shell. Since the NFS server does not issue an IO$_CANCEL against the faulting I/O, the file fault is not canceled and the file will be unshelved eventually.
It is not possible to determine whether a given file is shelved from the NFS client. Further, like DFS devices, the SHELVE and UNSHELVE commands are not available to NFS clients.
Normal attempts to access a shelved file from a PATHWORKS client initiate a file fault on the server node. If the file is unshelved quickly enough (e.g. from cache), the user sees only the delay in accessing the file. If the unshelve is not quick enough, an application-defined timeout may occur and a message window pops up indicating the served disk is not responding. The timeout value depends on the application. No retry is attempted. However, this behavior can be modified by changing HSM's behavior to a file open by returning a file access conflict error, upon which most PC applications retry (or allow the user to retry) the operation after a delay. After a few retries, the file fault will succeed and the file can be accessed normally. To enable PATHWORKS access to shelved files using the retry mechanism, HSM defines the following logical name on installation:
$ DEFINE/SYSTEM HSM$FAULT_AFTER_OPEN "PCFS_SERVER, PWRK$LMSRV"
This definition supports access to PATHWORKS files upon an OPEN of a file. If you do not want PATHWORKS to access shelved files via retries, simply de-assign the logical name as follows:
$ DEASSIGN/SYSTEM HSM$FAULT_AFTER_OPEN
For a permanent change, this command should be placed in:
The decision on which access method to use depends upon the typical response time to access shelved files in your environment.
If the logical name is defined, HSM imposes a 3-second delay in responding to the OPEN request, for PATHWORKS accesses only. During this time, the file may be unshelved; otherwise, a "background" unshelve is initiated, which results in a successful open after a delay and retries.
At this point, the file fault on the server node is under way and cannot be canceled.
The effect of the access on the PC environment varies according to the PC operating system. For Windows 3.1 and DOS, the computer waits until the file is unshelved. For Windows NT and Windows 95, only the Windows application itself waits.
File events (device full and user quota exceeded) occur normally with the triggering process being the PATHWORKS server process. The quota exceeded event occurs normally because any files extended by the client are charged to the client's proxy not the PATHWORKS server.
It is not possible from a PATHWORKS client to determine whether a file is shelved. In addition, there is no way to shelve or unshelve files explicitly (via shelve or unshelve commands). There is also no way to cancel a file fault once it has been initiated.
Most PC applications are designed to handle "file sharing" conflicts. Thus, when HSM detects the PATHWORKS server has made an access request, it can initiate unshelving action, but return "file busy". The typical PC application will continue to retry the original open, or prompt the user to retry or cancel. Once the file is unshelved, the next OPEN succeeds without shelving interaction.
As just discussed, HSM supports two logical names that alter the behavior of opening a shelved file for NFS and PATHWORKS access support. These are:
The default behavior is to perform no file fault on open; rather, the file fault occurs upon a read or write to the file.
Each logical name can take a list of process names to alter the behavior of file faults on open. For example:
$ DEFINE/SYSTEM HSM$FAULT_ON_OPEN "NFS$SERVER, USER_SERVER, SMITH"
HSM$FAULT_ON_OPEN can also be assigned the value "HSM$ALL", which causes a file fault on open for all processes. This option is not allowed for HSM$FAULT_AFTER_OPEN.
As these logicals are defined to allow NFS and PATHWORKS access, they are not recommended for use with other processes, since they will cause many more file faults than are actually needed in a normal OpenVMS environment. When used, the logicals must be system-wide, and should be defined identically on all nodes in the VMScluster environment.
These logical name assignments or lack thereof take effect immediately without the need to restart HSM.
This section explains specific considerations about keeping shelved data safe.
Access control lists (ACLs) for shelved files should not be deleted. In particular, the following commands should not be entered for shelved or preshelved files:
$ SET ACL /DELETE=ALL
$ SET FILE /ACL /DELETE=ALL
If the ACLs for shelved files are deleted, data is usually recovered automatically because a full catalog scan is performed; however, this causes a degradation of HSM performance. If the catalog scan fails, the data can usually be recovered manually using the SMU LOCATE command.
You may modify or delete ACE entries not used by HSM, for example, file protection ACEs.
By default, HSM does not shelve files marked contiguous (files that must occupy consecutive blocks of disk space). If such files are shelved, HSM will not unshelve them to noncontiguous disk space. If HSM cannot unshelve a file to contiguous space, it aborts the operation and displays an error message. When this happens, defragment the disk to restore contiguous space and retry the operation.
Placed files are files that are placed on specific blocks of disk space by an application. By default, HSM shelves these files, but does not necessarily unshelve placed files to their original location on the disk volume.
Usually, this change is not critical to the operation of an application. If a problem arises with a placed file after unshelving, you can set the file to NOSHELVABLE, or use the SMU SET VOLUME/NOPLACEMENT command so that placed files on a specified volume are not shelved.
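For example, to exclude placed files from shelving on a volume (the device name here is hypothetical; the qualifier is as described above):
$ SMU SET VOLUME DISK$USER1: /NOPLACEMENT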
This section explains backup strategies you may want to use to protect data shelved through HSM. There are several areas of concern:
As explained in Section 5.5.1, HSM requires certain files to operate. To facilitate HSM recovery in the event of a disaster, hp strongly recommends you back up these critical files using one of the methods described in this section. This is a preventive measure; if you do not use one of these methods to back up the critical files, you may not be able to easily recover shelved data after a disaster.
If you already have a backup strategy designed and implemented on your system for the volume on which the critical HSM product files reside, then these files will be backed up as part of that implementation.
If, however, you do not have an existing strategy defined, you will need to define one. You need to consider the following things:
The OpenVMS BACKUP utility provides two major methods of backing up your files: image backup (also called full backup) and incremental backup. The image backup saves all files on a disk into a save set. The incremental backup saves only those files that have been created or modified since the last image or incremental backup.
If you do not want to use a general backup strategy or product to back up your critical HSM files or if you just want an additional way to ensure they are safe, you can always create manual copies of the files. Just use the OpenVMS COPY command to copy the files to another location, probably on another disk. If you do this, hp recommends you develop an automated procedure to do this on a regular basis.
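As a sketch of such a manual copy, assuming a hypothetical destination disk and directory:
$ COPY HSM$CATALOG:HSM$CATALOG.SYS BACKUP_DISK:[HSM_SAFETY]*.*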
Once data is shelved, there are several mechanisms you can use to ensure there is a backup copy of that data:
If you want to use OpenVMS BACKUP to maintain backup copies of your shelved data, there are some specific issues you need to consider.
HSM can reduce the amount of space needed on your image backups, and the time required to do them. When doing image backups under HSM, only the file headers of shelved files are backed up. The data itself remains shelved.
Files modified since the last backup are backed up as a part of the incremental process unless specifically excluded. If a modified file is shelved before the next incremental backup, it is unshelved for the incremental backup.
To avoid the delay caused by retrieving file contents needlessly during an incremental backup, you should do incremental backups at a shorter interval than specified by HSM policy. This causes the files to be backed up before being shelved, thereby avoiding the unshelving delay.
When planning your image backups, remember that only the file headers are backed up. If you have shelved a file that has been modified or created since the last incremental backup, its data is not backed up. This can be avoided by keeping the files online for at least one incremental backup.
When an otherwise unmodified file is shelved, it is not unshelved and backed up again during the next incremental backup because its revision date is not changed by the shelve operation. This precludes unnecessarily long incremental backup times when infrequently used files are shelved.
Safety of shelved data is ensured by establishing multiple archive classes per shelf. Through the multiple archive classes, duplicate copies of your data are automatically made when files are shelved. hp recommends that one or more of these copies be stored in the same place as your system backups, perhaps in a remote location and preferably in a vault.
The SMU CHECKPOINT command allows you to dismount the current tape used for shelving that is associated with a specific archive class. In this way, copies can be removed from the system and separately stored for disaster recovery purposes. The next shelve operation for the archive class will be applied to the next tape volume for the archive class.
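For example (archive_class is a placeholder; see the SMU CHECKPOINT command reference for the exact argument form):
$ SMU CHECKPOINT archive_class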
Because an online cache is part of online storage, it is backed up as part of your defined backup strategy. If, however, you use the online cache as a staging area to a shelf, there are some additional considerations for ensuring the information in the cache is backed up.
When you "flush" the cache, data that was stored in the cache is copied to the specified nearline or offline device. Once the copy is complete, the data in the cache is deleted. As a result, you need to ensure that the data is backed up while in the cache or is flushed to multiple archive classes for shelf storage.
There are two particular areas in which HSM can be used to recover lost user data:
In each of these instances, if you have defined multiple archive classes for HSM, you should be able to retrieve the data automatically from one of the defined archive classes. In other instances, such as when the online file has been deleted, you may need to use SMU LOCATE to find the shelved file data.
The SMU LOCATE command retrieves full information about a file's data locations from the shelving catalog. SMU LOCATE reads the HSM catalog(s) directly to find a shelved file's data locations.
You should note that SMU LOCATE does not work quite the same way as a typical OpenVMS utility when it comes to look-up and wildcard processing. The file-descriptor you supply as input (including any wildcards) applies to the file specification as stored in the HSM catalog at the time of shelving, not to the file's current name or location.
When you retrieve information using SMU LOCATE, several instances or groups of stored locations may be displayed. These reflect the locations of the file when it was shelved at various stages of its life. You should carefully review the shelving time and revision time of the file to determine which, if any, is the appropriate copy to restore.
Although HSM tries to restore data from all known locations automatically, even when some of the file's metadata is missing, there may be occasions when this fails. In these situations, you should use SMU LOCATE to locate the file's data, then attempt to restore the data through BACKUP (from tape) or COPY (from cache).
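For example (the file specification is hypothetical):
$ SMU LOCATE DISK$USER1:[SHELVING_FILES]LOST_FILE.DAT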
If the user is certain file data was shelved, but is unable to simply retrieve that data through either an implicit or explicit unshelving operation, use the following procedure to find and retrieve the missing data:
HSM provides tools that allow you to prevent loss of HSM data. This section describes various ways you can use these tools.
If you have a site disaster in which your onsite data is unavailable, you may be able to recover that data from BACKUP files and tapes dismounted using the SMU CHECKPOINT command.
Once onsite, the following sequence is recommended:
If you lose any of the following HSM data, you must recover it before HSM will function correctly:
If any or all of the critical HSM product files are deleted and you have backed up this information through a mechanism such as the OpenVMS BACKUP utility, you should restore them from the latest backup sets (including incremental backups) as soon as possible. Then, you should restart HSM.
Although you could reinstall the HSM database from your installation kit, doing so would lose all the current information in your HSM database. However, because this is policy data, you can re-create it easily.
The HSM catalogs are essential to recovering shelved data. If you do not use BACKUP to create a backup of the catalogs, you can back up the catalogs by copying the catalog files and storing the copies in a safe location. Then, once you have restored any other pieces of the HSM system, you can copy the catalog files back into the proper locations for HSM to use them. These locations are defined by the logical name HSM$CATALOG for the default catalog, and by the locations specified in the SMU SHELF database for other shelf catalogs.
If you inadvertently shelved your boot-up files, you can only recover them if you have an alternate system disk you can use to boot the system and then unshelve the files.
The most efficient way to recover an archive class is to use the SMU REPACK command, specifying /FROM_ARCHIVE and one or more volumes with the /VOLUME qualifier. This command retrieves shelved data from the /FROM_ARCHIVE archive class and copies it to the archive class containing the lost shelf media. See Section 5.15 for more details.
An alternative but much slower way to reclaim lost shelf media is to reshelve files. Use the following command:
This variation of the SHELVE command shelves only data whose status is SHELVED, not ONLINE. It transparently unshelves the data from its current archive class and reshelves it to any new archive classes. Data in an archive class is also reshelved if the online ACL is deleted.
This section explains how to evaluate your policy definitions with respect to the HSM policy model. Understanding this model will help you define the most effective policies for your environment.
This section presents a model, and related concepts, that explains how shelving works. Understanding the model will help you define and manage an effective shelving policy.
By implementing HSM, you can maintain a reasonable amount of available online storage capacity, and reduce the cost of storing large amounts of data.
Your particular disk configurations and their usage dictate specific values to consider when you create the various definitions. The policies you implement with HSM determine how you meet your storage management needs.
To apply these concepts, first think of each of your online disk volumes in terms of its total online storage capacity. Then, consider how much space should always remain available.
The central element of policy is the latitude of available online storage capacity you maintain.
Figure 5-1 shows the HSM policy model. Table 5-2 provides definitions for each of the concepts shown in the figure.
The policies you implement by creating and modifying the various HSM definitions govern the shelving process. This example of reactive policy shows you how the HSM system reacts to a high water mark event, returning the available capacity to the low water mark.
Figure 5-2 shows the policy model in the stages of the shelving process. Table 5-3 lists the stages of the shelving process as they occur in response to reactive policy.
The model described in Section 5.11.1 has practical application. This section demonstrates how the model can be applied to help monitor the effectiveness of policy in various situations.
One of the benefits of HSM is the ability to implement a preventive policy that helps avoid volume occupancy full events. Figure 5-3 shows the policy model as it applies during a volume occupancy full event.
The goal is an important part of policy: it is the target of the shelving process, controlled through the file selection criteria in the policy definition. Figure 5-4 shows the policy model when a shelving policy fails to reach its defined goal.
Your reactive policy should be planned and implemented as a contingency. As such, shelving in response to reactive policy should occur infrequently. The policy model in Figure 5-5 shows a policy that triggers frequent reactive shelving requests.
With HSM, you design and implement policy that allows you to maintain available online capacity and retain data on less expensive media. The trade-off with implementing HSM is that when shelved files are needed, applications and users trying to access them must wait until the files are restored. Figure 5-6 shows the policy model in a situation when available storage is maintained at the expense of application and user performance.
HSM provides the means to determine what a policy execution would do before the policy is run. This process is called ranking a policy on a volume, and is initiated by the SMU RANK command.
This feature helps you determine the effectiveness of your policies by letting you see exactly what files would be shelved if the policy were run. The files are listed in the order that they would be shelved. Ranking applies only to policies that use the automatic algorithms STWS and LRU. HSM cannot rank policies based on user script files.
hp recommends that you rank all your policies before putting them into a production environment.
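The selection order that RANK reports for the LRU algorithm can be pictured with a small sketch. (Python is used here purely for illustration; the file names, dates, and sizes are hypothetical, and the actual HSM algorithm may differ in detail.)

```python
# Illustrative sketch (not HSM code): Least Recently Used ranking
# selects the oldest-accessed files first, accumulating their sizes
# until the reclaim goal is met.
def rank_lru(files, goal_blocks):
    """files: list of (name, last_access, size_blocks).
    Returns the ranked names and the blocks they would recover."""
    ranked, reclaimed = [], 0
    for name, _, size in sorted(files, key=lambda f: f[1]):
        if reclaimed >= goal_blocks:
            break
        ranked.append(name)
        reclaimed += size
    return ranked, reclaimed

# Hypothetical candidates on a volume that needs 900 blocks reclaimed
files = [("A.DAT", 10, 500), ("B.DAT", 3, 300), ("C.DAT", 7, 700)]
print(rank_lru(files, 900))  # (['B.DAT', 'C.DAT'], 1000)
```

As with HSM itself, whole files are selected, so the recovered total may slightly exceed the goal.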
The following example shows how to rank a policy:
$ SMU RANK DISK$USER1: HSM$DEFAULT_OCCUPANCY
Policy HSM$DEFAULT_OCCUPANCY is enabled for shelving
Policy History:
Created: 28-OCT-1999 10:36:36.45
Revised: 28-OCT-1999 11:26:21.09
Selection Criteria:
State: Enabled
Action: Shelving
File Event: Expiration date
Elapsed time: 180 00:00:00
Before time: <none>
Since time: <none>
Low Water Mark: 80 %
Primary Policy: Space Time Working Set(STWS)
Secondary Policy: Least Recently Used(LRU)
Verification:
Mail notification: <none>
Output file: <none>
Volume capacity: 2271640 blocks
Current utilization: 1818245 blocks
Volume lowwater mark: 1817312 blocks
Blocks to be reclaimed: 933
Executing primary policy definition
DISK$USER1:[SMITH]WATCH_BATCH.COM;5
date: 28-OCT-1999 size: 462
DISK$USER1:[SMITH]LOCAL_DB.COM;1
date: 28-OCT-1999 size: 279
DISK$USER1:[SMITH]PERSONAL.LGP;1
DISK$USER1:[SMITH]REMOTE.MEM;1
date: 28-OCT-1999 size: 57
Total of 4 files ranked which will recover 951 blocks
Volume lowwater mark can be reached
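The figures in this display are internally consistent and can be checked by hand. The following sketch (Python, used here purely for illustration; it is not part of HSM) reproduces the arithmetic using the numbers from the example above:

```python
# Reproduce the arithmetic from the SMU RANK display above.
# All numbers come from the example output.

def lowwater_blocks(capacity_blocks, lowwater_pct):
    # Low water mark expressed as blocks of volume capacity.
    return capacity_blocks * lowwater_pct // 100

def blocks_to_reclaim(current_utilization, lowwater):
    # Blocks that shelving must recover to reach the low water mark.
    return max(current_utilization - lowwater, 0)

lw = lowwater_blocks(2271640, 80)      # "Volume capacity", 80% mark
print(lw)                              # 1817312, as in the display
print(blocks_to_reclaim(1818245, lw))  # 933, as in the display
```

The four ranked files recover 951 blocks, slightly more than the 933-block goal, because whole files are shelved until the goal is met.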
When you install HSM for the first time, all HSM shelving data is placed in the default catalog, located in the directory defined by the logical name HSM$CATALOG.
As the amount of shelving information increases over time, hp recommends that you define multiple shelves, distribute your disk volumes amongst these shelves, and define a separate catalog for each shelf. hp recommends that a shelf be associated with between 10 and 50 volumes each, depending on the size of the volumes and the amount of shelving activity on those volumes.
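When planning the distribution, a quick sizing calculation can help. The sketch below (Python, illustration only; the volume counts are hypothetical) divides a volume count across shelves at a chosen volumes-per-shelf figure within the recommended 10 to 50 range:

```python
import math

def shelves_needed(n_volumes, volumes_per_shelf):
    # hp suggests 10 to 50 volumes per shelf, one catalog per shelf.
    return math.ceil(n_volumes / volumes_per_shelf)

print(shelves_needed(42, 25))  # 2 shelves (and 2 catalogs) for 42 volumes
```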
After analyzing your storage subsystem and coming up with a distribution plan for volumes and shelves, you can use commands such as the following to implement the distribution:
$!
$! Define new shelves with separate catalogs
$!
$ SMU SET SHELF PRODUCTION_SHELF1 -
_$ /CATALOG=DISK$SYSTEM2:[HSM.CATALOG]HSM$PRODUCTION_SHELF1_CAT.SYS
$ SMU SET SHELF PRODUCTION_SHELF2 -
_$ /CATALOG=DISK$SYSTEM2:[HSM.CATALOG]HSM$PRODUCTION_SHELF2_CAT.SYS
$!
$! Re-associate volumes to the new shelves
$!
$ SMU SET VOLUME DISK$USER1:/SHELF=PRODUCTION_SHELF1
$ SMU SET VOLUME DISK$USER2:/SHELF=PRODUCTION_SHELF1
$ . . . . . . .
$ . . . . . . .
$ SMU SET VOLUME DISK$USER20:/SHELF=PRODUCTION_SHELF2
$ SMU SET VOLUME DISK$USER21:/SHELF=PRODUCTION_SHELF2
$
It is recommended that the catalog file names be preceded by "HSM$" to eliminate any possibility that they might be shelved; shelving a catalog file is not supported and can lead to serious problems.
These are the only commands you need to enter to distribute your volumes among shelves, and to populate the catalogs.
When you enter these commands, HSM begins a process called split-merge, which moves shelving data from the old catalog to the new catalog for a volume. A split-merge operation can be initiated by either command.
Since potentially thousands of catalog entries are affected by a split-merge, the process can take several minutes or even hours to complete. During this time, the associated volume and/or shelf is associated with two catalogs: the old and the new. These can be seen by issuing an SMU SHOW VOLUME or SMU SHOW SHELF command during a split-merge; these commands produce special displays, as shown in the examples below:
$!
$! SMU displays when changing a shelf for a volume:
$!
$ SMU SHOW VOLUME _$15$DKA300:/FULL
Volume _$15$DKA300: on Shelf HSM$DEFAULT_SHELF, Shelving is enabled,
Unshelving is enabled, Highwater mark detection is disabled, Occupancy full detection is disabled, Disk quota exceeded detection is disabled
Created: 8-FEB-1998 15:57:54.32
Revised: 8-FEB-1998 15:58:28.44
Ineligible files: <contiguous>
Highwater mark: 90%
OCCUPANCY Policy: HSM$DEFAULT_OCCUPANCY
QUOTA Policy: HSM$DEFAULT_QUOTA
Split/Merge state: COPY
Alternate shelf: PRODUCTION_SHELF1
$!
$! SMU displays when changing a catalog for a shelf:
$!
$ SMU SHOW SHELF
Shelf TEST_SHELF1 is enabled for Shelving and Unshelving
Catalog File: DISK$USER1:[HSM.CATALOG]HSM$CAT1.SYS
Shelf History:
Created: 1-DEC-1998 11:44:46.26
Revised: 28-DEC-1998 15:22:00.91
Backup Verification: Off
Save Time: <none>
Updates Saved: All
Archive Classes:
Archive list: HSM$ARCHIVE01 id: 1
Restore list: HSM$ARCHIVE01 id: 1
Split/Merge state: COPY
Alternate Catalog: DISK$USER1:[HSM.CATALOG]HSM$CAT2.SYS
You may notice that the old and new catalogs change positions in these displays during the split-merge. While a split-merge is in progress, some HSM operations proceed normally, some are suspended, and others are rejected. Suspending an operation means that it is queued until the split-merge completes; rejection means that the command must be re-entered at a later time. The following table indicates the disposition of requests during a split-merge:
HSM initiates split-merge operations in the background; the SMU command that initiated the split merge does not wait for the operation to complete. As such, it is possible to request an incompatible split-merge operation, for example:
$ SMU SET VOLUME DISK$USER1:/SHELF=SHELF1
$ SMU SET VOLUME DISK$USER1:/SHELF=SHELF2
In this example, the second command is rejected while the split-merge for the first command is processed.
If an error occurs during a background split-merge operation, the final completion state of the operation will either revert to the old definition, or the new definition, depending on the phase of split-merge that failed. There are essentially two phases of split-merge:
If an error occurs during the copy phase, the SMU database is reset to the old catalog/shelf. If an error occurs during the delete phase, the new catalog/shelf definition stays in effect.
You may wish to examine the database later with SMU to determine if the operation succeeded and the definitions are as you expect. Also, the shelf handler audit and error logs contain entries for all split-merge operations for further information.
Shelf media used by HSM contain shelved file data from many sources; some of this data remains valid for a long time, but some becomes obsolete. Unlike BACKUP tapes, HSM media cannot simply be recycled on a regular schedule, because they contain the only copies of the shelved file data. Without some analysis of HSM media, the media would have to be retained indefinitely. Over time, as the majority of the data becomes obsolete, shelf media would hold a very low percentage of valid data, wasting media.
HSM provides the SMU REPACK function to perform an analysis of valid and obsolete data on shelf media, and copy the valid data to other media, allowing the old media to be freed up. In addition, REPACK purges the catalog entries associated with the obsolete data.
Shelf file data can become obsolete in two ways:
HSM provides the system administrator with a way to control the obsolescence of files for use in repacking. It may not be appropriate for a file to become obsolete as soon as it is deleted or updated, because the file may need to be recovered in its old state at a later date. As such, two new options are provided in the SMU SHELF definition as follows:
Both options are completely flexible, ranging from a zero delete save time and no saved updates, to an indefinite delay and an unlimited number of updates, and anything in between. The options apply to all preshelved and shelved files on all volumes in the shelf.
Repacking is normally applied to all volumes in an archive class. However, the system administrator can restrict the volumes being repacked by specifying them in a /VOLUME qualifier. If any of the specified volumes are part of a volume set, all volumes in the volume set will be repacked.
Finally, repacking a particular volume or volume set may or may not be worthwhile, depending on the percentage of valid data on the volume. For example, if a volume contains 90% valid data, the 10% of space reclaimed by repacking may not justify the effort, at least not yet. As such, the system administrator can apply a threshold percentage of obsolete data that determines whether to repack a particular volume or volume set. The default threshold value is 50%.
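The threshold test can be sketched as follows (Python, illustration only; whether HSM repacks a volume that sits exactly at the threshold is an assumption here, not documented behavior):

```python
def should_repack(valid_pct, threshold_pct=50):
    # Repack when the obsolete percentage reaches the threshold.
    # Treating the exact-boundary case as repackable is an assumption.
    obsolete_pct = 100 - valid_pct
    return obsolete_pct >= threshold_pct

print(should_repack(90))  # False: only 10% obsolete, not worth it yet
print(should_repack(40))  # True: 60% obsolete exceeds the 50% default
```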
A threshold value should only be applied when repacking to the same archive class. When repacking to create a new archive class or replacing a shelved volume, all valid files should be repacked by specifying /NOTHRESHOLD.
Repacking requires two compatible tape devices in order to proceed. For this reason, HSM allows only ONE repack operation at a time. In addition, a REPACK request is suspended while a catalog split-merge operation is in progress; the two operations cannot safely proceed simultaneously.
The following example shows a normal repack operation:
$ SMU REPACK 1
This command repacks archive class 1 to new media also in archive class 1. The default threshold value of 50% is applied. When the operation is complete, the old media for archive class 1 are deallocated.
Repack requires a disk staging area of at least 100,000 blocks in order to produce optimal multi-file savesets on output. For example, files shelved with HSM V1.x into single-file savesets are consolidated into more efficient multi-file savesets on output. The staging area used is referenced by the system-wide logical name HSM$REPACK, which should be assigned to a suitably sized disk/directory combination. If this logical name is not defined, the logical HSM$MANAGER is used instead. The staging area is cleaned up after a repack operation.
The repack operation, especially on tape volumes created under HSM V1.x, is likely to take several days to complete. While repacking is being performed, certain tape-oriented operations are suspended and queued to avoid conflicts. However, when HSM detects that a conflicting tape operation is pending, the repack operation is suspended temporarily, usually within 10 minutes, to allow the other operations to proceed. Therefore, despite the duration of the repack operation, other HSM operations will only suffer minor delays, and the long duration of repack should not be a concern.
HSM provides a mechanism for replacing and/or creating new archive classes, and populating associated shelf media with valid data. You may wish to create a new archive class to provide additional data safety. More likely though, you may wish to create a new archive class to upgrade your tape hardware to new technology or move your shelved data to a new tape library.
Although HSM provides the reshelving function to accomplish this, reshelving is slow and involves intermediate disk transfers. A much more efficient way is to use the REPACK function and specify a NEW_ARCHIVE qualifier. When performing a repack for this purpose, you must not specify any volumes in the volume list or any threshold value; it is important that all valid files are copied to the new archive class. However, the purging of obsolete files is still performed when creating a new archive class using repack.
The following example creates a new archive class:
$ SMU REPACK 1/TO_ARCHIVE=3/NOTHRESHOLD
This command creates a new archive class 3, using all valid data from archive class 1. Archive class 3 may be of a different media type than archive class 1.
If you lose or damage a shelf tape, you will not be able to recover the data on that tape, and you risk losing the level of data safety that HSM provides. As soon as you discover that a shelf tape has been lost or damaged, you should replace it by using REPACK to copy the contents of the tape, from another archive class, to new media.
When you discover a lost or damaged tape, determine which archive class it belonged to. Then issue a REPACK command specifying an alternate archive class that is or was defined for the shelf. When performing this operation, you should specify the volume to be replaced but no threshold for the copy. However, as with all repack operations, obsolete files are not copied.
The following example replaces a lost or damaged shelf volume:
$ SMU REPACK 1/VOLUME=ACG001/FROM_ARCHIVE=2/NOTHRESHOLD
This example replaces shelf volume ACG001 from archive class 1, using media from archive class 2. It may take several volumes from archive class 2 to replace the data on the volume. Also, the replacement volume will have a different label from ACG001, but it contains the valid replacement data for ACG001. If the archive class is not checkpointed after the operation, the replacement volume becomes the current shelving volume for the archive class and will be filled up.
This function cannot be performed if only one archive class is defined for the shelf; for this very reason, defining a single archive class is not recommended.
If you have a site disaster, and most or all of the media for an archive class are damaged, then you should create a new archive class as described in the previous section, rather than recover each volume individually.
The ANALYZE/REPAIR utility is used to align the contents of the HSM catalog(s) with a disk that has been backed up and later restored, or has been renamed. It is also useful to run this utility if you suspect that any other discrepancies between the online disk state and the HSM catalog(s) may have occurred.
SMU ANALYZE will scan all files on a disk looking for shelved and preshelved files. When a file is found that is of interest, its HSM metadata (file header and ACE information) is compared against entries in the HSM catalog(s) and any discrepancies are reported. If the /REPAIR qualifier is used, the discrepancy can be repaired. If /CONFIRM is not used with /REPAIR, then the default repair action will be applied.
$ SMU ANALYZE DKB500
%SMU-I-PROCESSING, processing input device DKB500
%SMU-I- scanning for shelved files on disk volume _$1$DKB500:
File (14,1,0) "$1$DKB500:[ANALYZE_TEST]STATUS.RPT;1"
Stored in catalog as:
FID (13,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]STATUS.RPT;1"
Invalid HSM metadata found for
File (15,1,0) "$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
Stored in catalog as:
FID (12,1,0) "$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
Invalid HSM metadata found for
File (16,1,0) "$1$DKB500:[ANALYZE_TEST]Q4_RESULTS.TXT;1"
No catalog entry found - file not repairable
Invalid HSM metadata found for
File (17,1,0) "$1$DKB500:[ANALYZE_TEST]ANALYSIS.DAT;1"
File (18,1,0) "$1$DKB500:[ANALYZE_TEST]RECIPE.MEM;1"
Revision date mismatch -
Current: 9-JUL-1999 16:45:39.37
Catalog: 10-JUL-1999 15:54:21.74
File (19,1,0) "$1$DKB500:[ANALYZE_TEST]MAIL.SAV;1"
Stored in catalog as:
FID (19,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]MAIL.SAV;1"
%SMU- completed scan for shelved files on disk volume _
%SMU-I-ERRORS, 6 error(s) detected, 0 error(s) repaired
$ SMU ANALYZE/REPAIR DKB500
%SMU-I-PROCESSING, processing input device DKB500
%SMU-I-scanning for shelved files on disk volume _$1$DKB500:
File (14,1,0) "$1$DKB500:[ANALYZE_TEST]STATUS.RPT;1"
Stored in catalog as:
FID (13,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]STATUS.RPT;1"
File entry repaired - 1 repairs made.
Invalid HSM metadata found for
File (15,1,0) "$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
Stored in catalog as:
FID (12,1,0) "$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
File entry not repaired.
Invalid HSM metadata found for
File (16,1,0) "$1$DKB500:[ANALYZE_TEST]Q4_RESULTS.TXT;1"
No catalog entry found - file not repairable
Invalid HSM metadata found for
File (17,1,0) "$1$DKB500:[ANALYZE_TEST]ANALYSIS.DAT;1"
File entry repaired - 1 repairs made.
File (18,1,0) "$1$DKB500:[ANALYZE_TEST]RECIPE.MEM;1"
Revision date mismatch -
Current: 9-JUL-1999 16:45:39.37
Catalog: 10-JUL-1999 15:54:21.74
File entry repaired - 1 repairs made.
File (19,1,0) "$1$DKB500:[ANALYZE_TEST]MAIL.SAV;1"
Stored in catalog as:
FID (19,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]MAIL.SAV;1"
File entry repaired - 1 repairs made.
%SMU- completed scan for shelved files on disk volume _
%SMU-I-ERRORS, 6 error(s) detected, 4 error(s) repaired
$ SMU ANALYZE/REPAIR/CONFIRM DKB500
%SMU-I-PROCESSING, processing input device DKB500
%SMU-I- scanning for shelved files on disk volume _$1$DKB500:
File (14,1,0) "$1$DKB500:[ANALYZE_TEST]STATUS.RPT;1" Stored in catalog as:
FID (13,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]STATUS.RPT;1"
** Repair catalog entry to reset volume, FID to _ (14,1,0)? [Y]: N
File entry not repaired.
Invalid HSM metadata found for
File (15,1,0) "$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
Stored in catalog as:
FID (12,1,0) "$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
** Repair catalog entry to reset FID to (15,1,0) ?
** WARNING: Repair may affect the wrong file - with caution [N]: Y
File entry repaired - 1 repairs made.
Invalid HSM metadata found for
File (16,1,0) "$1$DKB500:[ANALYZE_TEST]Q4_RESULTS.TXT;1"
No catalog entry found - file not repairable
Invalid HSM metadata found for
File (17,1,0) "$1$DKB500:[ANALYZE_TEST]ANALYSIS.DAT;1"
** Repair by adding HSM metadata for file (17,1,0) ? [Y]:
File entry repaired - 1 repairs made.
File (18,1,0) "$1$DKB500:[ANALYZE_TEST]RECIPE.MEM;1"
Revision date mismatch -
Current: 9-JUL-1999 18:29:09.96
Catalog: 10-JUL-1999 17:37:52.33
** Repair by deleting HSM metadata for file (18,1,0) ? [Y]: Y
File entry repaired - 1 repairs made.
File (19,1,0) "$1$DKB500:[ANALYZE_TEST]MAIL.SAV;1"
Stored in catalog as:
FID (19,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]MAIL.SAV;1"
** Repair catalog entry to reset volume to _ ? [Y]: Y
File entry repaired - 1 repairs made.
%SMU- completed scan for shelved files on disk volume _
%SMU-I-ERRORS, 6 error(s) detected, 4 error(s) repaired
$ SMU ANALYZE/REPAIR/CONFIRM/OUTPUT=ANALYZE.OUT DKB500
File (14,1,0) "$1$DKB500:[ANALYZE_TEST]STATUS.RPT;1"
Stored in catalog as:
FID (13,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]STATUS.RPT;1"
** Repair catalog entry to reset volume, FID to _ (14,1,0) ? [Y]: Y
File entry repaired - 1 repairs made.
Invalid HSM metadata found for File (15,1,0) "$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
Stored in catalog as:
FID (12,1,0) "$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
** Repair catalog entry to reset FID to (15,1,0) ?
** WARNING: Repair may affect the wrong file - with caution [N]: Y
File entry repaired - 1 repairs made.
Invalid HSM metadata found for
File (16,1,0) "$1$DKB500:[ANALYZE_TEST]Q4_RESULTS.TXT;1"
No catalog entry found - file not repairable
Invalid HSM metadata found for
File (17,1,0) "$1$DKB500:[ANALYZE_TEST]ANALYSIS.DAT;1"
** Repair by adding HSM metadata for file (17,1,0) ? [Y]:
File entry repaired - 1 repairs made.
File (18,1,0) "$1$DKB500:[ANALYZE_TEST]RECIPE.MEM;1"
Revision date mismatch - Current:9-JUL-1999 18:38:58.06
Catalog: 10-JUL-1999 17:47:40.42
** Repair by deleting HSM metadata for file (18,1,0) ? [Y]: Y
File entry repaired - 1 repairs made.
File (19,1,0) "$1$DKB500:[ANALYZE_TEST]MAIL.SAV;1"
Stored in catalog as:
FID (19,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]MAIL.SAV;1"
** Repair catalog entry to reset volume to _ ? [Y]: Y
File entry repaired - 1 repairs made.
$
$ TYPE ANALYZE.OUT
%SMU-I-PROCESSING, processing input device DKB500
%SMU-I- scanning for shelved files on disk volume _$1$DKB500:
File (14,1,0) "$1$DKB500:[ANALYZE_TEST]STATUS.RPT;1"
Stored in catalog as:
FID (13,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]STATUS.RPT;1"
File entry repaired - 1 repairs made.
Invalid HSM metadata found for File (15,1,0)
"$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
Stored in catalog as:
FID (12,1,0) "$1$DKB500:[ANALYZE_TEST]LOGIN.COM;1"
File entry repaired - 1 repairs made.
Invalid HSM metadata found for
File (16,1,0) "$1$DKB500:[ANALYZE_TEST]Q4_RESULTS.TXT;1"
No catalog entry found - file not repairable
Invalid HSM metadata found for
File (17,1,0) "$1$DKB500:[ANALYZE_TEST]ANALYSIS.DAT;1"
File entry repaired - 1 repairs made.
File (18,1,0) "$1$DKB500:[ANALYZE_TEST]RECIPE.MEM;1"
Revision date mismatch - Current: 9-JUL-1999 18:38:58.06
Catalog: 10-JUL-1999 17:47:40.42
File entry repaired - 1 repairs made.
File (19,1,0) "$1$DKB500:[ANALYZE_TEST]MAIL.SAV;1"
Stored in catalog as:
FID (19,1,0) "BOGUS$DEVICE1:[ANALYZE_TEST]MAIL.SAV;1"
File entry repaired - 1 repairs made.
%SMU- completed scan for shelved files on disk volume -
%SMU-I-ERRORS, 6 error(s) detected, 5 error(s) repaired
HSM offers a paradigm to consolidate HSM shelf data with the data required for backup/restore purposes. This paradigm is called Consolidated Backup With HSM, and is designed for very large sites where the number of tapes is problematic, or for sites that are reaching the limit of their backup window. This paradigm is also known as backup via shelving.
We refer to this as a paradigm, rather than an HSM function, because no special HSM functions are required; the paradigm is implemented using normal HSM and BACKUP (or SLS) commands. The paradigm consists of the following elements, which are described in subsequent sections:
To implement this paradigm, HSM provides a special version of BACKUP, called HSM$BACKUP, with this release. This version backs up only the headers of preshelved and shelved files, storing them in the saveset in the shelved state. It is expected that this functionality will be incorporated into a future version of OpenVMS BACKUP.
If you are using SLS as your regular BACKUP product, you need to configure SLS to use the new HSM$BACKUP image for your regular backups. This feature is supported only with SLS V2.8 or later.
The steps you need to take are:
You set up SLS to use HSM$BACKUP by defining the following logical name:
$ DEFINE/TABLE=LNM$SLS$VALUES SLS$HSM_BACKUP 1
Depending on the type of backups or restores you are performing, you may want to include the new /[NO]SHELVED and /[NO]PRESHELVED qualifiers (as described in Section 5.17.3) in the following cases:
This paradigm is not yet supported for Archive/Backup System (ABS).
The key to this paradigm is preshelving most files on the system. From HSM V2.0, preshelved files have a unique state and are flagged as preshelved in the file header. Because the data of a preshelved file remains online, the file can be modified at any time. If a preshelved file is modified, extended, or truncated, a new HSM function changes the file from preshelved to unshelved. Also, in V2.0 and later, the eligibility criteria for preshelving files are the same as for shelving, and the following types of files cannot be preshelved:
However, all other files (except those on system disks) can and should be preshelved to utilize this paradigm. This can be done in two ways:
$!
$! This sets up a preshelve policy to regularly execute on all
$! affected volumes on a regular basis:
$!
$ SMU SET POLICY policy_name /PRESHELVE /NOELAPSED /LOWWATER_MARK=0
$ SMU SET SCHEDULE volume_list policy_name/AFTER=time
$!
$! This manually preshelves all files on a volume; this command may
$! be placed in an HSM policy script file.
$!
$ PRESHELVE volume:[000000...]*.*;*
HSM will not preshelve files that are already preshelved or shelved, so these commands affect only files that have been created or modified since the last preshelve operation. Since HSM does not preshelve open files, you can perform the preshelving during the day.
When starting this paradigm for the first time, however, thousands of files per volume will be preshelved, so it is recommended that you process only one volume at a time during this startup phase.
While using this paradigm, it is still necessary to perform regular (for example, nightly) backups using your normal backup regimen. These backups are required to restore a disk's index file and directory structure following a disk failure.
For this paradigm to work, you must use "HSM$BACKUP" as provided with the HSM kit as your backup engine. This backup engine can be supported by SLS. The paradigm substantially reduces the backup window because only the 512-byte header for each preshelved file is backed up: the data is stored in the HSM subsystem.
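The reduction is easy to estimate, because each preshelved or shelved file contributes one 512-byte header to the saveset regardless of its data size. A rough sizing sketch (Python, illustration only; the file count is hypothetical):

```python
def headers_only_backup_mb(n_files, header_bytes=512):
    # Each preshelved/shelved file contributes one 512-byte file header.
    return n_files * header_bytes / (1024 * 1024)

# 200,000 preshelved files back up as under 100 MB of headers,
# however many gigabytes of file data they reference.
print(round(headers_only_backup_mb(200000), 1))  # 97.7
```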
The recommended paradigm for regular backups is:
Two new qualifiers are provided to HSM$BACKUP to implement this paradigm:
The following examples contain the recommended options for performing image and incremental backups using this paradigm:
$!
$! Image BACKUP
$!
$ HSM$BACKUP/IMAGE/IGNORE=INTERLOCK/RECORD/LOG/NOPRESHELVED -
_$ volume: device:saveset/SAVESET
$!
$! Incremental BACKUP
$!
$ HSM$BACKUP/RECORD/SINCE=BACKUP/NOPRESHELVED/NOSHELVED/LOG/IGNORE=INTERLOCK -
_$ volume: device:saveset/SAVESET
$!
Each of these commands backs up only the headers of shelved and preshelved files; the headers are copied to the backup saveset in the shelved state. The online state of the files remains unchanged.
If you need to restore a disk volume because it has become damaged, follow the normal restoration process:
After applying the image and incremental backups, you have restored all the metadata and directory structure for the volume, and also have restored most of the files to the shelved state (that is, all files that were preshelved and shelved during the backup are restored to the shelved state). You can use either HSM$BACKUP or normal OpenVMS Backup for the restore process.
Before making the volume available to users, it is necessary to repair the HSM catalog, since the file identifiers (FIDs) of shelved and preshelved files may have changed. You can repair them with the following command, which will take several minutes to run:
$ SMU ANALYZE/REPAIR volume:
Note that this operation completes successfully whether you restore the files to the same volume (device name) or to a different device.
Once this command completes, the disk volume is ready for use. Note, however, that most files are still shelved. If you wish to avoid file faults on first access to recently used files, you may want to initiate an unshelve procedure such as the following:
$ UNSHELVE volume:[000000...]*.*;*/SINCE=10-OCT-1999/EXPIRED
This command unshelves all files that have been accessed since 10-OCT-1999 (assuming you have enabled volume retention as recommended). The use of this command is optional.
You restore individual files by locating the volume that has the latest (or desired) copy of the file and restoring the file in the usual way. If, however, the file is restored in the shelved state, you should run the SMU ANALYZE/REPAIR command to reset its file identifier in the catalog.
Because you are using HSM as the repository of virtually all files on your system, the number of HSM media volumes is liable to become very large. To keep this under control, repack your archive classes regularly; once every three to six months is recommended in such an environment. See Section 7.14 for information on repacking archive classes.
You should not use consolidated backup with HSM on system disks. Preshelving files on a system disk (and having them restored in the shelved state) is likely to leave the system unable to reboot.
Also, you should define multiple shelves and multiple catalogs for this environment. The catalogs should be stored on shadowed disks with preshelving disabled. Do not preshelve any HSM-internal files; otherwise, unshelving may not be possible after a restore.
If you wish to see how many files and blocks are being used for a cache device, you can enter a DIRECTORY command for the cache directory. For each cache device defined using SMU, the cache directory is located at device:[HSM_CACHE]. To determine usage, enter a command as shown in the following example:
$ DIRECTORY/GRAND/SIZE=ALL $1$DKA100:[HSM_CACHE]
Grand total of 1 directory, 221 files, 9021/9021 blocks.
Because HSM keeps file headers in online storage while moving the file data to shelf storage, you need to consider your system limits for the number of file headers that can be on a given volume. If you exceed the allowable number of file headers on a given volume, users may see INDEXFILEFULL and HEADERFULL errors when creating files. To prevent this problem, you need to understand how OpenVMS limits the number of file headers on your disk and how you can control this information.
OpenVMS limits the number of file headers you can have on a volume by calculating a value
for MAXIMUM_FILES, using the following equation:
MAXIMUM_FILES = maxblock / (cluster_factor + 1)
where maxblock is the value for "total blocks" from SHOW DEVICE/FULL, and cluster_factor must be between:
Minimum value: maxblock / (255 * 4096), or 1, whichever is greater
Maximum value: maxblock / 100
Many systems use the default value for cluster_factor, which is 3 for disks whose capacity is greater than or equal to 50,000 blocks. Occasionally, you may have a problem with very large disks when the default value of three does not work and you need to calculate the minimum value using the equation. For additional information, see the INITIALIZE command in the OpenVMS DCL Dictionary.
By default, MAXIMUM_FILES is (maxblock / ((cluster_factor + 1) * 2)), which is half of the actual maximum.
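As a quick check of these limits, the arithmetic above can be sketched in Python. This is purely illustrative (not part of HSM), and the example volume size is hypothetical:

```python
def cluster_factor_bounds(maxblock):
    """Allowed range for the volume cluster factor (per the equations above)."""
    minimum = max(maxblock // (255 * 4096), 1)
    maximum = maxblock // 100
    return minimum, maximum

def maximum_files(maxblock, cluster_factor):
    """Upper limit on file headers for a volume of maxblock blocks."""
    return maxblock // (cluster_factor + 1)

def default_maximum_files(maxblock, cluster_factor):
    """OpenVMS default: half of the actual maximum."""
    return maxblock // ((cluster_factor + 1) * 2)

# Example: a volume of 8,380,080 blocks with the default cluster factor of 3
blocks = 8380080
print(maximum_files(blocks, 3))          # 2095020
print(default_maximum_files(blocks, 3))  # 1047510
print(cluster_factor_bounds(blocks))     # (8, 83800)
```

Running the numbers this way makes it easy to see how much index-file space a given /MAXIMUM_FILES choice implies before you initialize a volume.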
To initialize a volume with the greatest number of file headers possible, use the following DCL command:
$ INITIALIZE {device} {label}/CLUSTER=(maxblock/(255*4096)) -
/MAXIMUM_FILES=(maxblock/(cluster_factor + 1)) -
/HEADERS=(maxblock/(cluster_factor + 1))
If you initialize a volume with the largest number of file headers, the index file will be very large, and none of that space can be used for anything but file headers. This is neither necessary nor desirable, because you end up using approximately 25 percent of your disk space for file metadata. In practice, you probably want to set aside about 1 percent of your disk space for file metadata.
Note in the INITIALIZE command that /MAXIMUM_FILES reserves space for the index file, while /HEADERS allocates space for the index file. Using the /HEADERS qualifier is the only way to guarantee you can create that many files. Once the volume is initialized, you can never have more files on the disk than the value given with the /MAXIMUM_FILES qualifier.
If you do not initialize your volumes using the /HEADERS qualifier, the file system will extend INDEXF.SYS for you as it needs file headers. The file system will not allow INDEXF.SYS to become multiheadered, which means you can have a maximum of approximately 77 extents in the header before you will get an error saying the index file is full.
You can tell how close you are to the index file limit using DUMP/HEADER/BLOCK=COUNT=0 [000000]INDEXF.SYS. The display contains a field called "Map area words in use." This field has a maximum of 155 for INDEXF.SYS. If the number of mapping words in use is around 120 to 130, you should schedule an image backup/restore cycle for the volume.
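As an illustration only, the check above can be expressed as a short Python sketch. The 155-word maximum and the 120-130 threshold come from the text; the function name is hypothetical:

```python
def indexf_extension_headroom(map_words_in_use, map_words_max=155):
    """Estimate how close INDEXF.SYS is to its mapping limit.

    map_words_in_use is the "Map area words in use" value reported by
    DUMP/HEADER/BLOCK=COUNT=0 [000000]INDEXF.SYS; 155 is the stated
    maximum for INDEXF.SYS.
    """
    fraction = map_words_in_use / map_words_max
    # The text advises scheduling an image backup/restore cycle once
    # usage reaches roughly 120 to 130 mapping words.
    needs_backup_cycle = map_words_in_use >= 120
    return fraction, needs_backup_cycle

frac, schedule = indexf_extension_headroom(125)
print(f"{frac:.0%} of map area used; schedule backup/restore: {schedule}")
```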
To prevent your system from reaching its file header limit, make sure you delete file headers as appropriate. That is, when you no longer need a file, do not leave it shelved with its file header on disk. Use another strategy to archive the file, in case you need it someday, and then delete the file from the disk.
HSM provides a comprehensive set of event logging capabilities that you can use to analyze shelving activity on your cluster and tune your system to provide an optimal computational environment.
Two types of logging are supported:
Event logging is supported by both the shelf handler process and the policy execution process. You can use the shelf handler log to obtain a complete summary of all shelving operations initiated on the cluster. You can use the policy log to obtain information relating to all policies run on the system.
Logging may be enabled or disabled at your discretion with one or more of the following selections: AUDIT, ERROR, and EXCEPTION.
The event logs are human-readable and can be displayed with the TYPE command while HSM is in operation. Access to the logs requires SYSPRV, READALL, or BYPASS privileges. Table 5-5 lists their locations.
You can read the event logs at any time during HSM operation, using a TYPE command, a SEARCH command, or other OpenVMS read-only tools. You can also obtain a dynamic output of events by issuing the following command on any of the event log files:
$ TYPE/TAIL/INTERVAL=1/CONTINUOUS HSM$LOG:log_file_name.LOG
The logs grow with use, and are not re-created on HSM startup. If you wish to reinitialize the logs, you can do so with the SMU SET FACILITY/RESET command, which opens a new version of each log file. The old files can then be purged, renamed and shelved, or otherwise disposed of to make space available.
Internally generated HSM requests are generally not reported in the audit log, as these are not visible to either the user or the system manager. However, they may be reported in the error log if they fail. Such internal requests include:
If you wish to see the "invisible" requests logged in the audit log, as well as shelf server logging of requests, you can enable the following logical name:
$ DEFINE/SYSTEM HSM$SHP_REMOTE_AUDIT 1
Please note that this will more than double the size of the audit log, and is only recommended when troubleshooting problems.
The shelf handler error log reports only requests that failed because of an unexpected error. It does not report all errors: errors that result from a user syntax error, or from a valid but illogical HSM configuration, are generally not reported in the error log.
If you see an entry in the error log, this means that it is worth investigating for more information. It does not necessarily mean that there is a problem with the HSM system, the hardware, or the media that contains the shelved file data. For more information on solving problems, see Chapter 7.
Each entry in the shelf handler log is tagged with a request number, which increments with each request in the audit log. If a serious error occurs on a request, the request number in the audit log can be matched with the request number in the error log to obtain more information about the error.
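As a sketch of this reconciliation, the following Python fragment matches request numbers between audit-log and error-log text. The entry formats are assumed from the sample logs in this section, and the parser is purely illustrative, not HSM code:

```python
import re

# Hypothetical minimal parser. Entry formats are assumed from the sample
# logs in this section: audit entries start with a request number and a
# timestamp; error reports contain "request number NN".
AUDIT_ENTRY = re.compile(r"^(\d+)\s+\d{1,2}-[A-Z]{3}-\d{4}", re.MULTILINE)
ERROR_ENTRY = re.compile(r"request number (\d+)", re.IGNORECASE)

def failed_requests(audit_text, error_text):
    """Return audit-log request numbers that also appear in the error log."""
    audited = {int(n) for n in AUDIT_ENTRY.findall(audit_text)}
    errored = {int(n) for n in ERROR_ENTRY.findall(error_text)}
    return sorted(audited & errored)

audit = ("29 20-OCT-1999 19:53:05.58, Status: Error\n"
         "30 20-OCT-1999 20:03:04.66, Status: Success\n")
errors = "Error detected on request number 29 on node SYS001\n"
print(failed_requests(audit, errors))  # [29]
```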
The following are examples of audit and error log entries:
Shelf handler V3.0A (BL22), Oct 20 1999 started at 22-SEP-1999 17:23:25.32
Shelf handler client enabled on node SYS001
29 20-OCT-1999 19:53:05.58, 22-SEP-1999 19:53:06.62: Status: Error
Application request from node SYS001, process 604003B9, user SMITH
Shelve file $1$DKA100:[SMITH]TESTJLM.DAT;1
30 20-OCT-1999 20:03:04.66, 22-SEP-1999 20:03:13.08: Status: Success
System request from node SYS002, process 40201C31, user SMITH
File fault (unshelve) file DISK$MYNODE:[SMITH]TESTJLM.DAT;1
31 20-OCT-1999 20:03:13.66, 20-OCT-1999 20:03:13.98: Status: Success
System request from node SYS002, process 40201C31, user SMITH
Unpreshelve file DISK$MYNODE:[SMITH]TESTJLM.DAT;1
6648 20-OCT-1999 18:33:03.31, 20-OCT-1999 18:33:04.16 status: Success
Reset PEP logs request from node MYNODE, PID 20200687, user BAILEY
6649 20-OCT-1999 18:36:40.36, 22-SEP-1999 17:23:04.16 status: Success
Scheduled request from node MYNODE, PID 20200687, user SYSTEM
Reactive execution on volume _$1$DKA100:
Using policy definition HSM$DEFAULT_OCCUPANCY
Volume capacity is 5841360 blocks
Current utilization is 5286012 blocks
Lowwater mark is 90% or 5257224 blocks used
Primary policy definition Space Time Working Set(STWS) was executed.
Secondary policy definition Least Recently Used(LRU) was not executed.
A total of 1454 requests for 28867 blocks were successfully sent
To reach the lowwater mark 0 blocks must be reclaimed.
6650 20-OCT-1999 19:25:04.10, 22-SEP-1999 18:36:47.42 status: Success
Exceeded quota request from node MYNODE, PID 20200687, user SYSTEM
Reactive execution on volume _$1$DKA200:
Using policy definition HSM$DEFAULT_QUOTA
Quota capacity is 194865 blocks
Current utilization is 203416 blocks
Lowwater mark for UIC [107,34] is 80% or 155892 blocks used
Primary policy definition Space Time Working Set(STWS) was executed.
Secondary policy definition Least Recently Used(LRU) was not executed.
A total of 2051 requests for 48042 blocks were successfully sent
To reach the lowwater mark 0 blocks must be reclaimed.
***************************************************************************
** 29 ** REQUEST ERROR REPORT
Error detected on request number 29 on node SYS001
Entry logged at 20-OCT-1999 19:53:06.86
** Request Information:
Identifier: 1
Process: 604003B9
Username: SMITH
Timestamp: 20-OCT-1999 19:53:05.58
Client Node: SYS001
Source: Application
Type: Shelve file
Flags: Nowait Notify
State: Original Validated
Status: Error
** Request Parameters:
File: $1$DKA100:[SMITH]TESTJLM.DAT;1
** Error Information:
%HSM-E- shelf access information unavailable for $1$DKA100:[SMITH]TESTJLM.DAT;1
%SYSTEM-E-SHELFERROR, access to shelved file failed
** Request Disposition:
Non-fatal shelf handler error
Fatal request error
Operation was completed
** Exception Information:
Exception                 Module Line
SHP_NO_OFFLINE_INFO       SHP_3851
SHP_INVALID_OFFLINE_INFO  SHP_4015
The event logs contain information that is logged at the end of each request, together with its final status. However, there is often a need to examine activity in progress for the following reasons:
To this end, HSM provides an SMU SHOW REQUESTS command that indicates the number of requests currently being processed. In addition, detailed information about requests can be dumped to an activity log on a SHOW REQUESTS/FULL command. The activity log is named:
A new version of the file is created for each SHOW REQUESTS/FULL command. The format of the activity log is similar to the shelf handler audit log, except that additional flags are displayed indicating the current state of the request.
In contrast to the event logs, which have clusterwide scope, the activity log is a node-specific log that reflects only the operations in progress on the requesting node. To accurately see activity on the entire cluster, you need to perform the SMU SHOW REQUESTS/FULL on every node in the cluster.
The following is an example of the activity log display:
** HSM Activity Log for Node MYNODE at 20-OCT-1999 16:37:06.67 **
1 20-OCT-1999 16:35:58.68, - Request in progress - Status: Null status
System request from node MYNODE, process 20200B24, user BAILEY
FileID Original Validated
Free space of 100 blocks for user BAILEY on volume _$1$DKA100:
2 20-OCT-1999 16:35:15.46, - Request in progress - Status: Null status
System request from node MYNODE, process 20200B24, user BAILEY
FileID Original Validated
Free space of 171 blocks for user BAILEY on volume _$1$DKA100:
3 20-OCT-1999 16:34:42.02, - Request in progress - Status: Null status
Shelf request from node MYNODE, process 20200B26, user HSM$SERVER
Original Validated
Flush cache file _$1$DKA0:[HSM_CACHE]TEST2.DAT$7702292510;1 to shelf storage
4 20-OCT-1999 16:34:42.01, - Request in progress - Status: Null status
Shelf request from node MYNODE, process 20200B26, user HSM$SERVER
Original Validated
Flush cache file _$1$DKA0:[HSM_CACHE]TEST1.DAT$7702292519;3 to shelf storage
In the activity log, requests are logged in reverse order of being received. Also, all active requests are logged, including internal requests that do not appear in the audit log.
If, while monitoring the activity log or otherwise, you wish to cancel certain requests, there are several ways to do so. This is useful, for example, if a policy has started that is about to shelve files you do not want shelved. Use the following table to determine how to cancel classes of requests:
Any request that is in operation may or may not complete. However, all pending requests are terminated with an "OPERATION DISABLED" message.
Once a managed entity is disabled, it must be reenabled for operations on that entity to resume.
Although you specify whether to install HSM Basic mode or HSM Plus mode during the installation process, you can convert to HSM Plus mode after the installation if you choose. To convert to HSM Plus mode, you need to do the following:
The remainder of this section explains how to perform the conversion tasks in detail and provides recommendations that should make the transition easier.
To shut down the shelf handler, use the SMU SHUTDOWN command. This command shuts down and disables HSM in an orderly manner. To use this command, you must have SYSPRV, TMPMBX, and WORLD privileges. If you do not shut down the shelf handler before you convert to Plus mode, the database could become corrupted and files may become ineligible for unshelving. Also, note that the mode change does not take effect until you restart HSM.
To disable the facility across the cluster, you use the SMU SET FACILITY command. You also use this same command, but with different qualifiers, to reenable the facility after the upgrade is completed. Disabling the facility prevents people from attempting to shelve or unshelve files while the conversion is in progress.
To enable HSM Plus mode to access the appropriate information, you need to make MDMS aware of (tape) volumes that already have been used. For new shelving, you can use volumes already in the MDMS database.
For volumes that have already been used for HSM Basic mode, you need to allocate those volumes for unshelving purposes to HSM, bearing in mind the specific volume names used for HSM Basic mode. Because you need to use these volumes as "read-only" volumes, you may want to create a special volume pool for all the old HSM Basic mode volumes.
For more information on preparing HSM to work with MDMS, see the Getting Started with HSM Chapter of the HSM Installation and Configuration Guide.
To change from HSM Basic mode to HSM Plus mode without reinstalling the HSM software, you need to change information about the facility and restart the shelf handler. Because HSM Version 3.0A converts existing HSM information upon installation, you do not need to do any additional conversion for HSM Plus mode to operate.
To change from HSM Basic mode to HSM Plus mode, use the following command:
Once you have made all the HSM Basic mode volumes known to MDMS and have reset the facility to HSM Plus mode, you are ready to restart HSM. To restart HSM, use the SMU STARTUP command.
If you intend to use the same archive classes for HSM Plus mode as you used for HSM Basic mode, you need to be very careful about the information that has been stored in those archive classes so far. To protect this information and enable HSM to use the same archive classes, you need to checkpoint the existing archive classes before you enable the facility for shelving in Plus mode.
The SMU CHECKPOINT command allows HSM to use the next volume in sequence for shelving operations within the archive class, but stops writing to the existing volumes for that archive class.
The last thing you need to do to get HSM Plus mode running is to enable the facility for shelving and unshelving operations, which you disabled earlier. To do this, use the following command:
The following is an example of a Basic mode configuration successfully converted to Plus mode. The Basic mode configuration consists of:
For the initial conversion to Plus mode, we will retain the same devices and archive classes for operation. Additional archive classes and devices can be added later in the usual way.
The following example shows the commands to issue to convert the above Basic mode configuration to Plus mode:
$!
$! Convert HSM to Plus Mode (does not affect current operations)
$!
$ SMU SET FACILITY/MODE=PLUS
$! Disable HSM shelving operations
$!
$ SMU SET FACILITY/DISABLE=ALL
$!
$! Shut Down HSM, and bring back up in Plus mode
$!
$ SMU SHUTDOWN
$!
$! Redefine the archive classes -
$! TK85K is a standard MDMS/SLS media type for "CompacTape III"
$! Pool TK85K_POOL is a pool for new volumes to be allocated in Plus mode
$!
$ SMU SET ARCHIVE 1,2 /MEDIA_TYPE=TK85K/ADD_POOL=TK85K_POOL
$!
$! If needed, define the HSM device in TAPESTART.COM, and restart
$! MDMS/SLS. If the device is a magazine loader, additional configuration
$! is necessary (see section 5.5.2 in the Guide to Operations).
$!
$! MTYPE_1 := TK85K
$! DENS_1 :=
$! DRIVES_1 := $1$MKA100:
$!
$ @SYS$STARTUP:SLS$STARTUP.COM
$!
$! Define the Basic mode volumes in the MDMS/SLS Database, using a
$! specific pool called HSM_BASIC. This helps prevent these volumes being
$! allocated by another application.
$!
$ STORAGE ADD VOLUME HS0001/MEDIA_TYPE=TK85K/POOL=HSM_BASIC
$ STORAGE ADD VOLUME HS0002/MEDIA_TYPE=TK85K/POOL=HSM_BASIC
$ STORAGE ADD VOLUME HS0003/MEDIA_TYPE=TK85K/POOL=HSM_BASIC
$ STORAGE ADD VOLUME HS0004/MEDIA_TYPE=TK85K/POOL=HSM_BASIC
$ STORAGE ADD VOLUME HS1001/MEDIA_TYPE=TK85K/POOL=HSM_BASIC
$ STORAGE ADD VOLUME HS1002/MEDIA_TYPE=TK85K/POOL=HSM_BASIC
$ STORAGE ADD VOLUME HS1003/MEDIA_TYPE=TK85K/POOL=HSM_BASIC
$ STORAGE ADD VOLUME HS1004/MEDIA_TYPE=TK85K/POOL=HSM_BASIC
$!
$! Allocate the Basic mode volumes for HSM use.
$!
$ STORAGE ALLOCATE TK85K/VOLUME=HS0001/USER=HSM$SERVER
$ STORAGE ALLOCATE TK85K/VOLUME=HS0002/USER=HSM$SERVER
$ STORAGE ALLOCATE TK85K/VOLUME=HS0003/USER=HSM$SERVER
$ STORAGE ALLOCATE TK85K/VOLUME=HS0004/USER=HSM$SERVER
$ STORAGE ALLOCATE TK85K/VOLUME=HS1001/USER=HSM$SERVER
$ STORAGE ALLOCATE TK85K/VOLUME=HS1002/USER=HSM$SERVER
$ STORAGE ALLOCATE TK85K/VOLUME=HS1003/USER=HSM$SERVER
$ STORAGE ALLOCATE TK85K/VOLUME=HS1004/USER=HSM$SERVER
$!
$! Create a volume set for each archive class - all but the first
$! volume in an archive class MUST BE APPENDED to the first volume
$! in the archive class. Also, the given user name must be correct.
$!
$! NOTE THE ORDER OF COMMANDS - THIS IS SIGNIFICANT TO GET THE
$! CORRECT PROGRESSION OF VOLUMES IN THE ORDER:
$! HSx001, HSx002, HSx003, HSx004
$!
$ STORAGE APPEND HS0001/VOLUME=HS0004/USER=HSM$SERVER
$ STORAGE APPEND HS0001/VOLUME=HS0003/USER=HSM$SERVER
$ STORAGE APPEND HS0001/VOLUME=HS0002/USER=HSM$SERVER
$ STORAGE APPEND HS1001/VOLUME=HS1004/USER=HSM$SERVER
$ STORAGE APPEND HS1001/VOLUME=HS1003/USER=HSM$SERVER
$ STORAGE APPEND HS1001/VOLUME=HS1002/USER=HSM$SERVER
$!
$! Define new volumes for the archive classes to use in Plus mode
$! (at least two per archive class).
$!
$ STORAGE ADD VOLUME DEC001/MEDIA_TYPE=TK85K/POOL=TK85K_POOL
$ STORAGE ADD VOLUME DEC002/MEDIA_TYPE=TK85K/POOL=TK85K_POOL
$ STORAGE ADD VOLUME DEC003/MEDIA_TYPE=TK85K/POOL=TK85K_POOL
$ STORAGE ADD VOLUME DEC004/MEDIA_TYPE=TK85K/POOL=TK85K_POOL
$!
$! Checkpoint the archive class to use new Plus mode volumes
$!
$ SMU CHECKPOINT 1,2
$!
$! Shut down HSM again
$!
$ SMU SHUTDOWN
$!
$! Restart HSM
$!
$ SMU STARTUP
$!
$! Enable HSM shelving operations
$!
$ SMU SET FACILITY/ENABLE=ALL
$!
At this point you can begin shelving files to the new volumes in Plus mode, as well as unshelve files from the previous volumes written in Basic mode.
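The NOTE in the example above about the order of STORAGE APPEND commands can be modeled with a short Python sketch. The insertion semantics assumed here (each appended volume lands immediately after the first volume of the set) are inferred from the note itself, not taken from MDMS documentation:

```python
# Toy model of building the HS0001 volume set. The insertion semantics
# (each appended volume goes immediately after the first volume of the
# set) are an assumption inferred from the NOTE in the example above.
def append(volume_set, new_volume):
    """Model STORAGE APPEND: insert new_volume right after the first volume."""
    return [volume_set[0], new_volume] + volume_set[1:]

vset = ["HS0001"]
for vol in ["HS0004", "HS0003", "HS0002"]:  # note the reverse command order
    vset = append(vset, vol)
print(vset)  # ['HS0001', 'HS0002', 'HS0003', 'HS0004']
```

Appending in reverse order is what produces the ascending progression HSx001, HSx002, HSx003, HSx004; appending in forward order would reverse the tail of the set.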
In most environments, HSM performs operations to nearline and offline storage devices. In many cases, manual loading and unloading of tape volumes and tape magazines is required. This section describes the messages that HSM issues to the OpenVMS OPCOM interface and the options available to the operator.
When running HSM, the OPCOM operator interface must be enabled to allow the operator to perform such loading and unloading. To enable the operator interface, enter the following command:
$ REPLY/ENABLE=(CENTRAL, TAPES)
%%%%%%%%%%% OPCOM 08-Jan-2003 14:25:46.05 %%%%%%%%%%%
Operator _SYS001$RTA2: has been enabled, username SYSTEM
%%%%%%%%%%% OPCOM 08-Jan-2003 14:25:46.06 %%%%%%%%%%%
Operator status for operator _SYS001$RTA2:
CENTRAL, TAPES
When an HSM operation is directed at a nonmagazine loader tape drive, the operator is responsible for loading and unloading tapes on the drive. The following messages apply to nonmagazine loader tape drives.
%%%%%%%%%%% OPCOM 28-OCT-13:52:47.09 %%%%%%%%%%%
Message from user HSM$SERVER on MYNODE
Please mount volume HSZ001 in device _ (no reply needed)
This request, issued by HSM, requests that you load a specific volume label into the specified drive.
%%%%%%%%%%% OPCOM 28-OCT- 13:52:48.04 %%%%%%%%%%%
Request 2324, from user HSM$SERVER on MYNODE
Please mount volume HSZ001 in device _ (OTHERNODE)
This request, issued by the OpenVMS mount command, requests that you load a specific volume label into the specified drive. Do one of the following:
If you load a volume into the drive, you can optionally reply with a confirmation:
If you do not reply after loading a volume, the mount completes and HSM proceeds anyway.
%%%%%%%%%%% OPCOM 08-Jan-2003 14:25:46.05 %%%%%%%%%%%
Request 2324, from user HSM$SERVER on MYNODE
Allow HSM to reinitialize volume TEST to HS0001 in drive $1$MUA0:
NOTE: Previous contents of volume will be lost
This message is displayed if you loaded a volume with a different label than the one requested. Issue one of the following replies:
This reply is required. HSM will not proceed until the request is answered with one of the possible replies.
%%%%%%%%%%% OPCOM 30-MAY- 14:25:46.05 %%%%%%%%%%%
Message from user HSM$SERVER on MYNODE
Volume in drive $1$MUA0: has been re-initialized to HS0001
Please place label HS0001 on volume when unloaded
This message is a confirmation that HSM has reinitialized a volume label. It serves as a reminder to place a physical volume label with the name listed in the message when the volume is unloaded.
%%%%%%%%%%% OPCOM 30-MAY- 14:25:46.05 %%%%%%%%%%%
Message from user HSM$SERVER on MYNODE
Please place label HS0001 on volume unloaded from drive $1$MUA0:
This message is displayed when a tape volume, initialized by HSM, is unloaded from a drive. This is a final reminder to place the requested physical label on the tape volume, so that the volume can be located later. Do not issue a REPLY to this message.
In addition to HSM-generated OPCOM requests, OpenVMS BACKUP also issues OPCOM messages when handling continuation volumes for HSM Basic mode. Please refer to the OpenVMS Utilities Manual: A - Z for information relating to BACKUP requests.
HSM issues OPCOM messages to load and unload magazines into a magazine loader. The following requests are issued:
%%%%%%%%%%% OPCOM 30-MAY- 14:25:46.05 %%%%%%%%%%%
Request 3, from user HSM$SERVER on MYNODE
Please load magazine containing volume HS0001 into drive $1$MUA0:
This message requests that you load a specific magazine (stacker) into a magazine loader tape drive. The magazine itself is not identified, but the specific volume is identified. You should locate the magazine containing the specific volume, which should be labeled, and load that entire magazine into the magazine loader.
%%%%%%%%%%% OPCOM 30-MAY- 14:25:46.05 %%%%%%%%%%%
Message from user HSM$SERVER on MYNODE
The magazine loaded in drive $1$MUA0: has an invalid HSM configuration.
Please reconfigure magazine before reloading
See HSM Guide to Operations - Magazine Loaders
The magazine contains duplicate HSM volumes. Each HSM volume must have a unique label in the format HSyxxx, where y is the archive class minus 1, and xxx is a string in the range 001 - Z99. Review the labels in the magazine and initialize volumes as appropriate. It is recommended that the labels in the magazine be ordered by archive class in ascending order, for example, HS0001, HS0002, HS1001, HS1002, and so on.
Do not issue a REPLY to this message.
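A small Python sketch of this label convention follows. The parsing rules are an illustrative reading of the HSyxxx format described above, not HSM code, and the function names are hypothetical:

```python
import re

# Illustrative reading of the HSyxxx convention: "HS", one digit y
# (archive class minus 1), then a three-character sequence running
# from 001 through Z99 (one digit or letter, then two digits).
LABEL = re.compile(r"^HS(\d)([0-9A-Z])(\d\d)$")

def parse_hsm_label(label):
    """Return (archive_class, sequence) for a valid label, else None."""
    m = LABEL.match(label)
    if m is None:
        return None
    return int(m.group(1)) + 1, m.group(2) + m.group(3)

def find_duplicates(labels):
    """Labels appearing more than once in a magazine (invalid configuration)."""
    seen, dups = set(), set()
    for lab in labels:
        if lab in seen:
            dups.add(lab)
        seen.add(lab)
    return sorted(dups)

print(parse_hsm_label("HS0001"))  # (1, '001')
print(find_duplicates(["HS0001", "HS0002", "HS0001"]))  # ['HS0001']
```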
%%%%%%%%%%% OPCOM 30-MAY- 14:25:46.05 %%%%%%%%%%%
Message from user HSM$SERVER on MYNODE
Please unload magazine from drive $1$MUA0:
This message requests that you unload the current magazine from the specified drive, and store it in its usual place.
Do not enter a REPLY to this message
If HSM needs to use a volume or a volume contained in a magazine that is not currently imported into the loader, there is a series of OPCOM requests and actions that need to occur for HSM to continue without failing.
The following series of operator actions and replies occur when HSM needs to use a volume contained in a magazine that is not imported into a loader.
$
%%%%%%%%%%% OPCOM 08-Jan-2003 15:28:59.72 %%%%%%%%%%%
Request 65514, from user HSM$SERVER on SLOPER
Please import volume AEL008 or its associated magazine into jukebox containing drive _SLOPER$MKA500:
$ STORAGE EXPORT MAGAZINE MAG002
%%%%%%%%%%% OPCOM 08-Jan-2003 15:30:15.76 %%%%%%%%%%%
Message from user SLS on SLOPER
Remove Magazine MAG002 from Tape Jukebox JUKEBOX1
%SLS-S-MAGVOLEXP, magazine volume AEL001 exported from tape jukebox
%SLS-S-MAGVOLEXP, magazine volume AEL002 exported from tape jukebox
%SLS-S-MAGVOLEXP, magazine volume AEL003 exported from tape jukebox
%SLS-S-MAGVOLEXP, magazine volume AEL004 exported from tape jukebox
%SLS-S-MAGVOLEXP, magazine volume AEL005 exported from tape jukebox
%SLS-S-MAGVOLEXP, magazine volume AEL006 exported from tape jukebox
%SLS-S-MAGVOLEXP, magazine volume AEL007 exported from tape jukebox
$ STORAGE IMPORT MAGAZINE MAG001 JUKEBOX1
%%%%%%%%%%% OPCOM 08-Jan-2003 15:30:51.38 %%%%%%%%%%%
Request 65515, from user SLS on SLOPER
Place Magazine MAG001 into Tape Jukebox JUKEBOX1; REPLY when DONE
$ REPLY/TO=65515
15:31:08.27, request 65515 was completed by operator _SLOPER$FTA6:
%SLS-S-MAGVOLIMP, magazine volume AEL008 imported into tape jukebox
%SLS-S-MAGVOLIMP, magazine volume AEL009 imported into tape jukebox
%SLS-S-MAGVOLIMP, magazine volume AEL010 imported into tape jukebox
%SLS-S-MAGVOLIMP, magazine volume AEL011 imported into tape jukebox
%SLS-S-MAGVOLIMP, magazine volume AEL012 imported into tape jukebox
%SLS-S-MAGVOLIMP, magazine volume AEL013 imported into tape jukebox
%SLS-S-MAGVOLIMP, magazine volume AEL014 imported into tape jukebox
$ REPLY/TO=65514
15:31:17.45, request 65514 was completed by operator _SLOPER$FTA6:
The following series of operator actions and replies occur when HSM needs to use a volume that is not imported into a TL820 or similar device.
$
%%%%%%%%%%% OPCOM 08-Jan-2003 15:28:59.72 %%%%%%%%%%%
Request 65514, from user HSM$SERVER on SLOPER
Please import volume AWX001 or its associated magazine into jukebox containing
drive _SLOPER$MKA500:
$ STORAGE IMPORT CARTRIDGE AWX001 JUKEBOX1
%SLS-S-VOLIMP, volume AWX001 imported into tape jukebox
$ REPLY/TO=65514
15:31:17.45, request 65514 was completed by operator _SLOPER$FTA6:
OPCOM messages are provided in Plus mode when an attempt to select a drive for HSM operations fails. An example of the messages follows:
%%%%%%%%%%% OPCOM 08-Jan-2003 12:01:23 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
MDMS/SLS error selecting a drive for volume DEC100, retrying
%%%%%%%%%%% OPCOM 08-Jan-2003 12:01:24 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
bad density specified for given media type
Two messages are written as a pair: the first is a constant message from HSM identifying the problem volume; the second is the MDMS/SLS error code received from the call. Please note that HSM does not consider a select failure fatal, and it retries the operation indefinitely. Examine the OPCOM messages and correct the MDMS/SLS problem; refer to the Media, Device and Management Services Guide to Operations for help in determining the problem. You can also use the $ HELP STORAGE MESSAGE command for more information on specific MDMS/SLS messages for SLS/MDMS versions prior to V2.6.
After the correction, HSM will proceed to process the requests normally. The OPCOM messages are repeated every 10 minutes if the select error continues to occur.
Another MDMS OPCOM message is printed if MDMS selects a drive for a tape volume, but cannot load the volume because it is already loaded in another drive.
%%%%%%%%%%% OPCOM 08-Jan-2003 12:01:23 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
Volume APW032 cannot be loaded into selected drive $1$MKA100:
Volume is loaded in another drive
Check volume location and drive availability, REPLY when corrected
This situation should not normally occur, but if it does, you should check the following:
In addition to the specific information given here about working with automated loaders, MDMS may display other messages that you need to respond to or deal with on versions prior to V2.6. For information about MDMS messages, see the MDMS online help.
The following OPCOM messages may be displayed when an error occurs trying to select and reserve a drive for HSM operations.
%%%%%%%%%%% OPCOM 08-Jan-2003 12:01:23 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
Drive "name" has been marked unavailable and disabled -
Please re-enable or disable using SMU SET DEVICE name /ENABLE or /DISABLE
HSM has detected multiple errors while trying to use the drive, has assumed the drive to be bad, and has disabled operations on the drive. This message is repeated every 10 minutes until the operator enters one of the following commands:
%%%%%%%%%%% OPCOM 08-Jan-2003 12:01:23 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
Drive reservation for tape volume "name" stalled, retrying -
Optionally check drive availability and configuration
This message indicates that a request for a tape drive is outstanding and there are not enough drives available to handle it. This could be because all defined drives are busy, or because a defined drive is disabled or otherwise cannot accept the request. Normally, no action is needed on this message, and the request is processed when a drive frees up. However, if this message persists for a long time, the operator should examine the HSM configuration and the drives to see if there is a problem.
%%%%%%%%%%% OPCOM 30-MAY 12:01:23 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
Tape volume label on drive "name" detected
Expected volume "right_name" but read "wrong_name"
Please check volume and configuration
This message is displayed when HSM mounts the wrong tape for an operation. For non-robot tape devices, an accompanying message requests that the correct volume be loaded into the specified drive.
The following OPCOM messages are printed out to log significant events in HSM operations. They are also logged in the shelf handler audit log.
%%%%%%%%%%% OPCOM 6-JUN- 13:55:18.52 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
HSM shelving facility started on node SYS001
This message is printed out when HSM is started on a node via an SMU STARTUP command.
%%%%%%%%%%% OPCOM 6-JUN- 13:55:18.39 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
HSM shelf server enabled on node SYS001
This message is printed when an HSM shelf server becomes enabled on a particular node, meaning that all tape operations are handled by that node from this point on. The message appears at startup of the server node or when a node takes over as the shelf server after a failure.
%%%%%%%%%%% OPCOM 6-JUN- 13:55:18.52 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
HSM shelving facility shutdown on node SYS001
This message indicates that HSM has been shut down with an SMU SHUTDOWN command.
%%%%%%%%%%% OPCOM 6-JUN- 13:55:18.52 %%%%%%%%%%%
Message from user HSM$SERVER on SYS001
HSM shelving facility terminated on node SYS001
This message indicates that HSM has terminated for some reason. It immediately follows any shutdown message. If it appears without a shutdown message, then an error occurred. Refer to the shelf handler error log to determine the cause of the error.
This chapter describes many of the common problems that can arise as a result of using HSM and lists appropriate solutions. The chapter is structured into the following sections:
The sections describing problems are in the following format:
Problem
A description of symptoms and possible problems within the category.
Solution
The solution is usually a specific fix for the specific problem, assuming it is in fact a problem. For example, the solution to the problem of not being able to shelve contiguous files is:
However, before issuing this command, you should evaluate the advantages and disadvantages of shelving contiguous files.
Reference
A pointer to the section of the document that you should read for more details on the proposed solution.
hp recommends reading this chapter even if you have not experienced any problems. It can alert you to potential problems to avoid when setting up and using HSM.
HSM provides several tools and utilities to help troubleshoot problems and resolve them as they occur. This section summarizes each tool and its purpose in troubleshooting.
Two components of HSM have startup logs, which record the startup procedure and any failures for the shelf handler process and the policy execution process:
If you have problems starting up HSM (using SMU STARTUP), examine these logs for more information. All messages to SYS$OUTPUT from the startup process and its subprocesses are written to this log. A new log file version is created for each startup event, and the log spans all nodes in the VMScluster system. You need to read the log to determine the node to which each entry refers.
These logs report requests and errors, and have clusterwide scope. You should examine shelf handler logs first, as these cover all activities performed by HSM. All user-visible requests are reported in the shelf handler audit log, on both success and error.
If a problem occurs during the execution of a policy, whether a scheduled preventive policy or a reactive policy, you can obtain more details on the error and associated policy execution in the policy execution audit log. The policy audit log gives quite detailed information about the progress of the policy execution and is logged for all policy runs. The policy error log gives additional information if the policy failed because of an unexpected error. An error log entry is not written if a policy simply fails to reach its goal; this information is written in the audit log.
Please note that entries are placed in the event logs at the completion of a request. Requests in progress are not reported in the event logs, but rather in the activity log (see Section 7.2.3).
In contrast to the event logs, the activity log allows you to examine requests that are in progress. This is useful if you suspect that an operation is hung, or there are requests that have been generated that you wish to cancel (such as an unintended mass shelving). An activity log can be obtained using the SMU SHOW REQUESTS/FULL command, which dumps all in-progress requests to the file HSM$LOG:HSM$SHP_ACTIVITY.LOG. Note that the activity log is node-specific.
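For example, you might dump the in-progress requests and then inspect the node-specific log file named above (TYPE is the standard DCL display command):

$ SMU SHOW REQUESTS/FULL
$ TYPE HSM$LOG:HSM$SHP_ACTIVITY.LOG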
The activity log is similar to the shelf handler audit log in format, except that the status and "completion time" are necessarily different. In addition, flags showing the input options and progress of the request also are displayed.
If you are experiencing a problem in unshelving a shelved file's data, you can use the SMU LOCATE command to retrieve full information about the file's data locations. Although HSM tries to restore data from all known locations automatically, even when some of its metadata is missing, there may be occasions when this is not possible. In these situations, you should use the SMU LOCATE command to locate the file's data. Once you have found the data, you can restore it manually using BACKUP (from tape) or COPY (from cache) commands. SMU LOCATE reads the HSM catalog directly to find a shelved file's data locations.
You should note that the SMU LOCATE command does not work quite the same way as a typical OpenVMS command when processing look-up and wildcard operations. The file name you supply as input (including any wildcards) applies to the file as stored in the HSM catalog at the time of shelving. Thus, for example:
When you retrieve information using the SMU LOCATE command, several instances or groups of stored locations may be displayed. These reflect the locations of the file when it was shelved at various stages of its life. You should carefully review the shelving time and revision time of the file to determine which, if any, is the appropriate copy to restore.
When a shelved file is accessed causing a file fault, or when a request to unshelve a file is made, HSM performs consistency checking to validate that the shelved file data actually belongs to the file being requested. There are many such tests, including verification of the file identifier, device, and revision dates to ensure that the data being retrieved for the file is correct.
If any of the consistency checks fail, the file is not unshelved and the user-requested operations fail with an error message. As the system manager, you may be able to force unshelving of the file if some of these tests fail by using the UNSHELVE/OVERRIDE command, which requires BYPASS privilege. This tool enables you to retrieve important file data in the event that an unusual situation has occurred.
hp recommends that you examine the circumstances of the original consistency failure before using the UNSHELVE /OVERRIDE option. For example, use the SMU LOCATE command to verify the file revision dates. It is very likely that the data that you would restore is not exactly current, and additional recovery may be needed. Under no circumstances should UNSHELVE/OVERRIDE be used during normal operations (in policy scripts for example). The consistency failure indicates that HSM has detected a real problem that needs to be examined.
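As a sketch of the recommended order of operations, using a hypothetical file name (the /OVERRIDE step requires BYPASS privilege):

$ SMU LOCATE WORK:[SMITH]REPORT.TXT
$ UNSHELVE/OVERRIDE WORK:[SMITH]REPORT.TXT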
The SMU RANK command lets you preview an actual policy execution against a volume before any files are actually shelved. It lists the names of all files that would be shelved if a policy were to be executed on a volume.
To avoid a mass shelving problem, hp recommends that you make extensive use of this command before enabling any automatic policy executions on a volume (see Section 7.5).
You can also use this command to tune your policies so that they select the correct files for shelving, based on usage in your environment, and so that the quantity of files they select is manageable.
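For example, a preview run might look like the following (the disk name and policy name here are illustrative; see the SMU RANK command reference for the exact syntax and qualifiers):

$ SMU RANK DISK$USER1: HSM$DEFAULT_POLICY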
Many operational problems are caused by invalid or illogical configurations set up using SMU commands. You can use the SMU SET and SHOW commands to verify and correct your configuration. The following are examples of common configuration problems that can easily be corrected using the SMU SET and SHOW commands:
See Chapter 3 for a tutorial in configuring HSM and the appendix in the Installation Guide for an example on how to set up a moderately complex configuration.
A number of problems can appear during the installation process. VMSINSTAL displays failure messages as they occur. If the installation fails, you see the following message:
%VMSINSTAL-E-INSFAIL, The installation of HSM V2.1 has failed.
Depending on the problem, you may see additional messages that identify the problem. Then, you can take appropriate action to correct the problem.
Sometimes, the problem does not show up until later in the installation process.
If the IVP fails, you see this message:
The HSM V2.1 Installation Verification Procedure failed.
%VMSINSTAL-E-IVPFAIL, The IVP for HSM V2.1 has failed.
Errors can occur during the installation if any of the following conditions exist:
For descriptions of the error messages generated by these conditions, see the OpenVMS documentation on system messages, recovery procedures, and VMS software installation. If you are notified that any of these conditions exist, you should take the appropriate action as described in the message. For information on installation requirements, see Chapter 1 of the HSM Installation Guide.
This section describes problems that can occur while starting up HSM.
If you cannot run the Shelf Management Utility (SMU), examine Table 7-1 for more information.
If the shelf handler process (HSM$SHELF_HANDLER) does not start up, examine Table 7-2 and the following files for more information:
If the shelf handler successfully starts up, but the policy execution process does not, examine Table 7-3 and the following files for more information:
Delete HSM$LOG:HSM$*.SMU, recreate the databases, and restart; run SMU or HSM$STARTUP.COM
If you have entered a SHUTDOWN command, but HSM does not shut down, and you have waited at least 30 seconds, examine Table 7-4 for more information.
The following symptoms mean that parts of the HSM system are not running:
If the shelving driver is not loaded, issue the following command on OpenVMS VAX systems:
$ MCR SYSGEN CONNECT HSA0:/NOADAPTER
If the shelving driver is not loaded, issue the following command on OpenVMS Alpha systems:
$ MCR SYSMAN IO_CONNECT HSA0:/NOADAPTER
To recover any other component, issue the following command:
Unintended mass shelving can occur when you enable OCCUPANCY, HIGHWATER_MARK, and QUOTA operations on specific volumes, or the default volume, without careful preparation. hp recommends that you stage automatic shelving, one volume at a time, and in manageable quantities on those volumes by gradually lowering the volume's low water mark from its current occupancy level to the desired level.
You should not attempt to shelve more than 1000 files at a time; otherwise, HSM's performance will degrade. Use the SMU RANK command to determine the quantity (and names) of files that would be shelved before enabling the policy.
If you have accidentally initiated a mass shelving operation on a volume, use Table 7-5 to recover.
Additional options exist to cancel shelving operations at other granularities. See Table 5-6.
Note that once a shelving policy has begun, it is too late to simply disable the policy on the volume: SHELVING must be disabled. However, it is a good idea to disable OCCUPANCY, HIGHWATER_MARK, and EXCEEDED QUOTA on the volume, in case a trigger initiates another mass shelving on the volume.
Although the installation procedure marks OpenVMS system files as unshelvable, this could be enabled (intentionally or unintentionally) later. The installation procedure does not protect layered product files from shelving. You should define system disks separately from the HSM$DEFAULT volume and disable all HSM operations, as in the following example:
$ SMU SET VOLUME SYS$SYSDEVICE:/DISABLE=ALL
Note that if there is more than one system disk in a VMScluster system, the command should be issued on each node that has its own system disk. This especially applies to mixed VAX and Alpha VMScluster systems.
If OpenVMS system or key layered product files are shelved, it may no longer be possible to boot any system in the VMScluster environment. Specifically, if a file involved in the system startup stream is shelved, then accessed before HSM is started, the boot procedure will fail. Recovery may require a complete reinstallation of OpenVMS and affected layered products. It is much better to simply disable shelving on the system disks than to risk these consequences.
The procedures in Table 7-6 should be adopted to prevent or recover from this condition.
There are a number of problems that HSM Plus mode may have that are not HSM problems, but are instead problems with MDMS. Many of these problems are related to MDMS configuration issues. For more information, see the Plus Mode Offline Environment Chapter of the HSM Installation and Configuration Guide and the Media, Device and Management Services for OpenVMS Guide to Operations.
HSM is designed to run in a VMScluster environment. It must run on all nodes in the cluster so that files can be accessed from any node. The following requirements define how HSM must be run in a cluster environment for correct operation:
If you are still having VMScluster problems, examine Table 7-8 for more information.
You can enable HSM operations on any or all of your online disks in the cluster as long as those disks are served and accessible to all nodes in the VMScluster system. HSM operations on purely local disks are not supported for HSM Version 2.2.
The online disks must be mounted and accessible to all nodes in the cluster. Any suitably privileged user can perform HSM operations on system-mounted disks. Access to group-mounted disks are subject to the same restrictions for HSM as normal operations. Process-mounted disks are ineligible for HSM operations.
HSM keeps a file open on all disks enabled for HSM operations: this file must be closed if the disk needs to be dismounted for any reason. To do this, enter the following commands:
Table 7-9 shows problems that can occur with online disks.
The following problems are related to using an online cache. Unless you use the /BACKUP qualifier on the cache to create nearline/offline shelf copies at shelving time, your file data exists as a single copy on one of the defined cache devices until the cache is flushed. To ensure that this single copy provides the same level of protection as your online data, hp recommends the following:
Table 7-10 shows problems that can occur with cache operations.
You can use magneto-optical devices in HSM by defining them as cache devices. As with other cache devices, each device must be accessible and system-mounted on all nodes in the VMScluster system. You can use magneto-optical devices in one of two ways:
Each platter (or side of platter) that you wish to use as a cache must be defined with an SMU SET CACHE command, and system-mounted on all nodes in the VMScluster system. Use the logical device name of the mounted MO volume (JBxxx:) in the SET CACHE commands, not the name of the MO drives.
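For example, each mounted platter side might be defined with a command like the following (the JBA1: device name is illustrative; any additional qualifiers are site-specific):

$ SMU SET CACHE JBA1: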
Table 7-11 shows problems that can occur with magneto-optical devices. See also cache problems in Section 7.10.
You can configure any number of nearline/offline devices for HSM use.
In Basic mode, nearline and offline devices must be accessible by all nodes in the VMScluster system designated as shelf servers, or all nodes in the VMScluster system if no servers are specified.
In Plus mode, you can use nearline and offline devices that are:
Remote devices cannot be dedicated for HSM use.
Non-remote devices can be shared or dedicated for HSM use. If you set up a device for dedicated use, HSM will keep a tape mounted in the device at all times in anticipation of the next operation. With shared usage, HSM dismounts and unloads the device within one minute of the last operation.
Except when you are using nearline devices exclusively, tape operations are requested using OPCOM messages. You should enable OPCOM classes CENTRAL and TAPES at all times to respond to such messages.
Table 7-12 shows problems that can occur with offline devices.
HSM supports various types of Digital magazine loaders and robotically-controlled large tape jukeboxes for use as nearline shelf storage. Specific support varies depending on whether you are running HSM in Basic mode or Plus mode. You define these devices with SMU SET DEVICE commands as you would for any offline device and additional MDMS commands for HSM Plus mode. Table 7-13 shows problems that can occur with magazine or robotic loaders.
Table 7-14 describes generic shelving problems. These problems may be in addition to specific cache or device problems. Many of these problems also apply to preshelving.
Table 7-15 describes generic unshelving problems that are in addition to specific cache or device problems. Unshelving problems also apply to file faults.
HSM policies are designed to automatically shelve files based on triggers initiated by online disk events, high water marks, or scheduled operation. All problems with policies should first be examined by reading the following files:
In addition, details on specific policy runs can be found in the output file specified with SMU SET POLICY/OUTPUT.
Because policy runs usually involve shelving operations, please see also Section 7.14 if the shelving operations of the policy fail, rather than the policy itself.
Table 7-16 shows problems that can occur with policy execution.
HSM uses several files for its own purposes, and these files need to be carefully maintained. These files include:
It is imperative that the logical names associated with these files are defined on all nodes with the same definitions, so that HSM uses the same files on all nodes. It is also vital that the files contained within HSM$CATALOG and HSM$MANAGER are given the highest safety protection available, including:
Specifically, the HSM catalog must be given the highest priority. An unrecoverable loss of the catalog will usually mean that you have lost access to all shelved file data, unless you have kept logs of locations of the data by regular SMU LOCATE commands, and stored them away.
Refer to Section 5.10 for more details about how to recover HSM system files.
At the current time, there are a few limitations to HSM operations of which you should be aware. These limitations are not necessarily the fault of HSM, but instead stem from OpenVMS behaviors:
OpenVMS limits the number of file headers available for an online disk volume based on how the disk is initialized. As a result, as you shelve data and do not clean up your online disk, you could eventually exceed the number of file headers available.
To prevent this problem, make sure you delete file headers as appropriate. What this means is, when you no longer need a file, do not leave it shelved with the file header on disk. Use another strategy to archive the file, just in case you need it someday. Then, delete the file from the disk.
If you experience either IDXFILEFULL or HEADERFULL errors while trying to create files, you have exceeded the file header limit defined on your system. If you installed HSM on an existing system and have not specifically initialized your volumes for HSM use, you may not have planned for the additional number of files in INDEXF.SYS (the index file that contains the file headers for both online and shelved files). Also, you may not have preallocated space for the file headers using the /HEADERS qualifier on the disk initialization.
If your users get IDXFILEFULL errors while trying to create files on the volume, it means they are attempting to create more files than the number specified by the MAXIMUM_FILES qualifier when the volume was initialized. There are two possible solutions to this:
If your users get a HEADERFULL error on INDEXF.SYS when creating files, it means the INDEXF.SYS file has reached its fragmentation limit. That is, adding one more file extent to INDEXF.SYS causes the "Map area words in use" field of INDEXF.SYS's header to exceed 155. To solve this problem:
The second step (reinitialize the disk) is not necessary unless you want to increase the MAXIMUM_FILES value of the disk or preallocate a larger INDEXF.SYS file (via /HEADERS). If you do reinitialize the disk, remember to use the /NOINITIALIZE qualifier on the backup command when restoring the disk.
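If you do choose to reinitialize, the sequence looks roughly like the following (the device names, save-set name, header count, and file maximum are illustrative):

$ BACKUP/IMAGE DISK$USER1: MKA500:USER1.BCK/SAVE_SET
$ INITIALIZE/HEADERS=100000/MAXIMUM_FILES=200000 DISK$USER1: USER1
$ BACKUP/IMAGE/NOINITIALIZE MKA500:USER1.BCK/SAVE_SET DISK$USER1:

The /NOINITIALIZE qualifier on the restore preserves the new volume initialization rather than restoring the old one from the save set.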
When you attempt to execute (via a RUN command, for example) a shelved executable file, this causes a file fault. If you then try to cancel that execution, it is not canceled. This occurs because OpenVMS does not actually allow you to cancel a DCL command using a Ctrl/Y. Normally, when you submit a DCL command that operates on data located online and type a Ctrl/Y to cancel it, the execution completes and is then canceled quickly enough that you do not notice.
If you attempt to access a shelved file across a network but have set your process to /NOAUTO_UNSHELVE, the file is unshelved anyway.
If you perform an RMS open of a shelved, indexed file, a file fault occurs, because some of the RMS metadata resides in the data section of the file. A file fault also occurs if you perform a DELETE/LOG of a shelved, indexed file; use DELETE/LOG with caution. DELETE/NOLOG works as expected.
This chapter starts by describing the management concept of the Media, Device and Management Services (MDMS) software and its implementation. Following that is a description of the product's internal interfaces.
Media, Device and Management Services V4.1 (MDMS) can be used to manage the locations of tape volumes in your IT environment. MDMS identifies all tape volumes by their volume label or ID. Volumes can reside in different places, such as tape drives or onsite locations. Requests can be made to MDMS to move volumes between locations. If automated volume movement is possible, as in a jukebox (tape loader, tape library), MDMS moves volumes without human intervention. MDMS sends out operator messages if human intervention is required.
MDMS allows scheduled moves of volumes between onsite and offsite locations (e.g. vaults).
Multiple nodes in a network can be set up as an MDMS domain. Note that:
MDMS is a client/server application. At a given time only one node in an MDMS domain will be serving user requests and accessing the database. This is the database server. All other MDMS servers (which are not the database server) are clients to the database server. All user requests will be delegated through the local MDMS server to the database server of the domain.
In case of failure of the designated database server, MDMS' automatic failover procedures ensure that any other node in the domain that has the MDMS server running can take over the role of the database server.
MDMS manages all information in its database as objects. See MDMS Object Records and What They Manage, which lists and describes the MDMS objects.
MDMS tries to reflect the true states of objects in the database. MDMS requests by users may cause a change in the state of objects. For some objects MDMS can only assume the state, for example, that a volume has been moved offsite. Wherever possible, MDMS tries to verify the state of the object. For example, if MDMS finds a volume in a drive that should have been in a jukebox slot, it updates the database with the current placement of the volume.
MDMS provides an internal callable interface to ABS and HSM software. This interfacing is transparent to the ABS or HSM user. However, some MDMS objects can be selected from ABS and HSM.
MDMS communicates with the OpenVMS OPCOM facility when volumes need to be moved, loaded, unloaded, and for other situations where operator actions are required. Most MDMS commands allow control over whether or not an OPCOM message will be generated and whether or not an operator reply is necessary.
MDMS controls jukeboxes by calling specific callable interfaces. For SCSI-controlled jukeboxes, MDMS uses the MRD/MRU callable interface. For StorageTek jukeboxes, MDMS uses DCSC. You still have access to these jukeboxes using the individual control software, but doing so will make objects in the MDMS database out of date.
MDMS includes two interfaces: a command line interface (CLI) and a graphical user interface (GUI). This section describes how these interfaces allow you to interact with MDMS.
MDMS provides a DCL command line interface in addition to MDMSView. Some people prefer a command line interface, and it can also be used for automated command procedures. With this release, the entire command line interface is supported within MDMS, which maintains the database for media management.
The MDMS DCL interface uses a consistent syntax for virtually all commands in the format:
$ MDMS VERB OBJECT_KEYWORD OBJECT_NAME /QUALIFIERS
The verb is a simple action word, and may be one of the following:
The object keyword is the object class name that the verb is to operate on. In MDMS, the object keyword cannot be omitted. MDMS supports the following object keywords:
Following the object keyword, you should enter an object name. This must be the name of an already-existing object unless the verb is "Create", in which case the object must not already exist.
The qualifiers for all commands are non-positional and may appear anywhere in the command line.
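For example, the general syntax yields commands such as the following (the drive and volume names here are hypothetical):

$ MDMS SHOW DRIVE DRIVE_1
$ MDMS DELETE VOLUME ABC001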
There are two exceptions to the general command syntax, as follows:
MDMS MOVE VOLUME TLZ234 TLZ_JUKE/SLOT=4
$ MDMS REPORT VOLUME VOLUME,STATE=ALLOCATED,SCRATCH_DATE,PLACEMENT,PLACNAME
With this release of MDMS, all of the following commands accept a list of objects, so that you can operate on multiple objects in a single command:
If you specify an attribute in a CREATE or SET command and use an object list, then that attribute value is applied to all objects in the list.
Certain qualifiers accept a list of attributes, and the list can be applied in one of three ways using an appropriate qualifier:
Consider the following examples:
MDMS CREATE GROUP COLORADO/NODES=(DENVER, SPRINGS, PUEBLO)
The group Colorado contains nodes Denver, Springs and Pueblo
MDMS SET GROUP COLORADO/NODE=ASPEN
The group Colorado now contains nodes Denver, Springs, Pueblo and Aspen. With no list qualifier specified, /ADD is applied by default.
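By the same pattern, a different list qualifier removes entries instead of adding them (this sketch assumes /REMOVE is the qualifier alongside the default /ADD):

$ MDMS SET GROUP COLORADO/NODES=SPRINGS/REMOVE

The group Colorado would then contain nodes Denver, Pueblo and Aspen.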
All MDMS objects now accept the /INHERIT qualifier on Create. This allows you to create new objects and inherit most attributes of an existing object. This provides an easy way to "clone" objects, then apply any differences in individual commands. It saves the effort of typing in all the attributes once a prototype has been established. In general, only non-protected fields of objects can be inherited.
In addition, the object list capability allows you to clone multiple objects in a single command. For example:
MDMS CREATE DRIVE DRIVE_2, DRIVE_3, DRIVE_4/INHERIT=DRIVE_1
This command creates three drives and applies all non-protected attributes of DRIVE_1 to the three new drives.
MDMS now supports symbols on all objects, which command procedures can read and process. To use symbols, enter a Show command for a single object (symbols are not supported for object lists). The symbols are generally in the format "MDMS_INQ_qualifier", where "qualifier" is almost always the associated qualifier name for the attribute. The list of symbols for each show command is documented for that command, and is also available in DCL help.
When you issue a Show/Symbols, the show output is not displayed by default. If you wish to see the output as well, use Show/Symbols/Output.
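A sketch of using symbols in a command procedure follows (the volume name and the MDMS_INQ_STATE symbol name are assumptions; the documented symbol list for each Show command is authoritative):

$ MDMS SHOW VOLUME ABC001/SYMBOLS
$ WRITE SYS$OUTPUT "Volume state is ''MDMS_INQ_STATE'"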
MDMS supports the normal DCL help mechanisms, as follows:
$ MDMS HELP [VERB] [KEYWORD] [/QUALIFIER]
$ HELP MDMS [VERB] [KEYWORD] [/QUALIFIER]
In addition, you can request help on any error message, for example:
MDMS HELP MESSAGE NOSUCHOBJECT
You can request help on any MDMS logical name, for example:
MDMS HELP LOGICAL MDMS$LOGFILTER
Finally, you can locate the mapping of the old (pre-version 4.0B) ABS commands to the MDMS equivalent, for example:
MDMS HELP MAPPING CREATE ARCHIVE
The MDMS Reference Guide fully documents all DCL commands and qualifiers.
MDMSView and the MDMS DCL support operations on Archive Backup System (ABS) objects only if an ABS or SLS license is loaded on the system. The ABS objects are:
MDMS supports operations on the other media management objects whether the system has only a Hierarchical Storage Management (HSM) license installed or has an ABS or SLS license as well.
In addition, if the ABS license is the restricted OMT license, the following operations are not supported:
MDMS provides a graphical user interface called MDMSView, which provides several views that you can use to manage your MDMS domain. MDMSView provides support for both media management and (if you have an ABS license) the Archive Backup System. MDMSView is designed to be the preferred interface to ABS and MDMS, with the goal of supporting most, if not all, of the regular management tasks. MDMSView supersedes all previous graphical interfaces for both ABS and MDMS.
MDMSView provides several views into the management of MDMS objects and requests, including ABS objects managed by MDMS. In V4.1, a limited number of views have been implemented, but many more are planned for future releases. MDMSView currently supports the following views:
Each view is provided in a tab from the main screen, and you can be working in several views at the same time, although only one is visible at a time. When switching from one view tab to another, the contents of the tab you are leaving are retained, and you can return to it at any time.
MDMSView is installed at installation time on OpenVMS systems. Please refer to the Installation Guide for instructions on how to install MDMSView and Java on OpenVMS systems.
Once the installation is complete, the following commands are required to activate the GUI:
$ SET DISPLAY/CREATE/NODE=nodename/TRANSPORT=TCPIP
where nodename is the TCP/IP node name of the system on which the MDMSView display is to appear. Although the GUI itself must run on an OpenVMS Alpha system running V7.2-1 or higher, using Java 1.2 or higher, the MDMSView display can be redirected to any OpenVMS system, including VAX systems and systems running OpenVMS versions earlier than V7.2-1.
A SETUP.EXE package is also installed on OpenVMS systems for use on Microsoft Windows (R) PCs. This file may then be transported to any Microsoft Windows PC and executed. The SETUP.EXE will install MDMSView at a default location of C:\MDMSView, although alternative locations are possible. Once the PC installation is complete, you can execute MDMSView by clicking on the mdmsview.bat file in that directory.
Once MDMSView is started, it will come up with the default look and feel for the system. For OpenVMS systems, this is the Java/Metal look and feel. For Windows systems, this is the Windows look and feel. You can adjust the look and feel to your taste by using the View menu as follows:
Changing the look and feel requires a new login, so it's a good idea to change this before logging in. The value is saved in the MDMSView initialization file, and is used on all subsequent invocations from this location.
Once MDMSView is started and the look and feel is set, you need to log into an OpenVMS system, even if you are running on an OpenVMS system already. You can log into any OpenVMS node in the MDMS domain, as long as it supports TCP/IP communication. Logging in requires three fields, as follows:
If there is a login failure for any reason, the node name and user name are retained for subsequent retries, but the password must always be re-entered.
After a successful login, the login screen disappears and the MDMSView splash screen is displayed.
The next step is to select a view depending on what you want to do. Here are some tasks that you might wish to perform, and the associated view(s) that support them:
The domain view and object view produce attribute and operation screens that work on one object at a time. The task view produces screens that can operate on multiple objects, but restrict the display of attributes to those that are common across the objects. The request view is a specialized view that allows you to show current requests (as a whole or in detail), and allows you to delete requests as needed. The report view is a specialized view that generates customized volume reports.
All view displays are divided into two parts:
While resizing the MDMSView screens is not supported, you can choose to view only the left or right screens by using the arrows at the top of the division between the left and right screens. Clicking on the left arrow eliminates the left screen, and clicking on the right arrow eliminates the right screen. To restore the dual screens, click on the opposite arrow.
If you wish to create a new object, you can choose the Domain, Object or Task Views to accomplish this. The Domain and Object Views create objects one at a time, while the Task View can create multiple objects.
To create an object, use one of the following methods:
Once a create screen appears (except for catalogs), you are prompted for two pieces of information:
The domain and object views allow creation of only one object at a time, whereas the task view allows a comma-separated list of new objects (and also ranges in the case of volumes). Depending on the view, enter the name or names of the new objects you wish to create.
The inherit object allows you to copy most of the attributes from the inherit object to the object being created. If you wish to specify an inherit object, use the combo box to select the existing inherit object. This must be the same type of object, except in the case of restores, in which case you can inherit from either a restore or a save object.
After clicking create, the new object attribute and operations screens appear, which you can then modify to your liking. In the task view, this screen modifies all the newly created objects.
For objects that already exist, you can use the Domain View, Object View or Task View to show and optionally modify objects, or to perform operations on them.
To view an object, use one of the following methods:
When an object is selected, its attributes and operations are displayed in a two-dimensional tab screen as follows:
If you select the Show screen and wish to modify attributes, use the tool tip text for help on any field. Select appropriate values (from all the show tabs as needed), then click on Set. This sends the currently displayed values from all tabs to the MDMS server. If you just wish to view the object's attributes without modification, click on Cancel after viewing the attributes. This returns you to the object class screen.
MDMSView supports switching from one object to another while values are displayed. For objects that appear in combo boxes or lists, you can view related objects without losing the context of the current object. Each combo box or list attribute supports two methods of viewing, selecting and creating objects:
From the menu, there are the following options:
If you select Show or Create, you will go to an appropriate screen. When you then complete your operation on that object, you will come back to the original object.
You can delete objects from the Domain, Object and Task Views. To delete an object, perform one of the following:
A request to delete an object always brings up a Delete dialog box for confirmation. You can click "OK" to confirm or "Cancel" to abandon the delete.
The Domain view provides a way to view the hierarchical structure of the MDMS domain. The left side of the screen provides an object-class-object... hierarchy of objects belonging to other objects, or objects contained in other objects. The left side of the screen displays most of the object classes which contain other objects (the exceptions: selections, schedules and volumes, which have no sub-objects). You can begin the hierarchical navigation at any level, and all sub-levels can be displayed.
For example, starting at jukebox, you can view all objects that reside in a jukebox: Drives, Magazines and Volumes. If you then click on Drives, you will see all drives in this jukebox. If you then select a drive and click on it, you can see the volume in the drive.
If your domain is sufficiently complex, you might want to expand the left side of the screen by using the right arrow between the left and right screen. You can then view the entire hierarchy of the domain.
If you wish to perform an operation on an object (for example, to load a volume into a drive), you should first display the object's attributes and operations screens. Then select the desired operation tab, on the right side of the screen. For example, to load a volume, show the volume then click on the Load tab.
The Load tab is an example of an operations tab, and all operations tabs follow the same basic concepts. You enter options concerning the operation (for example, operator assistance), then press the appropriate operation button at the bottom left of the screen. This button is always labelled with the appropriate operation (for example, Load).
MDMS has the capability of performing long-running operations synchronously or asynchronously. However, in MDMSView, long-running operations are always submitted asynchronously and control is returned to the user. Asynchronous operations show a dialog box that states that the operation has been queued for processing, but has not yet completed. If you perform an operation that does not result in the dialog box, then you can safely assume it has been completed synchronously.
If you receive a "queued" dialog box, it does not necessarily mean that the operation was fully validated. If you want to check on the status of the operation, use the Request View to monitor the request's progress.
The Request View provides a monitoring capability for all current MDMS operations. You can display all current requests by clicking on Show Requests - this results in a table of requests being displayed. This includes all current requests, and some recently-completed requests.
You can also expand the requests on the left side of the screen and click on a specific request for detailed information about the request. Or you can right-click on the request number on the left screen and select Show.
If you feel that a request is not working correctly, or for any reason you wish to delete the request, you can click on delete from the detailed request screen, or select a request number on the left screen, right-click and select delete from the popup menu.
As with other deletes, a dialog box will appear to confirm the delete of the request.
The Report View provides the capability of generating custom reports on volumes. With this view, you can choose attributes that can be displayed and/or used as selection criteria for volumes.
To select an attribute for display, simply click on the attribute and then press the right arrow button to move it to the display screen. The attributes are displayed in the report in the order selected. If you change your mind or wish to re-order the attributes, select an attribute on the display screen and press the left arrow button to deselect it.
If you wish to use an attribute as a selection criterion, click on the attribute, then click on "Use for Selection". This will enable a field below (either a text field or combo box) to allow you to enter a selection.
You may display any number of fields and use any number of selection criteria to customize the report. When your selections are ready, you can generate the report by clicking on "Generate". You can see the resultant report in the "Report Results" tab.
If you wish to save this report, enter a report title in the text field at the bottom of the screen and click on save. The report is saved to the following locations:
For example, a report file name is: Report_2001_12_17_8_35_17.txt
Once the results screen is displayed, you can sort the report using any field by clicking on the field's header. You can reverse-sort the same field by clicking on the field header again.
To examine past operations in MDMS, you can use the event view to view the MDMS audit and event logfile. There are five pre-configured options and a fully flexible custom option to allow you to select what you wish to see from the MDMS logfile. The five pre-configured options all apply to the MDMS Database Server logfile and show all operations (auditing and events) for the following amounts of time before the current time:
If you wish to see the logfile using other selection criteria, you can use the "Custom" setting. By clicking on "Custom", a selection screen appears that allows you to select the entries to be displayed as follows:
- Low and high request IDs (for DB server only)
After entering the selection criteria, click on the Show button to display the entries. Depending on the size of the log file, this operation may take several seconds to complete. You may want to regularly reset your log files to avoid long response times. MDMS scans previous versions of the log files if the date and/or request selections are not found in the latest log file.
The Refresh button at the bottom of the screen refreshes whatever selection is currently on the screen. The Cancel button allows you to enter a new selection.
MDMSView can report two types of errors:
MDMSView provides three types of help:
This section describes access rights for MDMS operations. MDMS works with the OpenVMS User Authorization File (UAF), so you need to understand the Authorize Utility and OpenVMS security before changing the default MDMS rights assignments.
MDMS rights control access to operations, not to object records in the database.
Knowing the security implementation will allow you to set up MDMS operation as openly or securely as required.
MDMS controls user action with process rights granted to the user or application through low and high level rights.
The low level rights are named to indicate an action and the object the action targets. For instance, the MDMS_MOVE_OWN right allows the user to conduct a move operation on a volume allocated to that user. The MDMS_LOAD_ALL right allows the user to load any managed volume.
For detailed descriptions of the MDMS low level rights, refer to the ABS or HSM Command Reference Guide.
MDMS associates high level rights with the kind of user that would typically need them. Refer to the ABS or HSM Command Reference Guide for a detailed list of the low level rights associated with each high level right. The remainder of this section describes the high level rights.
The default MDMS_USER right is for any user who wants to use MDMS to manage their own tape volumes. A user with the MDMS_USER right can manage only their own volumes. The default MDMS_USER right does not allow for creating or deleting MDMS object records, or changing the current MDMS configuration.
Use this right for users who perform non-system operations with ABS or HSM.
The default MDMS_APPLICATION right is for the ABS and HSM applications. As MDMS clients using managed volumes and drives, these applications require specific rights.
The ABS or HSM processes include the MDMS_APPLICATION rights identifier, which assumes the low level rights associated with it. Do not modify the low level rights values for the Domain application rights attribute. Changing the values of this attribute can cause your application to fail.
The default MDMS_OPERATOR right supports data center operators. The associated low level rights allow operators to service MDMS requests for managing volumes, loading and unloading drives.
The low level rights associated with the MDMS_DEFAULT right apply to any OpenVMS user who does not have any specific MDMS right granted in their user authorization (SYSUAF.DAT) file. Use the default right when all users can be trusted with an equivalent level of MDMS rights.
The high level rights are defined by domain object record attributes with lists of low level rights. The high level rights are convenient names for sets of low level rights.
For MDMS users, grant high and/or low level rights as needed with the Authorize Utility. You can take either of these approaches to granting MDMS rights.
You can ensure that all appropriate low level rights necessary for a class of user are assigned to the corresponding high level right, then grant the high level rights to users.
You can grant any combination of high level and low level rights to any user.
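For example, you might grant a high level right to one user and a specific low level right to another with the Authorize Utility (the user names shown are hypothetical):

$ SET DEFAULT SYS$SYSTEM
$ RUN AUTHORIZE
UAF> GRANT/IDENTIFIER MDMS_USER SMITH
UAF> GRANT/IDENTIFIER MDMS_LOAD_ALL JONES
UAF> EXIT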
Use the procedure outlined in Reviewing and Setting MDMS Rights to review and set rights that enable or disable access to MDMS operations. CLI command examples appear in this process description, but you can use the GUI to accomplish this procedure as well.
This section describes the basic concepts that relate to creating, modifying, and deleting object records.
Both the CLI and GUI provide the ability to create object records. MDMS imposes rules on the names you give object records. When creating object records, define as many attribute values as you can, or inherit attributes from object records that describe similar objects.
When you create an object record, you give it a name that will be used as long as it exists in the MDMS database. MDMS also accesses the object record when it is an attribute of another object record; for instance a media type object record named as a volume attribute.
MDMS object names may include any digit (0 through 9), any upper case letter (A through Z), and any lower case letter (a through z). Additionally, you can include $ (dollar sign) and _ (underscore).
The MDMS CLI accepts all these characters. However, lower case letters are automatically converted to upper case, unless the string containing them is surrounded by double quote (") characters. The CLI also allows you to embed spaces in object names if the object name is surrounded by the " characters.
The MDMS GUI accepts all the allowable characters, but will not allow you to create objects that use lower case names, or embed spaces. The GUI will display names that include spaces and lower case characters if they were created with the CLI.
hp recommends that you create all object records with names that include no lower case letters or spaces. If you create an object name with lower case letters, and refer to it as an attribute value which includes upper case letters, MDMS may fail an operation.
The following examples illustrate the concepts for creating object names with the CLI.
These commands show the default CLI behavior for naming objects:
$!Volume created with upper case locked
$MDMS CREATE VOLUME CPQ231 /INHERIT=CPQ000 !Standard upper case DCL
$MDMS SHOW VOLUME CPQ231
$!
$!Volume created with lower case letters
$MDMS CREATE VOLUME cpq232 /INHERIT=CPQ000 !Standard lower case DCL
$MDMS SHOW VOLUME CPQ232
$!
$!Volume created with quote-delimited lower case, forcing lower case naming
$MDMS CREATE VOLUME "cpq233" /INHERIT=CPQ000 !Forced lower case DCL
$!
$!This command fails because the default behavior translates to upper case
$MDMS SHOW VOLUME CPQ233
$!
$!Use quote-delimited lower case to examine the object record
$MDMS SHOW VOLUME "cpq233"
This feature allows you to copy the attributes of any specified object record when creating or changing another object record. For instance, if you create drive object records for four drives in a new jukebox, you fill out all the attributes for the first drive object record. Then, use the inherit option to copy the attribute values from the first drive object record when creating the subsequent three drive object records.
If you use the inherit feature, you do not have to accept all the attribute values of the selected object record. You can override any particular attribute value by including the attribute assignment in the command or GUI operation. For CLI users, use the attribute's qualifier with the MDMS CREATE command. For GUI users, set the attribute values you want.
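For example, with the CLI you could create a second drive record by inheriting from an existing one and overriding only the device name. The drive and device names here are hypothetical, and the /DEVICE qualifier is shown for illustration; check the MDMS CREATE DRIVE command reference for the exact qualifiers at your site:

$!Create a drive like DRIVE_1, but with its own OpenVMS device name
$MDMS CREATE DRIVE DRIVE_2 /INHERIT=DRIVE_1 /DEVICE=$1$MUA2: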
Not all attributes can be inherited. Some object record attributes are protected and contain values that apply only to the specific object the record represents. Check the command reference information to identify object record attributes that can be inherited.
MDMS allows you to specify object record names as attribute values before you create the records. For example, the drive object record has a media types attribute. You can enter media type object record names into that attribute when you create the drive object before you create the media type object records.
The low level rights that enable a user to create objects are MDMS_CREATE_ALL (create any MDMS object record) and MDMS_CREATE_POOL (create volumes in a pool authorized to the user).
Whenever your configuration changes, you will modify object records in the MDMS database. When you identify an object that needs to be changed, you must specify the object record as it is named. If you know an object record exists, but it does not display in response to an operation to change it, you could be entering the name incorrectly. The section Naming Objects describes the conventions for naming object records.
Do not change protected attributes if you do not understand the implications of making the particular changes. If you change a protected attribute, you could cause an operation to fail or prevent the recovery of data recorded on managed volumes.
MDMS uses some attributes to store information it needs to manage certain objects. The GUI default behavior prevents you from inadvertently changing these attributes. By pressing the Enable Protected button on the GUI, you can change these attributes. The CLI makes no distinction in how it presents protected attributes when you modify object records. Ultimately, the ability to change protected attributes is allowed by the MDMS_SET_PROTECTED right and implicitly through the MDMS_SET_RIGHTS right.
The low level rights that allow you to modify an object by changing its attribute values are shown below:
When managed objects, such as drives or volumes, become obsolete or fail, you may want to remove them from management. When you remove these objects, you must also delete the object records that describe them to MDMS.
When you remove object records, there are two reviews you must make to ensure the database accurately reflects the management domain: review the remaining object records and change any attributes that reference the deleted object records, and review any DCL command procedures and change any command qualifiers that reference deleted object records.
When you delete an object record, review object records in the database for references to those objects. The table Reviewing Managed Objects for References to Deleted Objects shows which object records to check when you delete a given object record. Use this table also to check command procedures that include the MDMS SET command for the remaining objects.
Change references to deleted object records from the MDMS database. If you leave a reference to a deleted object record in the MDMS database, an operation with MDMS could fail.
When you delete an object record, review any DCL command procedures for commands that reference those objects. Other than the MDMS CREATE, SET, SHOW, and DELETE commands for a given object record, the table Reviewing DCL Commands for References to Deleted Objects shows which commands to check. These commands could have references to the deleted object record.
Change references to deleted object records from DCL commands. If you leave a reference to a deleted object record in a DCL command, an operation with MDMS could fail.
This chapter expands on the MDMS object summary given in Chapter 2, and describes all the MDMS objects in detail, including the object attributes and operations that can be performed on the objects.
Before going into details on each object, however, the use of the MDMS$CONFIGURE.COM procedure is recommended to configure your MDMS domain and the objects in it. In many cases this should take care of your entire initial configuration.
If you are configuring your MDMS domain (including all objects in the domain) for the first time, hp recommends that you use the MDMS$CONFIGURE.COM command procedure. This procedure prompts you for most MDMS objects, including domain, drives, jukeboxes, media types, locations and volumes, and establishes relationships between the objects. The goal is to allow complete configuration of simple to moderately complex sites without having to read the manual.
The configuration procedure offers extensive help, and contains much of the information contained in this chapter. Help is offered in a tutorial form if you answer "No" to "Have you used this procedure before". In addition, for each question asked, you can enter "?" to have help on that question displayed. Furthermore, if you type "??" to a question, not only will the help be displayed, but in most cases a list of possible options is also displayed.
This procedure is also useful when adding additional resources to an existing MDMS configuration. To invoke this procedure, enter:
@MDMS$SYSTEM:MDMS$CONFIGURE.COM
and just follow the questions and help.
A complete example of running the procedure is shown in Appendix A.
The MDMS domain encompasses all objects that are served by a single MDMS database, and all users that utilize those objects. A domain can range from a single OpenVMS cluster and its backup requirements, to multi-site configurations that may share resources over a wide area network or through Fibre Channel connections. An OpenVMS system running MDMS is considered a node within the MDMS domain, and MDMS server processes within a domain can communicate with one another.
The MDMS domain object is created at initial installation, and cannot be deleted. Its main focus is to maintain domain-wide attributes and defaults, and these attributes are described in the following sections.
The domain attribute ABS_RIGHTS controls whether a user having certain pre-V4.0B ABS rights can map these to MDMS rights for security purposes (see Chapter 5, Security, for more information about rights). Setting the attribute allows the mapping; clearing the attribute disallows the mapping.
The right MDMS_APPLICATION_RIGHTS is a high-level right that maps to a set of low level rights suitable for MDMS applications (for example, ABS and HSM). Normally these rights should not be changed, or at least not reduced from the default settings; otherwise ABS and HSM may not function correctly. You may add rights to application rights if you have your own MDMS applications or command procedures. The ABS and MDMS$SERVER accounts should have MDMS_APPLICATION_RIGHTS granted in the User Authorization File.
The check access attribute determines if access controls are checked in the domain. MDMS uses two forms of security: Rights and Access Control. Rights checking is a task-oriented form of security and is always performed. However, access control is an object-oriented form of security and can be optionally enabled or disabled with this attribute. Setting Check Access enables access control checking. Clearing Check Access disables access control checking even if there are objects with access control entries.
When a volume is deallocated after its data has expired, it may go into one of two states. The transition state is an interim state that the volume goes into after deallocation; the volume is not eligible to be used again until a period of time called the transition time expires. This is a safety feature that allows you to examine whether the data has legitimately expired, and if not, to retain the volume (put it back to the allocated state). If you do not wish to use this feature, you can disable the transition state and allow volumes to return directly to the free state, where they are eligible for immediate allocation and initialization for new data. The domain deallocate state is applied to all volumes that are automatically deallocated by MDMS. When manually deallocating volumes, you can override the domain deallocate state with a state on the deallocate operation itself.
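For example, you might deallocate one volume directly to the free state, overriding the domain deallocate state. The volume name is hypothetical and the /STATE qualifier is shown for illustration; verify it against the MDMS DEALLOCATE VOLUME command reference:

$!Bypass the transition state for this volume only
$MDMS DEALLOCATE VOLUME CPQ231 /STATE=FREE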
The MDMS default rights attribute maps a set of MDMS low-level rights to all users in the domain. This allows you to give all users a limited set of rights to access MDMS objects and perform operations, without having to expressly modify their accounts. Be aware that default rights are applied to all users on all nodes in the domain, so granting such rights should be carefully reviewed. By default, MDMS maps no rights to the default rights.
When MDMS deallocates volumes based on their scratch date (an operation that is performed once per day), it sends a mail message indicating which volumes were deallocated to the set of users defined in the mail users attributes. You should enter a list of users in the format node::username. Every user in the list will receive the deallocate volume mail messages.
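For example, the following command sets two mail users on the domain. The node and user names are hypothetical, and the qualifier is shown for illustration; verify it against the MDMS SET DOMAIN command reference:

$MDMS SET DOMAIN /MAIL_USERS=(BOSTON::SMITH, BOSTON::OPERATOR)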
The maximum scratch time is the longest scratch time that can be applied to any volume when it is allocated. The scratch time is the period of time that you wish the volume to stay allocated because its data is still valid. The maximum scratch time imposes a maximum limit and overrides the volume's scratch time if it exceeds the maximum. For HSM, the maximum scratch time should be set to zero (unlimited), as HSM volumes' data remains valid until it is repacked. For ABS use, this value should be set to the longest period of time you wish to retain any volume.
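For an HSM site, for example, you might clear the limit as follows. The qualifier is shown for illustration; verify it against the MDMS SET DOMAIN command reference:

$!Zero means unlimited scratch time
$MDMS SET DOMAIN /MAXIMUM_SCRATCH_TIME=0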
The domain media type attribute is the media type that is applied to new volumes and drives by default when they are created. In a simple configuration, you may only have a single media type, so specifying it in the domain allows you to not have to specify it when creating individual drives and volumes. It may also be applied as a default to ABS archives. You may always override the domain default media type with a specific media type when you create or modify drives and volumes.
The domain offsite location attribute is applied by default to the offsite location field of new volumes when they are created. The offsite location is an MDMS location that is used for secure storage of the volumes in case of a disaster. You can always override the domain default offsite location when you create or modify volumes.
The domain onsite location attribute is applied by default to the onsite location field of new volumes when they are created. The onsite location is an MDMS location that is used for storage of the volumes when they are onsite, or quickly accessible to jukeboxes and drives. You can always override the domain default onsite location when you create or modify volumes.
The domain OPCOM classes attribute contains the default OPCOM classes that are applied to new node objects by default when they are created. OPCOM classes are classes of users whose terminals are enabled to receive certain OPCOM classes. You can override the domain default OPCOM classes with specific classes on a per-node basis when you create or modify a node.
The right MDMS_OPERATOR_RIGHTS is a high-level right that maps to a set of low level rights suitable for operators managing the domain. The default set of operator rights allow for normal operator activities such as loading and unloading volumes into drives, showing any object or operations, and moving volumes offsite and onsite. However, you can add or remove low level rights to/from the operator rights as you wish.
The domain protection attributes defines the default protection applied to new volumes when they are created. This protection is used by MDMS when it initializes volumes, and writes the protection on the magnetic tape volume itself. You can always override the domain default protection by specifying the protection specifically when creating or modifying a volume.
The relaxed access attribute controls the security when a user or application tries to access an object without any access control entries, and access control checking is enabled. If relaxed access is set, such access is granted. If relaxed access is clear, such access is denied. The relaxed access attribute is ignored if the check access attribute is clear.
MDMS uses sequentially increasing request identifiers for each request received by the MDMS database server, and this attribute displays the ID of the next request. If this ID is becoming very large, you can reset it to zero or one (or indeed any value) if you wish. The request ID automatically resets to one when it reaches one million.
MDMS performs scheduling operations on behalf of itself and ABS. For ABS scheduling, you can choose a scheduler type that best meets your needs, as follows:
MDMS-initiated scheduled operations such as MDMS$MOVE_VOLUMES always use the internal MDMS scheduler.
The domain default scratch time is the default scratch time applied to new volumes when they are created. Scratch time indicates how long a volume is to remain allocated (that is, how long its data is valid and needs to be kept). You can override the domain volume scratch time when you create, modify or allocate individual volumes. For HSM volumes, the scratch time should be set to zero (unlimited), since HSM data remains valid until a volume is repacked.
MDMS uses user account rights as one mechanism for security within the domain. MDMS allows you to control whether the OpenVMS privilege SYSPRV can map to the ultimate MDMS right MDMS_ALL_RIGHTS. If you set the SYSPRV attribute, users with SYSPRV are assigned MDMS_ALL_RIGHTS, which means they can perform any operation subject to access control checks. Clearing SYSPRV gives users with SYSPRV no special rights.
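For example, to prevent SYSPRV from mapping to MDMS_ALL_RIGHTS, you could clear the attribute as follows. The negated qualifier form is shown for illustration; verify it against the MDMS SET DOMAIN command reference:

$!Users with SYSPRV get no special MDMS rights
$MDMS SET DOMAIN /NOSYSPRV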
The domain default transition time is applied to volumes by default when they are deallocated into the transition state. The transition time determines how long the volumes remain in the transition state before moving to the free state. This attribute is used alongside the deallocation state attribute, which determines the default state that volumes are deallocated into. You can override the domain default transition time when you create, modify, or deallocate a volume.
The right MDMS_USER_RIGHTS is a high-level right that maps to a set of low level rights suitable for non-privileged users that perform ABS or HSM operations. The default set of user rights allow for user activities such as creating and manipulating their own volumes and loading and unloading those volumes into drives, showing their volumes. However, you can add or remove low level rights to/from the user rights as you wish.
A drive is a physical resource that can read and write data to tape volumes. Drives can be standalone, requiring operator intervention for loading and unloading; in a stacker configuration that allows limited automatic sequential loading of volumes; or in a jukebox, which provides full random-access automatic loading. Drives are named in MDMS using a name that is unique across the domain; it may or may not be the same as the OpenVMS device name, as device names may not be unique across the domain.
The following sections describe the attributes of a drive.
The access attribute controls whether the drive may be used from local access, remote access or both. Local access includes direct SCSI access, access via a controller such as the HSJ70, access via TMSCP, or access via Fibre Channel, and does not require use of the Remote Device Facility (RDF). Remote access is via a DECnet network requiring RDF. You can set the access to one of the following:
Automatic reply is the capability of polling hardware to determine if an operator-assist action has completed. For example, if MDMS requests that an operator load a volume into a drive, MDMS can poll the drive to see if the volume was loaded, and if so complete the OPCOM request without an operator reply. Set automatic reply to enable this feature, and clear to require an operator response. Please note that some operations cannot be polled and always require an operator reply. The OPCOM message itself clearly indicates if a reply is needed or automatic replies are enabled.
The device attribute is the OpenVMS device name for the drive. In many cases you can set up the drive name to be the OpenVMS device name, and this is the default when you create a drive. However, the drive name must be unique within the domain, and since the domain can consist of multiple clusters there may be duplicate device names across the domain. In this case you must use different drive names from the OpenVMS device names. Also, you can specify simple or descriptive drive names which are used for most commands, and hide the OpenVMS device in the device name attribute.
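For example, you could give a drive a descriptive MDMS name and record its OpenVMS device name separately. The names are hypothetical and the /DEVICE qualifier is shown for illustration; verify it against the MDMS CREATE DRIVE command reference:

$!The drive is known to MDMS as JUKE1_DRIVE_0
$MDMS CREATE DRIVE JUKE1_DRIVE_0 /DEVICE=$1$MUA510: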
By default, drives are enabled, meaning that they can be used by MDMS and its applications. However, you may wish to disable a drive from use because it needs repair or is being used by some other application. Set the disabled flag to disable the drive, and clear the flag to enable the drive.
If the drive is in a robotically-controlled jukebox, and the jukebox is controlled by MRD, you must set the drive number to the relative drive number in the jukebox used by MRD. Drives in jukeboxes are numbered from 0 to n, according to the SCSI addresses of the drives. Refer to the jukebox documentation on how to specify the relative drive number.
The groups attribute contains a list of groups containing nodes that have direct access to the drive. Direct access includes direct-SCSI access, access via a controller such as an HSJ70, access via TMSCP, and access via Fibre Channel. You can specify as many groups as you wish, in addition to nodes that may not be in a group.
If the drive is in a jukebox, you must specify which jukebox using the jukebox attribute. Enter a valid jukebox name from an MDMS-defined jukebox. If there is no jukebox, MDMS treats the drive as a standalone drive or as a stacker.
A drive must support one or more media types in order for volumes to be used on the drive. In the media type attribute, specify one or more MDMS-defined media types that this drive can both read and write. If you wish, you can restrict the media types to a subset that you wish this drive to handle, and not all the media types it could physically handle. In this way, you can restrict the drive's usage somewhat.
The nodes attribute contains a list of nodes that have direct access to the drive. Direct access includes direct-SCSI access, access via a controller such as an HSJ70, access via TMSCP, and access via Fibre Channel. You can specify as many nodes as you wish, in addition to groups of nodes in the groups attribute.
In addition to media types that a drive can read and write, a drive may support one or more additional media types that it can only read. In the read-only media type attribute, specify one or more MDMS-defined media types that this drive can only read. This allows this drive to be used when the application operation is read-only (for example, HSM unshelves or ABS restores). Do not duplicate a media type in both the media type list and read-only media type list.
You can designate whether a drive is to be used by MDMS applications and users only, or by non-MDMS users. If the drive is not shared, the MDMS server process allocates the drive on all clusters to prevent non-MDMS users and applications from allocating it. However, when an MDMS user attempts to allocate the drive, MDMS will deallocate it and allow the allocation. Set the shared attribute if you wish to share the drive with non-MDMS users, and clear if you wish to restrict usage to MDMS users. ABS users who do their own user backups are considered MDMS users, as are all system backups and HSM shelving/unshelving users.
Certain types of drive can be configured as a stacker, which allows a limited automatic sequential loading capability of a set of volumes. Such drives may physically reside in a loader or have specialized hardware that allows stacker capabilities. If you wish the drive to support the stacker loading capability, set this attribute and make sure the jukebox attribute does not contain a jukebox name. If you wish the drive to operate as a jukebox or standalone drive, clear this attribute.
The drive state field determines the load state of the drive. The drive can be in one of four states:
This is a protected field that is normally handled by MDMS. Only modify this field if you know that there are no outstanding requests and the new state reflects the actual state of the drive.
You allocate a drive so that you can use it for reading and writing data to a volume. If you allocate a drive, your process ID and node are stored in the MDMS database, and the drive is allocated in OpenVMS for your process. Because the MDMSView GUI does not operate in a process context, it is not possible to allocate drives from the GUI.
You can either allocate a drive by name, or you can specify selection criteria to be used for MDMS to select an available drive for you and allocate it. The allocation selection criteria include:
You can also specify the following options when allocating a drive:
If you allocated a drive using the DCL "Allocate Drive" command, you should deallocate the drive when you are finished using it, otherwise the drive will remain allocated until your process exits.
Simply issue a deallocate drive and specify the drive name or the logical name obtained from the define option in "Allocate Drive".
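As a sketch of this sequence (the media type name here is hypothetical, and the exact qualifier names may differ by MDMS version), a drive might be allocated by selection criteria, referenced through the logical name created by the define option, and then deallocated:

$ MDMS ALLOCATE DRIVE /MEDIA_TYPE=TK89K /DEFINE=MY_DRIVE  ! select any available drive
$ MDMS DEALLOCATE DRIVE MY_DRIVE                          ! release it when finished

The logical name MY_DRIVE can be used to refer to the allocated drive in intervening commands, and is the name passed to "Deallocate Drive".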
MDMS supports two ways to load volumes into drives:
This section discusses the load drive option. The load volume option is discussed under volumes.
The "Load Drive" operation requests that either a scratch volume (in the free state) or the next volume in the stacker be loaded into the drive. In either case, the volume ID of the volume is not known until the load completes, and MDMS reads the magnetic tape label to determine the volume.
The loaded volumes may or may not already be defined in the MDMS database. You can choose to create volume records by setting the "Create" flag, and optionally providing attributes to apply to the volume as follows:
When issuing the load drive request, you can specify whether the load is for read/write (almost always the case) or read-only, and whether operator assistance is required.
You can also specify an alternative message for the operator. This is included in the OPCOM message instead of the normal MDMS operator message. Use of an alternative message is not recommended.
When initiating a load from the DCL, you can choose a synchronous operation (default) or an asynchronous operation using the /NOWAIT qualifier. From MDMSView, a load is always asynchronous, so that you can continue performing other tasks.
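For example, a sketch of an asynchronous load request from DCL (the drive and media type names are hypothetical, and qualifier spellings may vary by version):

$ MDMS LOAD DRIVE $1$MUA500 /CREATE /MEDIA_TYPE=TK89K /NOWAIT

This requests that a scratch volume be loaded, creates a volume record for it if one does not already exist because the "Create" flag is set, and returns control immediately because of /NOWAIT.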
Unlike the load drive operation, the unload drive operation can be applied to any type of drive at any time. It simply unloads the current volume in the drive, so you can use it when you do not know which volume is in the drive. Alternatively, if you do know the volume ID of the loaded volume, you can use the unload volume operation.
The only option for unload drive is to request operator assistance if needed.
When initiating an unload from the DCL, you can choose a synchronous operation (default) or an asynchronous operation using the /NOWAIT qualifier. From MDMSView, an unload is always asynchronous, so that you can continue performing other tasks.
The group object is a logical object that is simply a list of nodes that have something in common. Groups can be used to represent an OpenVMS cluster, a collection of nodes that have access to a device, or for any other purpose. A node may appear in any number of groups. Groups can be specified instead of, or in addition to nodes in drive, jukebox, save and restore objects, and can be used interchangeably with nodes in pool authorization and access control definitions.
In MDMS, a jukebox is a generic term applied to any robot-controlled device that supports automatic loading of volumes into drives. Jukeboxes include small, single-drive loaders, large multi-drive libraries and very large silos containing thousands of volumes. In general MDMS does not make distinctions among the types of jukeboxes, except for the software subsystem used to control them. MDMS supports both the Media Robot Device (MRD) subsystem for SCSI-controlled robots, and the Digital Cartridge Server Component (DCSC) subsystem for certain silos.
The next sections describe the jukebox attributes.
The access attribute controls whether the jukebox may be used via local access, remote access, or both. Local access includes direct SCSI access, access via a controller such as the HSJ70, or access via Fibre Channel, and does not require use of the Remote Device Facility (RDF). Remote access is via a DECnet network and requires RDF. You can set the access to one of the following:
For DCSC-controlled jukeboxes, the ACS identifier specifies the Automated Cartridge System Identifier. Each MDMS jukebox maps to one Library Storage Module (LSM), and requires the specification of the Library, ACS and LSM identifiers.
Automatic reply is a capability of polling hardware to determine if an operator-assist action has completed. For example, if MDMS requests that an operator move a volume into a port, MDMS can poll the port to see if the volume is there, and if so complete the OPCOM request without an operator reply. Set automatic reply to enable this feature, and clear to require an operator response. Please note that some operations cannot be polled and always require an operator reply. The OPCOM message itself clearly indicates if a reply is needed or automatic replies are enabled.
For DCSC-controlled jukeboxes equipped with Cartridge Access Points (CAPs), this attribute specifies the number of cells for each CAP. The first number is the size for CAP 0, the second for CAP 1, and so on. If a size is not specified, a default value of 40 is used. Specifying a CAP size optimizes the movement of volumes to and from the jukebox by filling the CAP to capacity for each move operation.
The control attribute determines the software subsystem that performs robotic actions in the jukebox. The control may be one of the following:
By default, jukeboxes are enabled, meaning that they can be used by MDMS and its applications. However, you may wish to disable a jukebox from use because it needs repair or is being used for some other application. Set the disabled flag to disable the jukebox, and clear the flag to enable the jukebox.
The groups attribute contains a list of groups containing nodes that have direct access to the jukebox. Direct access includes direct-SCSI access, access via a controller such as an HSJ70, and access via Fibre Channel. TMSCP access is not supported for jukeboxes. You can specify as many groups as you wish, in addition to nodes that may not be in a group.
For DCSC-controlled jukeboxes, the Library identifier specifies the library that this jukebox is in. Each MDMS jukebox maps to one Library Storage Module (LSM), and requires the specification of the Library, ACS and LSM identifiers.
The location attribute specifies the physical location of the jukebox. Location can be used as a selection criterion for selecting volumes and drives. Specify an MDMS-defined location for the jukebox. This location may be the same as, or different from, the onsite location that volumes are stored in when not in a jukebox. If different, moves from the jukebox to the onsite location and vice versa will be done in two phases: jukebox to jukebox location, then jukebox location to onsite location, and vice versa.
For DCSC-controlled jukeboxes, the Library Storage Module (LSM) identifier specifies the LSM that comprises this jukebox. Each MDMS jukebox maps to one Library Storage Module (LSM), and requires the specification of the Library, ACS and LSM identifiers.
The nodes attribute contains a list of nodes that have direct access to the jukebox. Direct access includes direct-SCSI access, access via a controller such as an HSJ70, and access via Fibre Channel. TMSCP access to jukeboxes is not supported. You can specify as many nodes as you wish, in addition to groups of nodes in the groups attribute.
For MRD-controlled jukeboxes, the robot name is the OpenVMS device name of the robot device. Robot names normally fall into one of several formats:
If the jukebox is controlled by direct connect SCSI (first option), the device must first be loaded on the system with one of the following DCL commands:
Alpha - $ MCR SYSMAN IO CONNECT GKxxx/NOADAPTER/DRIVER=SYS$GKDRIVER.EXE
For MRD jukeboxes, the slot count is simply the number of slots (which can contain volumes) in the jukebox. Volumes reside in numbered slots when they are not in a drive. Slots are numbered from 0 to (slot count - 1). Filling in this field is optional: MDMS calculates the slot count by polling the jukebox firmware.
The state attribute is a protected field that describes the current state of the jukebox. A jukebox can be in one of three states:
This field is normally maintained by MDMS, so you should not modify it unless a problem has occurred that needs manual cleanup (for example, the robot is stuck in the in-use state when it is clear that it is not in-use).
MDMS provides the capability of monitoring the number of free volumes in a jukebox. A free volume is one that is available for allocation and writing new data. Many users would like to maintain a minimum number of free volumes in a jukebox to handle tape writing needs for some period of time. You can specify a threshold value of free volumes, below which an OPCOM message is issued that asks an operator to move more free volumes into the jukebox. In addition, the color status of the jukebox in MDMSView changes to yellow if the number of free volumes falls below the threshold, and to red if there are no free volumes in the jukebox. If you wish to disable threshold OPCOM messages and color status, set the threshold value to 0.
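As an illustrative sketch (the jukebox name is hypothetical, and the qualifier spelling may differ by version), the threshold might be set or disabled from DCL as follows:

$ MDMS SET JUKEBOX JUKE_1 /THRESHOLD=10   ! warn when fewer than 10 free volumes remain
$ MDMS SET JUKEBOX JUKE_1 /THRESHOLD=0    ! disable threshold messages and color status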
The topology attribute specifies the physical configuration of a certain type of jukebox when it is being used with magazines. Topology is only useful when all of the following conditions are true:
You specify the topology of the jukebox so that you can move magazines into and out of the jukebox by specifying a position rather than a start slot.
For each tower in the jukebox, you specify the number of faces in the tower, the number of levels in each face, and the number of slots in each level. For TL820-class jukeboxes, the typical values for each tower are 8 faces, 2 or 3 levels per face and 11 slots per level. The associated magazine contains 11 slots and fits into a position specified by tower, face and level. Other jukeboxes may vary.
The usage attribute determines whether this jukebox is set up to use magazines, and has two values:
You should only set usage to magazine if you plan to use MDMS magazine objects and move all the volumes in the magazines together. An alternative is to move individual volumes separately, even if they reside in a physical magazine; in this case set usage to nomagazine.
MDMS provides the capability to inventory jukeboxes, and "discover" volumes in them and optionally create volumes in the MDMS database to match what was discovered. With this feature, you can simply place new volumes in the jukebox and let MDMS create the associated volume records with attributes that you can specify.
There are two types of inventory:
You can inventory whole jukeboxes, or specify a volume range or slot range, as follows:
While inventorying jukeboxes, MDMS can find volumes that are defined and in the jukebox, that are not defined but are in the jukebox, and that are defined but missing from the jukebox. MDMS provides several options to handle undefined and missing volumes.
If you set the "Create" flag during an inventory, MDMS will create a volume record for each undefined volume it finds in the jukebox. You can specify in advance certain attributes to be applied to this volume record:
If you do not set the "Create" flag, then MDMS will not create new volume records for undefined volumes it finds.
Conversely, you can also define what to do if a volume that should be in the jukebox (according to the database) is found not to be in the jukebox. There are three options that you can apply using the "Missing" attribute:
When initiating an inventory from the DCL, you can choose a synchronous operation (default) or an asynchronous operation using the /NOWAIT qualifier. From MDMSView, an inventory is always asynchronous, so that you can continue performing other tasks.
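For example, a sketch of an inventory request from DCL (the jukebox name is hypothetical, and qualifier names may vary by version):

$ MDMS INVENTORY JUKEBOX JUKE_1 /CREATE /NOWAIT

This discovers the volumes currently in JUKE_1, creates database records for any undefined volumes found because the "Create" flag is set, and returns control immediately because of /NOWAIT.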
A location is an MDMS object that describes the physical location of other objects. Nodes, jukeboxes, magazines, volumes and archives can all have locations associated with them. Locations are used as selection criteria for volume and drive allocation, and for placing volumes and magazines in known labelled locations.
Locations can be hierarchical, and locations in a hierarchy that share a common parent are considered compatible locations. For example, locations SHELF1 and SHELF2 are compatible if they have a common parent location such as ROOM2. Compatible locations are used when allocating drives and volumes using selection criteria, so you should define hierarchies only to the extent that you wish locations to be compatible. Locations that extend beyond a room or floor are generally not considered compatible, so you should not normally build location hierarchies beyond that level.
Locations can also contain "spaces", which are labelled areas of an onsite location in which volumes and magazines can be placed. If a volume or magazine contains a space definition, the space is output in OPCOM messages so that an operator can easily locate the volume or magazine when needed.
Locations contain two attributes, as defined in the following sections.
The parent location is an MDMS location object which is the next level up on the location hierarchy. For example, a location SHELF1 might have a parent location ROOM2, indicating that SHELF1 is in ROOM2. You should define a parent location only if you wish all locations belonging to the parent (including the parent itself) to be compatible when selecting volumes and drives. For example, in a hierarchy of SHELF1 and SHELF2 in ROOM2, volumes in any of the three locations would match a request to allocate a volume from ROOM2. Do not use the location hierarchy for other purposes.
A magazine is an MDMS object that contains a set of volumes that are planned to be moved together as a group. It can also relate to physical magazines that some jukeboxes (most notably small loaders) require to move volumes into and out of the jukebox. Magazines can be moved into and out of MRD-controlled jukeboxes with all their volumes at once.
However, just because a jukebox requires a physical magazine does not necessarily mean that you must use MDMS magazines. The physical magazine jukebox can be handled without magazines, and volumes are moved individually as far as MDMS is concerned. The choice should depend on whether you wish the volumes to move independently (don't use magazines) or as a group together (use magazines).
Magazines are not supported for DCSC-controlled jukeboxes. Magazines have the following attributes.
The jukebox name contains the name of the jukebox if the magazine is in a jukebox. When in a jukebox, a magazine can optionally have a start slot or position, as follows:
All three fields are protected and normally managed by MDMS when a "Move Magazine" operation occurs. Only manipulate these fields if an error occurs and you need to recover the database to a consistent state.
When not in a jukebox, a magazine may be either in an onsite or offsite location. An onsite location is one where the magazine can be quickly accessed and moved into a jukebox, which is also onsite. An offsite location is meant to be a secure location in the case of disaster recovery, and generally does not have local access to a jukebox. However, nothing in MDMS precludes the possibility of offsite locations having their own jukeboxes.
Each magazine should have an onsite and an offsite location defined, so that operators know where the magazine is physically located. They use these locations, the jukebox name and the placement to determine where a magazine is at any given time. Both onsite and offsite locations should be MDMS-defined location objects.
Together with the offsite and onsite locations, you can associate an offsite and onsite date. These dates represent the date the magazine is due to be moved offsite or onsite respectively. Typically, magazines are moved offsite while their volumes' data is still valid and needs to be protected in a secure location. When the volumes' data expires, the magazine should be scheduled to be brought onsite, so that the newly-freed volumes can be used for other purposes.
If an offsite and/or onsite date is specified, MDMS initiates the movement of the magazines at some point on the scheduled date automatically. This is performed by the "Move Magazine" scheduled operation, which by default runs at 1:00 am each day. Operators will see OPCOM messages to move the magazines to either the onsite or offsite location.
If you do not wish to have MDMS move magazines automatically, either remove the onsite and offsite dates from the magazine, or disable the scheduled "Move Magazine" activity by assigning a zero time to its schedule object "MDMS$MOVE_MAGAZINES".
The slot count specifies how many slots are in the magazine. Unlike jukeboxes, this value is required to make magazines work properly.
While in an onsite location, the magazine can occupy a space, which is a labelled part of a location that uniquely identifies where the magazine is. A space can be designed to handle a single volume, but since magazines hold multiple volumes, multiple spaces can also be assigned. Enter either a space or a range of spaces for the magazine.
The supported way to move magazines from one place to another is to use the "Move Magazine" operation. You can move magazines on demand by issuing this operation, or you can let MDMS automatically move magazines according to pre-defined onsite or offsite dates (this is called a "scheduled" move). You can also force an early scheduled move if you want it to occur before the time that MDMS would initiate the move. Moving magazines into jukeboxes must always be performed manually.
When initiating a "Move Magazine", you can choose a destination for the magazine if the move is not a scheduled move. The destination can be one of three types of places:
If you wish to force a scheduled move, you can select "Scheduled". In most cases, the destination is predefined, so you don't need to specify it. However, you can specify an alternative destination for the scheduled move if you wish by specifying a destination as outlined above.
Finally, you can specify whether you need operator assistance. This is recommended with "Move Magazine", as magazines cannot be moved without human intervention. Disable operator assistance only if you plan to perform the physical move yourself, or if you will notify someone manually.
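As a sketch of these operations (the magazine and jukebox names are hypothetical, and qualifier names may vary by version):

$ MDMS MOVE MAGAZINE MAG001 JUKE_1 /ASSIST   ! on-demand move into a jukebox, with operator help
$ MDMS MOVE MAGAZINE MAG001 /SCHEDULE        ! force the predefined scheduled move early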
MDMS uses media type objects to hold information about the type of media that volumes and drives can support. MDMS uses media type as a major selection criterion for allocating volumes and drives, and volumes can only be loaded into drives with compatible media types.
Media types contain four attributes, as defined in the following sections.
The capacity attribute indicates the capacity of the media in MB. This field is not used by ABS or HSM, but is used by the obsolete product "Sequential Media Filesystem" (SMF).
This important field indicates whether you wish the tape to be written with firmware compaction. Enabling compaction usually doubles the capacity of the tape, so this is a desirable option which is set by default. Clear the attribute if you do not wish compaction.
This field indicates the density of the tape that you desire. Many types of tape media (especially DLT tapes) support multiple densities, and certain types of drive can either read and write a certain density, or just read some densities. As such, you can define many media types with different densities that can be assigned to volumes and drives.
MDMS uses the density field when initializing volumes, so the density must be a valid OpenVMS density for the version of the operating system being used. Issue a "HELP INITIALIZE /DENSITY" command to determine the valid densities on the platform.
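Putting the four attributes together, a media type might be created as in the following sketch (the media type name, density keyword and capacity value are assumptions; verify the density against HELP INITIALIZE /DENSITY on your platform first):

$ MDMS CREATE MEDIA_TYPE TK89K /DENSITY=TK89 /COMPACTION /CAPACITY=35000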
An MDMS node is an OpenVMS system that is running MDMS. All nodes running MDMS must have a node object defined in the database for MDMS to work properly. The node name must be the DECnet Phase IV name of the system, if DECnet Phase IV is running or a Phase IV alias is used. Otherwise it can be any name.
Nodes contain attributes as outlined in the following sections.
MDMS operates as a group of co-operating processes running on multiple nodes in multiple clusters in an MDMS domain. One of these MDMS processes is known as the "Database Server", and it actually controls all MDMS operations in the domain. Although only one node is the database server at any one time, you should enable multiple nodes to be possible database servers in case the actual database server node fails. In this way, failover is supported.
A database server must have direct access to the database files located in MDMS$DATABASE_LOCATION. Direct access, access via MSCP, and access via Fibre Channel are all considered local access. Access via a network protocol or DFS is not considered local access. It is recommended that you enable at least three nodes as potential database servers to ensure failover capability.
You can specify the OPCOM classes to be used by MDMS for operator messages on this node. By default, the domain default OPCOM classes are used, but you can override this on a node-by-node basis. Specify one or more of the standard OpenVMS OPCOM classes - messages are directed to all login sessions with these OPCOM classes enabled.
You can define which network transports are defined for this node. There are four choices:
If you identify TCP/IP as a supported transport, you must define the TCP/IP fullname in the TCP/IP fullname field. These fullnames are normally in the format "node.loc.org.ext". For example, SLOPER.CXO.CPQCORP.COM
If you identify DECnet as a transport, you need to specify a DECnet full name only if you are using DECnet-Plus (Phase V). In this case, enter the full name, which is normally in a format such as LOCAL:.node. If you are running DECnet Phase IV, do not specify a DECnet full name. The node's node name is used.
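For example, a node supporting both transports might be created as in this sketch (the qualifier names are assumptions; the fullname is the example given above):

$ MDMS CREATE NODE SLOPER /TRANSPORT=(DECNET,TCPIP) -
       /TCPIP_FULLNAME=SLOPER.CXO.CPQCORP.COM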
A pool is a logical MDMS object that associates a set of volumes with a set of users that are authorized to use those volumes. Every volume can be assigned one pool, for which we say that the volume is in the pool. The pool is then assigned a set of users that are authorized to use the volumes in the pool. If a volume does not have a pool specified, then it is said to belong to the "scratch pool" for which no authorization is required.
Pools have three attributes that are discussed in the following sections.
You can specify a list of authorized users for the pool, as a comma-separated list of users. Each user should be specified as node::username or group::username, where both the node/group and username portions can contain wildcard characters (*%). To authorize everyone, you can specify *::*. To authorize everyone on a node you can specify nodename::*. Everyone in the authorized user list is allowed to allocate volumes in the pool. Other users require MDMS_ALL_RIGHTS or MDMS_ALLOCATE_ALL rights.
Default users are authorized like the authorized users, but in addition are assigned this pool as their default pool. In this case, if they attempt to allocate a volume and don't specify a pool, they will allocate a volume from this pool. A particular user need only appear in one list: they do not need to be listed in both lists to be an authorized user to their default pool.
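As a sketch (the node and user names are hypothetical, and qualifier spellings may vary by version), a pool with both lists might be created like this:

$ MDMS CREATE POOL PAYROLL -
       /AUTHORIZED_USERS=(BOSTON::SMITH, CLUSTER1::*) -
       /DEFAULT_USERS=(BOSTON::JONES)

BOSTON::JONES can then allocate from PAYROLL without naming a pool, while BOSTON::SMITH and all CLUSTER1 users must request the pool explicitly.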
Pools are useful for dividing volumes between groups or organizations, but they are only useful if there are free volumes in the pool. MDMS provides the capability of monitoring the number of free volumes in a pool. A free volume is one that is available for allocation and writing new data. Many users would like to maintain a minimum number of free volumes in a pool to handle tape writing needs for some period of time. You can specify a threshold value of free volumes, below which an OPCOM message is issued that asks an operator to add more free volumes to the pool. In addition, the color status of the pool in MDMSView changes to yellow if the number of free volumes falls below the threshold, and to red if there are no free volumes in the pool. If you wish to disable threshold OPCOM messages and color status, set the threshold value to 0.
A volume is a physical piece of tape media that contains (or will contain) data written by MDMS applications (ABS or HSM), or user applications. Volumes have many attributes concerning their placement, allocation status, life-cycle dates, protection attributes and many other things.
Volume records can be created manually with a "Create Volume" operation, or automatically by MDMS with "Inventory Jukebox" and "Load Drive" operations. The MDMS$CONFIGURE command procedure can also be used to create volumes.
Once a volume is created it acquires a state. This state determines how the volume may be used at any time, and to an extent where the volume should be placed.
The following figure illustrates the life cycle of volumes, and the following table indicates how a volume transitions from one state to another.
Each row describes an operation with current and new volume states, commands and GUI actions that cause volumes to change states, and if applicable, the volume attributes that MDMS uses to cause volumes to change states. Descriptions following the table explain important aspects of each operation.
The following sections describes all the volume attributes in detail, followed by operations that you can perform on volumes.
The account, username and UIC fields are filled in automatically when a volume is allocated, and reflect the calling user or the user specified during the allocation. The username is a valid OpenVMS username on the client system performing the allocation, and the account and UIC are taken from the user's entry in the system user authorization file (UAF).
These fields are normally maintained by MDMS and are protected fields. You should not modify these fields unless the volume is deallocated. MDMS maintains the Account, Username and UIC in the volume even after the volume is deallocated, so that you can "retain" the volume back to the allocated state in case of accidental deallocation.
There are several dates that maintain or control allocation and movement dates for volumes. These are as follows:
If an offsite and/or onsite date is specified, MDMS initiates the movement of the volumes at some point on the scheduled date automatically. This is performed by the "Move Volumes" scheduled operation, which by default runs at 1:00 am each day. Operators will see OPCOM messages to move the volumes to either the onsite or offsite location.
If you do not wish to have MDMS move volumes automatically, either remove the onsite and offsite dates from the volume, or disable the scheduled "Move Volumes" activity by assigning a zero time to its schedule object "MDMS$MOVE_VOLUMES".
The history dates are maintained by MDMS, but are for information purposes only. MDMS does not use these dates to perform any operations. The following history dates are maintained:
The state field indicates where in a volume's life cycle the volume exists. The state field itself is protected, and you should not normally adjust it unless an error occurs. However, you can "Update State" using certain keywords, which checks for validity and results in a consistent database state.
A volume can be in one of the following states, which are shown in normal life-cycle order:
A picture showing the normal state transitions is provided at the top of the volumes section.
While changing the state directly is not recommended, there are several options for changing state that are supported:
A volume's media types define the type of media for the volume, and what potential compaction or density options the volume can support. As such, before a volume is initialized, it can potentially support many media types. However, once a volume is initialized, MDMS uses the density and compaction attributes from a media type to physically write the tape. As such, a volume should only support one media type at and after the first initialization.
If the volume is in the Uninitialized state, select one or more MDMS-defined media types for the volume. If the volume is in any other state, select a single media type. If no media type is specified, the domain default media type is used.
A pool contains a collection of volumes that can be used by a set of authorized users. To insert a volume into a pool, simply specify a pool name in the volume's pool field. If not defined, the volume is placed in the "scratch pool", and it can be allocated by any user. If the volume is in the free state, the number of free volumes in the pool is incremented.
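A volume might be placed in a pool with a command along the following lines; the volume and pool names are examples only:

$!The volume and pool names here are hypothetical
$MDMS SET VOLUME ABC001/POOL=ABS_POOL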
These read-only fields indicate if a volume is in a volume set, and what the previous and next volumes are in the set, relative to this volume. A volume set is created when a tape write operation reaches end-of-tape and a new tape is required to complete the operation. ABS and HSM bind the next volume to the current volume, and create a volume set.
These fields are manipulated by "Bind Volume" and "Unbind Volume" operations, both manually and under control of MDMS applications.
The placement fields of a volume indicate where the volume resides, and where it should reside when moved to an onsite or offsite location. The placement attributes include the following:
Placement is a protected field managed by MDMS. You should not change placement unless error recovery is needed.
The format fields are not used by ABS, HSM or MDMS, but can be used to document certain characteristics of the volume and its data format. The fields are as follows:
The protection field provides System, Owner, Group and World access protection for the volume. This protection is written to the volume when it is initialized, and provides protection from unauthorized use and re-initialization. The standard protection is:
SYSTEM(R, W) OWNER (R, W) GROUP (R) WORLD (None)
If protection is not set for the volume, the domain default protection is used.
MDMS provides three counters for volumes, as follows:
You allocate volumes so that you can use them for writing new data. Allocating a volume places it into the Allocated state, and assigns the calling user (or specified user), UIC, and account in the allocation fields. This effectively reserves the volume to the user. The volume remains allocated to the user and unavailable for other use until the scratch date is reached, or unless the volume is manually deallocated.
When allocating a volume, you may specify the user for which you are allocating the volume (for example, ABS). If you do not specify a user, then you as the calling user are placed in the allocation fields.
Also, during allocation, you can change the following fields in the MDMS database to reflect the format to be used on the tape:
Instead of allocating a volume by name, you can specify selection criteria to be used for MDMS to select a free volume for you and allocate it. You can also allocate a volume set by specifying a count of volumes to allocate. The allocation selection criteria include:
If you specify a volume count of more than one, then that many volumes will be allocated and placed in a volume set. If you also use the "Bind Volume" selection option, the new volume set is bound to the specified volume set.
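A selection-based allocation might be sketched as follows. The media type and pool names are hypothetical, and the qualifier names may vary by version:

$!Allocate a free volume by selection criteria rather than by name
$MDMS ALLOCATE VOLUME/MEDIA_TYPE=TK89K/POOL=ABS_POOL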
You can also specify that you wish to change certain attributes of the volume as follows:
MDMS normally deallocates volumes when their scratch date expires. However, you can deallocate volumes manually in order to free them up earlier than planned. You can deallocate your own volumes, or with the appropriate rights deallocate volumes allocated to other users.
If the volume is in a volume set, the volume is also unbound from the volume set.
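An early manual deallocation might be issued as in this sketch (the volume name is hypothetical):

$!Free the volume ahead of its scratch date
$MDMS DEALLOCATE VOLUME ABC001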
The following options are available when you deallocate a volume:
Binding volumes is the way to create volume sets, by binding one volume (or volume set) to another volume (or volume set). Normally, MDMS applications such as ABS and HSM perform automatic binding when they reach end-of-tape. However, it is sometimes necessary to perform manual binding. For example, if a volume set has been accidentally deallocated but is still needed, you may need to manually bind the set together (although the retain feature does this quite well).
There are only two options when binding a volume set:
When you bind a new volume to a volume or volume set, the new volume acquires the following attributes of the volume set:
The next and previous volumes are also updated appropriately.
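A manual bind might look like the following sketch, where ABC002 is appended to the set containing ABC001. Both names are hypothetical, and the qualifier form is an assumption; confirm the exact syntax with MDMS HELP BIND:

$!Append ABC002 to the volume set containing ABC001 (qualifier form assumed)
$MDMS BIND VOLUME ABC002/TO=ABC001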
Unbinding a volume removes the volume from the volume set without deallocating it. When unbinding a volume you can choose whether to unbind the entire volume set, or break the volume set at the point of the unbind. You can also unbind on behalf of the allocated user.
There are only two options for unbind:
MDMS supports two ways to load volumes into drives:
This section discusses the load volume option. The load drive option is discussed under drives.
When loading a specific volume, you normally need to specify the drive in which to load the volume, unless a drive has been specifically allocated for a volume (via DCL only). Select a drive with a compatible media type for the volume.
If you are loading a volume into a jukebox drive and the volume is not in the jukebox, you can specify that an automatic "Move Volume" request be issued to move the volume into the jukebox. If you do not specify this option and the volume is not in the jukebox, the operation fails.
Another option is to request MDMS to check the volume label. This is normally a good idea as there can be mismatches between the volume's magnetic label and its bar code label. If the labels do not match, the load fails. If you do not set the label check flag, the load may succeed but the label may be wrong. Use this option with caution.
When issuing the load volume request, you can specify whether the load is for read/write or read-only, and whether operator assistance is required.
You can also specify an alternative message for the operator. This is included in the OPCOM message instead of the normal MDMS operator message. Use of an alternative message is not recommended.
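Putting these options together, a load request might be sketched as follows. The drive and volume names are hypothetical, and the qualifier names are assumptions to be checked against your version:

$!Load ABC001 into a specific drive, verifying the magnetic label,
$!with operator assistance
$MDMS LOAD VOLUME ABC001/DRIVE=DRIVE_1/CHECK_LABEL/ASSIST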
You can unload a specific volume from a drive by issuing the "Unload Volume" operation. Unlike the "Unload Drive" operation which unloads any volume from the drive, the "Unload Volume" function checks the label on the volume on the drive before unloading it. If the label can be read and does not match the specified volume, the unload fails.
There is only one option for unload volume - operator assistance. This is recommended unless you are personally monitoring the unload operation.
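For example (the volume name is hypothetical; the assistance qualifier is an assumption):

$!Unload after the volume label on the drive has been verified
$MDMS UNLOAD VOLUME ABC001/ASSIST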
The supported way to move volumes from one place to another is to use the "Move Volume" operation. You can move volumes on demand by issuing this operation, or you can let MDMS automatically move volumes according to pre-defined onsite or offsite dates (this is called a "scheduled" move). You can also force an early scheduled move if you want it to occur before the time that MDMS would initiate the move. Moving volumes into jukeboxes or magazines must always be performed manually.
When initiating a "Move Volume", you can choose a destination for the volume if the move is not a scheduled move. The destination can be one of four types of places:
If you wish to force a scheduled move, you can select "Scheduled". In most cases, the destination is predefined, so you don't need to specify it. However, you can specify an alternative destination for the scheduled move if you wish by specifying a destination as outlined above.
Finally, you can specify if you need operator assistance. This is recommended with "Move Volume" because human intervention is necessary to move volumes. Only if you plan to do the physical move yourself or you manually let someone know would you disable operator assistance.
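A demand move might be sketched as follows; the volume name and destination are hypothetical, and the assistance qualifier is an assumption:

$!Move ABC001 to an offsite location, with operator assistance
$MDMS MOVE VOLUME ABC001 VAULT_1/ASSIST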
MDMS supports initialization of volumes to make them available for use. Initializing a volume consists of writing an ANSI label on the volume, and applying compaction and density attributes and the volume protection field in the label. The volume is then free to be written. If the volume was in the Uninitialized state, it will now change to the Free state. All volumes need to be initialized at least once before ABS and HSM can allocate and use them.
Volumes that are already written need to be initialized again if you wish to use the whole volume for writing again. Both ABS and MDMS initialize volumes on every allocation.
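An explicit initialization might look like the following (the volume name is hypothetical):

$!Write a new ANSI label on the volume, making it free for reuse
$MDMS INITIALIZE VOLUME ABC001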
When initializing volumes, you can specify four options:
The Installation Guide provides information about establishing the MDMS domain configuration. The information in this chapter goes beyond the initial configuration of MDMS, explaining concepts in more detail than the product installation and configuration guide. This chapter also includes procedures related to changing an existing MDMS configuration.
The major sections in this chapter focus on the MDMS domain and its components, and the devices that MDMS manages.
A sample configuration for MDMS is shown in the following figure.
To manage drives and volumes, you must first configure the scope of the MDMS management domain. This includes placing the database in the best location to assure availability, installing and configuring the MDMS process on nodes that serve ABS V3 or HSM V3, and defining node and domain object record attributes. The MDMS Domain is defined by:
The MDMS database is a collection of OpenVMS RMS files that store the records describing the objects you manage. The following table lists the files that make up the MDMS database.
If you are familiar with the structure of OpenVMS RMS files, you can tune and maintain them over the life of the database. You can find File Definition Language (FDL) files in the MDMS$ROOT:[SYSTEM] directory for each of the database files. Refer to the OpenVMS Record Management System documentation for more information on tuning RMS files and using the supplied FDL files.
MDMS keeps track of all objects by recording their current state in the database. In the event of a catastrophic system failure, you would start recovery operations by rebuilding the system, and then by restoring the important data files in your enterprise. Before restoring those data files, you would have to first restore the MDMS database files.
Another scenario would be the failure of the storage system on which the MDMS files reside. In the event of a complete disk or system failure, you would have to restore the contents of the disk device containing the MDMS database.
The procedures in this section describe ways to create backup copies of the MDMS database. These procedures use the MDMS$SYSTEM:MDMS$COPY_DB_FILES.COM command procedure, which copies database files with the CONVERT/SHARE command. The procedure in How to Back Up the MDMS Database Files describes how to copy the MDMS database files only. The procedure in Processing MDMS Database Files for an Image Backup describes how to process the MDMS database files when they are copied as part of an image backup of the disk device.
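The copy procedure named above can be invoked directly. Any parameters it accepts are site-specific, so this sketch shows only the basic invocation:

$!Produce shareable copies of the database files via CONVERT/SHARE
$@MDMS$SYSTEM:MDMS$COPY_DB_FILES.COM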
To Make Backup Copies of the MDMS Database
The following procedure describes how you can make backup copies of just the MDMS database files using the OpenVMS Backup Utility. This procedure does not account for other files on the device.
To Process the MDMS Database for an Image Backup of the Device
The following procedure shows how to process the MDMS database files for an image backup. The image backup could be part of a periodic full backup and subsequent incrementals. This procedure also describes how to use the files in case you restore them.
In the event the disk device on which you keep the MDMS database runs out of space, you have the option of moving the MDMS database, or moving other files off the device. The procedure described in this section explains the actions you would have to perform to move the MDMS database. Use this procedure first as a gauge to decide whether moving the MDMS database would be easier or more difficult than moving the other files. Secondarily, use this procedure to relocate the MDMS database to another disk device.
See How to Move the MDMS Database for the procedure to move the MDMS database to a new device location.
This section describes the MDMS software process, including server availability, interprocess communication, and start up and shut down operations.
Each node in an MDMS domain has one MDMS server process running. Within an MDMS domain only one server will be serving the database to other MDMS servers. This node is designated as the MDMS Database Server, while the others become MDMS clients. Of the servers listed as database servers, the first one to start up tries to open the database. If that node can successfully open the database, it is established as the database server. Other MDMS servers will then forward user requests to the node that has just become the database server.
Subsequently, if the database server fails because of a hardware failure or a software-induced shutdown, the clients compete among themselves to become the database server. Whichever client is the first to successfully open the database becomes the new database server. The other clients then forward user requests to the new database server. User requests issued on the node that is the database server are processed on that node immediately.
During installation you create the MDMS user account as shown in MDMS User Account. This account is used by MDMS for every operation it performs.
Username: MDMS$SERVER Owner: SYSTEM MANAGER
Account: SYSTEM UIC: [1,4] ([SYSTEM])
CLI: DCL Tables:
Default: SYS$SYSROOT:[SYSMGR]
LGICMD: SYS$LOGIN:LOGIN
Flags: DisForce_Pwd_Change DisPwdHis
Primary days: Mon Tue Wed Thu Fri Sat Sun
Secondary days:
No access restrictions
Expiration: (none) Pwdminimum: 14 Login Fails: 0
Pwdlifetime: 30 00:00 Pwdchange: 08-Jan-2003 12:19
Maxjobs: 0 Fillm: 500 Bytlm: 100000
Maxacctjobs: 0 Shrfillm: 0 Pbytlm: 0
Maxdetach: 0 BIOlm: 10000 JTquota: 4096
Prclm: 10 DIOlm: 300 WSdef: 5000
Prio: 4 ASTlm: 300 WSquo: 10000
Queprio: 0 TQElm: 300 WSextent: 30000
CPU: (none) Enqlm: 2500 Pgflquo: 300000
Authorized Privileges:
DIAGNOSE NETMBX PHY_IO READALL SHARE SYSNAM SYSPRV TMPMBX WORLD
Default Privileges:
DIAGNOSE NETMBX PHY_IO READALL SHARE SYSNAM SYSPRV TMPMBX WORLD
MDMS creates the SYS$STARTUP:MDMS$SYSTARTUP.COM command procedure on the initial installation. This file includes logical assignments that MDMS uses when the node starts up. The installation process also offers the opportunity to make initial assignments to the logicals.
If you install MDMS once for shared access in an OpenVMS Cluster environment, this file is shared by all members. If you install MDMS on individual nodes within an OpenVMS Cluster environment, this file is installed on each node.
In addition to creating node object records and setting domain and node attributes, you must define logicals in the MDMS start up file. These are all critical tasks to configure the MDMS domain.
MDMS$SYSTARTUP.COM Logical Assignments provides brief descriptions of most of the logical assignments in MDMS$SYSTARTUP.COM. More detailed descriptions follow as indicated.
- MDMS$DATABASE_SERVERS: List of all nodes that can run as the MDMS database server. See MDMS$DATABASE_SERVERS - Identifies Domain Database Servers for more information.
- MDMS$LOGFILE_LOCATION: Device and directory of the MDMS log file. See MDMS$LOGFILE_LOCATION for more information.
- Database location: Device and directory of the MDMS database files. All installations in any one domain must define this as a common location. The MDMS Database identifies the MDMS database files and describes how they should be managed.
- Outgoing ports: Range of ports for the node to use for outgoing connections. The default range is for privileged ports, 1 through 1023.
- SLS/MDMS V2.9x support: Support for SLS/MDMS Version 2.9x clients. The default value is FALSE. If you need to support some systems running SLS/MDMS Version 2.9x, set this value to TRUE.
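In MDMS$SYSTARTUP.COM these assignments appear as DCL DEFINE statements. The sketch below shows only the two logical names quoted in the descriptions above; the node names and device are placeholders:

$!Example assignments - node names and device are placeholders
$DEFINE/SYSTEM MDMS$DATABASE_SERVERS "NODE_A,NODE_B"
$DEFINE/SYSTEM MDMS$LOGFILE_LOCATION DISK$MDMS:[MDMS.LOG]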
Of all the nodes in the MDMS domain, you select those which can act as a database server. Only one node at a time can be the database server. Other nodes operating at the same time communicate with the node acting as the database server. In the event the server node fails, another node operating in the domain can become the database server if it is listed in the MDMS$DATABASE_SERVERS logical.
For instance, in an OpenVMS Cluster environment, you can identify all nodes as a potential server node. If the domain includes an OpenVMS Cluster environment and some number of nodes remote from it, you could identify a remote node as a database server if the MDMS database is on a disk served by the Distributed File System software (DECdfs). However, if you do not want remote nodes to function as a database server, do not enter their names in the list for this assignment.
The names you use must be the full network name specification for the transports used. The following table shows example node names for each of the possible transport options. If a node uses both DECnet and TCP/IP, full network names for both should be defined in the node object record.
Defines the location of the Log Files. For each server running, MDMS uses a log file in this location. The log file name includes the name of the cluster node it logs.
For example, the log file name for a node with a cluster node name NODE_A would be:
To shut down MDMS on the current node enter this command:
$@SYS$STARTUP:MDMS$SHUTDOWN.COM
To restart MDMS (shut down and immediate restart), enter the shut down command and the parameter RESTART:
$@SYS$STARTUP:MDMS$SHUTDOWN RESTART
The MDMS node object record characterizes the function of a node in the MDMS domain and describes how the node communicates with other nodes in the domain.
To participate in an MDMS domain, a node object has to be entered into the MDMS database. This node object has four attributes to describe its connections in a network:
When an MDMS server starts up, it only has its network node name(s) to identify itself in the MDMS database. Therefore, if a node has a network node name but that name is not defined in the node object records of the database, the node will be rejected as not being fully enabled. For example, a node has a TCP/IP name and TCP/IP is running, but the node object record shows the TCP/IP full name as blank.
There is one situation where an MDMS server is allowed to function even if it does not have a node object record defined or the node object record does not list all network names. This is in the case of the node being an MDMS database server. Without this exception, no node entries can be created in the database. As long as a database server is not fully enabled in the database it will not start any network listeners.
This section describes how to designate an MDMS node as a database server, enable and disable the node.
Designating Potential Database Servers
When you install MDMS, you must decide which nodes will participate as potential database servers. To be a database server, the node must be able to access the database disk device.
Typically, in an OpenVMS Cluster environment, all nodes would have access to the database disk device, and would therefore be identified as potential database servers.
Set the database server attribute for each node identified as a potential database server. For nodes in the domain that are not going to act as a database server, negate the database server attribute.
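With the CLI, setting or negating the attribute might look like the following sketch; the node names are hypothetical and the qualifier form follows normal DCL negation:

$!NODE_A may serve the database; NODE_B may not
$MDMS SET NODE NODE_A/DATABASE_SERVER
$MDMS SET NODE NODE_B/NODATABASE_SERVER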
Disabling and Enabling MDMS Nodes
There are several reasons for disabling an MDMS node.
Disable the node from the command line or the GUI and restart MDMS.
When you are ready to return the node to service, enable the node.
Nodes in the MDMS domain have two network transport options: one for DECnet, the other for TCP/IP. When you configure a node into the MDMS domain, you can specify either or both these transport options by assigning them to the transport attribute. If you specify both, MDMS will attempt interprocessor communications on the first transport value listed. MDMS will then try the second transport value if communication fails on the first.
If you are using the DECnet-Plus network transport, define the full DECnet-Plus node name in the DECnet fullname attribute. If you are using an earlier version of DECnet, leave the DECnet-Plus fullname attribute blank.
If you are using the TCP/IP network transport, enter the node's full TCP/IP name in the TCPIP fullname attribute. You can also specify the receive ports used by MDMS to listen for incoming requests. By default, MDMS uses the port range of 2501 through 2510. If you want to specify a different port or range of ports, append that specification to the TCPIP fullname. For example:
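A port range appended to the fullname might be written as in this sketch; the node and domain names are hypothetical:

$!Listen on ports 2555 through 2560 instead of the default range
$MDMS SET NODE NODE_A/TCPIP_FULLNAME="NODE_A.SITE.COM:2555-2560"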
Describe the function and purpose of the node with the description attribute. Use the location attribute to identify the MDMS location where the node resides.
List the OPCOM classes of operators with terminals connected to this node who will receive OPCOM messages. Operators who enable those classes will receive OPCOM messages pertaining to devices connected to the node.
For more information about operator communication, see Managing Operations.
MDMS provides the group object record to define a group of nodes that share common drives or jukeboxes. Typically, the group object record represents all nodes in an OpenVMS Cluster environment, when drives in the environment are accessible from all nodes.
Some configurations involve sharing a device between nodes of different OpenVMS Cluster environments. You could create a group that includes all nodes that have access to the device.
When you create a group to identify shared access to a drive or jukebox assign the group name as an attribute of the drive or jukebox. When you set the group attribute of the drive or jukebox object record, MDMS clears the node attribute.
The following command examples create functionally equivalent drive object records.
$!These commands create a drive connected to a Group object
$MDMS CREATE GROUP CLUSTER_A /NODES=(NODE_1,NODE_2,NODE_3)
$MDMS CREATE DRIVE NODE$MUA501/GROUPS=CLUSTER_A
$!
$!This command creates a drive connected to NODE_1, NODE_2, and NODE_3
$MDMS CREATE DRIVE NODE$MUA501/NODES=(NODE_1,NODE_2,NODE_3)
Groups in the MDMS Domain shows a model of how clusters of nodes are organized in groups and how devices are shared between groups.
The domain object record describes global attributes for the domain and includes the description attribute, where you can enter an open text description of the MDMS domain. Additional domain object attributes define configuration parameters, access rights options, and default volume management parameters. See The MDMS Domain.
Operator Communications for the Domain
Include all operator classes to which OPCOM messages should go as a comma separated list value of the OPCOM classes attribute. MDMS uses the domain OPCOM classes when nodes do not have their classes defined.
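Domain-wide classes might be set as in the following sketch; the class list shown is illustrative only:

$!Send MDMS OPCOM messages to the TAPES and DEVICES classes
$MDMS SET DOMAIN/OPCOM_CLASSES=(TAPES,DEVICES)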
For more information about operator communication, see Managing Operations.
Resetting the Request Identifier Sequence
If you want to change the request identifier for the next request, use the request id attribute.
This section briefly describes the attributes of the domain object record that implement rights controls for MDMS users. Refer to the appendix on MDMS Rights and Privileges for a description of the MDMS rights implementation.
If you use MDMS to support ABS, you can set the ABS rights attribute to allow any user with any ABS right to perform certain actions with MDMS. This feature provides a short cut to managing rights by enabling ABS users and managers access to just the features they need. Negating this attribute means users with any ABS rights have no additional MDMS rights.
MDMS defines default low level rights for the application rights attribute according to what ABS and HSM minimally require to use MDMS.
Default Rights for Various System Users
If you want to grant all users certain MDMS rights without having to modify their UAF records, you can assign those low level rights to the default rights attribute. Any user without specific MDMS rights in their UAF file will have the rights assigned to the default rights identifier.
Use the operator rights attribute to identify all low level rights granted to any operator who has been granted the MDMS_OPERATOR right in their UAF.
Use the SYSPRV attribute to allow any process with SYSPRV enabled the rights to perform any and all operations with MDMS.
Use the user rights attribute to identify all low level rights granted to any user who has been granted the MDMS_USER right in their UAF.
The MDMS domain includes attributes used as the foundation for volume management. Some of these attributes provide defaults for volume management and movement activities; others define particular behavior for all volume management operations. The values you assign to these attributes will, in part, dictate how your volume service functions. The following table lists brief descriptions of these attributes.
This section addresses issues that involve installing additional MDMS nodes into an existing domain, or removing nodes from an operational MDMS domain.
Once you configure the MDMS domain, you might have the opportunity to add a node to the existing configuration. Adding a Node to an Existing Configuration describes the procedure for adding a node to an existing MDMS domain.
1. Create a node object record with either the CLI or GUI.
2. Decide if the node will be a database server or will only function as an MDMS client.
3. If the node will not share an existing startup file and database server image, install the MDMS software with the VMSINSTAL utility.
4. If the new node is a database server, add the node by its network transport names to the MDMS$DATABASE_SERVERS list in all startup files in the MDMS domain.
MDMS manages the use of drives for the benefit of its clients, ABS and HSM. You must configure MDMS to recognize the drives and the locations that contain them. You must also configure MDMS to recognize any jukebox that contains managed drives.
You will create drive, location, and possibly jukebox object records in the MDMS database. The attribute values you give them will determine how MDMS manages them. The meanings of some object record attributes are straightforward. This section describes others because they are more important for configuring operations.
Before you begin configuring drives for operations, you need to determine the following aspects of drive management:
You must give each drive a name that is unique within the MDMS domain. The drive object record can be named with the OpenVMS device name, if desired, just as long as the name is not duplicated elsewhere.
Use the description attribute to store a free text description of anything useful to your management of the drive. MDMS stores this information, but takes no action with it.
The device attribute must contain the OpenVMS allocation class and device name for the drive. If the drive is accessed from nodes other than the one from which the command was entered, you must specify nodes or groups in the /NODE or /GROUP attributes in the drive record. Do not specify nodes or groups in the drive name or the device attribute.
If the drive resides in a jukebox, you must specify the name of the jukebox with the jukebox attribute. Identify the position of the drive in the jukebox by setting the drive number attribute. Drives start at position 0.
Additionally, the jukebox that contains the drives must also be managed by MDMS.
MDMS allows you to dedicate a drive solely to MDMS operations, or share the drive with other users and applications. Specify your preference with the shared attribute.
You need to decide which systems in your data center are going to access the drives you manage.
Use the groups attribute if you created group object records to represent nodes in an OpenVMS Cluster environment or nodes that share a common device.
Use the nodes attribute if you have no reason to refer to any collection of nodes as a single entity, and you plan to manage nodes, and the objects that refer to them, individually.
The last decision is whether the drive serves locally connected systems, or remote systems using the RDF software. The access attribute allows you to specify local, remote (RDF) or both.
Specify the kinds of volumes that can be used in the drive by listing the associated media type name in the media types attribute. You can force the drive to not write volumes of particular media types. Identify those media types in the read only attribute.
If the drive has a mechanism for holding multiple volumes, and can feed the volumes sequentially to the drive, but does not allow for random access or you choose not to use the random access feature, then you can designate the drive as a stacker by setting the stacker attribute.
Set the disabled attribute when you have to exclude the managed drive from operations by MDMS. If the drive is the only one of its kind (for example if it accepts volumes of a particular media type that no other drives accept), make sure you have another drive that can take load requests. Return the drive to operation by setting the enabled attribute.
The drive object record state attribute shows the state of managed MDMS drives. MDMS sets one of four values for this attribute: Empty, Full, Loading, or Unloading.
The procedure that follows describes how to add a drive to the MDMS domain.
A subsequent procedure describes how to remove a drive from the MDMS domain.
MDMS manages Media Robot Driver (MRD) controlled jukeboxes and DCSC controlled jukeboxes. MRD is software that controls SCSI-2 compliant medium changers. DCSC is software that controls large jukeboxes manufactured by StorageTek, Inc. This section first describes the MDMS attributes used for describing all jukeboxes by function. Subsequent descriptions explain attributes that characterize MRD jukeboxes and DCSC jukeboxes respectively.
Assign unique names to jukeboxes you manage in the MDMS domain. When you create the jukebox object record, supply a name that describes the jukebox.
Set the control attribute to MRD if the jukebox operates under MRD control. Otherwise, set the control to DCSC.
Use the description attribute to store a free text description of the drive. You can describe its role in the data center operation or other useful information. MDMS stores this information for you, but takes no actions with it.
You can dedicate a jukebox solely to MDMS operations, or you can allow other applications and users access to the jukebox device. Specify your preference with the shared attribute.
You need to decide which systems in the data center are going to access the jukebox.
Use the groups attribute if you created group object records to represent nodes in an OpenVMS Cluster environment or nodes that share a common device.
Use the nodes attribute if you have no reason to refer to any collection of nodes as a single entity, and you plan to manage nodes, and the objects that refer to them, individually.
Disable the jukebox to exclude it from operations. Make sure that applications using MDMS will either use other managed jukeboxes, or make no request of a jukebox you disable. Enable the jukebox after you complete any configuration changes. Drives within a disabled jukebox cannot be allocated.
Set the library attribute to the library identifier of the particular silo the jukebox object represents. MDMS supplies 1 as the default value. You will have to set this value according to the number of silos in the configuration and the sequence in which they are configured.
Specify the number of slots for the jukebox. Alternatively, if the jukebox supports magazines, specify the topology for the jukebox (see Magazines and Jukebox Topology).
The robot attribute must contain the OpenVMS device name of the jukebox medium changer (also known as the robotic device).
If the jukebox is accessed from nodes other than the one from which the command was entered, you must specify nodes or groups in the /NODE or /GROUP attributes in the jukebox record. Do not specify nodes or groups in the jukebox name or the robot attribute.
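Pulling these attributes together, an MRD jukebox might be created with a sketch like the following. The jukebox name, robot device, and node name are placeholders, and the qualifier set shown is an assumption:

$!Robot device and node names are placeholders
$MDMS CREATE JUKEBOX JUKE_1/CONTROL=MRD/ROBOT=NODE$GKA500/SLOTS=96/NODES=(NODE_1)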
The jukebox object record state attribute shows the state of managed MDMS jukeboxes. MDMS sets one of three values for this attribute: Available, In use, and Unavailable.
If you decide that your operations benefit from the management of magazines (groups of volumes moved through your operation under a single name), you must set the jukebox object record to enable it. Set the usage attribute to magazine and define the jukebox topology with the topology attribute. See Magazines for an overview of how the 11 and 7 slot bin packs can be used as a magazine.
Setting the usage attribute to nomagazine means that you will move volumes into and out of the jukebox independently (using separate commands for each volume, regardless of whether they are placed in a physical magazine).
Towers, Faces, Levels, and Slots
Some jukeboxes have their slot range subdivided into towers, faces, and levels. See Jukebox Topology for an overview of how the configuration of towers, faces, levels, and slots constitutes a topology. Note that the topology in Jukebox Topology comprises 3 towers. In the list of topology characteristics, you should identify every tower in the configuration. For each tower in the configuration, you must in turn identify:
Restrictions for Using Magazines
You must manually open the jukebox when moving magazines into and out of it. Once in the jukebox, volumes can be loaded and unloaded only relative to the slots in the magazine they occupy.
When using multiple TL896 jukebox towers, you can treat the 11 slot bin packs as magazines. The following command configures the topology of the TL896 jukebox, as shown in Magazines, for use with magazines:
$ MDMS CREATE JUKEBOX JUKE_1 -
$_ /TOPOLOGY=(TOWERS=(0,1,2), FACES=(8,8,8), -
$_ LEVELS=(3,3,2), SLOTS=(11,11,11))
This section describes some of the management issues that involve both drives and jukeboxes.
Drive and jukebox object records both use the automatic load reply attribute to provide an additional level of automation.
When you set the automatic reply attribute to the affirmative, MDMS will poll the drive or jukebox for successful completion of an operator-assisted operation for those operations where polling is possible. For example, MDMS can poll a drive, determine that a volume is in the drive, and cancel the associated OPCOM request to acknowledge a load. Under these circumstances, an operator need not reply to the OPCOM message after completing the load.
To use this feature, set the automatic reply attribute to the affirmative. When this attribute is set to the negative, which is the default, an operator must acknowledge each OPCOM request for the drive or jukebox before the request is completed.
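For example, assuming a managed drive named TAPE_1 (a hypothetical name), the attribute might be set from the command line as follows; to restore the default, use the negated form of the qualifier:

$ MDMS SET DRIVE TAPE_1 /AUTOMATIC_REPLY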
If you need to make backup copies to a drive in a remote location over the network, you must install the Remote Device Facility (RDF) software. The RDF software must then be configured to work with MDMS.
See Actions for Configuring Remote Drives for a description of the actions you need to take to configure RDF software.
When you add another drive to a managed jukebox, just specify the name of the jukebox in which the drive resides, in the drive object record.
You can temporarily remove a drive or jukebox from service. MDMS allows you to disable and enable drive and jukebox devices. This feature supports maintenance or other operations where you want to maintain MDMS support for ABS or HSM, and temporarily remove a drive or jukebox from service.
During the course of management, you might encounter a requirement to change the device names of drives or jukeboxes under MDMS management, to avoid confusion in naming. When you have to change the device names, follow the procedure in Changing the Names of Managed Devices.
MDMS allows you to identify locations in which you store volumes. Create a location object record for each place the operations staff uses to store volumes. These locations are referenced during move operations, load to, or unload from stand-alone drives.
If you need to divide your location space into smaller, named locations, define locations hierarchically. The location attribute of the location object record allows you to name a higher level location. For example, you can create location object records to describe separate rooms in a data center by first creating a location object record for the data center. After that, create object records for each room, specifying the data center name as the value of the location attribute for the room locations.
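For example, the data center arrangement described above might be created as follows; the object names and description text are illustrative:

$ MDMS CREATE LOCATION DATA_CENTER /DESCRIPTION="Building 1 data center"
$ MDMS CREATE LOCATION ROOM_101 /LOCATION=DATA_CENTER /DESCRIPTION="Room 101 in the Building 1 data center"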
When allocating volumes or drives by location, the volumes and drives do not have to be in the exact location specified; rather, they should be in a compatible location. A location is considered compatible with another if both have a common root higher in the location hierarchy. For example, in Named Locations, locations Room_304 and Floor_2 are considered compatible, as they both have location Building_1 as a common root.
Your operations staff must be informed about the names of these locations as they will appear in OPCOM messages. Use the description attribute of the location object record to describe the location it represents as accurately as possible. Your operations staff can refer to the information in the event they become confused about a location mentioned in an OPCOM message.
You can divide a location into separate spaces to identify locations of specific volumes. Use the spaces attribute to specify the range of spaces in which volumes can be stored. If you do not need that level of detail in the placement of volumes at the location, negate the attribute.
The appendix Sample Configuration of MDMS contains a set of sample MDMS V4 configurations. These samples will help you check your own configuration for completeness.
MDMS manages volume availability with the concept of a life cycle. The primary purpose of the life cycle is to ensure that volumes are only written when appropriate, and by authorized users. By setting a variety of attributes across multiple objects, you control how long a volume, once written, remains safe. You also set the time and interval for a volume to stay at an offsite location for safe keeping, then return for re-use once the interval passes.
This section describes the volume life cycle, relating object attributes, commands and life cycle states. This section also describes how to match volumes with drives by creating media type object records.
The volume life cycle determines when volumes can be written, and controls how long they remain safe from being overwritten. MDMS Volume State Transitions describes operations on volumes within the life cycle.
Each row describes an operation with current and new volume states, commands and GUI actions that cause volumes to change states, and if applicable, the volume attributes that MDMS uses to cause volumes to change states. Descriptions following the table explain important aspects of each operation.
This section describes the transitions between volume states. These processes enable you to secure volumes from unauthorized use by MDMS client applications, or make them available to meet continuing needs. Additionally, in some circumstances, you might have to manually force a volume transition to meet an operational need.
Understanding how these volume transitions occur automatically under MDMS control, or take place manually will help you manage your volumes effectively.
You have more than one option for creating volume object records. You can create them explicitly with the MDMS CREATE VOLUME command: individually, or for a range of volume identifiers.
You can create the volumes implicitly as the result of an inventory operation on a jukebox. If an inventory operation finds a volume that is not currently managed, a possible response (as you determine) is to create a volume object record to represent it.
You can also create volume object records for large numbers of volumes by opening the jukebox, loading the volumes into the jukebox slots, then running an inventory operation.
Finally, it is possible to perform scratch loads on stand-alone or stacker drives using the MDMS LOAD DRIVE /CREATE command. If the loaded volume does not exist in the database, MDMS creates it.
In summary, volumes are created either explicitly through the MDMS CREATE VOLUME command, or implicitly through inventory or load operations.
Use the MDMS initialize feature to make sure that MDMS recognizes volumes as initialized. Unless you acquire preinitialized volumes, you must explicitly initialize them with MDMS before you can use them. If your operations require it, you can also initialize volumes that have just been released from allocation.
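For example, a newly created volume might be initialized with a command of the following form, where ABC001 is a hypothetical volume identifier:

$ MDMS INITIALIZE VOLUME ABC001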
When you initialize a volume or create a volume object record for a preinitialized volume, MDMS records the date in the initialized date attribute of the volume object record.
Typically, applications request the allocation of volumes. Only in rare circumstances will you have to allocate a volume to a user other than ABS or HSM. However, if you use command procedures for customized operations that require the use of managed media, you should be familiar with the options for volume allocation. Refer to the ABS or HSM Command Reference Guide for more information on the MDMS ALLOCATE command.
Once an application allocates a volume, MDMS allows read and write access to that volume only by that application. MDMS sets volume object record attributes to control transitions between volume states. Those attributes include:
The application requesting the volume can direct MDMS to set additional attributes for controlling how long it keeps the volume and how it releases it. These attributes include:
MDMS allows no other user or application to load or unload a volume with the state attribute value set to ALLOCATED, unless the user has MDMS_LOAD_ALL rights. This volume state allows you to protect your data. Set the amount of time a volume remains allocated according to your data retention requirements.
During this time, you can choose to move the volume to an offsite location.
When a volume's scratch date passes, MDMS automatically frees the volume from allocation.
If the application or user negates the volume object record scratch date attribute, the volume remains allocated permanently.
Use this feature when you need to retain the data on the volume indefinitely.
After the data retention time has passed, you have the option of making the volume immediately available, or you can elect to hold the volume in a TRANSITION state. To force a volume through the TRANSITION state, negate the volume object record transition time attribute.
You can release a volume from transition with the DCL command MDMS SET VOLUME /RELEASE. Conversely, you can re-allocate a volume from either the FREE or TRANSITION states with the DCL command MDMS SET VOLUME /RETAIN.
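For example, using a hypothetical volume identifier ABC001:

$ MDMS SET VOLUME ABC001 /RELEASE ! release the volume from transition
$ MDMS SET VOLUME ABC001 /RETAIN ! re-allocate a free or transitioning volume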
Once MDMS sets a volume's state to FREE, it can be allocated for use by an application once again.
You can make a volume unavailable if you need to prevent ongoing processing of the volume by MDMS. MDMS retains the state from which you set the UNAVAILABLE state. When you decide to return the volume for processing, the volume state attribute returns to its previous value.
The ability to make a volume unavailable is a manual feature of MDMS.
MDMS matches volumes with drives capable of loading them by providing the logical media type object. The media type object record includes attributes whose values describe the attributes of a type of volume.
The domain object record names the default media types that any volume object record will take if none is specified.
Create a media type object record to describe each type of volume. Drive object records include an attribute list of media types the drive can load, read, and write.
Volume object records for uninitialized volumes include a list of candidate media types. Volume object records for initialized volumes include a single attribute value that names a media type. To allocate a drive for a volume, the volume's media type must be listed in the drive object record's media type field, or its read-only media-type field for read-only operations.
Use magazines when your operations allow you to move and manage groups of volumes for single users. Create a magazine object record, then move volumes into the magazine (or similar carrier) with MDMS. All the volumes can now be moved between locations and jukeboxes by moving the magazine to which they belong.
The jukeboxes must support the use of magazines; that is, they must use carriers that can hold multiple volumes at once. If you choose to manage the physical movement of volumes with magazines, then you may set the usage attribute to MAGAZINE for jukebox object records of jukeboxes that use them. You may also define the topology attribute for any jukebox used for magazine based operations.
If your jukebox does not have ports, and requires you to use physical magazines, you do not have to use the MDMS magazine object record. The jukebox can still access volumes by slot number. Single volume operations can still be conducted by using the move operation on individual volumes, or on a range of volumes.
MDMS provides a feature that allows you to define a series of OpenVMS DCL symbols that describe the attributes of a given volume. By using the /SYMBOLS qualifier with the MDMS SHOW VOLUME command, you can define symbols for all the volume object record attribute values. Use this feature interactively, or in DCL command procedures, when you need to gather information about volumes for subsequent processing.
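For example, the following command defines DCL symbols describing a hypothetical volume ABC001; you can then examine the resulting symbols with the DCL SHOW SYMBOL command:

$ MDMS SHOW VOLUME ABC001 /SYMBOLS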
Refer to the ABS or HSM Command Reference Guide description of the MDMS SHOW VOLUME command.
MDMS manages volumes and devices as autonomously as possible. However, it is sometimes necessary - and perhaps required - that your operations staff be involved with moving volumes or loading volumes in drives. When MDMS cannot conduct an automatic operation, it sends a message through the OpenVMS OPCOM system to an operator terminal to request assistance.
Understanding this information will help you set up effective and efficient operations with MDMS.
This section describes how to set up operator communication between MDMS and the OpenVMS OPCOM facility. Follow the steps in Setting Up Operator Communication to set up operator communication.
Set the domain object record OPCOM attribute with the default OPCOM classes for any node in the MDMS management domain.
Each MDMS node has a corresponding node object record. An attribute of the node object record is a list of OPCOM classes through which operator communication takes place. Choose one or more OPCOM classes for operator communication to support operations with this node.
Identify the operator terminals closest to MDMS locations, drives and jukeboxes. In that way, you can direct the operational communication between the nodes and terminals whose operators can respond to it.
Make sure that the terminals are configured to receive OPCOM messages from those classes. Use the OpenVMS REPLY/ENABLE command to set the OPCOM class that corresponds to those set for the node or domain.
$REPLY/ENABLE=(opcom_class,[...])
Where opcom_class specifications are those chosen for MDMS communication.
Several commands include an assist feature that lets you either require or forego operator involvement. Other MDMS features allow you to communicate with particular OPCOM classes, making sure that specific operators get messages. You can configure jukebox drives for automatic loading, and stand alone drives for operator supported loading. See Operator Management Features for a list of operator communication features and your options for using them.
Once configured, MDMS serves ABS and HSM with uninterrupted access to devices and volumes for writing data. Once allocated, MDMS catalogs volumes to keep them safe, and makes them available when needed to restore data.
To service ABS and HSM, you must supply volumes for MDMS to make available, enable MDMS to manage the allocation of devices and volumes, and meet client needs for volume retention and rotation.
To create and maintain a supply of volumes, you must regularly add volumes to MDMS management, and set volume object record attributes to allow MDMS to meet ABS and HSM needs.
To prepare volumes for use by MDMS, you must create volume object records for them and initialize them if needed. MDMS provides different mechanisms for creating volume object records: the create, load, and inventory operations. When you create volume object records, you should consider these factors:
The following sections provide more detailed information.
If you create volume object records with the use of a vision equipped jukebox, you must command MDMS to use the jukebox vision system and identify the slots in which the new volumes reside. These two operational parameters must be supplied to either the create or inventory operation.
For command driven operations, these two commands are functionally equivalent.
$MDMS INVENTORY JUKEBOX jukebox_name /VISION/SLOTS=slot_range /CREATE
$MDMS CREATE VOLUME /JUKEBOX=jukebox_name /VISION/SLOTS=slot_range
If you create volume object records with the use of a jukebox that does not have a vision system, you must supply the range of volume names as they are labelled and as they occupy the slot range.
If you create volume object records for volumes that reside in a location other than the default location (as defined in the domain object record), you must identify the placement of the volumes and the location in the onsite or offsite attribute. Additionally, you must specify the volume name or range of volume names.
If you create volume object records for volumes that reside in the default onsite location, you need not specify the placement or onsite location. However, you must specify the volume name or range of volume names.
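For example, the following sketch creates volume object records for a range of volumes at the default onsite location; the volume range syntax and media type value are illustrative:

$ MDMS CREATE VOLUME BCD001-BCD010 /MEDIA_TYPE=TLZ09M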
If you acquire preinitialized volumes for MDMS management, and you want to bypass the MDMS initialization feature, you must specify a single media type attribute value for the volume.
Select the format to meet the needs of your MDMS client application. For HSM, use the BACKUP format. For ABS, use BACKUP or RMUBACKUP.
Use a record length that best satisfies your performance requirements. Set the volume protection using standard OpenVMS file protection syntax. Assign the volume to a pool you might use to manage the consumption of volumes between multiple users.
Static volume attributes rarely, if ever, need to be changed. MDMS provides them to store information that you can use to better manage your volumes.
The description attribute stores up to 255 characters for you to describe the volume, its use, history, or any other information you need.
The brand attribute identifies the volume manufacturer.
Use the record length attribute to store the length or records written to the volume, when that information is needed.
If you use a stand alone drive, enable MDMS operator communication on a terminal near the operator who services the drive. MDMS signals the operator to load and unload the drive as needed.
You must have a ready supply of volumes to satisfy load requests. If your application requires specific volumes, they must be available, and the operator must load the specific volumes requested.
To enable an operator to service a stand alone drive during MDMS operation, perform the actions listed in Configuring MDMS to Service a Stand Alone Drive.
MDMS incorporates many features that take advantage of the mechanical features of automated tape libraries and other medium changers. Use these features to support lights-out operation, and effectively manage the use of volumes.
Jukeboxes that use built-in vision systems to scan volume labels provide the greatest advantage. If the jukebox does not have a vision system, MDMS has to get volume names by other means. For some operations, the operator provides volume names individually or by range. For other operations, MDMS mounts the volume and reads the recorded label.
The inventory operation registers the contents of a jukebox correctly in the MDMS database. You can use this operation to update the contents of a jukebox whenever you know, or have reason to suspect, that the contents of the jukebox have changed without MDMS involvement.
When you need to update the database in response to unknown changes in the contents of the jukebox, use the inventory operation against the entire jukebox. If you know the range of slots subject to change, then constrain the inventory operation to just those slots.
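For example, to inventory only the first twelve slots of a hypothetical vision-equipped jukebox named JUKE_1:

$ MDMS INVENTORY JUKEBOX JUKE_1 /VISION /SLOTS=0-11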
If you inventory a jukebox that does not have a vision system, MDMS loads and mounts each volume, to read the volume's recorded label.
When you inventory a subset of slots in the jukebox, use the option to ignore missing volumes.
If you need to manually adjust the MDMS database to reflect the contents of a jukebox, use the nophysical option of the MDMS move operation. This allows you to perform a logical move to update the MDMS database.
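For example, the following sketch records a hypothetical volume ABC001 as residing in jukebox JUKE_1 without physically moving anything:

$ MDMS MOVE VOLUME ABC001 JUKE_1 /NOPHYSICAL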
Inventory to Create Volume Object Records
If you manage a jukebox, you can use the inventory operation to add volumes to MDMS management. The inventory operation includes the create, preinitialized, media types, and inherit qualifiers to support such operations.
Take the steps in How to Create Volume Object Records with INVENTORY to use a vision-equipped jukebox to create volume object records.
To assist with accounting for volume use by data center clients, MDMS provides features that allow you to divide the volumes you manage by creating volume pools and assigning volumes to them.
Use MDMS to specify volume pools. Set the volume pool options in ABS or HSM to specify that volumes be allocated from those pools for users as needed. Pools and Volumes identifies the pools for a designated group of users. Note that `No Pool' is for use by all users.
The pool object record includes two attributes to assign pools to users: authorized users, and default users.
Set the authorized users list to include all users, by node or group name, who are allowed to allocate volumes from the pool.
Set the default users list to include all users, by node or group name, for whom the pool will be the default pool. Unless another pool is specified during allocation, volumes will be allocated from the default pool for users in the default users list.
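As a sketch, the following command creates a hypothetical pool and assigns both user lists; the node::username format and all names are assumptions for illustration:

$ MDMS CREATE POOL ABS_POOL -
$_ /AUTHORIZED_USERS=(NODE_A::SMITH,NODE_A::JONES) -
$_ /DEFAULT_USERS=(NODE_A::SMITH)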
Because volume pools are characterized in part by node or group names, anytime you add or remove nodes or groups, you must review and adjust the volume pool attributes as necessary.
After you create a volume pool object record, you can associate managed volumes with it. Select the range of volumes you want to associate with the pool and set the pool attribute of the volumes to the name of the pool.
This can be done during creation or at any time the volume is under MDMS management.
To change access to volume pools, modify the membership of the authorized users list attribute.
If you are using the command line to change user access to volume pools, use the /ADD and /REMOVE command qualifiers to modify the current list contents. Use the /NOAUTHORIZED_USERS qualifier to erase the entire user list for the volume pool.
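For example, using a hypothetical pool and hypothetical user names:

$ MDMS SET POOL ABS_POOL /AUTHORIZED_USERS=NODE_A::BROWN /ADD
$ MDMS SET POOL ABS_POOL /AUTHORIZED_USERS=NODE_A::JONES /REMOVE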
If you are using the GUI to change user access to volume pools, just edit the contents of the authorized users field.
You can also authorize users with the /DEFAULT_USERS attribute, which means that the users are authorized, and that this pool is the pool for which allocation requests for volumes are applied if no pool is specified in the allocation request. You should ensure that any particular user has a default users entry in only one pool.
You can delete volume pools. However, deleting a volume pool may require some additional clean up to maintain the MDMS database integrity. Some volume records could still have a pool attribute that names the pool to be deleted, and some DCL command procedures could still reference the pool.
If volume records naming the pool exist after deleting the pool object record, find them and change the value of the pool attribute.
The MDMS CREATE VOLUME and MDMS LOAD DRIVE commands in DCL command procedures can specify the deleted pool. Change references to the deleted pool object record, if they exist, to prevent the command procedures from failing.
You might want to remove volumes from management for a variety of reasons:
To temporarily remove a volume from management, set the volume state attribute to UNAVAILABLE. Any volume object record with the state set to UNAVAILABLE remains under MDMS management, but is not processed though the life cycle. These volumes will not be set to the TRANSITION or FREE state. However, these volumes can be moved and their location maintained.
To permanently remove a volume from management, delete the volume object record describing it.
Volume rotation involves moving volumes to an off-site location for safekeeping with a schedule that meets your needs for data retention and retrieval. After a period of time, you can retrieve volumes for re-use, if you need them. You can rotate volumes individually, or you can rotate groups of volumes that belong to magazines.
The first thing you have to do for a volume rotation plan is create location object records for the on-site and off-site locations. Make sure these location object records include a suitable description of the actual locations. You can optionally specify hierarchical locations and/or a range of spaces, if you want to manage volumes by actual space locations.
You can define as many different locations as your management plan requires.
Once you have object records that describe the locations, choose those that will be the domain defaults (defined as attributes of the domain object record). The default locations will be used when you create volumes or magazines and do not specify onsite and/or offsite location names. You can define only one onsite location and one offsite location as the domain default at any one time.
Manage the volume rotation schedule with the values of the offsite and onsite attributes of the volumes or magazines you manage. You set these values. In addition to setting these attribute values, you must check the schedule periodically to select and move the volumes or magazines.
Sequence of Volume Rotation Events shows the sequence of volume rotation events and identifies the commands and GUI actions you issue.
MDMS starts three scheduled activities at 1 AM, by default, to do the following:
These three activities are controlled by a logical name, run as separate named jobs, generate log files, and notify users when volumes are deallocated. Each of these aspects is described in the sections below.
The start time for scheduled activities is controlled by the logical:
MDMS$SCHEDULED_ACTIVITIES_START_HOUR
By default, the scheduled activities start at 1 AM, which is defined as:
$ DEFINE/SYSTEM/NOLOG MDMS$SCHEDULED_ACTIVITIES_START_HOUR 1
You can change when the scheduled activities start by changing this logical in SYS$STARTUP:MDMS$SYSTARTUP.COM. The hour must be an integer between 0 and 23.
When these scheduled activities jobs start up, they have the following names:
If any volumes are deallocated, the users in the Mail attribute of the Domain object will receive notification by VMS mail.
Operators will receive Opcom requests to move the volumes or magazines.
These scheduled activities generate log files. These log files are located in MDMS$LOGFILE_LOCATION and are named:
These log files do not show which volumes or magazines were acted upon. They show the command that was executed and whether it was successful or not.
If the OPCOM message is not replied to by the time the next scheduled activity starts, the activity is canceled and a new activity is scheduled. For example, if nobody replied to the message from Saturday at 1 AM, then on Sunday MDMS cancels the request and generates a new request. The log file for Saturday night would look like this:
$ SET VERIFY
$ SET ON
$ MDMS MOVE VOL */SCHEDULE
%MDMS-E-CANCELED, request canceled by user
MDMS$SERVER job terminated at 08-Jan-2003 01:01:30.48
Nothing is lost because the database did not change, but this new request could require more volumes or magazines to be moved.
The following shows an example that completed successfully after deallocating and releasing the volumes:
$ SET VERIFY
$ SET ON
$ MDMS DEALLOCATE VOLUME /SCHEDULE/VOLSET
MDMS$SERVER job terminated at 08-Jan-2003 01:03:31.66
To notify users when the volumes are deallocated, place the user names in the Mail attribute of the Domain object. For example:
$ MDMS show domain
Description: Smith's Special Domain
Mail: SYSTEM,OPERATOR1,SMITH
Offsite Location: JOHNNY_OFFSITE_TAPE_STORAGE
Onsite Location: OFFICE_65
Def. Media Type: TLZ09M
Deallocate State: TRANSITION
Opcom Class: TAPES
Request ID: 496778
Protection: S:RW,O:RW,G:R,W
DB Server Node: DEBBY
DB Server Date: 08-Jan-2003 14:20:08
Max Scratch Time: NONE
Scratch Time: 365 00:00:00
Transition Time: 1 00:00:00
Network Timeout: NONE
$
In the above example, users SYSTEM, OPERATOR1, and SMITH will receive VMS mail when any volumes are deallocated during scheduled activities or when someone issues the following command:
$ MDMS DEALLOCATE VOLUME /SCHEDULE/VOLSET
If you delete all users in the Mail attribute, nobody will receive mail when volumes are deallocated by the scheduled activities or the DEALLOCATE VOLUME /SCHEDULE command.
MDMS GUI users have access to features that guide them through complex tasks. These features conduct a dialog with users, asking them about their particular configuration and needs, and then provide the appropriate object screens with information about setting specific attribute values.
The features support tasks that accomplish the following:
The procedures outlined in this section include command examples with recommended qualifier settings shown. If you choose to perform these tasks with the command line interface, use the MDMS command reference for complete command details.
This task offers the complete set of steps for configuring a drive or jukebox to an MDMS domain and adding new volumes used by those drives. This task can be performed to configure a new drive or jukebox that can use managed volumes.
You can also use this task to add new volumes into management for use with managed drives and jukeboxes.
1. Verify that the drive is on-line and available.
2. If you are connecting the jukebox or drive to a set of nodes that do not already share access to a common device, create a group object record.
3. If you are configuring a new jukebox into management, create a jukebox object record.
4. If the drive you are configuring uses a new type of volume, create a media type object record.
5. If you need to identify a new place for volume storage near the drive, create a location object record.
6. Create the drive object record for the drive you are configuring into MDMS management.
7. Enable the drive (and, if you just added a jukebox, enable the jukebox as well).
8. If you are adding new volumes into MDMS management, continue with the remaining steps.
9. If you have added a new media type to complement a new type of drive, and you plan to use managed volumes, set the volumes to use the new media type.
10. If the volumes you are processing are of a type you do not presently manage, create a media type object record. Otherwise, continue with the next step.
11. If you are using a jukebox with a vision system to create volume object records, continue with step 12. Otherwise, skip to step 14 to create the volume records manually.
12. If you use magazines in your operation, complete this step. Otherwise, continue with step 13.
   a. If you do not have a managed magazine that is compatible with the jukebox, create a magazine object record.
   b. Place the volumes in the magazine, then move the magazine into the jukebox:
      $MDMS MOVE MAGAZINE magazine_name jukebox_name /START_SLOT=n
13. Place the volumes in the jukebox. If you are not using all the slots in the jukebox, note the slots you are using for this operation. Inventory the jukebox, or just the slots that contain the new volumes.
   - If you are processing preinitialized volumes, use the /PREINITIALIZED qualifier; your volumes are then ready for use.
   - Otherwise, initialize the volumes in the jukebox. After you initialize the volumes, you are done with this procedure.
14. Create volume object records for the volumes you are going to manage.
   - If you are processing preinitialized volumes, use the /PREINITIALIZED qualifier; your volumes are then ready for use.
   - Otherwise, initialize the volumes. This operation directs the operator when to load and unload the volumes from the drive.
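As a rough sketch only, the configuration steps above might map onto MDMS commands along the following lines. The object names (JUKE_1, TK89K, JUKE_1_D1) and the qualifiers shown are hypothetical illustrations, not verified syntax; consult the MDMS command reference for the exact commands on your version.

```
$ ! Hypothetical names and illustrative qualifiers -- verify with HELP MDMS
$ MDMS CREATE JUKEBOX JUKE_1                    ! jukebox object record
$ MDMS CREATE MEDIA_TYPE TK89K                  ! media type for the new drive
$ MDMS CREATE DRIVE JUKE_1_D1 /JUKEBOX=JUKE_1 /MEDIA_TYPE=TK89K
$ MDMS INVENTORY JUKEBOX JUKE_1 /PREINITIALIZED ! create volume records
```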
This task describes the complete set of decisions and actions you might take when removing a drive from management; that is, when you remove the last drives of a particular kind, take with them all associated volumes, and then update any remaining MDMS object records that reference the object records you delete. Any lesser task, such as removing just one drive (of several that remain) or removing and discarding volumes, involves a subset of the activities described in this procedure.
1. If there is a volume in the drive you are about to remove from management, unload the volume from the drive.
2. Delete the drive from management.
3. If you have media type object records that serviced only the drive you just deleted, complete this step. Otherwise, continue with step 4.
   a. Delete the media type object record.
   b. If volumes remaining in management reference the media type, set the volume attribute for those volumes to reference a different media type. Separate commands apply to uninitialized and initialized volumes.
4. If the drives you deleted belonged to a jukebox, complete this step. Otherwise, continue with step 5. If the jukebox still contains volumes, move the volumes (or magazines, if you manage the jukebox with magazines) from the jukebox to a location that you plan to keep under MDMS management.
5. If a particular location served the drives or jukebox, and you no longer need to manage it, delete the location.
6. Move all volumes whose records you are going to delete to a managed location.
7. If the volumes to be deleted exclusively use a particular media type, and that media type has a record in the MDMS database, complete this step. Otherwise, continue with step 8.
   a. Delete the media type object record.
   b. If drives remaining under MDMS management reference the media type you just deleted, update those drives' media type lists accordingly.
8. If the volumes to be deleted are the only volumes belonging to a volume pool, and there is no longer a need for the pool, delete the volume pool.
9. If the volumes to be deleted exclusively used certain managed magazines, delete the magazines.
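As an illustration only, a removal sequence might use commands along these lines. The object names are hypothetical and the qualifiers are unverified sketches; consult the MDMS command reference for exact syntax.

```
$ ! Hypothetical names and illustrative qualifiers -- verify with HELP MDMS
$ MDMS UNLOAD DRIVE JUKE_1_D1                ! empty the drive first
$ MDMS DELETE DRIVE JUKE_1_D1                ! remove the drive record
$ MDMS SET VOLUME ABC001 /MEDIA_TYPE=TK85K   ! re-point a remaining volume
$ MDMS DELETE MEDIA_TYPE TK89K               ! retire the unused media type
```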
This procedure describes how to gather volumes from the onsite location and rotate them to an offsite location. Use this procedure in accordance with your data center site rotation schedule to move backup copies of data (or data destined for archival) to an offsite location. The procedure also processes volumes returning from the offsite location back into the onsite location.
This procedure describes the steps you take to move allocated volumes out of a jukebox and replace them with scratch volumes. It is aimed at supporting backup operations, not operations that use managed media for hierarchical storage management.
1. Report on the volumes to remove from the jukebox.
2. If you manage the jukebox on a volume basis, perform this step for each volume. Otherwise, proceed with step 3 for magazine management.
3. Identify the magazines to which the volumes belong, then move the magazines out of the jukebox.
4. If you manage the jukebox on a volume basis, perform this step. Otherwise, proceed with step 5 for magazine management.
5. Move free volumes into the magazine, and move the magazine into the jukebox.
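For magazine-based rotation, the moves might look like the following sketch. MAG_01, MAG_02, VAULT_1, and JUKE_1 are hypothetical names; the command form follows the MOVE MAGAZINE example shown earlier in this chapter, but verify the destination syntax in the MDMS command reference.

```
$ ! Hypothetical names; destination syntax is illustrative
$ MDMS MOVE MAGAZINE MAG_01 VAULT_1               ! allocated volumes offsite
$ MDMS MOVE MAGAZINE MAG_02 JUKE_1 /START_SLOT=0  ! scratch volumes into jukebox
```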
This chapter explains how to configure and manage remote devices using the Remote Device Facility (RDF). RDF is used for devices remotely connected over a wide-area network, and DECnet is still a requirement for access to these remote devices. RDF is not required for devices connected remotely via Fibre Channel, as these are considered local devices.
When you install ABS (non-standard installation) or MDMS, you are asked whether you want to install the RDF software. With the ABS standard installation, the RDF client and server software is installed by default.
During the installation you place the RDF client software on the nodes with disks you want to access for ABS or HSM. You place the RDF server software on the systems to which the tape devices (jukeboxes and drives) are connected. This means that when using RDF, you serve the tape device to the systems with the client disks.
All of the files for RDF are placed in SYS$COMMON:[MDMS.TTI_RDF] for your system. There are separate locations for VAX or Alpha.
After installing RDF you should check the TTI_RDEV:CONFIG_nodename.DAT file to make sure it has correct entries.
Check this file to make sure that all RDF characteristic names are unique to this node.
The following sections describe how to use RDF with MDMS.
RDF software is started automatically along with the MDMS software when you start MDMS.
The following privileges are required to execute the RDSHOW procedure: NETMBX, TMPMBX.
In addition, the following privileges are required to show information on remote devices allocated by other processes: SYSPRV, WORLD.
You can run the RDSHOW procedure any time after the MDMS software has been started. RDF software is automatically started at this time.
$ @TTI_RDEV:RDSHOW CLIENT
$ @TTI_RDEV:RDSHOW SERVER node_name
$ @TTI_RDEV:RDSHOW DEVICES
node_name is the node name of any node on which the RDF server software is running.
To show remote devices that you have allocated, enter the following command from the RDF client node:
$ @TTI_RDEV:RDSHOW CLIENT
RDALLOCATED devices for pid 20200294, user DJ, on node OMAHA::
Local logical Rmt node Remote device
TAPE01 MIAMI:: MIAMI$MUC0
DJ is the user name and OMAHA is the current RDF client node.
The RDSHOW SERVER procedure shows the available devices on a specific SERVER node. To execute this procedure, enter the following command from any RDF client or RDF server node:
$ @TTI_RDEV:RDSHOW SERVER MIAMI
MIAMI is the name of the server node whose devices you want shown.
Available devices on node MIAMI::
Name Status Characteristics/Comments
MIAMI$MSA0 in use msa0
...by pid 20200246, user CATHY (local)
MIAMI$MUA0 in use mua0
...by pid 202001B6, user CATHY, on node OMAHA::
MIAMI$MUB0 -free- mub0
MIAMI$MUC0 in use muc0
...by pid 2020014C, user DJ, on node OMAHA::
This RDSHOW SERVER command shows any available devices on the server node MIAMI, including any device characteristics. In addition, each allocated device shows the process PID, username, and RDF client node name.
The text (local) is shown if the device is locally allocated.
To show all allocated remote devices on an RDF client node, enter the following command from the RDF client node:
$ @TTI_RDEV:RDSHOW DEVICES
Devices RDALLOCATED on node OMAHA::
RDdevice Rmt node Remote device User name PID
RDEVA0: MIAMI:: MIAMI$MUC0 DJ 2020014C
RDEVB0: MIAMI:: MIAMI$MUA0 CATHY 202001B6
This command shows all allocated devices on the RDF client node OMAHA. Use this command to determine which devices are allocated on which nodes.
This section describes network issues that are especially important when working with remote devices.
The Network Control Program (NCP) is used to change various network parameters. RDF (and the rest of your network as a whole) benefits from changing two NCP parameters on all nodes in your network: the pipeline quota and the number of line receive buffers.
The pipeline quota is used to send data packets at an even rate. It can be tuned for specific network configurations. For example, in an Ethernet network, the number of packet buffers represented by the pipeline quota can be calculated as approximately:
buffers = pipeline_quota / 1498
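As a check on the arithmetic above, the buffer estimate can be sketched in Python. The function name is my own; the 1498-byte packet size comes from the formula in the text.

```python
# Approximate packet buffers represented by a DECnet pipeline quota on an
# Ethernet network (1498 usable bytes per packet, per the formula above).
def pipeline_buffers(pipeline_quota: int, packet_size: int = 1498) -> int:
    return pipeline_quota // packet_size

# The default quota of 10000 allows about 6 packets before an ACK is
# required; raising it to 45000 allows about 30.
print(pipeline_buffers(10000))  # 6
print(pipeline_buffers(45000))  # 30
```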
The default pipeline quota is 10000. At this value, only six packets can be sent before acknowledgment of a packet from the receiving node is required. The sending node stops after the sixth packet is sent if an acknowledgment is not received.
The PIPELINE QUOTA can be increased to 45,000 allowing 30 packets to be sent before a packet is acknowledged (in an Ethernet network). However, performance improvements have not been verified for values higher than 23,000. It is important to know that increasing the value of PIPELINE QUOTA improves the performance of RDF, but may negatively impact performance of other applications running concurrently with RDF.
Similar to the pipeline quota, line receive buffers are used to receive data at a constant rate.
The default setting for the number of line receive buffers is 6.
The number of line receive buffers can be increased to 30 allowing 30 packets to be received at a time. However, performance improvements have not been verified for values greater than 15 and as stated above, tuning changes may improve RDF performance while negatively impacting other applications running on the system.
As stated in the DECnet-Plus (Phase V) (DECnet/OSI V6.1) Release Notes, a pipeline quota is not used directly. Users can influence packet transmission rates by adjusting the values of the transport characteristics MAXIMUM TRANSPORT CONNECTIONS, MAXIMUM RECEIVE BUFFERS, and MAXIMUM WINDOW. The transmit quota is determined by MAXIMUM RECEIVE BUFFERS divided by the actual number of transport connections.
This quota is used for the transmit window unless MAXIMUM WINDOW is smaller, in which case MAXIMUM WINDOW is used for the transmit window.
The DECnet-Plus defaults (MAXIMUM TRANSPORT CONNECTIONS = 200 and MAXIMUM RECEIVE BUFFERS = 4000) produce a transmit window of 20. Decreasing MAXIMUM TRANSPORT CONNECTIONS with a corresponding increase of MAXIMUM WINDOW may improve RDF performance, but may also negatively impact other applications running on the system.
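The window rule described above can be sketched as follows. This is my reading of the release-note rule, with a hypothetical helper name, not HP-supplied code.

```python
# DECnet-Plus transmit window: the transmit quota is MAXIMUM RECEIVE
# BUFFERS divided by the actual transport connections, capped by
# MAXIMUM WINDOW.
def transmit_window(max_receive_buffers: int,
                    actual_connections: int,
                    maximum_window: int) -> int:
    quota = max_receive_buffers // actual_connections
    return min(quota, maximum_window)

# DECnet-Plus defaults: 4000 buffers, 200 connections, window cap 20.
print(transmit_window(4000, 200, 20))  # 20
# Fewer connections with a raised MAXIMUM WINDOW yields a larger window.
print(transmit_window(4000, 100, 40))  # 40
```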
This section describes how to change the network parameters for DECnet Phase IV and DECnet-Plus.
The pipeline quota is an NCP executor parameter. The line receive buffers setting is an NCP line parameter.
The following procedure shows how to display and change these parameters in the permanent DECnet database. These changes should be made on each node of the network.
For the changed parameters to take effect, the node must be rebooted, or DECnet must be shut down and restarted.
The Network Control Language (NCL) is used to change DECnet-Plus network parameters. The transport parameters MAXIMUM RECEIVE BUFFERS, MAXIMUM TRANSPORT CONNECTIONS and MAXIMUM WINDOW can be adjusted by using NCL's SET OSI TRANSPORT command. For example:
NCL> SET OSI TRANSPORT MAXIMUM RECEIVE BUFFERS = 4000 !default value
NCL> SET OSI TRANSPORT MAXIMUM TRANSPORT CONNECTIONS = 200 !default value
NCL> SET OSI TRANSPORT MAXIMUM WINDOW = 20 !default value
To make the parameter change permanent, add the NCL command(s) to the SYS$MANAGER:NET$OSI_TRANSPORT_STARTUP.NCL file. Refer to the DECnet-Plus (DECnet/OSI) Network Management manual for detailed information.
Changing the default values of line receive buffers and the pipeline quota to the values of 30 and 45000 consumes less than 140 pages of nonpaged dynamic memory.
In addition, you may need to increase the number of large request packets (LRPs) and raise the default value of NETACP BYTLM.
LRPs are used by DECnet to send and receive messages. The number of LRPs is governed by the SYSGEN parameters LRPCOUNT and LRPCOUNTV.
A minimum of 30 free LRPs is recommended during peak times. Show these parameters and the number of free LRPs by entering the following DCL command:
$ SHOW MEMORY/POOL/FULL
System Memory Resources on 24-JUN-1991 08:13:57.66
Large Packet (LRP) Lookaside List Packets Bytes
Current Total Size 36 59328
Initial Size (LRPCOUNT) 25 41200
Maximum Size (LRPCOUNTV) 200 329600
Free Space 20 32960
In the LRP lookaside list, this system shows a Current Total Size of 36, while the SYSGEN parameter LRPCOUNT (the Initial Size) is set to 25. Because the Current Size is not the same as the Initial Size, OpenVMS had to allocate more LRPs, which degrades system performance while OpenVMS expands the LRP lookaside list.
The LRPCOUNT should have been raised to at least 36 so OpenVMS does not have to allocate more LRPs.
Raise the LRPCOUNT parameter to a minimum of 50. Because LRPCOUNT is set to only 25, it should be raised on this system even if the current size were still 25.
The 20 free LRPs are below the recommended minimum of 30, which also indicates that LRPCOUNT should be raised. Raising LRPCOUNT to 50 (when there are currently 36 LRPs) adds 14 LRPs; 14 plus the 20 currently free yields 34, so the recommended minimum of 30 free LRPs is met after LRPCOUNT is set to 50.
The LRPCOUNTV parameter should be at least four times LRPCOUNT. Raising LRPCOUNT may mean that LRPCOUNTV has to be raised. In this case, LRPCOUNTV does not have to be raised because 200 is exactly four times 50 (the new LRPCOUNT value).
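The sizing rules in this example can be summarized in a short sketch. The minimum of 30 free LRPs and the four-times rule come from the text; the helper names are my own, and the text rounds the computed minimum up to 50.

```python
# LRPCOUNT must cover the LRPs in use plus the recommended minimum of
# 30 free LRPs at peak; LRPCOUNTV should be at least 4 times LRPCOUNT.
def min_lrpcount(current_total: int, free_lrps: int, min_free: int = 30) -> int:
    in_use = current_total - free_lrps
    return in_use + min_free

def min_lrpcountv(lrpcount: int) -> int:
    return 4 * lrpcount

# The example system: 36 total LRPs, 20 free -> at least 46 (the text
# rounds up to 50); for LRPCOUNT = 50, LRPCOUNTV must be at least 200.
print(min_lrpcount(36, 20))   # 46
print(min_lrpcountv(50))      # 200
```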
Make changes to LRPCOUNT or LRPCOUNTV in both SYSGEN and AUTOGEN's MODPARAMS.DAT:
Example: Changing LRPCOUNT to 50 in SYSGEN
Username: SYSTEM
Password: (the system password)
$ SET DEFAULT SYS$SYSTEM
$ RUN SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SH LRPCOUNT
Parameter Name Current Default Minimum Maximum
LRPCOUNT 25 4 0 4096
SYSGEN> SET LRPCOUNT 50
SYSGEN> WRITE CURRENT
SYSGEN> SH LRPCOUNT
Parameter Name Current Default Minimum Maximum
LRPCOUNT 50 4 0 4096
After making changes to SYSGEN, reboot your system so the changes take effect.
Example: Changing the LRPCOUNT for AUTOGEN
Add the following line to MODPARAMS.DAT:
MIN_LRPCOUNT = 50 ! ADDED {the date} {your initials}
This ensures that when AUTOGEN runs, LRPCOUNT is not set below 50.
The default NETACP BYTLM setting is 65,535. Including overhead, this is enough for only 25 to 30 line receive buffers, which may not be enough.
Increase the value of NETACP BYTLM to 110,000.
Before starting DECnet, define the logical name NETACP$BUFFER_LIMIT by entering:
$ DEFINE/SYSTEM/NOLOG NETACP$BUFFER_LIMIT 110000
$ @SYS$MANAGER:STARTNET.COM
By default, RDF tries to perform I/O requests as fast as possible. In some cases, this can cause the network to slow down. Reducing the network bandwidth used by RDF allows more of the network to become available to other processes.
The RDF logical names that control this are:
RDEV_WRITE_GROUP_SIZE
RDEV_WRITE_GROUP_DELAY
The default value for each of these logical names is zero. The following example shows how to define these logical names on the RDF client node:
$ DEFINE/SYSTEM RDEV_WRITE_GROUP_SIZE 30
$ DEFINE/SYSTEM RDEV_WRITE_GROUP_DELAY 1
To further reduce bandwidth, the RDEV_WRITE_GROUP_DELAY logical can be increased to two (2) or three (3).
Remote Device Facility (RDF) can survive network failures lasting up to 15 minutes. If the network comes back within this allotted time, the RDCLIENT continues processing without any interruption or data loss. When a network link drops while RDF is active, RDF waits 10 seconds, creates a new network link, synchronizes I/Os between the RDCLIENT and RDSERVER, and continues processing.
The following example shows how you can test the RDF's ability to survive a network failure. (This example assumes that you have both the RDSERVER and RDCLIENT processes running.)
$ @tti_rdev:rdallocate tti::mua0:
RDF - Remote Device Facility (Version 4.1) - RDALLOCATE Procedure
Copyright (c) 1990, 1996 Touch Technologies, Inc.
Device TTI::TTI$MUA0 ALLOCATED, use TAPE01 to reference it
$ backup/rewind/log/ignore=label sys$library:*.* tape01:test
$ run sys$system:NCP
NCP> show known links
Known Link Volatile Summary as of 13-MAR-1996 14:07:38
Link Node PID Process Remote link Remote user
24593 20.4 (JR) 2040111C MARI_11C_5 8244 CTERM
16790 20.3 (FAST) 20400C3A -rdclient- 16791 tti_rdevSRV
24579 20.6 (CHEERS) 20400113 REMACP 8223 SAMMY
24585 20.6 (CHEERS) 20400113 REMACP 8224 ANDERSON
NCP> disconnect link 16790
.
.
.
Backup pauses momentarily before resuming. Sensing the network disconnect, RDF creates a new -rdclient- link. Verify this by entering the following command:
NCP> show known links
Known Link Volatile Summary as of 13-MAR-1996 16:07:00
Link Node PID Process Remote link Remote user
24593 20.4 (JR) 2040111C MARI_11C_5 8244 CTERM
24579 20.6 (CHEERS) 20400113 REMACP 8223 SAMMY
24585 20.6 (CHEERS) 20400113 REMACP 8224 ANDERSON
24600 20.3 (FAST) 20400C3A -rdclient- 24601 tti_rdevSRV
The RDF Security Access feature allows storage administrators to control which remote devices are allowed to be accessed by RDF client nodes.
You can allow specific RDF client nodes access to all remote devices.
For example, if the server node is MIAMI and access to all remote devices is granted only to RDF client nodes OMAHA and DENVER, then do the following:
Edit TTI_RDEV:CONFIG_MIAMI.DAT
CLIENT/ALLOW=(OMAHA,DENVER)
DEVICE $1$MUA0: MUA0, TK50
DEVICE MSA0: TU80, 1600bpi
OMAHA and DENVER (the specific RDF CLIENT nodes) are allowed access to all remote devices (MUA0, TU80) on the server node MIAMI.
If there is more than one RDF client node being allowed access, separate the node names by commas.
You can allow specific RDF client nodes access to a specific remote device.
If the server node is MIAMI and access to MUA0 is allowed by RDF client nodes OMAHA and DENVER, then do the following:
$ Edit TTI_RDEV:CONFIG_MIAMI.DAT
DEVICE $1$MUA0: MUA0, TK50/ALLOW=(OMAHA,DENVER)
DEVICE MSA0: TU80, 1600bpi
OMAHA and DENVER (the specific RDF client nodes) are allowed access only to device MUA0. In this situation, OMAHA is not allowed to access device TU80.
You can deny access from specific RDF client nodes to all remote devices. For example, if the server node is MIAMI and you want to deny access to all remote devices from RDF client nodes OMAHA and DENVER, do the following:
Edit TTI_RDEV:CONFIG_MIAMI.DAT
CLIENT/DENY=(OMAHA,DENVER)
You can deny specific client nodes access to a specific remote device.
If the server node is MIAMI and you want to deny access to MUA0 from RDF client nodes OMAHA and DENVER, do the following:
$ Edit TTI_RDEV:CONFIG_MIAMI.DAT
DEVICE $1$MUA0: MUA0, TK50/DENY=(OMAHA,DENVER)
DEVICE MSA0: TU80, 1600bpi
OMAHA and DENVER RDF client nodes are denied access to device MUA0 on the server node MIAMI.
One of the features of RDF is the RDserver Inactivity Timer. This feature gives system managers more control over rdallocated devices.
The purpose of the RDserver Inactivity Timer is to rddeallocate any rdallocated device if NO I/O activity to the rdallocated device has occurred within a predetermined length of time. When the RDserver Inactivity Timer expires, the server process drops the link to the client node and deallocates the physical device on the server node. On the client side, the client process deallocates the RDEVn0 device.
The default value for the RDserver Inactivity Timer is 3 hours.
The RDserver Inactivity Timer default value can be overridden by defining a system-wide logical name on the RDserver node prior to rdallocating on the rdclient node. The logical name is RDEV_SERVER_INACTIVITY_TIMEOUT.
To manually set the timeout value:
$ DEFINE/SYSTEM RDEV_SERVER_INACTIVITY_TIMEOUT seconds
For example, to set the RDserver Inactivity Timer to 10 hours (36,000 seconds), you would execute the following command on the RDserver node:
$ DEFINE/SYSTEM RDEV_SERVER_INACTIVITY_TIMEOUT 36000
The following messages are generated by OpenVMS and returned to the user who is initiating a function.
%SYSTEM-E-DEVICEFULL, device full - allocation failure
Explanation: An attempt to create or extend a file failed because it would exceed the device capacity, and any attempts to free disk space failed or did not free up the required space. Files should be deleted from the disk to free up space. This is an existing OpenVMS message.
%SYSTEM-E-EXDISKQUOTA, exceeded disk quota
Explanation: An attempt to create or extend a file failed because it would exceed the user disk quota (plus overdraft), and any attempts to free disk space failed or did not free up the required space. The user should either reduce the number of online files, or request additional disk quota. This is an existing OpenVMS message.
%SYSTEM-E-SHELVED, file is shelved
Explanation: An attempt to access a currently shelved file has failed because unshelving of the file is disallowed. This is a new OpenVMS message for HSM.
%SYSTEM-E-SHELFERROR, access to shelved file failed
Explanation: An attempt to access (read/write/extend/truncate) a file failed because the file was shelved and HSM could not unshelve it for some reason. HSM adds further information as to the root cause of the error. This is a new OpenVMS message for HSM.
This section defines all status and error messages that are produced by or on behalf of HSM, together with the cause and suggested user actions where appropriate.
The HSM Shelf Handler Process (SHP) performs all preshelving, shelving, unshelving, and unpreshelving operations for HSM. The following status and error messages are generated by the shelf handler process and are either returned to the end-user or to the shelf handler audit and error logs. All shelf handler messages use the message prefix of "HSM".
%HSM-W-ALLOCFAILED, failed to load/allocate/mount drive drivename
Explanation: An error occurred trying to ready the specified drive for operations. The causes could be that the drive is not configured in SMU, or MDMS, or that the drive has another volume mounted, or is otherwise unavailable. Please check the SHP error log and the status of the drive.
%HSM-I-ALRPRESHELVED, file filename was already preshelved
Explanation: A preshelve request was issued for a file that was already preshelved or shelved. No action is required.
%HSM-I-ALRSHELVED, file filename was already shelved
Explanation: A SHELVE/NOONLINE request was issued for a file that was already shelved, and no reshelving is required. No action is required.
%HSM-F-BUGCHECK, internal consistency failure
Explanation: An internal error occurred and the shelf handler process terminated and is automatically restarted. This error is nonrecoverable, and is written to the error log. Please report this problem to hp and include relevant entries in the error and audit logs.
%HSM-W-CACHEERROR, shelf caching error
Explanation: An error occurred trying to access a cache disk or a cache file on a preshelve, shelve, or unshelve request, or during a cache flush to tape. Consult the SHP error log for more information.
%HSM-I-CACHEFULL, shelf cache full
Explanation: All disk and MO devices specified as caches have exhausted their capacity as defined by the block size or the physical size of the device. Either define additional cache devices, or initiate cache flushing using SMU commands. Any preshelve or shelve operations are directed to tape, if defined.
%HSM-W-CANCELED, shelving operation canceled, on file filename
Explanation: The specified request has been canceled due to a specific cancel request, a request that conflicts with another user, or a failure of a multi-operation request. In the last case, please check the SHP error log for more information.
%HSM-E-CATOPENERROR, error opening shelf catalog file
Explanation: An unexpected error occurred trying to open the shelf catalog file(s). Consult the SHP error log for further information. Please check the equivalence name of HSM$CATALOG and redefine as needed. Also verify that any catalog files are accessible.
%HSM-E-CATSTATS_ERROR, error manipulating catalog statistics record
Explanation: An error occurred reading or writing the shelf catalog during a license capacity scan or SMU facility definition. Please check the equivalence name of HSM$CATALOG and redefine as needed. If the catalog exists, you may need to recover the catalog from a BACKUP copy.
%HSM-E-CLASS_DISABLED, command class disabled; re-enable with SMU SET FACILITY/REENABLE
Explanation: A repeated fatal error in the shelf handler has been detected on a certain class of operations. Please refer to the SHP error log for detailed information, and report the problem to hp. Since the fatal error continually repeats, HSM disabled the class of operation causing the problem, so that other operations might proceed. After fixing the problem, you can re-enable all operations using SMU SET FACILITY/REENABLE.
%HSM-E-CLASSDIS, commandclass command class disabled
Explanation: A repeated fatal error in the shelf handler has been detected on the specified class of operations. Please refer to the error log for detailed information, and report the problem to hp. Since the fatal error continually repeats, HSM disabled this class of operation, so that other operations might proceed. After fixing the problem, you can re-enable all operations using SMU SET FACILITY/REENABLE.
%HSM-E-DBACCESS_ERROR, unable to access SMU database
Explanation: The shelf handler process could not access one or more of the SMU databases. Please check the equivalence name of HSM$MANAGER and redefine as needed. If the database does not exist, you can create a new version by simply running SMU and answering "Yes" to the create questions - then use SMU SET commands to configure HSM.
%HSM-E-DBDATA_ERROR, consistency error in SMU database
Explanation: A consistency error was detected in the SMU database. This could be from the number of archive classes exceeding the maximum allowed for a shelf, an invalid shelf definition, inconsistent definitions, etc. Please examine the error log, then enter SMU SET commands to correct the discrepancy.
%HSM-E-DBNOTIFY_ERROR, propagation error for SMU update to all shelf handlers
Explanation: There was a problem notifying all shelf handlers in the VMScluster™ about a change to an SMU database. Please retry the SMU command, and report the problem to hp if the problem persists.
%HSM-E-DEVICEIDERR, error accessing volume identifier
Explanation: An error occurred trying to access or create the file [000000]HSM$UID.SYS on a disk volume or cache device. Please check the volume for read/write accessibility, and ensure there is sufficient space to create this file (only one cluster factor is usually required). This file is required on all disk volumes for which HSM operations are enabled.
%HSM-S-DMPACTREQS, shelving facility active with n requests
Explanation: Normal response to an SMU SHOW REQUESTS command with "n" active requests. The message indicates the number of requests active on the shelf handler on the node from which the command was entered, not cluster-wide.
%HSM-I-DMPFILE, active requests dumped to file HSM$LOG:HSM$SHP_ACTIVITY.LOG
Explanation: Normal response to an SMU SHOW REQUESTS/FULL command, indicating that the activity log was dumped to the fixed-named file. This message (and the activity log) are only produced if there is at least one active request.
%HSM-W-DMPNOMUTEX, unable to lock shelf handler database
Explanation: An SMU SHOW REQUESTS operation proceeds even if it cannot lock the appropriate mutexes after 5 seconds. This might occasionally be seen under heavy load and is not a concern. However, if repeated requests display this message, the shelf handler might be hung and a shutdown /restart may be necessary. When this message occurs, any resulting activity log may contain entries with incomplete data.
%HSM-S-DMPNOREQS, shelving facility idle with no requests
Explanation: Normal response to an SMU SHOW REQUESTS when HSM has no outstanding requests. No activity log is generated on /FULL. Note that there may be outstanding requests on other shelf handlers in the VMScluster™ environment.
%HSM-F-DUPPROCESS, shelf handler already active
Explanation: An SMU START command was issued while a shelf handler was already active on the node. Either no action is required, or SHUTDOWN the current shelf handler and retry the START.
%HSM-E-EXCEEDED, The licensed product has exceeded current license limits
Explanation: On an attempt to shelve a file, you have exceeded the capacity defined in your HSM license. You can either purchase a license upgrade, delete some shelved files, or do no more shelving. However, all other operations are unaffected and will succeed.
%HSM-E-EXDISKQUOTA, unshelve operation exceeds disk quota
Explanation: An attempt to unshelve (or access a shelved file) fails because the unshelve would exceed the file owner's disk quota. You can define a policy to shelve other files to be initiated on this condition. Otherwise, you should shelve/delete other files to free sufficient capacity to allow this unshelve to proceed.
%HSM-I-EXIT, HSM shelving facility terminated on node nodename
Explanation: This audit log message indicates that the HSM shelf handler terminated on the named node. In the case of a fatal error, the shelf handler is normally restarted. In the case of an SMU SHUTDOWN, it must be manually restarted.
%HSM-E-FILERROR, file filename access error
Explanation: HSM was unable to access or read the specified file from the online system. This is written to the error log. This usually means that the file is opened by another user (including HSM on another node), but could also mean the file has been deleted or is otherwise unavailable. Retry the operation later.
%HSM-E-HWPOLDIS, high-water mark policy execution disabled on volume volumename
Explanation: This message indicates that a high-water mark condition was detected but the policy execution for this condition is disabled, and no policy was run on the volume. No action is required if this is desired, but it is recommended that the policy is enabled.
%HSM-E-INCOMEDIA, Volume volumename media type mediatype inconsistent with drive drivename media type mediatype
Explanation: This message appears in Basic Mode only, and indicates that the shelf handler has detected a discrepancy between the media type used for shelving a file and the media type requested for unshelving it. Re-check the media type with SMU LOCATE/FULL and reset the SMU databases as needed. This should not normally occur.
%HSM-E-INCOMEDIATYPE, volume media type inconsistent with drive
Explanation: This message appears in Basic Mode only, and means that the drive(s) specified for an archive class cannot physically handle the media type of a tape volume containing a file requested to be unshelved. Please re-check the SMU DEVICE and ARCHIVE definitions.
%HSM-E-INCONSTATE, file filename has inconsistent state for unshelving
Explanation: The state of the file is inconsistent for unshelving, and allowing an unshelve may cause loss or overwriting of valid data. The file may be unshelved using the UNSHELVE/OVERRIDE qualifier, which requires BYPASS privilege. After unshelving the file, it should be checked for data integrity, especially with regards to being the right version of the data.
%HSM-E-INELIGPRESHLV, file filename is ineligible for preshelving
Explanation: The file is ineligible for preshelving. Reasons might include a SET FILE/NOSHELVABLE operation on the file, the file resides on an ineligible disk, the filename begins with HSM$ or the file is too large.
%HSM-E-INELIGSHLV, file filename is ineligible for shelving
Explanation: The file is ineligible for shelving. Reasons might include a SET FILE/NOSHELVABLE operation on the file, the file resides on an ineligible disk, the filename begins with HSM$ or the file is too large.
%HSM-E-INELIGUNPRESHLV, file filename is ineligible for unpreshelving
Explanation: The file is ineligible for unpreshelving because it is currently shelved. The file must be unshelved first.
%HSM-E-INELIGUNSHLV, file filename is ineligible for unshelving
Explanation: The file is ineligible for unshelving, because of its type (directory file, file marked for delete or locked, etc.). These should not normally be shelved in the first place.
%HSM-E-INELIGVOL, volume is ineligible for HSM operations
Explanation: The volume is ineligible for HSM operations because of an SMU SET VOLUME/DISABLE=operation, or is a remote volume of some type (including DFS-mounted and NFS- mounted volumes).
%HSM-F-INITFAILED, shelf initialization failed
Explanation: There was a problem starting the shelf handler process. Please refer to the error log for more details, correct problem, and retry.
%HSM-F-INSPRIV, insufficient privilege for HSM operation
Explanation: The HSM$SERVER account does not have sufficient privileges to run HSM. Although this is configured properly during installation, it could have been changed later. Please refer to the SMU STARTUP command in the Guide to Operations to set the appropriate privileges for this account.
%HSM-E-MAILSND, error sending to distribution maillist
Explanation: The policy execution process encountered an error sending mail to this distribution list or user. If a distribution list was specified for the policy, verify that the distribution file exists and is accessible.
%HSM-E-MANRECOVER, unable to access filename in shelf, manual recovery required
Explanation: A problem was encountered trying to unshelve a file. Please refer to the error log for more details. If the problem cannot be recovered (for example, a deleted online file), use SMU LOCATE/FULL and OpenVMS BACKUP to restore the file from the shelf.
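For example, the shelved copy could be located as follows, and then restored with OpenVMS BACKUP using the displayed archive information (the file specification is illustrative):

    $ SMU LOCATE/FULL DISK$USER1:[SMITH]REPORT.DAT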
%HSM-E-NOARCHIVE, no archive classes defined for shelf
Explanation: An attempt to preshelve or shelve a file failed because no archive classes were defined for the appropriate shelf. Use SMU SET SHELF/ARCHIVE to define archive classes to shelve files.
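For example, assuming the default shelf name HSM$DEFAULT_SHELF and archive classes 1 and 2 (both illustrative), the archive list could be defined as:

    $ SMU SET SHELF HSM$DEFAULT_SHELF /ARCHIVE=(1,2)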
%HSM-E-NODRIVEAVAIL, no drive available to perform operation
Explanation: A shelve/unshelve operation failed because no devices were available to perform the operation. Ensure that an SMU device was defined for the appropriate archive classes. In Plus Mode, ensure that the SMU device and archive configurations are compatible with the definitions in TAPESTART.COM, and that SMU SHOW DEVICE shows the device as "Configured". If it shows as "Not Configured", re-verify that the definitions of archive media type/density and device name are identical in the SMU and MDMS configurations. This message does not appear if the device is simply busy with other applications.
%HSM-F-NOLICENSE, license for HSM is not installed
Explanation: You must install an HSM license in order to use this software.
%HSM-E-NONEXPR, nonexistent process
Explanation: An SMU or policy execution request failed because HSM was not running. Use SMU START to start HSM and retry the operation.
%HSM-E-NOSUCHDEV, volumename - no such volume available
Explanation: The policy execution process was unable to assign a channel to the device or get information about the device. Please check that the device is known and available to the system. If the device is no longer in service, it should be removed from the HSM configuration.
%HSM-E-NOSUCH_FILE, - no such file filename found
Explanation: The policy execution process was unable to locate the distribution list to be used for mail notification or requested a file to be shelved that no longer exists.
%HSM-E-NOSUCH_REQUEST, - no such request found
Explanation: The /CANCEL qualifier was used to cancel a request that has already been completed by the shelf handler.
%HSM-E-NORESTARC, no restore archive classes defined for shelf
Explanation: This is a common error meaning that no restore archive classes are defined for the shelf. Use SMU SHOW SHELF to make sure that the archive list and restore archive lists are compatible, and add the restore archive list as needed, using SMU SET SHELF/RESTORE=(list). In most cases, the archive and restore lists should be the same.
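For example, to make the restore list match an archive list of classes 1 and 2 (the class numbers and shelf name are illustrative):

    $ SMU SHOW SHELF HSM$DEFAULT_SHELF
    $ SMU SET SHELF HSM$DEFAULT_SHELF /RESTORE=(1,2)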
%HSM-I-NOTSHELVED, file filename was not shelved
Explanation: An UNSHELVE/ONLINE request was issued for a file that was not shelved. No action is required.
%HSM-E-NOUIC_QUOTA, - no quota for user username found
Explanation: The policy execution process found no disk quota defined for this user, or quotas are not enabled for the disk. The policy execution process will assume by default that the lowwater mark has been reached.
%HSM-E-NOVOLAVAIL, new volume could not be allocated
Explanation: In Basic Mode, this means you have exhausted the number of volumes allowed for the archive class; define a new archive class. In Plus Mode, this means that the volume pool(s) specified do not contain enough volumes to allocate a new volume. Either add new volumes to the pool, or define additional pools for the archive class.
%HSM-E-OCCPOLDIS, - occupancy full policy execution disabled on volume volumename
Explanation: The occupancy full policy has been disabled on this volume. Use the SMU SET VOLUME command to enable occupancy full condition handling.
%HSM-E-OFFLINERROR, off-line system error, function not performed
Explanation: An error occurred trying to read or write to the near-line/off-line system. Refer to the error log for more details, fix the problem, and retry. There are usually additional messages to explain the problem in the error log.
%HSM-E-OFFREADERR, off-line read error on drive drivename
Explanation: An error occurred trying to read a file on the specified near-line/off-line drive. Refer to the error log for more details, fix the problem, and retry. There are usually additional messages to explain the problem in the error log.
%HSM-E-OFFWRITERR, off-line write error on drive drivename
Explanation: An error occurred trying to write a file on the specified near-line/off-line drive. Refer to the error log for more details, fix the problem, and retry. There are usually additional messages to explain the problem in the error log.
%HSM-E-ONLINERROR, unrecoverable online access error
Explanation: HSM was unable to access or read a file, or the disk itself, from the online system. Refer to the error log for more details, fix the problem, and retry. There are usually additional messages to explain the problem in the error log.
%HSM-E-OPCANCELED, operation canceled
Explanation: On a recovery of the shelf handler process, the operation was canceled because it should not be retried.
%HSM-E-OPDISABLED, shelving operation disabled
Explanation: The requested operation has been disabled by the storage administrator. Operations can be disabled at the facility, shelf, disk volume and off-line device levels. To re-enable, enter the appropriate SMU SET/ENABLED command. This message also appears after an SMU SHUTDOWN, but before the facility has actually shut down.
%HSM-E-PEPCOMMERROR, unable to send to policy execution process
Explanation: The shelf handler process could not send a request to the policy execution process. This usually means that the policy execution process has not been started. Issue an SMU STARTUP command to recover.
%HSM-E-PEPMBX, - communication mailbox mailboxname not enabled
Explanation: The policy execution process was unable to establish communications with the shelf handler process (which usually means that the shelf handler process is not running), or was unable to create a mailbox for its own use. Issue an SMU STARTUP command to recover.
%HSM-F-PEP_ALREADY_STARTED, - policy execution process already started
Explanation: The HSM policy execution process has already been started.
%HSM-E-PEP_INCOMPLETE, - policy execution unable to satisfy request
Explanation: The policy execution was unable to reach the specified lowwater mark. Verify that the file selection criteria are suitable for the selected lowwater mark.
%HSM-F-POLACCESSFAIL, unable to access policy database
Explanation: The policy execution process was unable to access the policy database. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that any policy files are accessible.
%HSM-E-POLDISABLED, policy policyname is disabled
Explanation: On a scheduled policy run, the requested policy is disabled. Either enable it, or cancel the scheduled policy run.
%HSM-E-POLDEF_NF, - policy definition policyname was not found
Explanation: The policy execution process was unable to locate this policy definition in the policy database. Verify that any policies specified for volumes or scheduled have been defined with SMU SET POLICY.
%HSM-E-POLEXEFAIL, unable to initiate policy execution
Explanation: The shelf handler process could not send a request to the policy execution process. This usually means that the policy execution process has not been started. Issue an SMU STARTUP command to recover.
%HSM-E-POLVOLDIS, - policy execution disabled on volume volumename
Explanation: The policy execution process has detected that shelving is currently disabled on this volume. For policy execution to take place on the volume, shelving must be enabled. Use the SMU SET VOLUME command to enable shelving for the volume.
%HSM-S-PRESHELVED, file filename preshelved
Explanation: When the /NOTIFY qualifier is specified, this message is displayed on a successful completion of a preshelve operation. The file data has been copied to the cache or the shelf, but the file is still accessible online.
%HSM-E-PSHLVERROR, - error preshelving file filename
Explanation: HSM encountered an error preshelving this file during policy execution. This could be caused by such things as the file not being found, possibly deleted prior to the shelving action, or the device containing the file being unavailable. Please check the SHP error log for more information on the failure.
%HSM-W-PSHLVOPINCOM, preshelving operation incomplete for file filename
Explanation: HSM could not complete the preshelving operation for this file during policy execution. Please check the SHP error log for more information on the failure.
%HSM-E-QUOPOLDIS, - quota exceeded policy execution disabled on volume volumename
Explanation: The policy execution process detected that quota exceeded policy events are currently disabled on this volume. Use SMU SET VOLUME to enable.
%HSM-I-RECOVERSHLV, inconsistent state found, file shelved
Explanation: This message may be issued on recovery of a shelf handler process after finding a file in an inconsistent state. The file has been made into a consistent state by shelving it (it was really already shelved). No action is required.
%HSM-I-RECOVERUNSHLV, inconsistent state found, file unshelved
Explanation: This message may be issued on recovery of a shelf handler process after finding a file in an inconsistent state. The file has been made into a consistent state by unshelving it (it was really already unshelved). No action is required.
%HSM-E-REPACKINPRG, cannot checkpoint during repack, please try later
Explanation: An attempt was made to checkpoint an archive class while that archive class was being repacked. Checkpoint and repack are incompatible operations on an archive class. Please re-enter the checkpoint command after the repack has completed.
%HSM-E-RESHELVERR, unable to re-shelve file filename, manual recovery required
Explanation: An attempt to re-shelve a file to additional archive classes failed for some reason. Please examine the error log. As a result, the specified file may remain shelved or be unshelved. Existing shelf copies remain available.
%HSM-W-SELECTFAILED, MDMS/SLS error selecting a drive for volume volumename, retrying
Explanation: In Plus Mode, an error occurred trying to select a drive for an HSM operation. Please read the error log for more details.
%HSM-I-SERVER, HSM shelf server enabled on node nodename
Explanation: This is an informational message indicating that a shelf handler on the specified node is now the shelf server. This message is printed in the audit log and to the OPCOM terminal. If at any time you wish to determine which node is the shelf server, examine the tail of the audit log for the last such message.
%HSM-E-SHELFERROR, unrecoverable shelf error, data for filename lost
Explanation: The file could not be found or accessed in the cache or shelf archive classes. This failure results in the loss of the file data. This is written to the error log.
%HSM-E-SHELFINFOLOST, shelf access information unavailable for file filename
Explanation: There was a problem accessing the ACE and/or catalog information trying to unshelve a file. Please use SMU LOCATE to retrieve the file information, then use BACKUP to retrieve the file.
%HSM-S-SHELVED, file filename shelved
Explanation: With /NOTIFY specified, this message is displayed to the user upon successful completion of an explicit shelve operation. The operation is complete when the file is shelved to the initial shelving location, which can be the cache or directly to the shelf.
%HSM-E-SHLVERROR, - error shelving file filename
Explanation: HSM encountered an error shelving this file during policy execution. This could be caused by such things as the file not being found, possibly deleted prior to the shelving action, or the device containing the file being unavailable. Please check the SHP error log for more information on the failure.
%HSM-W-SHLVOPINCOM, shelving operation incomplete for file filename
Explanation: HSM could not complete the shelving operation for this file during policy execution. Please check the SHP error log for more information on the failure.
%HSM-I-SHLVPRG, shelving files to free disk space
Explanation: This message occurs if a user request results in a DEVICEFULL or EXDISKQUOTA error, and the file system is requesting HSM to free space for the request. This message is printed to indicate a possible delay in processing the user request.
%HSM-S-SHUTDOWN, HSM shelving facility shutdown on node nodename
Explanation: In the audit log, this message shows that HSM was shut down with an SMU SHUTDOWN command. It is not automatically restarted.
%HSM-E-SPLITMERGSERR, - error during shelf split/merge, catalog not changed
Explanation: HSM encountered an error during shelf split/merge. The catalog was not changed. Please check the SHP error log for more information on the failure.
%HSM-S-STARTED, shelving facility started on node nodename
Explanation: In the audit log and startup log, this message indicates that the shelf handler process was successfully started. No action is required.
%HSM-F-STSACCESSFAIL, error accessing status log files
Explanation: HSM encountered an error while accessing the log files. This could be caused by a device full condition. Please check the state of the HSM$LOG device.
%HSM-E-UNEXPERR, unexpected error on operation
Explanation: This message indicates that the shelf handler experienced an unexpected error condition. Please check the SHP error log for more information about the failure and report this to hp. This is not a fatal error condition.
%HSM-E-UNKNOWN_RESP, response unknown, unable to locate corresponding request
Explanation: The policy execution process has received a response from the shelf handler for a shelve/preshelve request that has already been completed. No action is required.
%HSM-S-UNPRESHELVED, file filename unpreshelved
Explanation: With /NOTIFY specified, this message is displayed to the user upon successful completion of an unpreshelve operation.
%HSM-S-UNSHELVED, file filename unshelved
Explanation: With /NOTIFY specified, this message is displayed to the user upon successful completion of an unshelve operation. The file is now online and available for user access.
%HSM-I-UNSHLVPRG, unshelving file filename
Explanation: A file fault is initiated as a result of attempting to read/write/extend/truncate/execute a file that is shelved. This message is printed to indicate a possible delay in processing the user request.
%HSM-F-VOLACCESSFAIL, unable to access volume database
Explanation: The policy execution process was unable to access a volume's policy information from the volume database. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the volume file is accessible and that all needed volumes have been defined with SMU SET VOLUME.
%HSM-E-VOLDEF_NF, volume definition volumedef was not found
Explanation: The policy execution process was unable to locate this volume or the default volume definition in the volume database. Please verify that needed volumes have been defined with SMU SET VOLUME. Also, the HSM$DEFAULT_VOLUME entry should never be deleted.
%HSM-E-VOLNOTLOADED, off-line volume(s) could not be loaded
Explanation: An error occurred trying to load or mount a specific volume for a shelving operation. Please refer to the error log for more information, fix, and retry.
%HSM-E-VOLUME_NF, volume volumename was not found
Explanation: For a REPACK operation, this tape volume, or a member of the volume set containing this volume, was not found in the MDMS volume database. In Plus Mode, all source tape volumes for REPACK must exist in the MDMS volume database.
The following messages are displayed by the utilities that support explicit SHELVE, PRESHELVE and UNSHELVE commands. Although only the SHELVE command messages are listed here, there are similar messages for the PRESHELVE and UNSHELVE commands.
%SHELVE-F-BADSEARCH, shelve search confused
Explanation: This failure message alerts you that the shelving operation got confused while searching for the files specified on the command line. No HSM action took place.
%SHELVE-I-ALRSHELVED, file filename was previously shelved
Explanation: A shelve request was issued for a file that is already shelved. No action is required.
%SHELVE-W-CANCELLED, shelving operation on file filename canceled
Explanation: The shelving request has been canceled due to a specific cancel request, a request that conflicts with another user, or a failure of a multi-operation request. In the last case, please check the SHP error log for more information.
%SHELVE-F-CLI, fatal error detected parsing command line
Explanation: This failure message alerts you that a fatal error was encountered while parsing the command line. Verify the command syntax, fix it, and retry.
%SHELVE-F-CLI_BY_OWNER, value shelf-value invalid for /BY_OWNER qualifier
Explanation: This failure message alerts you that you entered an invalid value for the /BY_OWNER qualifier on the command line. Verify the UIC syntax and that the UIC exists.
%SHELVE-F-CLI_INVTIM, invalid absolute time - use DD-MMM-YYYY:HH:MM:SS.CC format
Explanation: This failure message alerts you that you entered an invalid time value on the command line. Verify the time value and make sure it conforms to the DD-MMM-YYYY:HH:MM:SS.CC format (the keywords TODAY, TOMORROW, and YESTERDAY are also valid).
%SHELVE-E-DISCLASS, command class has been automatically disabled
Explanation: A repeated fatal error in the shelf handler has been detected on a certain class of operations. Please refer to the SHP error log for detailed information, and report the problem to hp. Since the fatal error continually repeats, HSM disabled the class of operation causing the problem, so that other operations might proceed. After fixing the problem, you can re-enable all operations using SMU SET FACILITY/REENABLE.
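For example, after correcting the underlying problem, all operation classes can be re-enabled with:

    $ SMU SET FACILITY/REENABLE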
%SHELVE-W-ERROR, error shelving file filename
Explanation: This warning message alerts you that an error was encountered while trying to shelve the file. There may be an accompanying error message that gives more information about the failure (privileges, communications failure, etc.). Also check the SHP error log for more information about the failure.
%SHELVE-F-FATAL, fatal error condition detected
Explanation: This failure message alerts you that a fatal error condition was encountered while shelving a file. Please check the SHP error log for more information.
%SHELVE-F-FATAL_P, fatal error condition detected
Explanation: An unexpected error was encountered while parsing/processing a confirmation action. Please see HELP or the reference documentation for valid responses.
%SHELVE-F-INCONSIST, internal inconsistency detected
Explanation: SMU was unable to generate a request for the shelf handler. This could be caused by an insufficient memory condition.
%SHELVE-F-INTERNAL, internal error detected, code = value
Explanation: This failure message alerts you that an internal error condition was detected with a code of value. This could have come from the policy execution process if memory couldn't be allocated, there was a problem queuing a job or getting job information, there was an unexpected error getting system information, etc. There may be more information about the failure in the PEP error log. From SMU, this could mean that an unexpected error was encountered while parsing/processing a confirmation action, getting job or system information, etc.
%SHELVE-W-INVALANS, text is an invalid answer
Explanation: The response given to a confirmation action is incorrect. Please see HELP or the reference documentation for valid responses.
%SHELVE-W-INVFILESPEC, invalid file specification format
Explanation: This warning message alerts you that your file specification format is invalid. Please re-enter the command with a valid file specification.
%SHELVE-W-INVFORMAT, invalid internal format
Explanation: A request generated by SMU and sent to the shelf handler has an invalid internal format. The request cannot be processed by the shelf handler. There may be more information about the failure in the SHP error log.
%SHELVE-W-INVREQUEST, invalid shelving request
Explanation: For policy execution, the policy execution process received an unexpected error from the shelf handler for the shelve request. This could include missing archive or shelf definitions or an incorrectly formatted request. SMU may have also encountered these problems or there was a problem communicating with the shelf handler. There may be more information about the failure in the PEP or SHP error logs.
%SHELVE-S-MARKEDCANCEL, file filename was marked for cancel
Explanation: This status message informs you that your file has been marked for cancellation and won't be shelved.
%SHELVE-W-NOFILES, no files found
Explanation: SMU was unable to locate the specified files. Reasons include insufficient memory, invalid file specification, file(s) already in requested state, etc. There may be an accompanying message that gives more information about any failure.
%SHELVE-W-NOMODDATE, modification date not enabled for file
Explanation: Expiration dates are not currently enabled for this file/volume. Expiration dates are needed for the /SINCE and /BEFORE qualifiers.
%SHELVE-W-NOSUCHDEVICE, no such device found
Explanation: For REPACK, an unload request was sent to the shelf handler for a tape device that is not known. The shelf handler may have encountered an unexpected error trying to read a volume's UID file. The policy execution process may be trying to access a disk volume that is no longer defined. Please check the PEP or SHP error logs for more information.
%SHELVE-W-NOSUCHFILE, no such file filename found
Explanation: A cache flush shelve request was made for a file that no longer exists. Please see the SHP error log for more information.
%SHELVE-W-NOSUCHPOLICY, no such policy found
Explanation: This warning message alerts you that the policy you are specifying cannot be found. There may be an accompanying message that gives more information about the failure. Please check the PEP and SHP error logs for more information.
%SHELVE-W-NOSUCHREQ, no such request found
Explanation: The /CANCEL qualifier was used to cancel a request that has already been completed by the shelf handler.
%SHELVE-E-NOTSHELVED, file filename was not shelved
Explanation: This error message informs you that the file was not shelved. This could be due to an error during the shelving process, or, for a restore request, the file wasn't shelved. Please see the SHP error log for more information.
%SHELVE-W-OPINCOM, shelving operation incomplete for file filename
Explanation: The shelving operation was unable to complete due to an error. Please see the SHP error log for more information.
%SHELVE-S-QUEUED, file filename queued for shelving
Explanation: When the /NOWAIT/LOG qualifiers are used, this message indicates that your request has been queued for processing.
%SHELVE-E-RSPCOMM, response communications error
Explanation: SMU encountered an unexpected error while trying to read a response from the shelf handler. There may be an accompanying message that gives more information about any failure. Please verify that the shelf handler is running and restart as needed with SMU START.
%SHELVE-F-SEARCHFAIL, error searching for file filename
Explanation: The specified file does not exist. Verify that the filename is correct and that the file exists, then retry the command.
%SHELVE-S-SHELVED, file filename shelved
Explanation: This status message informs you that your file has been shelved successfully.
%SHELVE-F-SLFCOMM, shelf handler communications failure
Explanation: This message indicates that the shelf handler is not running. Use SMU START to start the shelf handler and retry.
%SHELVE-F-SLFMESSAGE, corrupt response message detected
Explanation: This failure message alerts you that a bad response message was received from the shelf handler, or that an error was encountered while trying to format and display an error message.
%SHELVE-E-UNKSTATUS, unknown status returned from the shelf handler
Explanation: This error message informs you that the shelf handler process returned an unknown status message. Please report this problem to hp and include relevant entries in the error and audit logs.
%SHELVE-E-UNSUPP, operation unsupported
Explanation: This error message informs you that the operation you are attempting is unsupported by this software. This is usually caused by a node name being included in a file specification.
%SHELVE-F-USLFCOMM, user communications failure
Explanation: This failure message alerts you that the shelf handler detected a failure in user communications. SMU was either unable to create a mailbox to receive responses from the shelf handler on the user's behalf or get the name of the mailbox. There may be an accompanying message that gives more information about any failure.
The following messages are printed out by the shelf management utility.
%SMU-F-ABORTANA, user aborted ANALYZE
Explanation: SMU ANALYZE was aborted when a ^Z was entered in response to a repair confirmation.
%SMU-F-ABORTSCAN, aborted scan for shelved files on disk volume device-name
Explanation: SMU ANALYZE aborted processing of the device due to an error or ^Z was entered in response to a repair confirmation.
%SMU-E-ARCHID_ADDERR, qualifier required on first SET ARCHIVE, archive-id not created
Explanation: In Plus Mode, the /MEDIA_TYPE qualifier is required for the initial creation of the archive class with the SMU SET ARCHIVE command. Subsequent use of the SMU SET ARCHIVE command to modify the archive class does not require the /MEDIA_TYPE qualifier. Re-enter the command using the qualifier.
%SMU-E-ARCHID_DELERR, error deleting archive-id
Explanation: For SMU SET ARCHIVE/DELETE, an error was encountered while trying to delete the archive class. There may be an accompanying message that gives more information about any failure.
%SMU-E-ARCHID_DISPERR, error displaying archive-id
Explanation: For SMU SHOW ARCHIVE, an error was encountered while trying to read the archive information. There may be an accompanying message that gives more information about any failure.
%SMU-E-ARCHID_INCOMPAT, device is an incompatible media type for this archive class
Explanation: For SMU SET DEVICE, the media type of the archive class entered is not compatible with the media type of the device. Verify your configuration and re-enter the command with corrections.
%SMU-E-ARCHID_MANYPOOL, archive id archive-id has too many pools added, limit is pool-limit
Explanation: This error message alerts you that you have exceeded the pool limit for the archive. Verify your configuration and possibly remove pools that are no longer needed, then retry the command.
%SMU-W-ARCHID_NF, archive class id class-id not found
Explanation: The archive class id was not found in the archive database or an unexpected error was encountered while trying to read the volume database. There may be an accompanying message that gives more information about the failure. Verify your configuration then retry the command.
%SMU-W-ARCHID_POOLNF, archive class id class-id pool pool-id not found, not removed
Explanation: For SMU SET ARCHIVE/REMOVE_POOL, a pool was specified which is not in the pool list for the archive class. Verify your configuration then retry the command.
%SMU-I-ARCHIVE_DELETED, archive id archive-id deleted
Explanation: The archive class was successfully deleted.
%SMU-W-ARCHIVE_NF, archive class archive-class not found
Explanation: For SMU SET ARCHIVE/DELETE, the archive class was not found in the archive database. Verify your configuration then retry the command.
%SMU-E-ARCHIVE_READERR, error reading archive definition, archive-id
Explanation: For SMU SET ARCHIVE/DELETE, an unexpected error was encountered while trying to delete the archive class. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the archive file is accessible.
%SMU-I-ARCHIVE_UPDATED, archive id archive-id updated
Explanation: The archive class was successfully updated.
%SMU-W-ARCHUPDERR, unable to update archive information, archive-information
Explanation: An error was encountered while trying to modify the archive class information. This could have been directly from an SMU SET ARCHIVE command, or indirectly from an SMU SET DEVICE/ARCHIVE command, which may attempt to update the media type for the archive class. There may be an accompanying message that gives more information about any failure. Please check your configuration and the equivalence name of HSM$MANAGER, and redefine as needed. Also verify that the archive file is accessible.
%SMU-E-BASIC_MODE_ONLY, basic-mode-feature, is a basic mode feature, see SET FACILITY/MODE
Explanation: The use of this qualifier is for Basic mode only.
%SMU-I-CACHE_CREATED, cache device device-name created
Explanation: The cache device was successfully added.
%SMU-E-CACHE_DELERR, error deleting cache definition, cache-name
Explanation: A request was made to delete a cache device that does not exist in the database. Verify your configuration and re-enter the command.
%SMU-I-CACHE_DELETED, cache device device-name deleted
Explanation: The cache device was successfully deleted.
%SMU-E-CACHE_DISPERR, error displaying cache device, device-name
Explanation: For SMU SHOW CACHE, an error was encountered while trying to read the cache information. There may be an accompanying message that gives more information about any failure.
%SMU-W-CACHE_NF, cache device device-name was not found
Explanation: For SMU SET CACHE or SMU SHOW CACHE, the specified cache device was not found in the cache database. Verify your configuration and re-enter the command.
%SMU-E-CACHE_READERR, error reading cache device definition, device-name
Explanation: An unexpected error was encountered while trying to read the cache data for a delete or display operation. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the cache file is accessible.
%SMU-I-CACHE_UPDATED, cache device device-name updated
Explanation: The cache device was successfully updated.
%SMU-E-CACHE_WRITERR, error writing cache device definition, device-name
Explanation: An unexpected error was encountered while adding or modifying a cache device record. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the cache file is accessible.
%SMU-E-CANT_CHANGE_MODE, cannot set basic mode after shelving in plus mode
Explanation: For SMU SET FACILITY, you cannot set to Basic mode after files have been shelved in Plus mode.
%SMU-E-CANT_DEDICATE, remote device can't be dedicated
Explanation: For SMU SET DEVICE, the /DEDICATE qualifier is not valid for use with remote devices.
%SMU-E-CANT_DO_ARCASSOC, cannot action archive class archive-class, due to nonzero reference
Explanation: For SMU SET ARCHIVE, archive classes with shelf and/or device associations cannot be deleted. The archive class must be removed from the shelf and all devices prior to deletion.
%SMU-E-CANT_DO_ARCUSED, cannot action archive class archive-class, it has been used
Explanation: For SMU SET ARCHIVE, a request was made to either delete an archive class that has been used for shelving or modify certain attributes of an archive class (such as density or media type) that has been used for shelving.
%SMU-E-CANT_SET_REMOTE, local device cannot be set to remote
Explanation: For SMU SET DEVICE, the /REMOTE qualifier is not valid for use with an existing local device.
%SMU-E-CAT_CREATERR, error creating catalog catalog-name
Explanation: An error was encountered while trying to create the catalog. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$CATALOG and redefine as needed. Also verify that the device and directory are accessible.
%SMU-E-CAT_SYNTAXERR, catalog file syntax error catalog-name
Explanation: For SMU SET SHELF/CATALOG, a catalog file syntax error was encountered. Verify the format of the catalog filename and retry the command.
%SMU-F-CATOPENERR, error opening catalog catalog-name
Explanation: For SMU ANALYZE, an unexpected error was encountered opening the associated catalog for the device. There may be an accompanying message that gives more information about any failure. SMU ANALYZE will stop processing the current device.
%SMU-F-CATREADER, error reading catalog catalog-name
Explanation: For SMU ANALYZE, the catalog associated with this device was not found or there was an unexpected error reading from the catalog. There may be an accompanying message that gives more information about any failure. SMU ANALYZE will stop processing the current device.
%SMU-E-CATWRITERR, error encountered writing catalog - no repair
Explanation: For SMU ANALYZE, an unexpected error was encountered while writing the new catalog entry for a repair. There may be an accompanying message that gives more information about any failure. No repair will be made.
%SMU-E-CON_READERR, error reading configuration definition, configuration-definition
Explanation: An unexpected error was encountered while trying to read the facility information for SMU SET FACILITY, SMU SET SCHEDULE, SMU SHOW SHELF or SMU COPY. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the configuration file is accessible.
%SMU-W-CONFIG_NF, configuration configuration-name was not found
Explanation: The facility information was not found in the configuration database for SMU SET FACILITY, SMU SET SCHEDULE, SMU SHOW FACILITY or SMU COPY. This error could also mean that the shelf handler was unable to locate the facility information during a shelf update request. There may be an accompanying message that gives more information about any failure. The SMU SET FACILITY command should be used to create the facility data if none exists.
%SMU-E-COPYCHKERR, error(s) verifying shelf ACE
Explanation: For SMU COPY, an error was encountered during the initial phase that verifies that the shelving ACE on the files to be copied is correct. There may be an accompanying message that gives more information about any failure.
%SMU-I-COPYCHK, verifying shelving ACE on files to be copied
Explanation: SMU COPY is verifying that the shelving ACE on the files to be copied is correct.
%SMU-E-COPYDEV, cannot copy to source device, use DCL RENAME instead
Explanation: The SMU COPY command has detected that the source and destination devices are the same. If this is desired, then the DCL RENAME command should be used instead.
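For example, files can be moved between directories on the same device with the standard DCL RENAME command; the file specifications below are illustrative only:

    $ RENAME DISK$USER1:[ALPHA]*.DAT DISK$USER1:[BETA]*.DAT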
%SMU-E-COPYDST, specify device or device and directory location only
Explanation: The SMU COPY command has detected that the destination specified contains more than a device and/or directory location. Node names are not allowed, nor is any attempt to specify a file name or any portion of one.
%SMU-I-COPYSTART, starting file copy
Explanation: SMU COPY has completed all initial verifications and is starting the actual file copy.
%SMU-F-CREATERR, error creating database, database-name
Explanation: An error was encountered while trying to create a new database file. There may be an accompanying message that gives more information about any failure. Please check the equivalence name HSM$MANAGER and redefine as needed. Also verify that the device is accessible and has enough free space.
%SMU-E-DATABASERR, error detected on database, database
Explanation: An unexpected error was encountered while trying to delete a record from this database. There may be an accompanying message that gives more information about any failure.
%SMU-E-DELERR, error deleting database record, database-record
Explanation: An unexpected error was encountered while trying to delete a record from this database, or the record entry does not exist. Other causes include an attempt to delete: a default policy; a facility record; a default shelf record; a shelf that still has volume (disk) references; a shelf that contains a catalog reference other than the one assigned to the default shelf; a shelf where a split/merge is currently active; a default volume record; a volume that contains a shelf reference other than the one assigned to the default volume; or a volume where a split/merge is currently active. There may be an accompanying message that gives more information about any failure.
%SMU-E-DEV_DELERR, error deleting device definition, device-name
Explanation: An attempt was made to delete the default device record or a device that does not exist in the database. There may be an accompanying message that gives more information about any failure. Verify your configuration and retry the command.
%SMU-E-DEV_DISPERR, error displaying device, device-name
Explanation: For SMU SHOW DEVICE, an error was encountered while trying to read the device information. There may be an accompanying message that gives more information about any failure.
%SMU-W-DEV_INELIG, device device-name is ineligible
Explanation: An attempt was made to use a device which is not currently available on the system. This could come from SMU SET CACHE to add a new cache device, SMU SET SCHEDULE on one of the listed volumes or SMU SET VOLUME to add a new volume. There may be an accompanying message that gives more information about any failure.
%SMU-E-DEV_NOTREMOTE, device device is not a remote device specification
Explanation: For SMU SET DEVICE/REMOTE, the device name must contain a node name or the node name must be included in a logical name assignment for the device.
%SMU-E-DEV_READERR, error reading device definition, device-name
Explanation: For SMU SET DEVICE or SMU SHOW DEVICE, an unexpected error was encountered while trying to delete a device record or read a device record for display. There may be an accompanying message that gives more information about any failure.
%SMU-E-DEV_WRITERR, error writing device definition, device-name
Explanation: For SMU SET DEVICE, an attempt was made to add a device whose media type is not compatible with its associated archive class(es), the /DEDICATE qualifier was specified for a remote device, the /REMOTE qualifier was specified for an existing local device, or an unexpected error was encountered while writing a new or modified device record. There may be an accompanying message that gives more information about any failure.
%SMU-I-DEVICE_CREATED, device device-name created
Explanation: The device was successfully created.
%SMU-I-DEVICE_DELETED, device device-name deleted
Explanation: The device was successfully deleted.
%SMU-W-DEVICE_NF, device device-name was not found
Explanation: For SMU SET DEVICE or SMU SHOW DEVICE, the device was not found in the device database. For SMU SET SCHEDULE or SMU SHOW SCHEDULE, there was no scheduled event for the volume.
%SMU-I-DEVICE_UPDATED, device device-name updated
Explanation: The device was successfully updated.
%SMU-E-DEVINFOERR, error getting device information for device-name
Explanation: For SMU ANALYZE, an unexpected error was encountered getting information about the device. SMU ANALYZE will stop processing this device/set.
%SMU-E-DISCLASS, command class has been automatically disabled
Explanation: A repeated fatal error in the shelf handler has been detected on a certain class of operations. Please refer to the SHP error log for detailed information, and report the problem to hp. Since the fatal error continually repeats, HSM disabled the class of operation causing the problem, so that other operations might proceed. After fixing the problem, you can re-enable all operations using SMU SET FACILITY/REENABLE.
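After the underlying problem has been corrected, the disabled operation class can be re-enabled with the command named above, for example:

    $ SMU SET FACILITY/REENABLE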
%SMU-E-DISPLAYERR, display error encountered
Explanation: An error was encountered while trying to display the requested information. There may be an accompanying message that gives more information about any failure.
%SMU-I-ENDSCAN, completed scan for shelved files on disk volume device-name
Explanation: SMU ANALYZE has completed processing of this device.
%SMU-E-ENF, job entry not found
Explanation: For SMU SET SCHEDULE or SMU SHOW SCHEDULE, no job entry was found for the listed volume(s) or specific entry number if /ENTRY was used. There may be an accompanying message that gives more information about any failure.
%SMU-I-ERRORS, number error(s) detected, number error(s) repaired
Explanation: For SMU ANALYZE, this message is for the device indicating the number of errors detected and repaired.
%SMU-I-FAC_UPDATED, HSM facility modified
Explanation: The facility was successfully modified.
%SMU-W-FACUPDERR, unable to update facility information
Explanation: For SMU SET FACILITY, an error was encountered while trying to modify the facility information. There may be an accompanying message that gives more information about the failure. Please check your configuration and the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the configuration file is accessible.
Explanation: For SMU SET SCHEDULE, the supplied command procedure to initiate policy execution was not found. There will be an accompanying message that gives more information about the failure. The file may have to be restored from a previous backup or from the HSM distribution.
%SMU-W-HSMCOMM, shelf handler communications failure
Explanation: An error was encountered while trying to establish communications with the shelf handler. There may be an accompanying message that gives more information about any failure. Verify that the shelf handler is running, and start it with SMU START if needed.
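A typical recovery sequence, assuming the shelf handler process is simply not running, is to check the running processes and then restart it:

    $ SHOW SYSTEM
    $ SMU START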
%SMU-W-HSMMESSAGE, corrupt response message detected
Explanation: A message returned from the shelf handler contained too many FAO parameters or an error was encountered formatting the message for display. Please report this problem to hp.
%SMU-F-INDOPENERR, error opening INDEXF.SYS on device-name
Explanation: For SMU ANALYZE, an unexpected error was encountered opening INDEXF.SYS for the device. There may be an accompanying message that gives more information about any failure. SMU ANALYZE will stop processing this device.
%SMU-F-INITFAILED, fatal error encountered during initialization
Explanation: The shelf management utility failed to initialize.
%SMU-F-INREADERR, error reading INDEXF.SYS on device-name
Explanation: For SMU ANALYZE, an unexpected error was encountered while reading INDEXF.SYS for the device. There may be an accompanying message that gives more information about any failure. SMU ANALYZE will stop processing this device.
%SMU-F-INTERNAL, fatal internal error detected, error-string
Explanation: Internal inconsistency detected. There may be an accompanying message that gives more information about any failure. If the problem can't be corrected locally, please report this problem to hp.
%SMU-W-INVALANS, string - is an invalid answer
Explanation: The response given to a confirmation action is incorrect. Please see HELP or the reference documentation for valid responses.
%SMU-E-INVALARCHIVE, invalid archive archive-id
Explanation: For SMU SET ARCHIVE, the archive id is outside the range of valid values. Currently, this range is 1 through 36 for Basic mode and 1 through 9999 for Plus mode.
%SMU-W-INVALDIR, invalid directory specification, directory-spec
Explanation: An invalid file specification was given for the /OUTPUT qualifier. Re-enter the command with a valid output location.
%SMU-E-INVALIST, exceeded maximum list count of count
Explanation: Maximum number of parameter list elements were found. There will be an accompanying message indicating which parameter or qualifier is in violation. Please see HELP or the reference documentation for more information about the command.
%SMU-E-INVALPSIZE, exceeded maximum parameter size value
Explanation: A parameter value entered in the command exceeds its valid range or size. The maximum value will be displayed for reference. The accompanying message will indicate what value is in error. Re-enter the command with a corrected value.
%SMU-E-INVALQSIZE, invalid qualifier size qualifier-size
Explanation: A qualifier value entered in the command exceeds its valid range or size. The maximum value will be displayed for reference. The accompanying message will indicate which qualifier is in error, either by displaying the qualifier name or the value itself. Re-enter the command with a corrected qualifier value.
%SMU-E-INVCONFIG, invalid tape drive configuration for repack request volume-name
Explanation: For SMU REPACK, there is an invalid tape drive configuration. One possible cause is that there are not enough tape drives; REPACK must use two. A second possibility is that there are no devices associated with the archive classes specified in the command.
%SMU-W-INVNAME, invalid volume name volume-name
Explanation: For SMU RANK, a wildcard character was detected in the volume name parameter. Wildcards are not allowed.
%SMU-E-INVPARAM, parameter or value for parameter parameter or parameter-value is invalid
Explanation: An invalid parameter or parameter value was detected in the command. There will be an accompanying message to indicate which parameter is in violation. Re-enter the command with corrected syntax. Please see HELP or the reference documentation for more information about the command.
%SMU-E-INVPOLNAME, invalid policy name policy-name
Explanation: For SMU RANK or SMU SET SCHEDULE, a wildcard character was detected in the policy name parameter. Wildcards are not allowed. Re-enter the command with the correct syntax. Please see HELP or the reference documentation for more information about the command.
%SMU-E-INVQUAL, invalid qualifier or qualifier value qualifier
Explanation: An invalid qualifier or associated value was detected in the command. There will be an accompanying message to indicate which qualifier is in violation. Re-enter the command with corrected syntax. Please see HELP or the reference documentation for more information about the command.
%SMU-W-INVREQUEST, invalid shelf handler request
Explanation: The shelf handler has received an invalid request from SMU. There may be more information about the failure in the SHP error log. If this problem cannot be corrected, please report it to hp.
%SMU-E-INVVOLNAME, invalid volume name volume-name
Explanation: For SMU SET ARCHIVE/LABEL in Basic mode, the volume name entered does not conform to the Basic mode volume label convention. Please see the documentation for a description of the correct format and re-try the command.
%SMU-E-JOBEXECUTING, job job executing on server prevents requested operation
Explanation: For SMU SET SCHEDULE, an update request was made for a job that is currently executing. No changes were made. Re-enter the command once the job has completed.
%SMU-W-LOCATE, error(s) occurred during locate processing
Explanation: For SMU LOCATE, one or more errors occurred during locate processing.
%SMU-E-LOCKERR, error locking database database-name
Explanation: An unexpected error was encountered while trying to unlock a record in the database. There may be an accompanying message that gives more information about any failure.
%SMU-E-LOCKTIMEOUT, timed out waiting for SPLIT/MERGE lock
Explanation: A SMU SET VOLUME or SMU SET SHELF command timed out waiting for split/merge lock to become available. Re-try the command later.
%SMU-E-MEMALLOC, error allocating memory in routine routine
Explanation: An error was encountered while trying to allocate memory. There may be an accompanying message that gives more information about any failure.
%SMU-E-MUSTUSEREMOTE, device device-name must be created using the /REMOTE qualifier
Explanation: For SMU SET DEVICE, a remote device name (one containing a node name) was entered without the /REMOTE qualifier. Re-enter the command with the /REMOTE qualifier, or remove the node name from the device specification.
%SMU-W-NOARCHIVE, archive class(es) not found
Explanation: A database read request sent to the shelf handler on an update failed because the archive class was not found or was outside its valid range.
%SMU-E-NOCACHELIST, no cache device name or list of device names
Explanation: For SMU SET CACHE, no cache name or list of names was present in the command. Re-enter the command and specify a cache device or list of devices.
%SMU-E-NODEFINLIST, the default device may not be in a device list
Explanation: For SMU SET DEVICE, the default device may not be specified in the command. Re-enter the command without using the default device.
%SMU-E-NODEVICELIST, no device name or list of devices found
Explanation: For SMU SET DEVICE, no device name or list of names was present in the command. Re-enter the command and specify a device or list of devices.
%SMU-W-NOENTFND, no database entries found for string
Explanation: An unexpected error was encountered while trying to read from a SMU database. The message will contain the database involved. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the database files are accessible.
%SMU-E-NOFILEATTR, error reading file attributes for file ID file-id
Explanation: For SMU ANALYZE, an unexpected error was encountered while reading the file attributes. There may be an accompanying message that gives more information about any failure. SMU ANALYZE will stop processing this file.
%SMU-W-NOFILES, no files found
Explanation: For SMU LOCATE, no files were found that matched the search criteria or the catalog is empty.
%SMU-E-NONEXIST_SHELF, nonexistent shelf, shelf-name
Explanation: For SMU SET VOLUME/SHELF, a shelf name was given that doesn't exist in the database. Re-enter the command and specify a defined shelf, or define the new shelf and then re-enter the command.
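For example, a new shelf can be defined and a volume then assigned to it with commands of the following form; the shelf and volume names here are illustrative only:

    $ SMU SET SHELF PROJECT_SHELF
    $ SMU SET VOLUME DISK$USER1 /SHELF=PROJECT_SHELF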
%SMU-E-NONEXT, no next device found in set after device-name
Explanation: For SMU ANALYZE, an unexpected error was encountered getting information about the next device in the volume set. There may be an accompanying message that gives more information about any failure. SMU ANALYZE will stop processing this device/set.
%SMU-E-NOPOLSERV, no policy execution servers found
Explanation: For SMU SET SCHEDULE, since the /SERVER qualifier was not used, an attempt was made to select a server from the facility definition. This attempt failed due to errors getting system or cluster information.
%SMU-E-NOPOLLIST, no policy name or list of policies found
Explanation: For SMU SET POLICY, no policy name or list of names was present in the command. Re-enter the command and specify a policy name or list of policies.
%SMU-E-NOSHELFLIST, no shelf name or list of shelves found
Explanation: For SMU SET SHELF, no shelf name or list of names was present in the command. Re-enter the command and specify a shelf name or list of shelves.
%SMU-E-NOSUCHENT, no such entry, entry-name
Explanation: For SMU SET SCHEDULE or SMU SHOW SCHEDULE, no job entry was found for the listed volume(s) or specific entry number if /ENTRY was used. There may be an accompanying message that gives more information about any failure.
%SMU-E-NOSUCHQUE, no such server queue, queue-name
Explanation: For SMU SET SCHEDULE, a request was made to modify or remove a policy job, but the queue was not found on the policy server.
%SMU-W-NOTSTARTED, process-name process was not started
Explanation: A startup or shutdown attempt was made from an account with insufficient privileges, or an unexpected error was encountered while starting up the shelf handler process or the policy execution process. There may be an accompanying message that gives more information about any failure.
%SMU-W-NOTUPDARCH, archive id archive-id-name was not updated, no new attributes
Explanation: For SMU SET ARCHIVE, a negative response was given to the update confirmation, a delete was requested for a non-existent archive class or there was no new data to change.
%SMU-W-NOTUPDCACHE, cache device device-name was not updated, no new attributes
Explanation: For SMU SET CACHE, no new attributes were defined for the cache. The update was not performed.
%SMU-W-NOTUPDDEVICE, device device-name was not updated, no new attributes
Explanation: For SMU SET DEVICE, no new attributes were defined for the device. The update was not performed.
%SMU-W-NOTUPDFAC, facility was not updated, no new attributes
Explanation: For SMU SET FACILITY, no new attributes were defined for the facility. The update was not performed.
%SMU-W-NOTUPDPOLICY, policy policy-name was not updated, no new attributes
Explanation: For SMU SET POLICY, no new attributes were defined for the policy. The update was not performed.
%SMU-W-NOTUPDSCHED, scheduled entry entry-name was not updated, no new attributes
Explanation: For SMU SET SCHEDULE, no new attributes were defined for the entry. The update was not performed.
%SMU-W-NOTUPDSHELF, shelf shelf-name was not updated, no new attributes
Explanation: For SMU SET SHELF, no new attributes were defined for the shelf. The update was not performed.
%SMU-W-NOTUPDVOLUME, volume volume-name was not updated, no new attributes
Explanation: For SMU SET VOLUME, no new attributes were defined for the volume. The update was not performed.
%SMU-F-NOUID, no device UIDs found for device device-name
%SMU-F-NOUID, no device UIDs found for set device-name
Explanation: For SMU ANALYZE, no valid UIDs were found in the HSM$UID.SYS file. SMU ANALYZE will stop processing this device/set.
%SMU-F-NOUIDFILE, HSM$UID.SYS not available for device device-name
%SMU-F-NOUIDFILE, HSM$UID.SYS not available for set device-name
Explanation: For SMU ANALYZE, no HSM$UID.SYS file was found on the device/set or the file could not be opened. The missing file indicates that shelving has not taken place on the disk. SMU ANALYZE will stop processing this device/set. Or, during a repair, no HSM$UID.SYS file could be found and the repair is incomplete.
%SMU-E-NOVOLLIST, no volume name or list of volumes found
Explanation: For SMU SET VOLUME, no volume name or list of names was present in the command. Re-enter the command and specify a volume name or list of volumes.
%SMU-E-OFLUPDERR, error updating offline information - no repair
%SMU-E-OFLUPDERR, error updating offline information - repair incomplete
Explanation: For SMU ANALYZE, an unexpected error was encountered while writing the HSM metadata to the file and either no repair will be made, or a partial repair has been made and a new catalog entry exists. There may be an accompanying message that gives more information about any failure.
%SMU-F-OPENERR, error opening, storage-entity
Explanation: For any SMU command that uses the /OUTPUT qualifier, there was an error opening the specified output file. For SMU SET SCHEDULE, there was an error opening the policy execution command file. Or, there was an unexpected error opening one of the SMU database files. There may be an accompanying message that gives more information about any failure.
%SMU-E-OPERCONF, requested operation conflicts with current activity
Explanation: The requested SMU ANALYZE operation is in conflict with an active Split/Merge operation on the device. SMU ANALYZE will stop processing this device or stop the analysis completely depending on when the conflict was detected. Retry the command later.
%SMU-W-PEP_ALREADYSTARTED, policy execution process already started
Explanation: An SMU START command was issued when a policy execution process was already running. No action is required.
%SMU-S-PEP_STARTED, policy execution process started process-id
Explanation: The policy execution process has been successfully started.
%SMU-E-POL_DELERR, error deleting policy definition, policy-name
Explanation: For SMU SET POLICY, a request was made to delete a policy that does not exist in the database. Verify your configuration and re-enter the command.
%SMU-E-POL_DISPERR, error displaying policy, policy-name
Explanation: For SMU SHOW POLICY, an error was encountered while trying to read the policy information. There may be an accompanying message that gives more information about any failure.
%SMU-E-POL_READERR, error reading policy definition, policy-name
Explanation: For SMU SET POLICY/DELETE, SMU SET SHELF or SMU SHOW POLICY, an unexpected error was encountered while trying to read the policy data for a delete or display operation. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the policy file is accessible.
%SMU-E-POL_WRITERR, error writing policy definition, policy-name
Explanation: For SMU SET POLICY, an unexpected error was encountered while adding or modifying a policy. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the policy file is accessible.
%SMU-I-POLICY_CREATED, policy policy-name created
Explanation: The policy was successfully created.
%SMU-I-POLICY_DELETED, policy policy-name deleted
Explanation: The policy was successfully deleted.
%SMU-W-POLICY_NF, policy policy-name was not found
Explanation: For SMU SET POLICY, SMU SET SCHEDULE, SMU SHOW POLICY or SMU RANK, the policy was not found in the policy database. Verify your configuration then retry the command.
%SMU-I-POLICY_UPDATED, policy policy-name updated
Explanation: The policy was successfully updated.
%SMU-E-PLUS_MODE_ONLY, feature is a plus mode feature, see SET FACILITY/MODE
Explanation: For SMU SET ARCHIVE or SMU SET DEVICE, the use of this qualifier is for Plus mode only.
%SMU-W-PREREQSW, required prerequisite software, Save Set Manager, not found
Explanation: For SMU REPACK, the Save Set Manager software was not found on the system or exists at a version below the minimum that is required. Please check the documentation for this version of HSM and install the appropriate version of Save Set Manager.
%SMU-I-PROCESSING, processing input device device-name
Explanation: The input device is currently being processed by SMU ANALYZE.
%SMU-F-READERR, fatal error encountered reading database, database-name
Explanation: An unexpected error was encountered while reading the catalog. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$CATALOG and redefine as needed. Also verify that the catalog file is accessible.
%SMU-E-RDVOLSHLF, error reading volume or shelf data for device-name
Explanation: For SMU ANALYZE, an unexpected error was encountered getting volume or shelf data for the device. There may be an accompanying message that gives more information about any failure. SMU ANALYZE will stop processing this device.
%SMU-W-RSPCOMM, shelf handler response communications error
Explanation: When SMU started processing a response from the shelf handler, it discovered that the shelf handler process no longer existed or there was an error reading the response. There may be an accompanying message that gives more information about any failure. Start the shelf handler with SMU START if needed.
%SMU-I-SCHED_CREATED, scheduled policy policy-name for volume volume-name was created on server server-name
Explanation: The scheduled policy was successfully created.
%SMU-I-SCHED_DELETED, scheduled policy policy-name for volume volume-name was deleted on server server-name
Explanation: The scheduled policy was successfully deleted.
%SMU-E-SCHED_DELERR, error deleting policy definition policy-name for volume volume-name
Explanation: For SMU SET SCHEDULE/DELETE, an error was encountered while trying to delete the scheduled event. There may be an accompanying message that gives more information about any failure.
%SMU-W-SCHED_NF, schedule schedule-name for volume volume-name on server server-name was not found
Explanation: For SMU SET SCHEDULE, the scheduled event for the volume was not found in the database. There may be an accompanying message that gives more information about any failure. Verify your configuration then retry the command.
%SMU-E-SCHED_WRITERR, error writing scheduled definition for volume volume-name
Explanation: For SMU SET SCHEDULE/LOG, an unexpected error was encountered while adding a schedule definition for the volume. There may be an accompanying message that gives more information about any failure.
%SMU-I-SCHED_UPDATED, scheduled policy policy-name for volume volume-name was updated on server server-name
Explanation: The scheduled policy was successfully updated.
%SMU-W-SCHEDUPDERR, unable to update schedule information
Explanation: For SMU SET SCHEDULE, an error was encountered while trying to modify the scheduled policy attributes. There may be an accompanying message that gives more information about any failure.
%SMU-I-SHELF_CREATED, shelf shelf-name created
Explanation: The shelf was successfully created.
%SMU-E-SHELF_DELERR, error deleting shelf definition, shelf-name
Explanation: For SMU SET SHELF/DELETE, a request was made to delete a shelf that does not exist in the database. Verify your configuration and re-enter the command.
%SMU-I-SHELF_DELETED, shelf shelf-name deleted
Explanation: The shelf was successfully deleted.
%SMU-E-SHELF_DISPERR, error displaying shelf configuration, shelf-name
Explanation: For SMU SHOW SHELF, an error was encountered while trying to read the shelf information from the configuration database. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the configuration file is accessible.
%SMU-W-SHELF_NF, shelf shelf-name was not found
Explanation: For SMU SET SHELF or SMU SHOW SHELF, the shelf was not found in the configuration database. Verify your configuration then retry the command.
%SMU-E-SHELF_READERR, error reading shelf definition, shelf-name
Explanation: For SMU SET SHELF or SMU SET VOLUME, an error was detected while trying to read the shelf information from the configuration database. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the configuration file is accessible.
%SMU-E-SHELF_REFERR, shelf is referenced by one or more volumes
Explanation: For SMU SET SHELF, an attempt was made to delete a shelf that has volume references. Use SMU SET VOLUME to change the shelf assignment and retry the command.
%SMU-E-SHELF_SMIP, shelf split/merge is in process on shelf shelf-name
Explanation: For SMU SET SHELF, a delete was requested while a split/merge is in progress on either the current shelf or the default shelf. For SMU SET VOLUME/SHELF, an update request was made to use a shelf where a split/merge is in progress or the split/merge is in progress on the shelf assigned to the default volume. Retry the command later.
%SMU-I-SHELF_UPDATED, shelf shelf-name updated
Explanation: The shelf was successfully updated.
%SMU-E-SHELF_WRITERR, error writing shelf definition, shelf-definition-name
Explanation: For SMU SET SHELF, an error was encountered while trying to access the split/merge lock or an unexpected error was encountered while trying to add or update a shelf definition. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the configuration file is accessible.
%SMU-W-SHELFUPDERR, shelf handler process was unable to update information
Explanation: This is a generic companion message that is displayed when an error is returned from the shelf handler. The accompanying message will give more information about the failure.
%SMU-W-SHP_ALREADYSTARTED, shelf handler already started
Explanation: A SMU START was issued when there was already a shelf handler process started. No action is required.
%SMU-S-SHP_STARTED, shelf handler process started process-id
Explanation: The shelf handler process has been successfully started.
%SMU-E-SHUTERR, error shutting down database database-name
Explanation: For SMU EXIT, an error was encountered while trying to close the database. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the database file is accessible.
%SMU-F-SMLOCKERR, error locking SPLIT/MERGE lock
Explanation: For SMU SET SHELF or SMU SET VOLUME, an unexpected error was encountered while trying to acquire the split/merge lock.
%SMU-F-SNF, policy execution server not found
Explanation: For SMU SET SCHEDULE, the queue was not found on the policy server. There will be accompanying messages that give more information about the queue involved and the failure. Verify that the queue exists.
%SMU-I-STARTSCAN, scanning for shelved files on disk volume device-name
Explanation: SMU ANALYZE is currently processing the device.
%SMU-W-STARTQ, error encountered attempting to start HSM batch queue
Explanation: During startup, an error was encountered while trying to start the policy execution queue on this node. There may be an accompanying message that gives more information about any failure.
%SMU-W-UHSMCOMM, user communications failure
Explanation: An error was encountered while trying to establish a response mailbox for the request. There may be accompanying messages that give more information about any failure. It is possible that the request was successfully sent to the shelf handler and will execute.
%SMU-E-UNDEL_CATREF, catalog referenced by shelf must match HSM$DEFAULT_SHELF
Explanation: For SMU SET SHELF/DELETE, the delete cannot take place until the catalog for the shelf is changed to be the same as the one assigned to HSM$DEFAULT_SHELF. Use SMU SET SHELF to change the catalog and retry the command.
%SMU-E-UNDEL_DEFPOL, default policy definition cannot be deleted
Explanation: For SMU SET POLICY/DELETE, an attempt was made to delete one of the default policies. Retry the command without specifying the default policy.
%SMU-E-UNDEL_DEFSHELF, default shelf definition cannot be deleted
Explanation: For SMU SET SHELF/DELETE, an attempt was made to delete the default shelf. Retry the command without specifying the default shelf.
%SMU-E-UNDEL_DEFVOL, default volume definition cannot be deleted
Explanation: For SMU SET VOLUME/DELETE, an attempt was made to delete the default volume. Retry the command without specifying the default volume.
%SMU-E-UNDEL_SHELFREF, shelf referenced by volume must match HSM$DEFAULT_VOLUME
Explanation: For SMU SET VOLUME/DELETE, the delete cannot take place until the shelf for the volume is changed to be the same as the one assigned to HSM$DEFAULT_VOLUME. Use SMU SET VOLUME to change the shelf and retry the command.
%SMU-F-UPDATERR, fatal error encountered updating database, database-name
Explanation: An unexpected error was encountered while updating one of the SMU database files or the catalog. There may be an accompanying message that gives more information about any failure. Please check the equivalence names of HSM$MANAGER and HSM$CATALOG and redefine as needed. Also verify that the catalog and database files are accessible.
%SMU-W-UNKSTATUS, shelf handler returned unknown status
Explanation: The shelf handler process returned an unknown status for the request. There may be more information in the SHP error log.
%SMU-E-VOL_DELERR, error deleting volume definition, volume-name
Explanation: For SMU SET VOLUME/DELETE, a request was made to delete a volume that does not exist in the database. Verify your configuration and re-enter the command.
%SMU-E-VOL_DISPERR, error displaying volume, volume-name
Explanation: For SMU SHOW VOLUME, an error was encountered while trying to read the volume information from the database. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the volume file is accessible.
%SMU-E-VOL_NOTUPDATED, volume definition volume-name was not updated
Explanation: For SMU SET VOLUME, this is a general message indicating that the update was not performed. This is usually because the specified shelf does not exist, or a split/merge was in progress. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the volume file is accessible.
%SMU-E-VOL_READERR, error reading volume definition, volume-name
Explanation: An error was encountered while trying to read the volume information for SMU SET VOLUME, SMU SHOW VOLUME or SMU LOCATE. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the volume file is accessible.
%SMU-E-VOL_SMIP, volume split/merge in progress on volume volume-name
Explanation: For SMU SET VOLUME/DELETE, a delete was requested on a volume while a split/merge is in progress on this volume or the default volume. Retry the command later.
%SMU-E-VOL_WRITERR, error writing volume definition, volume-definition
Explanation: For SMU SET VOLUME, an error was encountered while trying to access the split/merge lock or an unexpected error was encountered while trying to add or update a volume definition. There may be an accompanying message that gives more information about any failure. Please check the equivalence name of HSM$MANAGER and redefine as needed. Also verify that the volume file is accessible.
%SMU-I-VOLUME_CREATED, volume volume-name created
Explanation: The volume was successfully created.
%SMU-I-VOLUME_DELETED, volume volume-name deleted
Explanation: The volume was successfully deleted.
%SMU-W-VOLUME_NF, volume volume-name was not found
Explanation: For SMU SET SCHEDULE or SMU RANK, there was an error getting information about the online volume. For SMU SET VOLUME/DELETE or SMU SHOW VOLUME, a request was made for a volume that was not found in the volume database. There may be an accompanying message that gives more information about any failure. Verify that the online volumes exist and are available. Check your configuration and retry the command.
%SMU-I-VOLUME_UPDATED, volume volume-name updated
Explanation: The volume was successfully updated.
%SMU-F-WRITERR, fatal error encountered writing database, database-name
Explanation: An unexpected error was encountered while adding an entry to one of the SMU database files or the catalog. There may be an accompanying message that gives more information about any failure. Please check the equivalence names of HSM$MANAGER and HSM$CATALOG and redefine as needed. Also verify that the catalog and database files are accessible.
Configuration, which involves the creation or definition of MDMS objects, should take place in the following order:

1. Locations
2. Media types
3. Domain
4. Nodes
5. Jukeboxes
6. Drives
7. Pools
8. Volumes

Creating these objects in the above order ensures that the following informational message does not appear:
%MDMS-I-UNDEFINEDREFS, object contains undefined referenced objects
This message appears if an attribute of the object is not defined in the database. The object is created even though the attribute is not defined. The sample configuration consists of the following:
SMITH1 - ACCOUN cluster node
SMITH2 - ACCOUN cluster node
SMITH3 - ACCOUN cluster node
JONES - a client node
$1$MUA560
$1$MUA561
$1$MUA562
$1$MUA563
$1$MUA564
$1$MUA565
The following examples illustrate each step in the order of configuration.
This example lists the MDMS commands to define an offsite and onsite location for this domain.
$ !
$ ! create onsite location
$ !
$ MDMS CREATE LOCATION BLD1_COMPUTER_ROOM -
/DESCRIPTION="Building 1 Computer Room"
$ MDMS SHOW LOCATION BLD1_COMPUTER_ROOM
Location: BLD1_COMPUTER_ROOM
Description: Building 1 Computer Room
Spaces:
In Location:
$ !
$ ! create offsite location
$ !
$ MDMS CREATE LOCATION ANDYS_STORAGE -
/DESCRIPTION="Andy's Offsite Storage, corner of 5th and Main"
$ MDMS SHOW LOCATION ANDYS_STORAGE
Location: ANDYS_STORAGE
Description: Andy's Offsite Storage, corner of 5th and Main
Spaces:
In Location:
This example shows the MDMS command to define the media type used in the TL826.
$ !
$ ! create the media type
$ !
$ MDMS CREATE MEDIA_TYPE TK88K -
/DESCRIPTION="Media type for volumes in TL826 with TK88 drives" -
/COMPACTION ! volumes are written in compaction mode
$ MDMS SHOW MEDIA_TYPE TK88K
Media type: TK88K
Description: Media type for volumes in TL826 with TK88 drives
Density:
Compaction: YES
Capacity: 0
Length: 0
This example shows the MDMS command to set the domain attributes. This command is not run until after the locations and media type have been defined, because they are used as default attributes of the domain object. Note that the deallocation state (TRANSITION) is taken as the default. All of the rights are also taken as defaults.
$ !
$ ! set up defaults in the domain record
$ !
$ MDMS SET DOMAIN -
/DESCRIPTION="Smiths Accounting Domain" - ! description
/MEDIA_TYPE=TK88K - ! default media type
/OFFSITE_LOCATION=ANDYS_STORAGE - ! default offsite location
/ONSITE_LOCATION=BLD1_COMPUTER_ROOM - ! default onsite location
/PROTECTION=(S:RW,O:RW,G:RW,W) ! default protection for volumes
$ MDMS SHOW DOMAIN/FULL
Description: Smiths Accounting Domain
Mail: SYSTEM
Offsite Location: ANDYS_STORAGE
Onsite Location: BLD1_COMPUTER_ROOM
Def. Media Type: TK88K
Deallocate State: TRANSITION
Opcom Class: TAPES
Priority: 1536
Request ID: 2576
Protection: S:RW,O:RW,G:RW,W
DB Server Node: SPIELN
DB Server Date: 08-Jan-2003 08:18:20
Max Scratch Time: NONE
Scratch Time: 365 00:00:00
Transition Time: 14 00:00:00
Network Timeout: 0 00:02:00
ABS Rights: NO
SYSPRIV Rights: YES
Application Rights: MDMS_ASSIST
MDMS_LOAD_SCRATCH
MDMS_ALLOCATE_OWN
MDMS_ALLOCATE_POOL
MDMS_BIND_OWN
MDMS_CANCEL_OWN
MDMS_CREATE_POOL
MDMS_DEALLOCATE_OWN
MDMS_DELETE_POOL
MDMS_LOAD_OWN
MDMS_MOVE_OWN
MDMS_SET_OWN
MDMS_SHOW_OWN
MDMS_SHOW_POOL
MDMS_UNBIND_OWN
MDMS_UNLOAD_OWN
Default Rights:
Operator Rights: MDMS_ALLOCATE_ALL
MDMS_ASSIST
MDMS_BIND_ALL
MDMS_CANCEL_ALL
MDMS_DEALLOCATE_ALL
MDMS_INITIALIZE_ALL
MDMS_INVENTORY_ALL
MDMS_LOAD_ALL
MDMS_MOVE_ALL
MDMS_SHOW_ALL
MDMS_SHOW_RIGHTS
MDMS_UNBIND_ALL
MDMS_UNLOAD_ALL
MDMS_CREATE_POOL
MDMS_DELETE_POOL
MDMS_SET_OWN
MDMS_SET_POOL
User Rights: MDMS_ASSIST
MDMS_ALLOCATE_OWN
MDMS_ALLOCATE_POOL
MDMS_BIND_OWN
MDMS_CANCEL_OWN
MDMS_DEALLOCATE_OWN
MDMS_LOAD_OWN
MDMS_SHOW_OWN
MDMS_SHOW_POOL
MDMS_UNBIND_OWN
MDMS_UNLOAD_OWN
This example shows the MDMS commands for defining the three MDMS database nodes of the cluster ACCOUN. This cluster is configured to use DECnet-PLUS.
Note that a node is defined using the DECnet node name as the name of the node.
$ !
$ ! create nodes
$ ! database node
$ MDMS CREATE NODE SMITH1 - ! DECnet node name
/DESCRIPTION="ALPHA node on cluster ACCOUN" -
/DATABASE_SERVER - ! this node is a database server
/DECNET_FULLNAME=SMI:.BLD.SMITH1 - ! DECnet-Plus name
/LOCATION=BLD1_COMPUTER_ROOM -
/TCPIP_FULLNAME=SMITH1.SMI.BLD.COM - ! TCP/IP name
/TRANSPORT=(DECNET,TCPIP) ! TCPIP used by JAVA GUI and JONES
$ MDMS SHOW NODE SMITH1
Node: SMITH1
Description: ALPHA node on cluster ACCOUN
DECnet Fullname: SMI:.BLD.SMITH1
TCP/IP Fullname: SMITH1.SMI.BLD.COM:2501-2510
Disabled: NO
Database Server: YES
Location: BLD1_COMPUTER_ROOM
Opcom Classes: TAPES
Transports: DECNET,TCPIP
$ MDMS CREATE NODE SMITH2 - ! DECnet node name
/DESCRIPTION="ALPHA node on cluster ACCOUN" -
/DATABASE_SERVER - ! this node is a database server
/DECNET_FULLNAME=SMI:.BLD.SMITH2 - ! DECnet-Plus name
/LOCATION=BLD1_COMPUTER_ROOM -
/TCPIP_FULLNAME=SMITH2.SMI.BLD.COM - ! TCP/IP name
/TRANSPORT=(DECNET,TCPIP) ! TCPIP used by JAVA GUI and JONES
$ MDMS SHOW NODE SMITH2
Node: SMITH2
Description: ALPHA node on cluster ACCOUN
DECnet Fullname: SMI:.BLD.SMITH2
TCP/IP Fullname: SMITH2.SMI.BLD.COM:2501-2510
Disabled: NO
Database Server: YES
Location: BLD1_COMPUTER_ROOM
Opcom Classes: TAPES
Transports: DECNET,TCPIP
$ MDMS CREATE NODE SMITH3 - ! DECnet node name
/DESCRIPTION="VAX node on cluster ACCOUN" -
/DATABASE_SERVER - ! this node is a database server
/DECNET_FULLNAME=SMI:.BLD.SMITH3 - ! DECnet-Plus name
/LOCATION=BLD1_COMPUTER_ROOM -
/TCPIP_FULLNAME=CROP.SMI.BLD.COM - ! TCP/IP name
/TRANSPORT=(DECNET,TCPIP) ! TCPIP used by JAVA GUI and JONES
$ MDMS SHOW NODE SMITH3
Node: SMITH3
Description: VAX node on cluster ACCOUN
DECnet Fullname: SMI:.BLD.SMITH3
TCP/IP Fullname: CROP.SMI.BLD.COM:2501-2510
Disabled: NO
Database Server: YES
Location: BLD1_COMPUTER_ROOM
Opcom Classes: TAPES
Transports: DECNET,TCPIP
This example shows the MDMS command for creating a client node. TCP/IP is the only transport on this node.
$ !
$ ! client node
$ ! only has TCP/IP
$ MDMS CREATE NODE JONES -
/DESCRIPTION="ALPHA client node, standalone" -
/NODATABASE_SERVER - ! not a database server
/LOCATION=BLD1_COMPUTER_ROOM -
/TCPIP_FULLNAME=JONES.SMI.BLD.COM - ! TCP/IP name
/TRANSPORT=(TCPIP) ! TCPIP is used by JAVA GUI
$ MDMS SHOW NODE JONES
Node: JONES
Description: ALPHA client node, standalone
DECnet Fullname:
TCP/IP Fullname: JONES.SMI.BLD.COM:2501-2510
Disabled: NO
Database Server: NO
Location: BLD1_COMPUTER_ROOM
Opcom Classes: TAPES
Transports: TCPIP
This example shows the MDMS command for creating a jukebox.
$ !
$ ! create jukebox
$ !
$ MDMS CREATE JUKEBOX TL826_JUKE -
/DESCRIPTION="TL826 Jukebox in Building 1" -
/ACCESS=ALL - ! local + remote for JONES
/AUTOMATIC_REPLY - ! MDMS automatically replies to OPCOM requests
/CONTROL=MRD - ! controlled by MRD robot control
/NODES=(SMITH1,SMITH2,SMITH3) - ! nodes that can control the robot
/ROBOT=$1$DUA560 - ! the robot device
/SLOT_COUNT=176 ! 176 slots in the library
$ MDMS SHOW JUKEBOX TL826_JUKE
Jukebox: TL826_JUKE
Description: TL826 Jukebox in Building 1
Nodes: SMITH1,SMITH2,SMITH3
Groups:
Location: BLD1_COMPUTER_ROOM
Disabled: NO
Shared: NO
Auto Reply: YES
Access: ALL
State: AVAILABLE
Control: MRD
Robot: $1$DUA560
Slot Count: 176
Usage: NOMAGAZINE
This example shows the MDMS commands for creating the six drives for the jukebox.
This example is a command procedure that uses a counter to create the six drives. The counter approach works here because the drive names and device names follow a common numeric pattern. You may instead want to make the drive name the same as the device name. For example:
$ MDMS CREATE DRIVE $1$MUA560/DEVICE=$1$MUA560
This approach works only if no two devices in your domain have the same name.
$ COUNT = 1 ! initialize the drive counter
$DRIVE_LOOP:
$ MDMS CREATE DRIVE TL826_D1 -
/DESCRIPTION="Drive 1 in the TL826 JUKEBOX" -
/ACCESS=ALL - ! local + remote for JONES
/AUTOMATIC_REPLY - ! MDMS automatically replies to OPCOM requests
/DEVICE=$1$MUA561 - ! physical device
/DRIVE_NUMBER=1 - ! the drive number according to the robot
/JUKEBOX=TL826_JUKE - ! jukebox the drives are in
/MEDIA_TYPE=TK88K - ! media type to allocate drive and volume for
/NODES=(SMITH1,SMITH2,SMITH3) ! nodes that have access to drive
$ MDMS SHOW DRIVE TL826_D1
Drive: TL826_D1
Description: Drive 1 in the TL826 JUKEBOX
Device: $1$MUA561
Nodes: SMITH1,SMITH2,SMITH3
Groups:
Volume:
Disabled: NO
Shared: NO
Available: NO
State: EMPTY
Stacker: NO
Automatic Reply: YES
RW Media Types: TK88K
RO Media Types:
Access: ALL
Jukebox: TL826_JUKE
Drive Number: 1
Allocated: NO
:
:
:
$ MDMS CREATE DRIVE TL826_D5 -
/DESCRIPTION="Drive 5 in the TL826 JUKEBOX" -
/ACCESS=ALL - ! local + remote for JONES
/AUTOMATIC_REPLY - ! MDMS automatically replies to OPCOM requests
/DEVICE=$1$MUA565 - ! physical device
/DRIVE_NUMBER=5 - ! the drive number according to the robot
/JUKEBOX=TL826_JUKE - ! jukebox the drives are in
/MEDIA_TYPE=TK88K - ! media type to allocate drive and volume for
/NODES=(SMITH1,SMITH2,SMITH3) ! nodes that have access to drive
$ MDMS SHOW DRIVE TL826_D5
Drive: TL826_D5
Description: Drive 5 in the TL826 JUKEBOX
Device: $1$MUA565
Nodes: SMITH1,SMITH2,SMITH3
Groups:
Volume:
Disabled: NO
Shared: NO
Available: NO
State: EMPTY
Stacker: NO
Automatic Reply: YES
RW Media Types: TK88K
RO Media Types:
Access: ALL
Jukebox: TL826_JUKE
Drive Number: 5
Allocated: NO
$ COUNT = COUNT + 1
$ IF COUNT .LT. 6 THEN GOTO DRIVE_LOOP
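The counter-based procedure described above can be sketched end to end using DCL symbol substitution. This is a sketch only: the counter bounds, and the pattern tying drive names (TL826_D'COUNT') to device names ($1$MUA56'COUNT'), are assumptions extrapolated from the D1 and D5 iterations shown above.

```
$ ! Sketch only: names and bounds extrapolated from the example above
$ COUNT = 1
$DRIVE_LOOP:
$ MDMS CREATE DRIVE TL826_D'COUNT' -
      /DESCRIPTION="Drive ''COUNT' in the TL826 JUKEBOX" -
      /ACCESS=ALL -
      /AUTOMATIC_REPLY -
      /DEVICE=$1$MUA56'COUNT' - ! physical device
      /DRIVE_NUMBER='COUNT' - ! drive number according to the robot
      /JUKEBOX=TL826_JUKE -
      /MEDIA_TYPE=TK88K -
      /NODES=(SMITH1,SMITH2,SMITH3)
$ COUNT = COUNT + 1
$ IF COUNT .LT. 6 THEN GOTO DRIVE_LOOP
```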
This example shows the MDMS commands to define two pools: ABS and HSM. The pools need to have the authorized users defined.
$ !
$ ! create pools
$ !
$ MDMS CREATE POOL ABS -
/DESCRIPTION="Pool for ABS" -
/AUTHORIZED=(SMITH1::ABS,SMITH2::ABS,SMITH3::ABS,JONES::ABS)
$ MDMS SHOW POOL ABS
Pool: ABS
Description: Pool for ABS
Authorized Users: SMITH1::ABS,SMITH2::ABS,SMITH3::ABS,JONES::ABS
Default Users:
$ MDMS CREATE POOL HSM -
/DESCRIPTION="Pool for HSM" -
/AUTHORIZED=(SMITH1::HSM,SMITH2::HSM,SMITH3::HSM)
$ MDMS SHOW POOL HSM
Pool: HSM
Description: Pool for HSM
Authorized Users: SMITH1::HSM,SMITH2::HSM,SMITH3::HSM
Default Users:
This example shows the MDMS commands to define the 176 volumes in the TL826 using the /VISION qualifier. The volumes have barcode labels on them and have been placed in the jukebox. Notice that the volumes are created in the UNINITIALIZED state. The last command in the example initializes the volumes and changes the state to FREE.
$ !
$ ! create volumes
$ !
$ ! create 120 volumes for ABS
$ ! the media type, offsite location, and onsite location
$ ! values are taken from the DOMAIN object
$ !
$ MDMS CREATE VOLUME -
/DESCRIPTION="Volumes for ABS" -
/JUKEBOX=TL826_JUKE -
/POOL=ABS -
/SLOTS=(0-119) -
/VISION
$ MDMS SHOW VOLUME BEB000
Volume: BEB000
Description: Volumes for ABS
Placement: ONSITE BLD1_COMPUTER_ROOM
Media Types: TK88K Username:
Pool: ABS Owner UIC: NONE
Error Count: 0 Account:
Mount Count: 0 Job Name:
State: UNINITIALIZED Magazine:
Avail State: UNINITIALIZED Jukebox: TL826_JUKE
Previous Vol: Slot: 0
Next Vol: Drive:
Format: NONE Offsite Loc: ANDYS_STORAGE
Protection: S:RW,O:RW,G:RW,W Offsite Date: NONE
Purchase: 08-Jan-2003 08:19:00 Onsite Loc: BLD1_COMPUTER_ROOM
Creation: 08-Jan-2003 08:19:00 Space:
Init: 08-Jan-2003 08:19:00 Onsite Date: NONE
Allocation: NONE Brand:
Scratch: NONE Last Cleaned: 08-Jan-2003 08:19:00
Deallocation: NONE Times Cleaned: 0
Trans Time: 14 00:00:00 Rec Length: 0
Freed: NONE Block Factor: 0
Last Access: NONE
$ !
$ ! create 56 volumes for HSM
$ !
$ MDMS CREATE VOLUME -
/DESCRIPTION="Volumes for HSM" -
/JUKEBOX=TL826_JUKE -
/POOL=HSM -
/SLOTS=(120-175) -
/VISION
$ MDMS SHOW VOL BEB120
Volume: BEB120
Description: Volumes for HSM
Placement: ONSITE BLD1_COMPUTER_ROOM
Media Types: TK88K Username:
Pool: HSM Owner UIC: NONE
Error Count: 0 Account:
Mount Count: 0 Job Name:
State: UNINITIALIZED Magazine:
Avail State: UNINITIALIZED Jukebox: TL826_JUKE
Previous Vol: Slot: 120
Next Vol: Drive:
Format: NONE Offsite Loc: ANDYS_STORAGE
Protection: S:RW,O:RW,G:RW,W Offsite Date: NONE
Purchase: 08-Jan-2003 08:22:16 Onsite Loc: BLD1_COMPUTER_ROOM
Creation: 08-Jan-2003 08:22:16 Space:
Init: 08-Jan-2003 08:22:16 Onsite Date: NONE
Allocation: NONE Brand:
Scratch: NONE Last Cleaned: 08-Jan-2003 08:22:16
Deallocation: NONE Times Cleaned: 0
Trans Time: 14 00:00:00 Rec Length: 0
Freed: NONE Block Factor: 0
Last Access: NONE
$ !
$ ! initialize all of the volumes
$ !
$ MDMS INITIALIZE VOLUME -
/JUKEBOX=TL826_JUKE -
/SLOTS=(0-175)
$ MDMS SHOW VOL BEB000
Volume: BEB000
Description: Volumes for ABS
Placement: ONSITE BLD1_COMPUTER_ROOM
Media Types: TK88K Username:
Pool: ABS Owner UIC: NONE
Error Count: 0 Account:
Mount Count: 0 Job Name:
State: FREE Magazine:
Avail State: FREE Jukebox: TL826_JUKE
Previous Vol: Slot: 0
Next Vol: Drive:
Format: NONE Offsite Loc: ANDYS_STORAGE
Protection: S:RW,O:RW,G:RW,W Offsite Date: NONE
Purchase: 08-Jan-2003 08:19:00 Onsite Loc: BLD1_COMPUTER_ROOM
Creation: 08-Jan-2003 08:19:00 Space:
Init: 08-Jan-2003 08:19:00 Onsite Date: NONE
Allocation: NONE Brand:
Scratch: NONE Last Cleaned: 08-Jan-2003 08:19:00
Deallocation: NONE Times Cleaned: 0
Trans Time: 14 00:00:00 Rec Length: 0
Freed: NONE Block Factor: 0
Last Access: NONE
Explanation: The request issued an OPCOM message that has been aborted by an operator. This message can also occur if no terminals are enabled for the relevant OPCOM classes on the node.
User Action: Either nothing or enable an OPCOM terminal, contact the operator and retry.
Explanation: You entered a SET command and you only had CONTROL access to the object, so only the access control information (if any) was updated.
User Action: If this is what was intended, no action is needed. If you wish to update other fields in the object, you need SET access to the object. See your administrator.
The MDMS software caused an access violation. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
The named drive was successfully allocated.
drive !AD allocated as device !AD
The named drive was successfully allocated, and the drive may be accessed with DCL commands using the device name shown.
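For example, once an allocation succeeds, the reported device name can be used with ordinary DCL commands. The drive and device names below are hypothetical (taken from the configuration examples elsewhere in this manual), and the MDMS command forms are assumptions:

```
$ MDMS ALLOCATE DRIVE TL826_D1 ! assumed command form; reports the device name
$ MOUNT/FOREIGN $1$MUA561:     ! access the drive through the reported device
$ ! ... perform tape operations ...
$ DISMOUNT $1$MUA561:
$ MDMS DEALLOCATE DRIVE TL826_D1
```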
The named volume was successfully allocated.
The request was successful, but extended status contains information.
Examine the extended status, and retry command as needed.
The MDMS API (MDMS$SHR.EXE) detected an inconsistency. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
unexpected error in API !AZ line !UL
The shareable image MDMS$SHR detected an internal inconsistency.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
referenced archive(s) !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to an archive name that does not exist. One or more of the specified archives may be undefined.
Check spelling of the archive names and retry, or create the archive objects in the database.
onsite/offsite attributes invalid for magazine-based volumes
You attempted to specify offsite or onsite dates or locations for a volume whose placement is in a magazine. These attributes are controlled by the magazine and are not valid for individual volumes.
Specify the dates and locations in the magazine object, or do not use magazines for volumes if you want the individual offsite/onsite dates to be different for each volume.
The specified volume (or volume set) was successfully bound to the end of the named volume set.
The server software detected an inconsistency. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis. Restart the server.
The request was cancelled by a user issuing a cancel request command.
During a load of a volume, a cleaning volume was loaded.
During an inventory, this message can be ignored. During a load of a requested volume, a scratch load on a drive, or an initialize command, check the location of the cleaning volume, update the database as needed, and re-issue the command using a non-cleaning volume.
conflicting item codes specified
The command cannot be completed because there are conflicting item codes in the command. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
The named volume was successfully created.
This node has the database files open locally.
The search for a database server received an error from a remote server.
Check the logfile on the remote server for more information. Check the logical name MDMS$DATABASE_SERVERS for correct entries of database server node.
access to remote database server on node !AZ
This node has access to a remote database server.
Database server on node !AZ reports:
The remote database server has reported an error condition. The next line contains additional information.
Depends on the additional information.
DCL extended status format, argument list overflow
During formatting of the extended status, the number of arguments exceeded the allowable limit. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
The MDMS command line software (MDMS$DCL.EXE) detected an inconsistency. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
error accessing jukebox with DCSC
MDMS encountered an error when performing a jukebox operation. An accompanying message gives more detail.
Examine the accompanying message and perform corrective actions to the hardware, the volume or the database, and optionally retry the operation.
This is a more detailed DCSC error message which accompanies DCSCERROR.
Check the DCSC error message file.
The DECnet listener has exited due to an internal error condition or because the user has disabled the DECNET transport for this node. The DECnet listener is the server's routine to receive requests via DECnet (Phase IV and Phase V).
The DECnet listener should be automatically restarted unless the DECNET transport has been disabled for this node. Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis if the transport has not been disabled by the user.
listening on DECnet node !AZ object !AZ
The server has successfully started a DECnet listener. Requests can now be sent to the server via DECnet.
During the allocation of a drive, the drive name was not returned by the server. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
specified drive already exists
The specified drive already exists and cannot be created.
Use a set command to modify the drive, or create a new drive with a different name.
MDMS could not access the drive.
Verify the VMS device name, node names and/or group names specified in the drive record. Fix if necessary. Verify MDMS is running on a remote node. Check status of the drive, correct and retry.
An attempt was made to allocate a drive that was already allocated.
Wait for the drive to become deallocated, or if the drive is allocated to you, use it.
drive is empty or volume in drive is unloaded
The specified drive is empty, or the volume in the drive is unloaded, spun down, and inaccessible.
Check status of drive, correct and retry.
error initializing drive on platform
MDMS could not initialize a volume in a drive.
There was a system error initializing the volume. Check the log file.
The specified drive is already in use.
Wait for the drive to free up and re-enter command, or try to use another drive.
A drive unload appeared to succeed, but the specified volume was still detected in the drive.
Check the drive and check for duplicate volume labels, or if the volume was reloaded.
drive is currently being loaded or unloaded
The operation cannot be performed because the drive is being loaded or unloaded.
Wait for the drive to become available, or use another drive. If the drive is stuck in the loading or unloading state, check for an outstanding request on the drive and cancel it. If all else fails, manually adjust the drive state.
The specified drive could not be allocated.
Check again if the drive is allocated. If it is, wait until it is deallocated. Otherwise there was some other reason the drive could not be allocated. Check the log file.
drive is not allocated to user
You cannot perform the operation on the drive because the drive is not allocated to you.
Either defer the operation or (in some cases) you may be able to perform the operation specifying a user name.
drive is not available on system
The specified drive was found on the system, but is not available for use.
Check the status of the drive and correct.
MDMS could not deallocate a drive.
Either the drive was not allocated or there was a system error deallocating the drive. Check the log file.
The specified drive cannot be found on the system.
Check that the OpenVMS device name, node names and/or group names are correct for the drive. Verify MDMS is running on a remote node. Re-enter command when corrected.
drive not specified or allocated to volume
When loading a volume a drive was not specified, and no drive has been allocated to the volume.
Retry the operation and specify a drive name.
The specified drive is remote on a node where it is defined to be local.
Check that the OpenVMS device name, node names and/or group names are correct for the drive. Verify MDMS is running on a remote node. Re-enter command when corrected.
all drives are currently in use
All of the drives matching the selection criteria are currently in use.
Wait for a drive to free up and re-enter command.
referenced drive !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a drive name that does not exist.
Check spelling of the drive name and retry, or create the drive object in the database.
referenced environment(s) !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to an environment name that does not exist. One or more of the specified environments may be undefined.
Check spelling of the environment names and retry, or create the environment objects in the database.
A general internal MDMS error occurred.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
execute command failed, see log file for more explanation
While trying to execute a command during scheduled activities, a system service call failed.
Check the log file for the failure code from the system server call.
MDMS server exiting with fatal error, restarting
The MDMS server has encountered a fatal error and is exiting. The server will be restarted.
internal schedules are inoperable; external scheduler in use
You have created or modified an MDMS schedule object. This is allowed, but since the domain scheduler type is set to an external scheduler product, this schedule object will never be executed.
If you are not planning to change the scheduler type to INTERNAL or EXTERNAL, you should modify the associated save or restore request to use a standard frequency or an explicit frequency.
One or more volumes unknown to MDMS have been processed by this command.
See next message line(s) for more details. Use MDMS or jukebox utility programs (MRU or CARTRIDGE) to correct the problem.
The previous message is the error that caused the failure.
The connection to an MDMS server either failed or could not be established. See additional message lines and/or check the server's logfile.
Depends on additional information.
failed connection to server via DECnet
The DECnet connection to an MDMS server either failed or could not be established. See additional message lines and/or check the server's logfile.
Depends on additional information.
failed connection to server via TCP/IP
The TCP/IP connection to an MDMS server either failed or could not be established. See additional message lines and/or check the server's logfile.
Depends on additional information.
The reported file or object could not be created. The next line contains additional information.
Depends on the additional information.
The previous message is the error that caused the failure.
The reported file or object could not be deleted. The next line contains additional information.
Depends on the additional information.
MDMS was unable to mount the volume.
The preceding message contains the error that caused the volume not to be mounted.
The command cannot be completed because there are conflicting item codes in the command. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
failed to initialize extended status buffer
The API could not initialize the extended status buffer. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
The reported file or object could not be looked up. The next line contains additional information.
Depends on the additional information.
The MDMS server encountered a fatal error during the processing of a request.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
An MDMS database file could not be opened.
Check the server's logfile for more information.
specified volume is first in set
The specified volume is the first volume in a volume set.
You cannot deallocate or unbind the first volume in a volume set. However, you can unbind the second volume and then deallocate the first, or unbind and deallocate the entire volume set.
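As a sketch of the suggested workaround (the volume IDs are hypothetical; verify the exact command syntax against your MDMS command reference):

```
$ ! Unbind the second volume from the set, then deallocate the first
$ MDMS UNBIND VOLUME ABC002
$ MDMS DEALLOCATE VOLUME ABC001
```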
An internal call to a system function has failed. The following lines identify the function called and the failure status.
Depends on information following this message.
referenced group(s) !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a group name that does not exist. One or more of the specified groups may be undefined.
Check spelling of the group names and retry, or create the group objects in the database.
You attempted to move a volume within a DCSC jukebox, and this is not supported.
incompatible frequency for !AZ !AZ
After changing the domain scheduler type, MDMS has determined that this save or restore request has a frequency that is incompatible with the new scheduler type. The frequencies that are not valid for the given scheduler types are:
Modify the frequency to a valid one for this scheduler type.
volume's media type incompatible with the drive
The media type for the volume is incompatible with the media type(s) for the drive on a load operation.
Verify that the volume can be physically loaded and used in the specified drive. If not, select another drive. If so, add the volume's media type to the drive or otherwise align the media types of the volume and the drive.
incompatible options specified
You entered a command with incompatible options.
Examine the command documentation and re-enter with allowed combinations of options.
attributes incompatible with archive type
You attempted to create or set an attribute which is incompatible with the archive type. The following attributes are incompatible for archive types:
Do not specify these attributes if they are incompatible with the archive type.
volume is incompatible with volumes in set
You cannot bind the volume to the volume set because some of the volume's attributes are incompatible with the volumes in the volume set.
Check that the new volume's media type, onsite location and offsite location are compatible with those in the volume set. Adjust attributes and retry, or use another volume with compatible attributes.
insufficient privilege to execute request
You do not have sufficient privileges to enter the request.
Contact your system administrator and request additional privileges, or grant yourself the required privileges and retry.
insufficient privilege for request option
You do not have sufficient privileges to enter a privileged option of this request.
Contact your system administrator and request additional privileges, or grant yourself the required privileges and retry. Alternatively, retry without using the privileged option.
some volumes not shown due to insufficient privilege
Not all volumes were shown because of restricted privilege.
None, if you just want to see the volumes you own. You need the MDMS_SHOW_ALL privilege to see all volumes.
insufficient server privileges
The MDMS server is running with insufficient privileges to perform system functions.
Refer to the Installation Guide to determine the required privileges. Contact your system administrator to add these privileges in the MDMS$SERVER account.
The MDMS software detected an internal buffer overflow. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis. Restart the server.
An invalid message was received by a server. This could be due to a network problem, a remote non-MDMS process sending messages in error, or an internal error.
If the problem persists and no non-MDMS process can be identified, provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
cannot modify or delete internal schedule
You attempted to modify or delete a schedule object that was internally generated for a save or restore request. This is not allowed.
Modify or delete the associated save or restore request instead, and the schedule will be updated accordingly.
The item list contained an invalid absolute date and time. The time cannot be earlier than 1-Jan-1970 00:00:00 and cannot be later than 7-Feb-2106 06:28:15.
Check that the time is between these two times.
invalid volume ID or invalid range specified
The specified volume ID, volume range, slot range or space range is invalid.
A volume ID may contain up to 6 characters. A volume range may contain up to 1000 volume IDs where the first 3 characters must be alphabetic and the last 3 may be alphanumeric. Only the numeric portions may vary in the range. Examples are ABC000-ABC999, or ABCD01-ABCD99. A slot range can contain up to 1000 slots and must be numeric. Also, all slots in the range must be less than the slot count for the jukebox or magazine. Example: 0-255 for a slot count of 256. A space range can contain up to 1000 spaces where the first and last spaces must have the same number of characters. Spaces must be within the range defined for the location. Examples: 000-999, or Space A1-Space C9
invalid value for consolidation savesets or volumes
You specified an invalid value for consolidation savesets or volumes.
Use a value in the range 0 to maximum integer.
invalid database server search list
The logical name MDMS$DATABASE_SERVERS contains invalid network node names or is not defined.
Correct the node name(s) in the logical name MDMS$DATABASE_SERVERS in file MDMS$SYSTARTUP.COM. Redefine the logical name in the current system. Then start the server.
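For illustration (the node names DBNOD1 and DBNOD2 are hypothetical; the logical name and startup file are as named above), the corrected definition in MDMS$SYSTARTUP.COM might resemble:

```
$ ! List the MDMS database server nodes, in failover order
$ DEFINE/SYSTEM/EXEC MDMS$DATABASE_SERVERS "DBNOD1,DBNOD2"
```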
object is in invalid state for delete
The specified object cannot be deleted because its state indicates it is being used.
Defer deletion until the object is no longer being used, or otherwise change its state and retry.
The item list contained an invalid delta time.
Check that the item list has a correct delta time.
A node full name for a DECnet-Plus (Phase V) node specification has an invalid syntax.
Correct the node name and retry.
invalid value for drive count, use 1-32
You specified an invalid value for drive count.
Use a value in the range 1-32.
invalid extended status item desc/buffer
The error cannot be reported in the extended status item descriptor. This error can be caused by one of the following:
Check for any of the errors stated above in your program and fix the error.
invalid frequency for domain scheduler type
You specified an invalid save or restore frequency for the scheduler type specified in the domain. Invalid combinations include: CUSTOM with NONE, DECSCHEDULER, SCHEDULER or LOCAL; EXPLICIT with NONE, INTERNAL, EXTERNAL, or SINGLE.
Specify a valid frequency for the scheduler type specified in the domain.
invalid initialize options specified
You attempted to initialize volumes in a jukebox by specifying a slot range, but the jukebox is not a vision-equipped, MRD-controlled jukebox.
Specify a volume range instead of a slot range to initialize volumes in a DCSC jukebox or an MRD jukebox without a vision system.
invalid item code for this function
The item list had an invalid item code. The problem could be one of the following:
Refer to the API specification to find out which item codes are restricted for each function and which item codes are allowed for each function.
invalid item descriptor, index !@UL
The item descriptor is in error. The previous message gives the error. Included is the index of the item descriptor in the item list.
Use the index number and the previous message to determine which item descriptor is in error and what the error is.
invalid item list buffer length
The item list buffer length is zero. The item list buffer length cannot be zero for any item code.
Refer to the API specification to find an item code to use in place of the item code that has a zero buffer length.
invalid value for maximum saves, use 1-36
You specified an invalid value for maximum saves.
Use a value in the range 1-36.
media type is invalid or not supported by volume
The specified volume supports multiple media types where a single media type is required, or the volume does not support the specified media type.
Re-enter the command specifying a single media type that is already supported by the volume.
An invalid message was received by the MDMS software. This could be due to a network problem, a non-MDMS process sending messages in error, or an internal error.
If the problem persists and no non-MDMS process can be identified, provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
invalid node name specification
A node name for a DECnet (Phase IV) node specification has an invalid syntax.
Correct the node name and retry.
invalid port number specification
The MDMS server did not start up because the logical name MDMS$TCPIP_SND_PORTS in file MDMS$SYSTARTUP.COM specifies an illegal port number range. A legal port number range is of the form "low_port_number-high_port_number".
Correct the port number range for the logical name MDMS$TCPIP_SND_PORTS in file MDMS$SYSTARTUP.COM. Then start the server.
The position specified is invalid.
Position is only valid for jukeboxes with a topology defined. Check that the position is within the topology ranges, correct and retry. Example: /POSITION=(1,2,1)
invalid retention days specified
You entered an invalid value for the retention days. Valid values are 0 to 9999 days. If you wish for no expiration of volumes, specify /NOEXPIRATION_DATE.
Enter a value between 0 and 9999.
invalid value for retry count or interval
You specified an invalid value for the retry count, the retry interval, or both. In addition, it is invalid to specify an interval with a retry limit of zero or nolimit.
Use values within the following ranges:
invalid value for retry interval
You specified an invalid value for retry interval. In addition, it is invalid to specify an interval with a retry limit of zero.
Use a value within the following range only if retry limit is non-zero: 00:01:00 - 01:00:00 (1 - 60 mins)
You specified an invalid value for retry limit.
Use a value in the range 0 to maximum integer or use /NORETRY_LIMIT
invalid scheduling translation defined
An invalid parameter translation was entered for a scheduling option.
invalid schedule options entered
You entered invalid schedule date/time options for a schedule object. The following values are allowed:
The yyyy for INCLUDE and EXCLUDE must be between the current year and up to 9 years into the future (e.g. 2001-2010). If omitted, the current year is used.
Re-enter the command with valid values.
invalid scheduling parameter defined
An invalid parameter was entered for a scheduling option.
The selection criteria specified on an allocate command are invalid.
Check the command with the documentation and re-enter with a valid combination of selection criteria.
invalid slot or slot range specified
The slot or slot range specified when moving volumes into a magazine or jukebox was invalid, or the specified slots were already occupied.
Specify valid empty slots and re-enter.
The slot range was invalid. It must be of the form 1-100 or 1,100-200,300-400. The only characters allowed are commas, dashes, and numbers (0-9).
Check that you are using the correct form.
invalid space or space range specified
The space or space range specified when moving volumes into a location was invalid.
Specify valid spaces already defined for the location, or specify a space range for the location.
invalid source or destination for move
Either the source or destination of a move operation was invalid (does not exist).
If the destination is invalid, enter a correct destination and retry. If a source is invalid, either create the source or correct the current placement of the affected volumes or magazines.
volume !AZ is in an invalid state for initialization
The volume loaded in the drive for initialization was either allocated or in the transition state and cannot be initialized.
Either the wrong volume was loaded, or the requested volume was in an invalid state. If the wrong volume was loaded, perform an inventory on the jukebox and retry. If the volume is allocated or in transition, you should not try to initialize the volume.
A node full name for a TCP/IP node specification has an invalid syntax.
Correct the node name and retry.
The specified topology for a jukebox is invalid.
Check topology definition; the towers must be sequentially increasing from 0; there must be a face, level and slot definition for each tower. Example: /TOPOLOGY=(TOWER=(0,1,2), FACES=(8,8,8), - LEVELS=(2,3,2), SLOTS=(13,13,13))
invalid volume placement for operation
The volume has an invalid placement for a load operation.
Re-enter the command and use the move option.
volume in invalid state for operation
The operation cannot be performed on the volume because the volume's state does not allow it.
Defer the operation until the volume changes state. If the volume is stuck in a transient state (e.g. moving), check for an outstanding request and cancel it. If all else fails, manually change the state.
specified jukebox already exists
The specified jukebox already exists and cannot be created.
Use a set command to modify the jukebox, or create a new jukebox with a different name.
jukebox could not be initialized
An operation on a jukebox failed because the jukebox could not be initialized.
Check the control, robot name, node name and group name of the jukebox, and correct as needed. Check access path to jukebox (HSJ etc), correct as needed. Verify MDMS is running on a remote node. Then retry operation.
timeout waiting for jukebox to become available
MDMS timed out waiting for a jukebox to become available. The timeout value is 10 minutes.
If the jukebox is in heavy use, try again later. Otherwise, check requests for a hung request - cancel it. Set the jukebox state to available if all else fails.
jukebox is currently unavailable
referenced jukebox !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a jukebox name that does not exist.
Check spelling of the jukebox name and retry, or create the jukebox object in the database.
specified location already exists
The specified location already exists and cannot be created.
Use a set command to modify the location, or create a new location with a different name.
referenced location !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a location name that does not exist.
Check spelling of the location name and retry, or create the location object in the database.
Log file !AZ by !AZ on node !AZ
The server logfile has been closed and a new version has been created by a user.
specified magazine already exists
The specified magazine already exists and cannot be created.
Use a set command to modify the magazine, or create a new magazine with a different name.
referenced magazine !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a magazine name that does not exist.
Check spelling of the magazine name and retry, or create the magazine object in the database.
The mailbox listener has exited due to an internal error condition. The mailbox listener is the server's routine to receive local user requests through mailbox MDMS$MAILBOX.
The mailbox listener should be automatically restarted. Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
listening on mailbox !AZ logical !AZ
The server has successfully started the mailbox listener. MDMS commands can now be entered on this node.
specified media type already exists
The specified media type already exists and cannot be created.
Use a set command to modify the media type, or create a new media type with a different name.
referenced media type(s) !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a media type that does not exist. One or more of the specified media types may be undefined.
Check spelling of the media types and retry, or create the media type objects in the database.
When moving volumes into and out of a jukebox, some of the volumes were not moved.
Check that there are enough empty slots in the jukebox when moving in and retry. On a move out, examine the cause of the failure and retry.
error accessing jukebox with MRD
MDMS encountered an error when performing a jukebox operation. An accompanying message gives more detail.
Examine the accompanying message and perform corrective actions to the hardware, the volume or the database, and optionally retry the operation.
This is a more detailed MRD error message which accompanies MRDERROR.
Check the MRU error message file.
no user access to object for operation
You attempted to perform an operation on an object for which you have no access.
You need an authorized user to add you to the access control list, otherwise you cannot perform the requested operation.
volume is already in volume set
You cannot bind this volume into this volume set because it is already a member of the volume set.
no attributes were changed in the database
Your set command did not change any attributes in the database because the attributes you entered were already set to those values.
Double-check your command, and re-enter if necessary. Otherwise the database is already set to what you entered.
no attributes were changed for !AZ !AZ
Your set command did not change any attributes in the database because the attributes you entered were already set to those values. The message indicates which object was not changed.
Double-check your command, and re-enter if necessary. Otherwise the database is already set to what you entered.
drive not accessible, check not performed
The specified drive could not be physically accessed and the label check was not performed. The displayed attributes are taken from the database.
Verify the VMS device name, node name or group name in the drive object. Check availability on system. Verify MDMS is running on a remote node. Determine the reason the drive was not accessible, fix it and retry.
This server has no access to a database server.
Verify the setting of logical name MDMS$DATABASE_SERVERS. Check each node listed using MDMS SHOW SERVER/NODE=... for connectivity and database access status. Check the servers logfiles for more information.
Execute command procedure SYS$STARTUP:DCSC$STARTUP.COM and retry command.
The server failed to start up because it is disabled in the database.
If necessary correct the setting and start the server again.
The specified node already exists and cannot be created.
Use a set command to modify the node, or create a new node with a different name.
node is not privileged to access database server
A remote server access failed because the user making the DECnet connection is not MDMS$SERVER or the remote port number is not less than 1024.
Verify with the DCL command SHOW PROCESS that the remote MDMS server is running under a username of MDMS$SERVER, and/or verify that logical name MDMS$TCPIP_SND_PORTS on the remote server node specifies a port number range between 0-1023.
node not in database or not fully enabled
The server was not allowed to start up because there is no such node object in the database or its node object in the database does not specify all network full names correctly.
For a node running DECnet (Phase IV) the node name has to match logical name SYS$NODE on that node. For a node running DECnet-Plus (Phase V) the node's DECNET_PLUS_FULLNAME has to match the logical name SYS$NODE_FULLNAME on that node. For a node running TCP/IP the node's TCPIP_FULLNAME has to match the full name combined from logical names *INET_HOST and *INET_DOMAIN.
no node object with !AZ name !AZ in database
The current server could not find a node object in the database with a matching DECnet (Phase IV) or DECnet-Plus (Phase V) or TCP/IP node full name.
Use SHOW SERVER/NODES=(...) to see the exact naming of the server's network names. Correct the entry in the database and restart the server.
no drives match selection criteria
When allocating a drive, none of the drives match the specified selection criteria.
Check spelling and re-enter command with valid selection criteria.
You attempted to allocate, load or unload a drive from a node that is not allowed to access it.
The access field in the drive object allows local, remote or all access, and your attempted access did not conform to the attribute. Use another drive.
no drives are currently available
All of the drives matching the selection criteria are currently in use or otherwise unavailable.
Check to see if any of the drives are disabled or inaccessible. Re-enter command when corrected.
no drives in the specified group were found
When allocating a drive, no drives on nodes in the specified group were found.
Check group name and retry command.
no drives in the specified jukebox were found
When allocating a drive, no drives in the specified jukebox were found.
Check jukebox name and retry command.
no drives in the specified location were found
When allocating a drive, no drives in the specified location were found.
Check location name and retry command.
no drives with the specified media type were found
When allocating a drive, no drives with the specified media type were found.
Check media type and retry command, or specify the media type for more drives.
no drives on the specified node were found
When allocating a drive, no drives on the specified node were found.
Check the node name and retry command.
no drives that can support the specified volume were found
When allocating a drive, no drives that could support the specified volume were found.
Check the volume ID and retry command, or check and adjust volume attributes to match a valid drive.
referenced node(s) !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a node name that does not exist. One or more of the specified nodes may be undefined.
Check spelling of the node names and retry, or create the node objects in the database.
no fields specified for report
A REPORT VOLUME command was entered with no fields to select or display.
Enter at least one field for the report.
selection attributes not set with no include data
You specified one or more of the following attributes, which are not valid unless an include specification is present: DATA_TYPE, INCREMENTAL, NODES, GROUPS. The save or restore object was updated, but the selection attributes were not set.
These attributes are applicable only when an INCLUDE statement is present. Re-enter the command with an INCLUDE qualifier.
no include specification for selection
A save or restore object had some selection attributes specified, but no include file specification. The following attributes require an include specification:
Re-enter the command with an include specification.
internal scheduling not enabled
You attempted to create a schedule object but the domain's scheduler option is set to an external scheduler. The MDMS schedule object is valid only with scheduler options INTERNAL, EXTERNAL and SINGLE_SCHEDULER.
Schedule your request using the specified external scheduler product and interface.
You attempted to use a jukebox from a node that is not allowed to access it.
The access field in the jukebox object allows local, remote or all access, and your attempted access did not conform to the attribute. Use another jukebox.
jukebox required on vision option
The jukebox option is missing on a create volume request with the vision option.
Re-enter the request and specify a jukebox name and slot range.
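A sketch of such a request (the jukebox name and slot range are hypothetical; check the exact qualifier syntax for your MDMS version):

```
$ ! Create volumes using the jukebox vision system to read the labels
$ MDMS CREATE VOLUME /JUKEBOX=JUKE_1 /SLOTS=0-9 /VISION
```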
your current license does not support this operation
The requested operation is not licensed. If you are licensed for ABS_OMT only, you have attempted to perform an operation that requires a full ABS license.
Use an alternative mechanism to perform the operation. If this is not possible, you cannot perform the operation with your current license. You may purchase an upgrade ABS license to enable full ABS functionality. Contact hp for details.
no magazines match selection criteria
On a move magazine request using the schedule option, no magazines were scheduled to be moved.
No magazines were moved for a move magazine operation. An accompanying message gives a reason.
Check the accompanying message, correct and retry.
no media type specified when required
An allocation for a volume based on node, group or location also requires the media type to be specified.
Re-enter the command with a media type specification.
The MDMS server failed to allocate enough virtual memory for an operation. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis. Restart the server.
no such objects currently exist
On a show command, there are no such objects currently defined.
A required input parameter to a request or an API function was missing.
Re-enter the command with the missing parameter, or refer to the API specification for required parameters for each function.
no free volumes with no pool or your default pool were found
When allocating a volume, no free volumes that do not have a pool defined or that are in your default pool were found.
Add a pool specification to the command, or define more free volumes with no pool or your default pool.
slot or space ranges not supported with volset option
On a set volume, you entered the volset option and specified either a slot range or space range.
If you want to assign slots or spaces to volumes directly, do not use the volset option.
no available receive port numbers for incoming connections
The MDMS server could not start the TCP/IP listener because none of the receive ports specified with this node's TCPIP_FULLNAME are currently available.
Use a suitable network utility to find a free range of TCP/IP ports which can be used by the MDMS server. Use the MDMS SET NODE command to specify the new range with the /TCPIP_FULLNAME then restart the server.
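For example (node name, domain and port numbers are hypothetical; confirm the TCPIP_FULLNAME format against your MDMS documentation), a new receive-port range might be set as:

```
$ ! Specify the TCP/IP host name plus a free receive port range,
$ ! then restart the MDMS server
$ MDMS SET NODE MYNODE /TCPIP_FULLNAME="MYNODE.SITE.EXAMPLE:2501-2510"
```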
unable to connect to remote node
The server could not establish a connection to a remote node. See the server's logfile for more information.
Depends on information in the logfile.
no such requests currently exist
No requests exist on the system.
The server ran out of event flags. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis. Restart the server.
When showing a domain, the rights are not shown because you do not have the privilege to see them.
None, if you do not need to see the rights. You need the MDMS_SHOW_RIGHTS privilege to see them.
schedule object invalid for scheduler type or frequency
You specified a schedule object for a non-custom frequency or for an external scheduler option. A schedule object can only be specified for frequency CUSTOM with domain scheduler type of INTERNAL, EXTERNAL or SINGLE.
Do not specify a schedule name.
scratch loads not supported for jukebox drives
You attempted a load drive command for a jukebox drive.
Scratch loads are not supported for jukebox drives. You must use the load volume command to load volumes in jukebox drives.
no available send port numbers for outgoing connection
The server could not make an outgoing TCP/IP connection because none of the send ports specified for the range in logical name MDMS$TCPIP_SND_PORTS are currently available.
Use a suitable network utility to find a free range of TCP/IP ports which can be used by the MDMS server. Change the logical name MDMS$TCPIP_SND_PORTS in file MDMS$SYSTARTUP.COM. Then restart the server.
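For illustration (the port numbers are hypothetical; choose a range your network utility reports as free), the corrected definition in MDMS$SYSTARTUP.COM might look like:

```
$ ! Restrict outgoing MDMS connections to an unused port range
$ DEFINE/SYSTEM/EXEC MDMS$TCPIP_SND_PORTS "3000-3050"
```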
not enough slots defined for operation
The command cannot be completed because there are not enough slots specified in the command, or because there are not enough empty slots in the jukebox.
If the jukebox is full, move some other volumes out of the jukebox and retry. If there are not enough slots specified in the command, re-enter with a larger slot range.
An uninitialized status has been reported. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
specified destination does not exist
In a move command, the specified destination does not exist.
Check spelling or create the destination as needed.
specified drive does not exist
The specified drive does not exist.
Check spelling or create drive as needed.
specified group does not exist
The specified group does not exist.
Check spelling or create group as needed.
specified inherited object does not exist
On a create of an object, the object specified for inherit does not exist.
Check spelling or create the inherited object as needed.
specified jukebox does not exist
The specified jukebox does not exist.
Check spelling or create jukebox as needed.
specified location does not exist
The specified location does not exist.
Check spelling or create location as needed.
specified magazine does not exist
The specified magazine does not exist.
Check spelling or create magazine as needed.
specified media type does not exist
The specified media type does not exist.
Check spelling or create media type as needed.
The specified node does not exist.
Check spelling or create node as needed.
specified object does not exist
The specified object does not exist.
Check spelling or create the object as needed.
The specified pool does not exist.
Check spelling or create pool as needed.
specified request does not exist
The specified request does not exist on the system.
Check the request id again, and re-enter if incorrect.
The username specified in the command does not exist.
Check spelling of the username and re-enter.
specified volume(s) do not exist
The specified volume or volumes do not exist.
Check spelling or create volume(s) as needed.
The server cannot start up because the username MDMS$SERVER is not defined in file SYSUAF.DAT.
Enter the username of MDMS$SERVER (see Installation manual for account details) and then start the server.
no server mailbox or server not running
The MDMS server is not running on this node or the server is not servicing the mailbox via logical name MDMS$MAILBOX.
Use the MDMS$STARTUP procedure with parameter RESTART to restart the server. If the problem persists, check the server's logfile and file SYS$MANAGER:MDMS$SERVER.LOG for more information.
symbols not supported for multiple volumes
A SHOW VOLUME/SYMBOLS command was entered for multiple volumes. The /SYMBOLS qualifier is only supported for a single volume.
Re-enter the command with a single volume ID, or omit the /SYMBOLS qualifier.
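For instance (the volume ID is hypothetical), a valid single-volume form of the command is:

```
$ ! Show one volume and define DCL symbols for its attributes
$ MDMS SHOW VOLUME ABC001 /SYMBOLS
```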
volume is not allocated to user
You cannot perform the operation on the volume because the volume is not allocated to you.
Either use another volume, or (in some cases) you may be able to perform the operation specifying a user name.
specified save or restore is not scheduled for execution
The save or restore request did not contain enough information to schedule the request for execution. The request requires the definition of an archive, an environment and a start time.
If you wish this request to be scheduled, enter a SET SAVE or SET RESTORE and enter the required information.
no unallocated drives found for operation
On an initialize volume request, MDMS could not locate an unallocated drive for the operation.
If you had allocated a drive for the operation, deallocate it and retry. If all drives are currently in use, retry the operation later.
no free volumes in the specified jukebox were found
When allocating a volume, no free volumes in the specified jukebox were found.
Check jukebox name and retry command, or move some free volumes into the jukebox.
no free volumes in the specified location were found
When allocating a volume, no free volumes in the specified location were found.
Check location name and retry command, or move some free volumes into the location.
no free volumes with the specified media type were found
When allocating a volume, no free volumes with the specified media type were found.
Check media type and retry command, or specify the media type for more free volumes.
No volumes were moved for a move volume operation. An accompanying message gives a reason.
Check the accompanying message, correct and retry.
no free volumes in the specified pool were found
When allocating a volume, no free volumes in the specified pool were found.
Check pool name and retry command, or specify the pool for more free volumes (add them to the pool).
In a create, set or delete volume command, no volumes were processed.
Check the volume identifiers and re-enter command.
no free volumes matching the specified volume were found
When allocating a volume, no free volumes matching the specified volume were found.
Check the volume ID and retry command, or add more free volumes with matching criteria.
no volumes match selection criteria
When allocating a volume, no volumes match the specified selection criteria.
Check the selection criteria. Specifically check the relevant volume pool. If free volumes are in a volume pool, the pool name must be specified in the allocation request, or you must be a default user defined in the pool. You can re-enter the command specifying the volume pool as long as you are an authorized user. Also check that newly-created volumes are in the FREE state rather than the UNINITIALIZED state.
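For example, to allocate a volume from a specific pool (the pool name MY_POOL is hypothetical, and the qualifier spelling may vary by version):

$ MDMS ALLOCATE VOLUME /POOL=MY_POOL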
specified object already exists
The specified object already exists and cannot be created.
Use a set command to modify the object, or create a new object with a different name.
referenced object !AZ does not exist
When attempting to allocate a drive or volume, you specified a selection object that does not exist.
Check spelling of selection criteria objects and retry, or create the object in the database.
dereferenced object with zero count
The MDMS server software detected an internal inconsistency. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
some volumes in range were not processed
On a command using a volume range, some of the volumes in the range were not processed.
Verify the state of all objects in the range, and issue corrective commands if necessary.
When creating or modifying a valid object, the object's record contains a reference to a pool name that does not exist.
Check spelling of the pool name and retry, or create the pool object in the database.
The specified pool already exists and cannot be created.
Use a set command to modify the pool, or create a new pool with a different name.
You specified an invalid user profile for the environment. Verify that the user name specified (default is ABS) exists on the specified node or cluster.
Re-enter with a valid combination of node or cluster name and user name.
operation is queued for processing
The asynchronous request you entered has been queued for processing.
You can check on the state of the request by issuing a show requests command.
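For example:

$ MDMS SHOW REQUESTS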
error allocating or deallocating RDF device
During an allocation or deallocation of a drive using RDF, the RDF software returned an error.
The error following this error is the RDF error return.
The number is the request ID for the command just queued.
referenced restore(s) !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a restore name that does not exist. One or more of the specified restores may be undefined.
Check spelling of the restore names and retry, or create the restore objects in the database.
referenced save(s) !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a save name that does not exist. One or more of the specified saves may be undefined.
Check spelling of the save names and retry, or create the save objects in the database.
failed to create a scheduling job
MDMS failed to create a scheduling job.
failed to delete a scheduling job
MDMS failed to delete a scheduling job.
scheduler disconnected from mailbox
The scheduler was disconnected from a mailbox.
MDMS found a duplicate scheduling job
external schedule job exited with bad status
An external schedule job exited with bad status.
schedule thread terminating with fatal error, restarting
The MDMS internal schedule thread encountered an error and terminated. The thread is restarted.
failed to modify a scheduling job
MDMS failed to modify a scheduling job.
no job complete time was returned from a scheduled job
No job complete time was returned from a scheduled job.
no job exists was returned from a scheduled job
No job exists was returned from a scheduled job.
no job number was returned from a scheduled job
No job number was returned from a scheduled job.
no job start time was returned from a scheduled job
No job start time was returned from a scheduled job.
no job status was returned from a scheduled job
No job status was returned from a scheduled job.
failed to find a scheduling job
MDMS failed to find a scheduling job.
failed to show a scheduling job
MDMS failed to show a scheduling job.
failed to access the internal scheduler queue
An MDMS call to a system service failed in the scheduler functions.
schedule qualifier and novolume qualifier are incompatible
The /SCHEDULE and /NOVOLUME qualifiers are incompatible for this command.
Use the /SCHEDULE and /VOLSET qualifiers for this command.
schedule qualifier and volume parameter are incompatible
The /SCHEDULE and the volume parameter are incompatible for this command.
Use the /SCHEDULE qualifier and leave the volume parameter blank for this command.
referenced schedule(s) !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a schedule name that does not exist. One or more of the specified schedules may be undefined.
Check spelling of the schedule names and retry, or create the schedule objects in the database.
referenced selection(s) !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a selection name that does not exist. One or more of the specified selections may be undefined.
Check spelling of the selection names and retry, or create the selection objects in the database.
an error occurred when accessing locale information
When executing the SETLOCALE function an error occurred.
A user should not see this error.
protected field(s) set, verify consistency
You have directly set a protected field with this command. Normally these fields are maintained by MDMS. This has the potential to make the database inconsistent and cause other operations to fail.
Do a SHOW /FULL on the object(s) you have just modified and verify that your modifications leave the object(s) in a consistent state.
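For example, to verify a modified volume object (the volume identifier VOL001 is hypothetical):

$ MDMS SHOW VOLUME VOL001 /FULL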
The MDMS server could not be started because it could not declare the network task SLS$DB. The network task SLS$DB is already in use.
Check the server's logfile for more information. Check the logical name MDMS$SUPPORT_PRE_V3 in the system logical name table. If it is TRUE and the SLS$TAPMGRDB process is running, the server cannot be started. Shut down the SLS$TAPMGRDB process by shutting down SLS, restart the MDMS V3.0 server, and then restart SLS.
send mail failed, see log file for more explanation
While sending mail during the scheduled activities, a call to the mail utility failed.
Check the log file for the failure code from the mail utility.
some objects in list were not processed
The request was partially successful, but some of the objects were not processed as shown in the extended status.
Examine the extended status, and retry command as needed.
During the mount of a volume, the spawned mount command was too long for the buffer. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
internal inconsistency in SERVER
The MDMS server software (MDMS$SERVER.EXE) detected an inconsistency. This is an internal error.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis. Restart the server.
The server disconnected from the request because of a server problem or a network problem.
Check the server's logfile and file SYS$MANAGER:MDMS$SERVER.LOG for more information. Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
Server exited. Check the server logfile for more information.
Depends on information in the logfile.
The server failed to execute the request. Additional information is in the server's logfile.
Depends on information in the logfile.
The MDMS server is already running.
Use the MDMS$SHUTDOWN procedure with parameter RESTART to restart the server.
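For example (assuming the shutdown procedure resides in SYS$STARTUP, as in a default installation):

$ @SYS$STARTUP:MDMS$SHUTDOWN RESTART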
The server has started up identifying its version and build number.
The MDMS server was shut down. This could be caused by a normal user shutdown or it could be caused by an internal error.
Check the server's logfile for more information. If the logfile indicates an error has caused the server to shut down then provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
unexpected error in SERVER !AZ line !UL
The server software detected an internal inconsistency.
Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis.
The TCP/IP listener has exited due to an internal error condition or because the user has disabled the TCPIP transport for this node. The TCP/IP listener is the server's routine to receive requests via TCP/IP.
The TCP/IP listener should be automatically restarted unless the TCPIP transport has been disabled for this node. Provide copies of the MDMS command issued, the database files and the server's logfile for further analysis if the transport has not been disabled by the user.
listening on TCP/IP node !AZ port !AZ
The server has successfully started a TCP/IP listener. Requests can now be sent to the server via TCP/IP.
Either entries cannot be added to a list of an MDMS object or existing entries cannot be renamed because the maximum list size would be exceeded.
Remove other elements from list and try again.
You attempted to perform an operation that generated too many objects.
There is a limit of 1000 objects that may be specified in any volume range, slot range or space range. Re-enter command with a valid range.
too many selections for a field, use only one
More than one selection was specified for a particular field.
Specify only one field to select on.
too many sort qualifiers, use only one
You specified more than one field to sort on.
Specify only one field to sort on.
success, but object references undefined objects
The command was successful, but the object being created or modified has references to undefined objects. Subsequent messages indicate which objects are undefined.
This allows objects to be created in any order, but some operations may not succeed until the objects are defined. Verify/correct the spelling of the undefined objects or create the objects if needed.
unknown volume !AZ entered in jukebox !AZ
A volume unknown to MDMS has been entered into a jukebox.
Use the INVENTORY command to make the volume known to MDMS or use a jukebox utility program (CARTRIDGE or MRU) to eject the volume from the jukebox.
You attempted to perform an unsupported function.
unsupported version for record !AZ in database !AZ
The server has detected unsupported records in a database file. These records will be ignored.
Consult the documentation about possible conversion procedures provided for this version of MDMS.
user is not authorized for volume pool
When allocating a volume, you specified a pool for which you are not authorized.
Specify a pool for which you are authorized, or add your name to the list of authorized users for the pool. Make sure the authorized user includes the node name or group name in the pool object.
vision option and volume parameter are incompatible
You attempted to create volumes with the vision option and the volume parameter. This is not supported.
The vision option is used to create volumes with the volume identifiers read by the vision system on a jukebox. Re-enter the command with either the vision option (specifying jukebox and slot range), or with volume identifier(s), but not both.
specified volume is already allocated
You attempted to allocate a volume that is already allocated.
volume is already initialized and contains data
When initializing a volume, MDMS detected that the volume is already initialized and contains data.
If you are sure you still want to initialize the volume, re-enter the command with the overwrite option.
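For example (the volume identifier VOL001 is hypothetical, and the exact spelling of the overwrite qualifier may vary by version):

$ MDMS INITIALIZE VOLUME VOL001 /OVERWRITE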
The volume ID is missing in a request.
Provide the volume ID and retry the request.
volume is currently in a drive
When allocating a volume, the volume is either moving or in a drive, and nopreferred was specified.
Wait for the volume to be moved or unloaded, or use the preferred option.
You attempted to load a volume that is currently in a jukebox into a drive that is not in the jukebox.
Load the volume into a drive within the current jukebox, or check the jukebox name for the drive.
volume is already bound to a volume set
You cannot bind this volume because it is already in a volume set and is not the first volume in the set.
Use another volume, or specify the first volume in the volume set.
The volume's location is unknown.
Check whether the volume's placement is in a magazine and, if so, whether the magazine is defined. If it is not, create the magazine. Also check the magazine's placement.
volume cannot be loaded but can be moved to jukebox or drive
The volume is not currently in a placement where it can be loaded, but can be moved there.
Move the volume to the drive, or use the automatic move option on the load and retry.
volume is currently being moved
In a move, load or unload command, the specified volume is already being moved.
Wait for volume to come to a stable placement and retry. If the volume is stuck in the moving placement, check for an outstanding request and cancel it. If all else fails, manually change volume state.
specified volume is not allocated
You attempted to bind or deallocate a volume that is not allocated.
None for deallocate. For bind, allocate the volume and then bind it to the set, or use another volume.
volume is not bound to a volume set
You attempted to unbind a volume that is not in a volume set.
one or more volumes are not in this ACS
One or more volumes for the command are not in this ACS.
Verify that all volumes are in the same ACS and that the ACS id is correct.
When loading a volume into a drive, the volume is not in a jukebox.
Use the move option and retry the load. This will issue OPCOM messages to move the volume into the jukebox.
loaded volume is not in the specified pool
During a scratch load of a volume in a drive, the volume loaded was not in the requested pool.
Load another volume that is in the requested pool. A recommended volume is printed in the OPCOM message. Note that if no pool was specified, the volume must have no pool defined.
the volume is not loaded in a drive
On an unload request, the volume is not recorded as loaded in a drive.
If the volume is not in a drive, none. If it is, issue an unload drive command to unload it.
volume is currently in another drive
When loading a volume, the volume was found in another drive.
Wait for the volume to be unloaded, or unload the volume and retry.
!AZ volumes were successfully allocated
When attempting to allocate multiple volumes using the quantity option, some but not all of the requested quantity of volumes were allocated.
See accompanying message as to why not all volumes were allocated.
one or more of the volumes are in drives or are moving
One or more of the volumes in the move request are in drives and cannot be moved. A show volume /brief will identify which volumes are in drives.
Unload the volume(s) in drives and retry, or retry without specifying the volumes in drives.
specified volume(s) already exist
The specified volume or volumes already exist and cannot be created.
Use a set command to modify the volume(s), or create new volume(s) with different names.
referenced volume !AZ undefined
When creating or modifying a valid object, the object's record contains a reference to a volume ID that does not exist.
Check spelling of the volume ID and retry, or create the volume object in the database.
volume loaded with hardware write-lock
The requested volume was loaded in a drive, but is hardware write-locked when write access was requested.
If you need to write to the volume, unload it, physically enable it for write, and re-load it.
initializing volume !AZ as !AZ is disallowed
The label of the volume loaded in the drive for initialization does not match the requested volume label and there is data on the volume. Or initializing the volume with the requested label causes duplicate volumes in the same jukebox or location.
If you wish to overwrite the volume label, re-issue the command with the overwrite qualifier. If there are duplicate volumes in the same location or jukebox you need to move the other volume from the jukebox or location before retrying.
wrong volume label or unlabelled volume was loaded
On a load volume command, MDMS loaded a volume with the wrong volume label or a blank volume label into the drive.
Check the volume, and optionally perform an initialization of the volume and retry. If this message is displayed in an OPCOM message, you will need another free drive to perform the initialization. The volume has been unloaded.
This section describes how to convert the SLS/MDMS V2.X symbols and database to Media, Device and Management Services Version 4 (MDMS). The conversion is automated as much as possible, however, you will need to make some corrections or add attributes to the objects that were not present in SLS/MDMS V2.X.
Before doing the conversion, you should read the MDMS Configuration section in this Guide to Operations to become familiar with configuration requirements.
All phases of the conversion process should be done on the first database node on which you installed MDMS V4. During this process you will go through all phases of the conversion:
When you install on any other node that does not use the same TAPESTART.COM as the database node, you only convert TAPESTART.COM.
To execute the conversion command procedure, type in the following command:
$ @MDMS$SYSTEM:MDMS$CONVERT_V2_TO_V3
The command procedure will introduce itself and then ask what parts of SLS/MDMS V2.x you would like to convert.
During the conversion, the conversion program will allow you to start and stop the MDMS server. The MDMS server needs to be running when converting TAPESTART.COM and the database authorization file; it should not be running during the conversion of the other database files.
During the conversion of TAPESTART.COM the conversion program generates the following file:
MDMS$SYSTEM:MDMS$LOAD_DB_nodename.COM
This file contains the MDMS commands to create the objects in the database. You have the choice to execute this command procedure or not during the conversion.
The conversion of the database files is done by reading the SLS/MDMS V2.x database file and creating objects in the MDMS V4 database files.
The SLS/MDMS V2.x DB server must be shut down during the conversion process. Use the following command to shut down the SLS/MDMS V2.x DB server:
Because of the difference between SLS/MDMS V2.x and MDMS V4 there will be conflicts during the conversion. Instead of stopping the conversion program and asking you about each conflict, the conversion program generates the following file during each conversion:
MDMS$SYSTEM:MDMS$LOAD_DB_CONFLICTS_nodename.COM
Where nodename is the name of the node on which you ran the conversion. This file is not meant to be executed; it is there for you to examine the commands that executed and caused a change in the database. A change is flagged because there was already an object in the database or because the command changed an attribute of the object.
An example: you had two media types with the same name, but one specified compressed and the other specified noncompressed. This would cause a conflict, because MDMS V4 does not allow two media types with the same name but different attributes. What you see in the conflict file would be the command that tried to create the same media type. You will have to create a new media type.
The table Symbols in TAPESTART.COM shows the symbols in the TAPESTART.COM file and what conflicts they may cause.
At the completion of the conversion of the database files, you will see a message that notes objects that were referenced in another object but not defined in the database. For example, the conversion program found a pool in a volume record for which there was no pool object.
Because of the differences between SLS/MDMS V2.x and MDMS V4, you should go through the objects, check their attributes, and make sure that the objects have the attributes that you want. The table Things to Look for After the Conversion shows the attributes of objects that you may want to check after the conversion.
This section describes how older versions of SLS/MDMS can coexist with the new version of MDMS for the purpose of upgrading your MDMS domain. You may have versions of ABS, HSM, or SLS that are using SLS/MDMS V2 and that cannot be upgraded or replaced immediately. MDMS V4 provides limited support for older SLS/MDMS clients to make upgrading your MDMS domain to the new version as smooth as possible. This limited support allows a rolling upgrade of all SLS/MDMS V2 nodes to MDMS V4. Also, ABS and HSM version 3.0 and later have been modified to support either SLS/MDMS V2 or MDMS V4, to make it easy to switch over to the new MDMS. The upgrade procedure is complete as soon as all nodes in your domain are running the new MDMS V4 version exclusively.
The major difference between SLS/MDMS V2 and MDMS V4 is the way information about objects and configuration is stored. To support the old version the new server can be set up to accept requests for DECnet object SLS$DB which was serving the database before. Any database request sent to SLS$DB will be executed and data returned compatible with old database client requests. This allows SLS/MDMS V2 database clients to still send their database requests to the new server without any change.
The SLS$DB function in the new MDMS serves and shares information for the following objects to a V2 database client:
The new MDMS server keeps all its information in a per object database. The MDMS V4 installation process propagates definitions of the objects from the old database to the new V4 database. However, any changes made after the installation of V4 have to be carefully entered by the user in the old and new databases. Operational problems are possible if the databases diverge. Therefore it is recommended to complete the upgrade process as quickly as possible.
Upgrading your SLS/MDMS V2 domain starts with the nodes, which have been defined as database servers in symbol DB_NODES in file TAPESTART.COM. Refer to the Installation Guide for details on how to perform the following steps.
If you had to change any of the logical name settings above you have to restart the server using '@SYS$STARTUP:MDMS$STARTUP RESTART'. You can type the server's logfile to verify that the DECnet listener for object SLS$DB has been successfully started.
This prevents a SLS/MDMS V2 server from starting the old database server process SLS$TAPMGRDB.
Use a "STORAGE VOLUME" command to test that you can access the new MDMS V4 database.
Note that no change is necessary for nodes running SLS/MDMS V2 as a database client. For any old SLS/MDMS client in your domain you have to add its node object to the MDMS V4 database; in V4, all nodes of an MDMS domain have to be registered (see command MDMS CREATE NODE). These clients can connect to a new MDMS DB server as soon as the new server is up and running and the client node has been added to the new database.
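For example, to register a hypothetical client node CLIENT1 in the MDMS V4 database:

$ MDMS CREATE NODE CLIENT1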
A node with local tape drives or local jukeboxes that are accessed from new MDMS V4 servers needs to have MDMS V4 installed and running.
A node with local tape drives or local jukeboxes that are accessed from old SLS/MDMS V2 servers needs to have SLS/MDMS V2 running.
If access is required from both old and new servers, then both versions need to be started on that node. In all cases, DB_NODES in every TAPESTART.COM must be empty.
MDMS V4 allows you to convert the MDMS V4 volume database back to the SLS/MDMS V2 TAPEMAST.DAT file. Any changes you made under MDMS V4 for pool and magazine objects need to be entered manually into the V2 database. Any changes you made under MDMS V4 for drive, jukebox or media type objects need to be updated in file TAPESTART.COM.
The following steps need to be performed to revert back to a SLS/MDMS V2 only domain:
During the rolling upgrade period, the following restrictions apply:
This section describes how to convert the MDMS V4 volume database back to a SLS/MDMS V2.X volume database.
If, for some reason, you need to convert back to SLS/MDMS V2.X, a conversion command procedure is provided. This conversion procedure converts nothing other than the volume database. If you have added new objects, you will have to add these to TAPESTART.COM or to the following SLS/MDMS V2.X database files:
To execute the conversion command procedure, type in the following command:
$ @MDMS$SYSTEM:MDMS$CONVERT_V3_TO_V2
After introductory information, this command procedure will ask you questions to complete the conversion.
Access control lists (ACLs) 5-8
Application and User Performance Impeded 5-19
same in plus mode as basic 5-39
with multiple archive classes 5-10
Creating new archive classes 5-25
ANALYZE Command with Default Confirmation 5-26
ANALYZE/REPAIR/CONFIRM/OUTPUT 5-27
Shelf Handler Audit Log Entry 5-35
Shelf Handler Error Log Entry 5-35
Critical HSM product files 5-4
frequent reactive shelving requests 5-18
recovering the HSM$UID file 5-13
that will not be preshelved 5-30
Latitude, storage capacity 5-14
canceling policy requests 5-37
catalog analysis and repair 5-1
contiguous and placed files 5-9
converting from Basic mode to Plus mode 5-37
enable facility for shelving/unshelving 5-39
entering MDMS information 5-38
files never shelved/preshelved 5-6
Maintaining file headers limit 5-33
maintaining shelving policies 5-13
protecting system files from shelving 5-1, 5-4
recovering data from a lost shelved file 5-12
recovering the HSM$UID files 5-12
repacking archive classes 5-23
replacing a lost or damaged shelf volume 5-25
Restarting the Shelf Handler 5-38
restoring files to another disk 5-1
shutting down the shelf handler 5-38
entering appropriate information 5-38
lost or damaged shelf volume 5-1
replacing a lost/damaged shelf volume 5-25
Restoring files to a different/new disk 5-4