Hierarchical Storage Management for OpenVMS
This manual contains installation information for HSM and Media and Device Management Services (MDMS).
Storage Library System for OpenVMS V2.9B or higher
This manual replaces AA-QUJ1E-TE.
Software Version: HSM V3.0
Compaq Computer Corporation
Houston, Texas
Possession, use, or copying of the software described in this documentation is authorized only pursuant to a valid written license from COMPAQ, an authorized sublicenser, or the identified licenser.
While COMPAQ believes the information included in this publication is correct as of the date of publication, it is subject to change without notice.
Compaq Computer Corporation makes no representations that the interconnection of its products in the manner described in this document will not infringe existing or future patent rights, nor do the descriptions contained in this document imply the granting of licenses to make, use, or sell equipment or software in accordance with the description.
© Compaq Computer Corporation 1997, 1998, 1999. All Rights Reserved.
Printed in the United States of America.
COMPAQ, DIGITAL, DIGITAL UNIX, and the COMPAQ and DIGITAL logos are registered in the U.S. Patent and Trademark Office.
DECconnect, HSZ, StorageWorks, VMS, and OpenVMS are trademarks of Compaq Computer Corporation.
AIX is a registered trademark of International Business Machines Corporation.
FTP Software is a trademark of FTP SOFTWARE, INC.
HP is a registered trademark of Hewlett-Packard Company.
NT is a trademark of Microsoft Corporation.
Oracle, Oracle Rdb, and Oracle RMU are all registered trademarks of Oracle Corporation.
PostScript is a registered trademark of Adobe Systems, Inc.
RDF is a trademark of Touch Technologies, Inc.
SGI is a registered trademark of Chemical Bank.
Solaris is a registered trademark of Sun Microsystems, Inc.
StorageTek is a registered trademark of Storage Technology Corporation.
SunOS is a trademark of Sun Microsystems, Inc.
UNIX is a registered trademark in the United States and other countries, licensed exclusively through X/Open Company Ltd.
Windows and Windows NT are both trademarks of Microsoft Corporation.
All other trademarks and registered trademarks are the property of their respective holders.
This document was prepared using Adobe FrameMaker Version 5.5.
1.1 What Do All Storage Environments Have In Common? 1-1
1.2 What Makes a Storage Environment Unique? 1-1
1.3 How Does HSM Complement Your Storage Environment? 1-1
1.4 What is the Purpose of a Managed Media & Device Environment? 1-2
1.5 Differences - HSM Basic & Plus Mode 1-3
1.5.1 HSM Basic Mode Functions 1-3
1.5.2 HSM Plus Mode Functions 1-3
1.5.3 HSM Mode Comparison Table 1-4
1.5.5 HSM Mode Change Restrictions 1-5
1.6.2 HSM Capacity Licenses 1-6
1.7 HSM Concurrent Use Licenses 1-7
1.8 Installation Changes when SLS is Present 1-7
1.9 HSM Upgrade Considerations 1-7
1.10 Backing Up the HSM Catalog 1-8
1.11 Backing up Your System Disk 1-8
1.12 VMScluster™ System Considerations 1-8
1.13 Mixed Architecture Environments 1-8
1.13.1 Mixed Architecture Environments 1-9
1.13.2 Principles Guiding Mixed Architecture Configuration 1-9
1.13.3 Configuring Applications in a Mixed Architecture OpenVMS Cluster 1-10
1.13.3.1 Separate Disk Configuration 1-10
1.13.3.2 Separate Root Configuration 1-10
1.13.3.3 Separate Subdirectory Configuration 1-11
2.1 MDMS Pre-installation Tasks 2-15
2.1.1 Hardware and Software Requirements 2-16
2.1.2 Meet Patch Requirements 2-17
2.1.3 Install CMA Shareable Images 2-17
2.1.4 Shutdown Previous Version of MDMS 2-18
2.1.5 Register the MDMS License 2-18
2.1.6 Verify the Node is in the MDMS Database 2-18
2.1.7 Consider RDF Configuration 2-19
2.2 Installing the MDMS Software 2-19
2.3 MDMS Post-installation Tasks 2-20
2.3.1 Create a Node Object 2-20
2.3.2 Provide Automatic Start Up and Shut Down 2-21
2.3.3 Remove SLS/MDMS V2.x Automatic Startup 2-21
2.3.5 Configure Remote Tape Drives 2-22
2.3.6 Grant MDMS Rights to Users 2-22
2.3.7 Installing the DCL tables on Nodes 2-23
2.4 Graphical User Interface (GUI) Installation 2-23
2.4.2 Installation on OpenVMS Alpha V7.1 and V7.2 2-23
2.4.3 Installation on Intel Windows NT/95/98 2-26
2.4.4 Installation on Alpha Windows NT 2-26
2.5.1 Running the GUI on OpenVMS Alpha 2-27
3.1 Read the Release Notes 3-1
3.2 Required Hardware Disk Space 3-1
3.3.1 Required for HSM Basic Mode 3-2
3.3.2 Required for HSM Plus Mode 3-2
3.3.3 Required for HSM Repack Function 3-3
3.4 Required System Privileges 3-3
3.5 Required System Parameters 3-3
3.6 Required for VMSINSTAL 3-3
4.1 Installing the HSM Software 4-1
4.1.1 The Installation Procedure 4-1
4.2 After Installing HSM Software 4-5
4.3 Editing the System Startup and Shutdown Files 4-5
5.1 HSM's Default Configuration 5-1
5.1.1 The Default Facility 5-1
5.1.5 The Default Policies 5-3
5.2 Running HSM with the Default Configuration 5-3
5.2.1 Verifying the Facility Definition 5-4
5.2.2 Defining Archive Classes for Use 5-4
5.2.3 Selecting Archive Classes for the Default Shelf 5-5
5.2.4 Defining Devices for the Archive Classes 5-5
5.2.5 Initializing Tape Volumes for Each Archive Class 5-6
5.2.6 Set Volume Retention Times for Policy-Based Shelving 5-8
5.3 Additional Configuration Items 5-9
5.3.1 Authorizing Shelf Servers 5-9
5.3.2 Working with a Cache 5-9
5.3.3 An Example of Managing Online Disk Cache 5-10
5.3.4 Running Default Policies 5-10
5.3.5 Template Policy Definitions 5-10
5.3.5.1 Using a Template Policy Definition 5-10
5.3.5.2 Changing Default Policy Definitions 5-11
5.4 Plus Mode Offline Environment 5-12
5.4.1 How HSM Plus Mode and MDMS Work Together 5-12
5.4.2 How MDMS Supports HSM 5-12
5.4.3 MDMS Commands for HSM Plus Mode Use 5-12
5.4.4 MDMS Configuration Tasks Required to Support HSM Plus Mode 5-13
5.4.4.1 Defining Media Triplets 5-14
5.4.4.2 Defining Tape Jukeboxes 5-14
5.4.4.3 Adding Volumes to MDMS Database for HSM to Use 5-15
5.4.4.4 Authorizing HSM Access to Volumes 5-15
5.4.4.5 Importing Volumes Into a Jukebox 5-15
5.4.4.6 Configuring Magazines 5-15
5.4.4.7 Importing Magazines or Volumes Into the Jukebox 5-16
5.4.4.8 Working with RDF-served Devices in HSM Plus Mode 5-16
5.5 HSM Plus Mode Configuration Examples 5-17
5.5.1 Sample TA90 Configuration 5-17
5.5.2 Sample TZ877 Configuration 5-18
5.5.3 Sample TL820 Configuration 5-19
5.5.4 Sample RDF-served TL820 Configuration 5-20
5.5.4.1 Definitions on Client Node 5-20
5.5.4.2 Definitions on the RDF-served Node 5-21
5.6 HSM Basic Mode Configuration Examples 5-22
6.1 DFS, NFS and PATHWORKS Access Support 6-1
6.1.4 New Logical Names for NFS and PATHWORKS Access 6-3
This document contains installation and configuration information about HSM for OpenVMS V3.0. Use this document to install and configure your HSM environment.
This document is intended for persons who install HSM software. Users of this document should have some knowledge of the following:
This document is organized in the following manner and includes the following information:
The following documents are related to this documentation set or are mentioned in this manual. The lower case x in the part number indicates a variable revision letter.
HSM Hard Copy Documentation Kit (consists of the above HSM documents and a cover letter)
Storage Library System for OpenVMS Guide to Backup and Restore Operations
The following related products are mentioned in this documentation.
The following conventions are used in this document.
Determining and Reporting Problems
If you encounter a problem while using HSM, report it to COMPAQ through your usual support channels.
Review the Software Product Description (SPD) and Warranty Addendum for an explanation of warranty. If you encounter a problem during the warranty period, report the problem as indicated previously or follow alternate instructions provided by COMPAQ for reporting SPD nonconformance problems.
The information presented in this chapter is intended to give you an overall picture of a typical storage environment, and to explain how HSM complements that environment.
All storage environments that plan to implement HSM have the following common hardware and software:
All storage environments have some or all of the following characteristics that make them unique:
On most storage systems, 80% of the I/O requests access only 20% of the stored data.
The remaining 80% of the data occupies expensive media (magnetic disks), but is used infrequently. HSM solves this problem by automatically and transparently moving data between magnetic disk and low-cost shelf-storage (tapes or optical disks) according to file usage patterns and policies that you specify. HSM is most suitable for large data-intensive storage operations where the backup times become excessive. By moving infrequently used data to off-line storage, HSM can greatly reduce the amount of backup time required. The benefits of using HSM are:
HSM software is dependent on the Media and Device Management Services (MDMS) software to access storage devices. The purpose of a managed media and device environment is to maintain a logical view of the physical elements of your storage environment to serve your nearline and offline data storage needs. A managed media and device environment defines the media and:
The following list summarizes the characteristics of the managed media and device environment:
HSM software operates in one of two modes:
Except for the media and device management configuration and support, both modes operate identically.
HSM Basic mode provides the following functionality and features:
HSM Plus mode provides the following functionality and features:
Table 1-1 identifies the functionality HSM for OpenVMS provides and which mode provides it.
All other functions, including HSM policies and cache, are provided in both modes.
One of the pivotal decisions you must make before starting HSM is which mode you wish to run in - Plus or Basic.
You choose an HSM mode to operate when you install the HSM for OpenVMS software. However, you can change modes after you make the initial decision. The following restrictions apply to changing modes after installation:
HSM offers three kinds of license types:
A base HSM license is required to use HSM. This base license provides 20 GB of capacity. Additional capacity licenses are available as is an unlimited capacity license. The capacity is calculated according to the online storage space freed up when files are shelved. The total capacity is checked against the allowable capacity when a shelving operation occurs. If you exceed your capacity license, users will be able to unshelve data, but will not be able to shelve data until the license capacity is extended.
When you shelve a file, the amount of space freed up by the file's truncation is subtracted from the total capacity available. When you unshelve or delete the file, its allocated space is added to the capacity available. Periodically, HSM scans the volumes in the VMScluster™ system and compares the amount of storage space for the shelved files to the remaining capacity. The SMU SHOW FACILITY command displays the license capacity remaining for the HSM facility (VMScluster™ system).
Base licenses are available for all-VAX clusters, all-Alpha clusters, and mixed architecture clusters. These base licenses are shown in Table 1-2.
HSM uses an online capacity licensing strategy. Because HSM increases online capacity for active data at low cost, the license strategy attempts to capitalize on this lower cost per megabyte. HSM reduces the cost of system management by providing this functionality with a reduced amount of operator intervention.
You may increase your HSM storage capacity by purchasing additional capacity licenses. Compaq makes it easy for you by combining a base license in the same capacity license package so only one part number is needed. These licenses expand your shelving capacity by 140 GB, 280 GB, 500 GB, or 1000 GB increments of storage. Table 1-3 lists these licenses.
In addition to the HSM Capacity licenses, Compaq also has available some HSM Concurrent Use Licenses. These concurrent use licenses are different from the above capacity licenses in that they don't include a base license in the same package. Obtaining a concurrent use license and a base license requires two part numbers. Table 1-4 lists these licenses.
When the Storage Library System (SLS) product is already present on the system where you are installing HSM, you must NOT install MDMS. HSM will use the MDMS software already running under SLS. If you reinstall MDMS, it will override the MDMS software running under SLS and cause SLS to lose some functionality. See the Caution note that follows.
If you are installing HSM Version 3.0 over an existing HSM product, there are several additional tasks you must perform.
In case something should happen during conversion, Compaq strongly recommends you back up the existing catalog and SMU databases before you install HSM Version 3.0 software. The catalog is located at: HSM$CATALOG:HSM$CATALOG.SYS and the SMU databases at: HSM$MANAGER:*.SMU.
Because the HSM catalog is such a critical file for HSM, it is very important that it gets backed up on a regular basis. The contents of shelved files are retrievable only through the catalog. You should therefore plan for the catalog to be in a location where it will get backed up frequently.
At the beginning of the installation, VMSINSTAL prompts you about the backup of your system disk. Compaq recommends that you back up your system disk before installing any software.
Use the backup procedures that are established at your site. For details about performing a system disk backup, see the section on the Backup utility (BACKUP) in the OpenVMS System Management Utilities Reference Manual: A-L.
If you installed HSM on a VMScluster™ system, there are four things you may need to do:
Before You Install your Storage Management Software
This section addresses the characteristics of a mixed architecture environment and describes some fundamental approaches to installing and configuring your software to run in it.
The following list identifies the topics and their purposes:
A mixed architecture OpenVMS Cluster includes at least one VAX system and at least one Alpha system.
Creating a Mixed Architecture Configuration:
If you add an Alpha system to a homogenous VAX OpenVMS Cluster, or if you are currently running a homogenous Alpha OpenVMS Cluster and inherit a VAX system, you will have a mixed architecture environment.
Before you integrate the Alpha or VAX node into the system, you should decide an approach to take for handling mixed architecture issues.
Operating a Mixed Architecture Configuration:
If you are currently operating a mixed architecture environment and you want to add a VAX system or an Alpha system, you must integrate it into your current configuration consistently with your other applications.
You should understand the particular requirements of any new application you introduce into a mixed architecture OpenVMS Cluster.
Dissolving a Mixed Architecture Configuration:
If you remove the last VAX or Alpha system, leaving a homogenous OpenVMS Cluster, you should remove any aspects of configuration that accounted for the heterogeneous nature of the mixed architecture system. This includes (but is not limited to) removing startup files, duplicate directory structures, and logical tables.
VAX systems cannot execute image files compiled on an Alpha system, and Alpha systems cannot execute image files compiled on a VAX system. Other types of files cannot be shared, including object code files (.OBJ), and user interface description files (.UID). You must place files that cannot be shared in different locations: VAX files accessible only to VAX OpenVMS Cluster nodes, and Alpha files accessible only to Alpha OpenVMS Cluster nodes.
Data files, in most cases, must be shared between OpenVMS Cluster nodes. You should place all shared files in directories accessible by both VAX and Alpha OpenVMS Cluster nodes.
Logical names that reference files which cannot be shared, or the directories in which they reside, must be defined differently on VAX and Alpha systems.
Files that assign logical name values must therefore be architecture specific. Such files may either reside on node-specific disks or be shared only among OpenVMS Cluster nodes of the same hardware architecture.
This section describes three approaches to configuring applications to run in a mixed architecture OpenVMS Cluster. The one you choose depends on your existing configuration, and the needs of the particular application you are installing. These approaches are given as examples only. You should decide which you want to implement based on your own situation and style of system management.
All of these approaches have two aspects in common:
These characteristics describe the separate disk configuration:
These characteristics describe the separate root configuration:
These characteristics describe the separate directory configuration:
This document includes specific procedures for a recommended approach based on current product configuration and the behavior of the installation process with respect to its use of logical definitions during upgrades.
If the recommended approach is inconsistent with the way you currently manage your system, you should decide on a different approach before you begin your installation procedures.
The following table provides an overview of the steps involved in the full HSM installation and configuration process. To make sure you go through the installation process properly, use the `Check-Off' column in Table 1-5, HSM Installation and Configuration.
This chapter explains how to install the Media and Device Management Services (MDMS) Version 3.0 software. The sections in this chapter cover the three procedures involved in installing the software, namely:
If this is the initial installation of MDMS V3.0, you should install MDMS on a node that is going to be one of your MDMS server nodes.
This version of MDMS installs the system executable files into system specific directories. Because of this, there is no special consideration for mixed architecture OpenVMS cluster system installations. At a minimum, you will install MDMS twice in a mixed architecture OpenVMS Cluster system:
The following list identifies which section describes each pre-installation task, to help you ensure that the installation takes place correctly.
Meet Patch Requirements (see Section 2.1.2)
Install CMA Shareable Images (see Section 2.1.3)
Register the MDMS License (see Section 2.1.5)
Consider RDF Configuration (see Section 2.1.7)
MDMS V3.0's free disk space requirements differ during installation (peak utilization) and after installation (net utilization). As a pre-installation step, please make sure that the required space is available both during and after installation. The Disk Space Requirements table shows the different space requirements.
The two installation variants and their disk space requirements are organized as follows:
The files for MDMS are placed in two locations:
OpenVMS V6.2 is the minimum operating system version necessary to run MDMS. OpenVMS Alpha V7.1 is the minimum version necessary to run the GUI.
Table 1-2 describes the patch requirements for MDMS:
If the server patches are not installed, you will see the following error while trying to start the server:
09-Mar-1999 10:38:16 %MDMS-I-TEXT, "10k Day" patch not installed!
If you are installing MDMS on an OpenVMS V6.2 VAX system, you have to install the following three files:
If these images are not installed by default, include the following lines in the
SYS$STARTUP:SYSTARTUP_VMS.COM:
$!
$! Install CMA stuff for MDMS
$!
$ INSTALL = "$INSTALL/COMMAND_MODE"
$ IF .NOT. F$FILE_ATTRIBUTES("SYS$COMMON:[SYSLIB]CMA$RTL.EXE", "KNOWN")
$ THEN
$ INSTALL ADD SYS$COMMON:[SYSLIB]CMA$RTL
$ ENDIF
$ IF .NOT. F$FILE_ATTRIBUTES("SYS$COMMON:[SYSLIB]CMA$OPEN_RTL.EXE", "KNOWN")
$ THEN
$ INSTALL ADD SYS$COMMON:[SYSLIB]CMA$OPEN_RTL
$ ENDIF
$ IF .NOT. F$FILE_ATTRIBUTES("SYS$COMMON:[SYSLIB]CMA$LIB_SHR.EXE", "KNOWN")
$ THEN
$ INSTALL ADD SYS$COMMON:[SYSLIB]CMA$LIB_SHR
$ ENDIF
If you have been running a version of MDMS prior to Version 3.0, you must shut it down using the following command:
If you are using MDMS V3.0, use the following command to shut down MDMS:
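As a hedged sketch, assuming the standard kit procedure names (verify the file names against your installed kits):

$ @SYS$STARTUP:SLS$SHUTDOWN     ! assumed shutdown procedure for SLS/MDMS V2.x
$ @SYS$STARTUP:MDMS$SHUTDOWN    ! assumed shutdown procedure for MDMS V3.0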
As MDMS V3.0 does not have a separate license, you need one of the following licenses to run MDMS:
If you do not have one of these licenses registered, please refer to the section on registering the license for ABS or HSM, whichever you are installing.
If this installation is not the initial installation of MDMS V3.0, you need to verify that the node you are installing MDMS on is in the MDMS database. Enter the following command on a node that has MDMS already installed on it and verify that the node you are installing MDMS on is in the database:
$ MDMS SHOW NODE node_name_you_are_installing_on
%MDMS-E-NOSUCHOBJECT, specified object does not exist
If the node is not in the database, you receive the %MDMS-E-NOSUCHOBJECT error message and you should create the node. See the command reference guide for the qualifiers to use.
If the node you are adding is an MDMS server node, be sure to use the /DATABASE qualifier. If the node you are creating is to be a database server node, you need to edit all SYS$STARTUP:MDMS$SYSTARTUP.COM files in your domain and add this node to the definition of MDMS$DATABASE_SERVERS.
If the node is in the database, proceed with preinstallation tasks. The node may have been created when you converted from SLS/MDMS V2.x.
MDMS provides RDF software to facilitate operations that require access to remote, network connected tape drives. This allows you to copy data from a local site to a remote site, or copy data from a remote site to a local site.
During the installation, you will be asked whether you want to install on this node the software that allows it to act as a server and/or client for RDF. Decide in advance whether you want the RDF server and/or client installed on the node.
The MDMS installation procedure consists of a series of questions and informational messages. Once you start the installation procedure, it presents you with a variety of questions that will change depending on whether the installation is the first or a subsequent installation. The installation procedure provides detailed information about the decisions you will make.
If for any reason you need to abort the installation procedure, press CTRL/Y at any time; the installation procedure deletes all files it has created up to that point and exits. You can then restart the installation at any time.
$ @SYS$UPDATE:VMSINSTAL MDMS030 location: OPTIONS N
location: is the device and directory that contains the software kit save set.
OPTIONS N is an optional parameter that indicates you want to see the question about the Release Notes. If you do not include the OPTIONS N parameter, VMSINSTAL does not ask you about the Release Notes. You should review the Release Notes before proceeding with the installation in case they contain additional information about the installation procedure.
Follow the instructions as you are prompted to complete the installation. Each question is presented with the alternative decisions you can take and an explanation of the related decision.
Questions and decisions offered by the installation procedure vary. Subsequent installations will not prompt you for information you provided during the first installation.
The following sections describe the post-installation tasks needed after installing MDMS:

Create a Node Object (see Section 2.3.1)
Configure MDMS
Configure Remote Tape Drives (see Section 2.3.5)
Grant MDMS Rights to Users (see Section 2.3.6)
If this is the initial installation of MDMS, you need to create the node object in the MDMS node database for this node. Use the MDMS CREATE NODE command to create this initial database node. Refer to the command reference guide for the qualifiers for this command. The following is an example:
$ MDMS CREATE NODE NABORS -
! NABORS is the DECnet Phase IV node name or a
! name you make up if you do not use DECnet
! Phase IV in your network
/DATABASE_SERVER -
! a potential database node
! must also be defined
! in SYS$STARTUP:MDMS$SYSTARTUP.COM
/TCPIP_FULLNAME=NABORS.SITE.INC.COM -
! the TCP/IP full node name if you
! are using TCP/IP you need this if
! you are using the GUI
/DECNET_FULLNAME=INC:.SITE.NABORS -
! this is the full DECnet Phase V node name
! do not define if you do not have DECnet Phase V on this node
! be sure to define if you have DECnet Phase V installed on this node
/TRANSPORT=(DECNET,TCPIP)
! describes the transports that listeners are
! started up on
To automatically start MDMS when you initiate a system start up, add the following line to the system's startup file, SYS$MANAGER:SYSTARTUP_VMS.COM, at a location after the DECnet or TCP/IP startup command:
To automatically stop MDMS when you initiate a system shut down, enter the following into the system's shut down file:
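As a hedged sketch, assuming the standard MDMS V3.0 procedure names (verify against your installed kit):

$ @SYS$STARTUP:MDMS$STARTUP     ! assumed startup procedure, in SYSTARTUP_VMS.COM
$ @SYS$STARTUP:MDMS$SHUTDOWN    ! assumed shutdown procedure, in the site shutdown file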
While using MDMS V3.0 with ABS, make sure that MDMS startup is executed prior to ABS startup. ABS needs a logical name that is defined by the MDMS startup.
If you have been using SLS/MDMS V2.x, and all of your nodes running ABS and/or HSM now support the new MDMS V3.0, make sure you remove the SLS/MDMS V2.x startup line from your system's startup file:
If this node still needs to support clients that use SLS/MDMS V2.x refer to the Appendix SLS/MDMS V2.x Compatibility in the guide to operations. Until you have made all of the changes described in this appendix, you should not start up SLS/MDMS V2.x.
Now that you have installed MDMS you need to configure MDMS by creating the following objects:
Please refer to the MDMS section in the guide to operations for more information on configuration and operation.
If you are upgrading from SLS/MDMS V2.x, you can convert the SLS/MDMS V2.x symbols and database to the MDMS V3.0 database. Use the procedure described in the appendix of the guide to operations.
Once MDMS V3.0 is installed, and any conversions are performed, you may wish to adjust your configuration prior to performing MDMS operations.
If you installed the RDF software, you need to configure the remote tape drives.
For each tape drive served with RDF Server software, make sure there is a drive object record in the MDMS database that describes it. Refer to the chapters on MDMS configuration in the guide to operations and the MDMS CREATE DRIVE command in the command reference guide.
For each node connected to the tape drive, edit the file TTI_RDEV:CONFIG_node.DAT and make sure that all tape drives are represented in the file. The syntax for representing tape drives is given in the file.
During startup of MDMS, the RDF client and server are also started. The RDF images are linked on your system. If you see the following link errors on Alpha V6.2, this is not an RDF bug. The problem is caused by installed VMS patches ALPCOMPAT_062 and ALPCLUSIO01_062.
%LINK-I-DATMISMCH, creation date of 11-FEB-1997 15:16 in
shareable image SYS$COMMON:[SYSLIB]DISMNTSHR.EXE;3
differs from date of 4-MAY-1995 22:33 in shareable image library
SYS$COMMON:[SYSLIB]IMAGELIB.OLB;1
.
.
.
This is a known problem and is documented in TIMA. To correct the problem, issue the following DCL commands:
$ LIBRARY/REPLACE/SHARE SYS$LIBRARY:IMAGELIB.OLB SYS$SHARE:DISMNTSHR.EXE
$ LIBRARY/REPLACE/SHARE SYS$LIBRARY:IMAGELIB.OLB SYS$SHARE:INIT$SHR.EXE
$ LIBRARY/REPLACE/SHARE SYS$LIBRARY:IMAGELIB.OLB SYS$SHARE:MOUNTSHR.EXE
Before any user can use MDMS, you must grant MDMS rights to those users. Refer to the MDMS Rights and Privileges Appendix in the Archive/Backup System for OpenVMS Command Reference Guide for explanation of MDMS rights and how to assign them.
This section describes how to install and run the Graphical User Interface (GUI) on various platforms.
As the GUI is based on Java, you must have the Java virtual machine installed on the system you run the MDMS GUI on. If you do not have Java installed on your system, these sections describe what is needed and where to get it.
This installation procedure extracts files from the MDMS kit and places them in MDMS$ROOT:[GUI...]. You can then move the files to your Windows system and install them.
The GUI requires the following in order to run:
Since the MDMS GUI is a Java application, it requires the platform-specific Java Virtual Machine. The availability of each Java Virtual Machine is described in the following sections. The best way of getting a Java Virtual Machine is to download the platform-specific kit from the given URLs. If this is not possible, the MDMS package also contains a copy for your convenience. Issues concerning availability and installation of the Java Virtual Machine can be directed to:
http://www.sun.com/java/products for Windows NT and
http://www.digital.com/java/download/jdk_ovms/1.1.8/index.html for OpenVMS
A Java Virtual Machine is included in this MDMS kit for completeness. MDMS provides both the pointers (URLs) for downloading a Java Virtual Machine and the actual Java Virtual Machine files in the release package. However, the downloading approach is encouraged.
Memory and disk - the hard drive space requirement is 6 MB for the Java Virtual Machine and 2 MB for the MDMS GUI. The main memory requirement for running the MDMS GUI is 10 MB.
The following steps describe how to install and run the MDMS GUI on OpenVMS Alpha:
On OpenVMS Alpha V7.1, certain operating system fixes (patches) are needed to enable the GUI.
These patches are not required for installation on OpenVMS Alpha V7.2.
You may use the Java kit provided with the MDMS kit or download files from the Web. If you want to install from the MDMS kit, answer YES to the following question:
Do you want the OpenVMS Java kit extracted [NO]?
If you install from the MDMS kit, a file called:
MDMS$ROOT:[GUI.VMS]DEC-AXPVMS-JAVA-V0101-81-1.PCSI_DCX_AXPEXE
is created. Use this file to install Java as in step 4.
Do you want the MDMS GUI installed on Alpha OpenVMS [YES]?
Reply YES to the question if you want to install the GUI on OpenVMS. Files will be moved to MDMS$ROOT:[GUI.VMS] and the GUI installation will be completed.
$ SET DEFAULT MDMS$ROOT:[GUI.VMS]
$ RUN DEC-AXPVMS-JAVA-V0101-81-1.PCSI_DCX_AXPEXE
Extract and read the Release Notes for additional information on how to use this product in an OpenVMS environment:
$ PRODUCT EXTRACT RELEASE_NOTES JAVA-
/SOURCE=[directory_where_you_put_the_PCSI_file]-
/FILE=[directory_where_you_want_it]JDK118_VMS_RELEASE_NOTES.HTML-
/BASE_SYSTEM=AXPVMS
Install the JDK1.1.8 from the .PCSI file obtained:
$ PRODUCT INSTALL JAVA-
/SOURCE=[directory_where_you_put_the_PCSI_file]/BASE_SYSTEM=AXPVMS
The following files are installed by PCSI (POLYCENTER Software Installation utility) with file attribute of ARCHIVE:
SYS$MANAGER:JAVA$SETUP.COM
SYS$MANAGER:JAVA$STARTUP.COM
SYS$SYSROOT:[JAVA.LIB]FONT.PROPERTIES
SYS$SYSROOT:[JAVA.LIB]FONT_PROPERTIES.JA
If a file having any of these names already exists on the system, the installation process renames it to a new name with the file type ending `_OLD' before loading the new copy from the kit. Only the latest version of the existing file is preserved (by being renamed to file.type_old); PCSI then deletes all remaining versions.
For example, an existing SYS$MANAGER:JAVA$SETUP.COM
is renamed to SYS$MANAGER:JAVA$SETUP.COM_OLD before the new copy is copied from the kit. If you have previously personalized any of these files, you might need to merge your personalizations with the new copy.
The JDK documentation is installed on your system at the following location:
SYS$COMMON:[SYSHLP.JAVA]INDEX.HTML
$ EDIT SYS$STARTUP:JAVA$SETUP.COM
and include the following logical name definition at the end of the file:
$ DEFINE JAVA$CLASSPATH -
MDMS$ROOT:[GUI.VMS]MDMS.ZIP,-
MDMS$ROOT:[GUI.VMS]SYMANTEC.ZIP, -
MDMS$ROOT:[GUI.VMS]SWINGALL.JAR, -
SYS$COMMON:[JAVA.LIB]JDK118_CLASSES.ZIP
Add the above command line to SYS$COMMON:[SYSMGR]SYLOGIN.COM so that when users log in, they will have the Java definitions.
Execute SYS$SYSROOT:[SYSHLP.JAVA]JAVA$FILENAME_CONTROLS.COM to establish the JAVA$FILENAME_CONTROLS default values. You can edit this file to see what defaults are being used and how to change them. (This information is also in the "UNIX Style Filenames on an OpenVMS System" section of the JDK release notes.)
$ mcr decw$utils:xmodmap -e "keysym Delete = BackSpace Delete"
The following describes how to install the MDMS GUI on Intel platforms running Windows NT/95/98:
Do you want files extracted for Microsoft Windows NT/95/98 on Intel [YES]?
Reply YES if you want to install the GUI on Intel Windows NT/95/98.
http://www.javasoft.com/products/jdk/1.1/jre or
http://www.sun.com/developers/developers.html
and follow the instructions to perform a default installation. You may use other versions of JRE, preferably 1.1.8 or later. If a Java Virtual Machine is not available, you may use MDMS$ROOT:[GUI.INTEL]JRE117WINTEL.EXE.
Simply double-click on this file to install Java, and follow the setup instructions.
Make MDMS$ROOT:[GUI.INTEL]SETUP_INTEL.EXE available to the target machine (an Intel PC running Windows NT/95/98).
The following describes how to install the MDMS GUI on an Alpha platform running Windows NT:
Do you want the MDMS GUI files extracted for Alpha NT [YES] ?
http://www.digital.com/java/download/jre_nt/1.1.8/jre118_down.html
and follow the instructions to perform a default installation. If a Java Virtual Machine is not available, you may use:
MDMS$ROOT:[GUI.ALPHA_NT]JRE118ALPHANT.EXE.
Now that you have installed the GUI, you have to make sure the server node is configured to accept communications from the GUI. The server node for the GUI must have:
To enable TCP/IP communications on the server, you have to set the TCP/IP Fullname attribute and enable the TCPIP transport. See the command reference guide for information about setting these attributes in a node.
MDMS rights for the user must be enabled in the SYSUAF record to log into the server using the GUI. Refer to the command reference guide for information about MDMS rights.
The following sections describe how to run the GUI on various platforms.
To use the MDMS GUI on OpenVMS Alpha systems, use the following commands:
$ @SYS$STARTUP:JAVA$SETUP.COM
$ SET DISPLAY/CREATE/NODE=node_name/TRANSPORT=transport
$ MDMS/INTERFACE=GUI
For the SET DISPLAY command, node_name is the name of the node on which the monitor screen exists. This allows you to use the GUI on systems other than those running OpenVMS Alpha V7.1 or higher. The transport must be a keyword such as DECNET or TCPIP.
To use the MDMS GUI on Intel Windows NT/95/98 platforms, double click MDMS_GUI\MDMS_GUI.BAT.
To use the MDMS GUI on Alpha Windows NT, do one of the following:
Meeting the HSM Installation Requirements
This chapter lists the requirements to be met before installing the HSM software. Go through the following list before you begin installation.
The requirements to meet before installing the HSM software are as follows:
The HSM kit includes online release notes. Compaq strongly recommends that you read the release notes before proceeding with the installation. The release notes are in a text file named SYS$HELP:HSM30.RELEASE_NOTES and a PostScript® file named SYS$HELP:HSM30_RELEASE_NOTES.PS.
The Disk Space Requirements table summarizes the disk space requirements for installing and running HSM.
The catalog grows at an average rate of 1.25 blocks for each file copy shelved. Compaq recommends that 100,000 blocks be set aside initially for this catalog.
HSM requires 16,000 free disk blocks on the system disk. To determine the number of free disk blocks on the current system disk, enter the following command at the DCL prompt:
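A minimal sketch, assuming the system disk is referenced by the standard SYS$SYSDEVICE logical:

$ SHOW DEVICE SYS$SYSDEVICE

The Free Blocks column of the display shows the number of free disk blocks.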
The software requirements list for HSM is as follows:
When HSM is used in Basic Mode, the only software required, in addition to HSM, is the OpenVMS Operating System Versions 6.1 through 7.1. Media and Device Management Services (MDMS) is required only if you wish to convert from HSM Basic Mode to HSM Plus Mode.
HSM Plus mode requires the use of Media and Device Management Services for OpenVMS (MDMS) Version 2.5B or newer software for managing media and devices. MDMS software comes packaged with HSM and can be obtained from one of the following sources:
The HSM SMU REPACK Command allows you to do an analysis of valid and obsolete data on shelf media and copy the valid data to other media, thus freeing up storage space. This Repack functionality is found in HSM Plus Mode.
The HSM Repack function requires the use of two tape drives, because repack is a direct tape-to-tape transfer process. One tape must match the media type of the source archive class and the other tape must match the media type of the destination archive class.
To install HSM software, you must be logged into an account that has the SETPRV privilege.
Note that VMSINSTAL turns off the BYPASS privilege at the start of the installation.
The installation for HSM may require that you raise the values of the GBLSECTIONS and GBLPAGES system parameters if they do not meet the minimum criteria shown in the System Parameters for VAX and Alpha table.
To find your current system parameters, use the following command:
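A sketch using the standard SYSGEN utility to display the two parameters named above:

$ MCR SYSGEN                 ! invoke the System Generation utility
SYSGEN> SHOW GBLSECTIONS     ! display current and minimum values
SYSGEN> SHOW GBLPAGES
SYSGEN> EXIT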
When you invoke VMSINSTAL, it checks for the following:
Note that VMSINSTAL requires that the installation account have a minimum of the following quotas:
ASTLM = 40 (AST Quota)
BIOLM = 40 (Buffered I/O limit)
BYTLM = 32,768 (Buffered I/O byte count quota)
DIOLM = 40 (Direct I/O limit)
ENQLM = 200 (Enqueue quota)
FILLM = 300 (Open file quota)
Type the following command to find out where your quotas are set.
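One common way to display the quotas of the current process, as a sketch:

$ SHOW PROCESS/QUOTAS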
If VMSINSTAL detects any problems during the installation, it notifies you and prompts you to continue or stop the installation. In some instances, you can enter YES to continue. Enter NO to stop the installation and correct the problem.
User account quotas are stored in the SYSUAF.DAT file. Use the OpenVMS Authorize Utility (AUTHORIZE) to verify and change user account quotas.
First set your directory to SYS$SYSTEM, and then run AUTHORIZE, as shown in the following example:
$ SET DEFAULT SYS$SYSTEM
$ RUN AUTHORIZE
UAF>
At the UAF> prompt, enter the SHOW command with an account name to check a particular account. For example:
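For example, to display the SMITH account used below:

UAF> SHOW SMITH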
To change a quota, enter the MODIFY command. The following example changes the FILLM quota for the SMITH account and then exits from the utility:
UAF> MODIFY SMITH /FILLM=50
UAF> EXIT
After you exit from the utility, the system displays messages indicating whether changes were made. Once the changes have been made, you must log out and log in again for the new quotas to take effect.
If DECthreads™ images are not installed, you must install them before you install HSM. DECthreads™ consists of several images. To check for them, execute the following commands. These commands require CMKRNL privileges and need to be executed on all nodes in the cluster running HSM.
$ install list sys$library:cma$rtl.exe
$ install list sys$library:cma$lib_shr.exe
$ install list sys$library:cma$open_lib_shr.exe
$ install list sys$library:cma$open_rtl.exe
If any of these list commands fails, then the DECthreads™ images need to be installed.
To install them, execute the following commands.
$ install add sys$library:cma$rtl.exe/open/head/share
$ install add sys$library:cma$lib_shr.exe/open/head/share
$ install add sys$library:cma$open_lib_shr.exe/open/head/share
$ install add sys$library:cma$open_rtl.exe/open/head/share
To register your HSM license or to add additional capacity licenses, follow the steps in the How to Register Your HSM License table. Before you attempt to register your PAK, be sure to have the PAK information in front of you.
This section contains a step-by-step description of the installation procedure.
Installing HSM will take approximately 10 to 20 minutes, depending on your system configuration and media.
Running the Installation Verification Procedure (IVP) on a standalone system takes about 5 minutes.
The HSM installation procedure consists of a series of questions and informational messages. Table 5-1 shows the installation procedure. For a complete example of an HSM installation and verification procedure for HSM Basic mode, see Appendix A; for HSM Plus mode, see Appendix B.
To abort the installation procedure at any time, enter Ctrl/Y. When you enter Ctrl/Y, the installation procedure deletes all files it has created up to that point and exits. You can then start the installation again.
Note that VMSINSTAL deletes or changes entries in the process symbol tables during the installation. Therefore, if you are going to continue using the system manager's account and you want to restore these symbols, you should log out and log in again.
If errors occur during the installation procedure, VMSINSTAL displays failure messages. If the installation fails, you see the following message:
%VMSINSTAL-E-INSFAIL, The installation of HSM has failed
If the IVP fails, you see this message:
The HSM Installation Verification Procedure failed.
%VMSINSTAL-E-IVPFAIL, The IVP for HSM has failed.
Errors can occur during the installation if any of the following conditions exist:
For descriptions of the error messages generated by these conditions, see the OpenVMS documentation on system messages, recovery procedures, and OpenVMS software installation. If you are notified that any of these conditions exist, you should take the appropriate action as described in the message.
The following postinstallation tasks should be performed after installing HSM software:
You must edit the system startup and shutdown files to provide for automatic startup and shutdown.
Add the command line that starts HSM to the system startup file, called SYS$MANAGER:SYSTARTUP_VMS.COM.
HSM cannot start until after the network has started. You must place this new command line after the line that executes the network startup command procedure.
The following example shows the network startup command line followed by the HSM startup command line:
$ @SYS$MANAGER:STARTNET.COM
.
.
.
$ @SYS$STARTUP:HSM$STARTUP.COM
The HSM$STARTUP.COM procedure defines logicals required by the HSM software, connects the HSDRIVER software for VAX or Alpha systems, and issues an SMU STARTUP command to start the shelf handler process. The shelf handler process runs continuously and exits only with the SMU SHUTDOWN command. You may restart the shelf handler manually by using the SMU STARTUP command.
You also can connect the HSDRIVER software manually. To do this, use one of the following commands:
$ MCR SYSGEN CONNECT HSA0:/NOADAPTER      ! on VAX systems
$ MCR SYSMAN IO CONNECT HSA0: /NOADAPTER  ! on Alpha systems
Add the following command line to the system shutdown file, called
SYS$MANAGER:SYSHUTDWN.COM:
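Given that the shelf handler exits only with the SMU SHUTDOWN command (as noted earlier), the line is presumably:

$ SMU SHUTDOWN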
The HSM catalog is the sole authority containing information about shelved files. It grows as files are shelved.
If you did not have the installation create an HSM catalog for you, you must create it manually before you start HSM. To manually create the catalog, invoke SYS$STARTUP:HSM$CREATE_CATALOG.COM. This creates a single catalog for HSM to access.
When a catalog is created in this manner, it will be configured in Basic mode by default. SMU SET FACILITY/MODE=PLUS should be executed after the catalog is created if Plus mode is desired. Creating a new Basic mode catalog in an environment that was previously defined in Plus mode can cause unpredictable results.
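Putting the two steps above together, a minimal sketch:

$ @SYS$STARTUP:HSM$CREATE_CATALOG.COM   ! create a single catalog for HSM
$ SMU SET FACILITY/MODE=PLUS            ! only if Plus mode is desired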
The HSM$STARTUP.COM file placed in SYS$STARTUP at installation time creates several systemwide logicals used by HSM. These logicals are stored in a file called HSM$LOGICALS.COM, which includes the logical HSM$CATALOG. This logical points to the directory where HSM looks for the shelving catalog. If you wish to change the location of the catalog, change the line in SYS$STARTUP:HSM$LOGICALS.COM that defines this logical.
The system logical HSM$CATALOG should be created ahead of time to specify the location for the catalog. If you have not already created the logical, you are prompted to define it now and to restart the procedure. You still must modify the line in HSM$LOGICALS.COM to reflect that location if it is other than the default. The new catalog file is empty when created.
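For illustration, the defining line in HSM$LOGICALS.COM might look like the following; the device and directory shown are hypothetical, and the exact qualifiers should be checked against the file as shipped:

$ DEFINE/SYSTEM/EXECUTIVE_MODE HSM$CATALOG DISK$USER1:[HSM.CATALOG]   ! hypothetical location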
As mentioned in the installation procedure itself, VMSINSTAL can run an IVP upon completion of the HSM installation. The IVP attempts to verify that HSM can shelve and unshelve files using default configuration information. For a complete example of the HSM IVP, see Appendix A for HSM Basic mode or Appendix B for HSM Plus mode.
HSM comes with a set of default configuration definitions that enable you to get HSM up and running quickly. This chapter explains how to use those definitions and perform other essential configuration tasks to start using HSM. Once HSM is up and running, you can modify the configuration for optimal performance in your operational environment. Read the Customizing the HSM Environment Chapter in the HSM Guide to Operations Manual for more information on tuning.
This chapter includes the following topics:
After installation, HSM is configured with all the default definitions designed into the software. This section explains in detail what the default configuration definitions are and how you need to modify them to get HSM up and running. If you follow the steps in this section, you should be able to shelve and unshelve data on your system. For more information on optimizing HSM functions for your environment, read the Customizing the HSM Environment Chapter in the HSM Guide to Operations Manual.
When you install HSM, it sets up several default elements you can use to run HSM with few modifications. These default elements include the following:
These operations and event logging defaults represent the behavior that is recommended and expected to be used most of the time that HSM is in operation.
You should customize your shelf servers to be restricted to the larger systems in your cluster and to those that have access to the desired near-line and off-line devices.
HSM provides a default shelf that supports all online disk volumes. The default shelf enables HSM operations.
Note that no archive classes are set up for the default shelf at initialization. To enable HSM operations to near-line and off-line devices, you need to configure the default shelf to one or more archive classes.
HSM provides a default device record, which applies when you create a device without specifying attributes for the device. The default attributes are:
For the device to be used by HSM, you must at minimum associate one or more archive classes with the device. You may also choose to dedicate any device for exclusive HSM use.
HSM provides a default volume record, which applies to all online disk volumes in the system unless a specific volume entity exists for them. The default volume contains the following attributes:
If these attributes are acceptable, no further configuration of volume records is needed.
You may change the volume attributes in one of two ways:
Compaq recommends you examine disk usage before enabling implicit shelving operations on your cluster. For example, enabling a high water mark criterion on the default volume could cause immediate mass shelving on all volumes if the disk usage is already above the high water mark.
Compaq also recommends that you create an individual volume record for each system disk and disable all HSM operations on those volumes.
HSM provides three default policies as follows:
At installation time, all three default policies contain the same attributes:
These default policy definitions allow HSM to function effectively with the minimum of advance configuration and can be used without any modifications or additional information.
By default, all volumes use the appropriate default policies without any further configuration being required.
Although HSM provides the default elements described above, you cannot simply try to run HSM with these items. You must verify the facility definition and configure the following additional items for HSM to function:
As mentioned earlier, HSM provides a default facility definition. Before you start using HSM, however, you should verify that the default facility definition is correct for your environment.
The following example shows how to view information about the facility:
HSM is enabled for Shelving and Unshelving
Facility history: Created: 22-APR-1996 12:10:37.13
Revised: 22-APR-1996 12:10:37.13
Designated servers: NODE1 NODE2
Current server: NODE1
Catalog server: Disabled
Event logging: Audit Error Exception
HSM mode: Plus
Remaining license: 20 gigabytes
The information displayed indicates:
If any of these attributes are not correct for your facility, you need to modify them before continuing to configure HSM.
To define an archive class for HSM to use, use one of the following commands, depending on the mode in use. For HSM Basic mode use:

$ SMU SET ARCHIVE n                     ! for Basic mode

For HSM Plus mode use:

$ SMU SET ARCHIVE n -
_$ /MEDIA_TYPE=string -
_$ /DENSITY=string -
_$ /ADD_POOL=string

where n is a list of archive class numbers from 1 to 36 (Basic mode) or 1 to 9999 (Plus mode).
In Plus mode, the DENSITY and ADD_POOL qualifiers are optional. The MEDIA_TYPE and DENSITY must exactly match the definitions in TAPESTART.COM (see Section 6.4).
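As a hedged, concrete Plus mode example; the media type, density, and pool name below are hypothetical and must exactly match the corresponding entries in your TAPESTART.COM:

$ SMU SET ARCHIVE 1 /MEDIA_TYPE=TK85K /DENSITY=COMP /ADD_POOL=HSM_POOL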
During installation, HSM creates a default shelf named HSM$DEFAULT_SHELF. To allow shelving operations, the shelf must be associated with one or more archive classes. When data is copied to the shelf, it is copied to each of the archive classes specified. Having several archive classes provides you additional safety for your shelved files. Each archive class is written to its own set of media. Compaq recommends having at least two archive classes.
Archive classes are represented in HSM by both an archive name and an archive identifier. The properties of archive classes depend on the selected HSM operational mode:
Basic Mode - Basic mode supports up to 36 archive classes named HSM$ARCHIVE01 to HSM$ARCHIVE36, with associated archive identifiers of 1 to 36 respectively. The media type for the archive class is determined by the devices associated with the archive class. It is not specifically defined.
Plus Mode - Plus mode supports up to 9999 archive classes named HSM$ARCHIVE01 to HSM$ARCHIVE9999, with associated archive identifiers of 1 to 9999 respectively. You specify the media type and (optionally) density which must exactly agree with the corresponding fields associated with off-line devices in the MDMS/SLS file TAPESTART.COM. Specifying a volume pool allows you to reserve specific volumes for HSM use.
Restore archive classes are the classes to be used when files are unshelved. HSM attempts to unshelve files from the restore archive classes in the specified order until the operation succeeds. To establish your restore archive classes, you use the /RESTORE qualifier.
The following command associates archive classes 1 and 2 with the default shelf. It also specifies that UNSHELVE operations use the restore archive classes 1 and 2. Each archive class has an associated media type.
$ SMU SET SHELF/DEFAULT/ARCHIVE_ID=(1,2)/RESTORE_ARCHIVE=(1,2)
Now you need to specify which near-line/off-line devices you want to use for copying shelved file data to the archive classes. You must specify a minimum of one device to support near-line or off-line shelving.
In some circumstances, it is beneficial to dedicate a device for HSM operations. You may want to dedicate a device if you expect a lot of HSM operations on that device and you do not want those operations to be interrupted by another process.
For each device definition you create, you have the option of dedicating the device or sharing the device. A dedicated device is allocated to the HSM process. A shared device is allocated to HSM only in response to a request to read or write to the media. Once the operation completes, the device is available to other applications.
The SMU SET DEVICE/DEDICATE command is used to dedicate a device; the SMU SET DEVICE/SHARE command is used to share a device. The options for dedicating or sharing a device are as follows:
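A hedged sketch of both forms; the device names are hypothetical, and the qualifier combinations should be verified against the SMU SET DEVICE reference:

$ SMU SET DEVICE $1$MUA100: /ARCHIVE_ID=(1,2)/DEDICATE   ! hypothetical: dedicate device to HSM
$ SMU SET DEVICE $1$MUA200: /ARCHIVE_ID=(1,2)/SHARE      ! hypothetical: share device with other applications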
Archive Class, Device, and Media Type
For HSM Basic mode, when you associate an archive class with a particular device, you implicitly define the media type for that archive class. HSM, in Basic mode, determines media type for a given device based on the device type.
In Basic mode, you must associate a Robot Name with a tape magazine loader if you wish it to be robotically controlled.
After setting up the devices, you must initialize each tape volume for the archive classes to be used.
For Plus mode, the tape volumes are defined by using STORAGE ADD VOLUME commands to MDMS/SLS. For jukebox loaders such as the TL81x and TL82x, it is important that the volume names match the external volume label and bar code on the volume. You can use the OpenVMS INITIALIZE command to initialize volumes, or use the SLS Operator Menu (option 3) to do this.
HSM Basic mode uses a different approach. There are fixed labels for use in each archive class as shown in Table 6-1:
In the table, the values for xxx are as follows:
001, 002, ..., 099, 101, 102, ..., 199, 201, 202, ..., 999, A01, A02, ..., A99, B01, B02, ..., Z99
This naming convention must be adhered to for Basic Mode, allowing up to 3564 volumes per archive class. An archive class always starts with the "001" value and progresses up in order, as shown.
Use the OpenVMS INITIALIZE command to initialize the physical tape volumes for each archive class that you use.
The following examples show how to initialize two tape volumes each for archive class 1 and 2.
$ INITIALIZE $1$MUA100: HS0001   ! tape 1 for archive class ID 1
$ INITIALIZE $1$MUA200: HS0002   ! tape 2 for archive class ID 1
$ INITIALIZE $1$MUA100: HS1001   ! tape 1 for archive class ID 2
$ INITIALIZE $1$MUA200: HS1002   ! tape 2 for archive class ID 2
Each template policy uses the expiration date as the basis for selecting files for shelving. This default is intended to be used with the OpenVMS volume retention feature to provide an effective date of last access. The date of last access is the optimal way to select truly dormant files for shelving by policy.
To use the default policy effectively, you must enable volume retention on each volume used for shelving. If you do not specifically enable volume retention on a volume, expiration dates for files on the volume usually will be zero. In this case, the default policy will not select any files for shelving.
To set volume retention, you must be allowed to enable the SYSPRV system privilege or have write access to the disk volume index file.
To set volume retention dates, use the following procedure. For more information about the OpenVMS command SET VOLUME/RETENTION, see the OpenVMS DCL Dictionary.
1. Enable the system privilege for your process:
$ SET PROCESS/PRIVILEGE=SYSPRV
2. Enable retention times for each disk volume on your system:
$ SET VOLUME/RETENTION=(min,[max])
For min and max, specify the minimum and maximum periods of time you want the files retained on the disk using delta time values. If you enter only one value, the system uses that value for the minimum retention period and calculates the maximum retention period as either twice the minimum or as the minimum plus seven days, whichever is less.
If you are not already using expiration dates, the following settings for retention times are suggested:
$ SET VOLUME/RETENTION=(1-, 0-00:00:00.01)
Initializing File Expiration Dates
Once you set volume retention on a volume and define a policy using expiration date as a file selection criterion, the expiration dates of files on the volume must be initialized. HSM automatically initializes expiration dates the first time a policy runs on the volume. This initializes dates for all files on the volume that do not already have an expiration date. The expiration date is set to the current date and time, plus the maximum retention time as specified in the SET VOLUME/RETENTION command.
After the expiration date has been initialized, the OpenVMS file system automatically updates the expiration date upon read or write access to the file, at a frequency based on the minimum and maximum retention times.
There are three additional configuration tasks you may want to perform to use in connection with HSM's default configuration:
If your cluster contains a mixture of large nodes and smaller satellite workstations, you may want to restrict shelf server operation to the larger nodes for better performance.
Use the SET FACILITY command to initially authorize your shelf server. See the SMU SET FACILITY command in the HSM Guide to Operations for detailed information on this command. In the following example, two nodes (NODE1 and NODE2) are authorized as shelf servers. By default, all shelving operations and all event logging are enabled also.
$ SMU SET FACILITY/SERVER=(NODE1,NODE2)
If you omit the /SERVER qualifier, all nodes in the cluster are authorized as shelf servers.
A cache provides many performance benefits for HSM operations, for example, a significant improvement in the response time during shelving operations.
If you would like to use a magneto-optical device as a shelf device instead of, or in addition to near-line /off-line devices, define the magneto-optical device as a cache.
A system manager has decided that approximately 250,000 blocks of online disk cache will improve application availability by reducing shelving time.
There are three user disks that contain various amounts of available storage capacity: $1$DUA13, $1$DUA14, and $1$DUA15. The three disk volumes are defined as cache devices with differing amounts of capacity:
$1$DUA13 is set to 100,000 blocks
$1$DUA14 is set to 50,000 blocks
$1$DUA15 is set to 100,000 blocks
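The corresponding commands might look like the following minimal sketch; it assumes the SMU SET CACHE /BLOCK_SIZE qualifier (seen abbreviated as /BLOCK later in this chapter) sets each cache capacity in blocks:
$ SMU SET CACHE $1$DUA13: /BLOCK_SIZE=100000
$ SMU SET CACHE $1$DUA14: /BLOCK_SIZE=50000
$ SMU SET CACHE $1$DUA15: /BLOCK_SIZE=100000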
HSM includes a set of default policy definitions to provide a working system upon installation. These definitions are created during the installation procedure and are immediately implemented. The definitions also are used when creating additional definitions.
Although schedules are maintained in a database, there is no supplied schedule for the default preventive policy HSM$DEFAULT_POLICY. If you want to implement a preventive policy, you must use the SMU SET SCHEDULE command as shown in the HSM Guide to Operations.
Table 6-2 lists the supplied default policy definitions that are configured for operation upon installing HSM.
The default definitions for the disk volume, shelf, and preventive policy are also template definitions. When you create new disk volume, shelf, and policy definitions, any parameters not given a value use the parameter value in the template definition.
The following steps show how the policy template definition HSM$DEFAULT_POLICY provides values for a newly created policy definition.
$ SMU SHOW POLICY/DEFAULT/FULL
Policy HSM$DEFAULT_POLICY is enabled for shelving
Policy History:
Created: 8-JUN-1995 12:39:32.34
Revised: 8-JUN-1995 12:39:32.34
Selection Criteria:
State: Enabled
Action: Shelving
File Event: Expiration date
Elapsed time: 180 00:00:00
Before time: <none>
Since time: <none>
Low Water Mark: 80 %
Primary Policy: Space Time Working Set (STWS)
Secondary Policy: Least Recently Used (LRU)
Verification:
Mail notification: <none>
Output file: <none>
$ SMU SET POLICY NEW_POLICY /BACKUP/LOWWATER_MARK=40
The primary policy and secondary policy values are from the default policy definition.
The comparison date and the low water mark values have been changed.
$ SMU SHOW POLICY/FULL NEW_POLICY
Policy NEW_POLICY is enabled for shelving
Policy History:
Created: 20-OCT-1994 13:49:26.64
Revised: 20-OCT-1994 13:49:26.64
Selection Criteria:
State: Enabled
Action: Shelving
File Event: Backup date
Elapsed time: 180 00:00:00
Before time: <none>
Since time: <none>
Low Water Mark: 40 %
Primary Policy: Space Time Working Set (STWS)
Secondary Policy: Least Recently Used (LRU)
Verification:
Mail notification: <none>
Output file: <none>
You may use the values supplied in the template definition or change them to suit your needs. To change the values of a template definition, use the SMU SET POLICY command to change the named default definition, or use the /DEFAULT qualifier to change the template itself.
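For example, to change the low water mark in the template itself, a command along the following lines could be used (the value 60 is illustrative only):
$ SMU SET POLICY/DEFAULT /LOWWATER_MARK=60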
In Plus mode, you are using MDMS or SLS as your media manager. In addition to setting up the HSM configuration as described above, you also need to set up the MDMS/SLS environment to agree with the HSM definitions. This section discusses the minimum MDMS/SLS operations to run HSM in Plus mode.
This section does not explain everything you need to do to first set up MDMS in an environment. For detailed instructions on installing and configuring MDMS, see the Media and Device Management Services for OpenVMS Guide to Operations.
By using Media and Device Management Services for OpenVMS (MDMS) software, HSM Plus mode uses common media and device management capabilities. MDMS provides these common services for various storage management products, such as Archive Backup System (ABS) and Storage Library System for OpenVMS (SLS), as well as HSM.
MDMS provides the following services:
If you already use MDMS to support some other storage management product, you will need to do little to add functionality for HSM.
If you do not use MDMS for other storage management products now, you can start using it for HSM and add other products later without having to shift your media and device management approach.
HSM can now support more devices, including TL820s as fully robotic devices.
HSM can support gravity-controlled loading in a Digital Linear Tape (DLT) magazine loader in addition to robotically-controlled loading in a DLT.
For HSM to work with MDMS, there are various MDMS commands you may need to use. Table 6-3 lists the MDMS commands you may need to use for HSM Plus mode and describes what each one does. Later sections of this chapter describe how to use some of these commands. For detailed information on working with MDMS, see the Media and Device Management Services for OpenVMS Guide to Operations.
To enable HSM to work with MDMS, you must perform the following tasks:
For detailed instructions on performing MDMS tasks, refer to the MDMS Software Installation and Configuration chapter in this book, or see the Media and Device Management Services for OpenVMS Guide to Operations.
MDMS uses a concept called a media triplet to identify the media types and drives the software is allowed to use. The media triplet consists of the following symbols in TAPESTART.COM: MTYPE_n, DENS_n, and DRIVES_n.
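For example, a media triplet takes the following general form; the media type and drive names here are placeholders (complete triplets appear in the configuration examples later in this chapter):
$ MTYPE_1 := TK85K
$ DENS_1 :=
$ DRIVES_1 := $1$MUA100:, $1$MUA200: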
If you are going to use robotically-controlled tape jukeboxes, you need to define the jukeboxes in TAPESTART.COM.
There are two kinds of symbols you must define in TAPESTART.COM to use any type of tape jukebox with robotic loading: TAPE_JUKEBOXES, which lists the names of the jukebox definitions, and one symbol per jukebox name, which identifies the robot device and tape drives controlled by that jukebox.
The following example shows a correctly-configured jukebox definition:
$!
$! --------------------------
$! Node Name
$!
$ NODE := 'F$TRNLNM ("SYS$NODE")'
$ NODE = NODE - "::" - "_"
$!
$!---------------------------------------------------------------------
$!
$! Jukebox definitions
$!
$!---------------------------------------------------------------------
$
$ TAPE_JUKEBOXES := "TL810_1"
$ TL810_1 := "''NODE'::$1$DUA810, ''NODE'::$1$MUA43, -
''NODE'::$1$MUA44, ''NODE'::$1$MUA45, ''NODE'::$1$MUA46"
Notice that the node name is required in such definitions and should reflect the node on which the command procedure runs; as such, use of the ''NODE' variable is recommended. The drives referenced in this definition must also appear in the media triplets.
Once you have defined all the appropriate symbols in TAPESTART.COM, you need to make MDMS aware of the volumes HSM will use. To do this, you use the STORAGE ADD VOLUME command. STORAGE ADD VOLUME has a number of possible qualifiers. For HSM's purposes, the only qualifiers that are essential are:
/MEDIA_TYPE - Must identify the media type you defined in TAPESTART.COM for this volume.
/POOL - Identifies the volume pool to which the volume belongs.
/DENSITY - Must identify the density defined in TAPESTART.COM for this media type, if something is assigned to DENS_n.
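An example combining these qualifiers follows; the volume label, pool name, and media type are placeholders that must match your TAPESTART.COM definitions:
$ STORAGE ADD VOLUME ABC001 /MEDIA_TYPE=TK85K /POOL=HSM_POOL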
Compaq recommends that you put HSM volumes into volume pools. This prevents other applications from using HSM volumes, and vice versa.
If you decide to put HSM volumes into volume pools (as described above), then you need to authorize HSM to access those pools. You authorize access to volume pools by using the Volume Pool Authorization function of the Administrator menu.
To access the Administrator menu, use one of the following commands:
$ RUN SLS$SYSTEM:SLS$SLSMGR
or
$ SLSMGR
The second command works only if you have installed MDMS and run the SYS$MANAGER:SLS$TAPSYMBOL.COM procedure to define short commands for the MDMS menus.
To authorize HSM access to volume pools, use the user name HSM$SERVER.
Unless you are configuring a magazine loader (see Section 6.4.4.6), you need to import volumes into the jukebox. For this, use the STORAGE IMPORT CARTRIDGE command.
If you are using magazines (a physical container with some number of slots that hold tape volumes) with magazine loaders in MDMS, you need to add each magazine to the magazine database and associate volumes with it, as described in the following paragraphs.
To add magazines to the magazine database, you use the STORAGE ADD MAGAZINE command. This command adds a magazine name into the magazine database. This command does not verify that this magazine exists or is available to any particular device for use. It simply adds a record to the database to identify the magazine.
To manually associate volumes with a magazine, you use the STORAGE BIND command. This command creates a link in the databases for the specified volume and magazine. Again, this does not ensure the specified volume is in the specified magazine, it simply updates the database entries. You must use the /SLOT qualifier to identify the slot in the magazine where the volume resides (or will reside).
If you prefer, you can automatically associate volumes with a magazine. To do this, you must have physical access to the magazine and an appropriate device and the volumes to be associated must be in the magazine. Then, you IMPORT the magazine into the device and use the STORAGE INVENTORY JUKEBOX command. STORAGE INVENTORY JUKEBOX reads the labels of the volumes in the magazine, binds the volumes to the magazine, assigns the magazine slot numbers to the volumes, and updates the magazine and volume databases. Note that the volumes need to be initialized before issuing the INVENTORY command.
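For example, assuming a magazine named HSM001 has been added to the database and physically loaded into a jukebox named JUKEBOX1, the sequence might look like this (the exact argument syntax may vary; see the MDMS Guide to Operations):
$ STORAGE IMPORT MAGAZINE HSM001 JUKEBOX1
$ STORAGE INVENTORY JUKEBOX JUKEBOX1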
Once you have defined the volumes, and optionally magazines, to use in the jukebox, they need to be imported into the jukebox. The command to use depends on the actual hardware being used.
For a large tape jukebox such as a TL81x or TL82x, you import volumes directly into the jukebox ports using the STORAGE IMPORT CARTRIDGE command.
For a magazine loader such as a TZ877, you import the entire magazine into the jukebox using the STORAGE IMPORT MAGAZINE command.
To import volumes into a StorageTek silo, you use the STORAGE IMPORT ACS command. Refer to the HSM Command Reference Guide for more detail on this command.
HSM Plus mode supports the use of RDF-served devices for shelving and unshelving. This means you can define devices that are physically separated from the cluster on which the shelving operations initiate. To do this, you must define each such device to HSM as a remote device.
The following example shows how to set up the HSM device definition to work with a remote device:
$ SMU SET DEVICE FARNODE::$2$MIA21 /REMOTE /SHARE=ALL /ARCHIVE=(3,4)
The following restrictions apply to working with remote devices:
The following sections illustrate various sample configurations for HSM Plus mode:
The first example sets up four TA90E drives, which support a specific media type. Three of the drives also support generic TA90 access from other applications besides HSM.
The second example shows how a TZ877 device can be configured with three magazines for HSM use.
The third example shows how a local TL820 can be configured with 50 HSM volumes.
The fourth example shows how to set up an RDF-served TL820 for HSM operations.
These examples show device and archive class configurations for HSM and MDMS. Other HSM details, such as associating archive classes with shelves, are not shown, but are the same as for HSM Basic mode.
These examples illustrate device definitions for HSM and MDMS. They do not attempt to show all commands needed to use HSM. For example, the following additional actions may be necessary:
The following procedure defines a media type for a basic device (TA90), adds 50 volumes of that media type to a particular pool, authorizes HSM only to access that pool, and defines the appropriate archive classes and HSM devices for these volumes.
$ MTYPE_1 := TA90E
$ DENS_1 := COMP
$ DRIVES_1 := $1$MUA20, $1$MUA21, $1$MUA22, $1$MUA23
$ STORAGE ADD VOLUME AA0001/POOL=HSM_POOL/MEDIA_TYPE=TA90E/DENSITY=COMP
$ STORAGE ADD VOLUME AA0002/POOL=HSM_POOL/MEDIA_TYPE=TA90E/DENSITY=COMP
...
$ STORAGE ADD VOLUME AA0050/POOL=HSM_POOL/MEDIA_TYPE=TA90E/DENSITY=COMP
$ SMU SET ARCHIVE 1 /ADD_POOL=HSM_POOL /DENSITY=COMP /MEDIA_TYPE=TA90E
$ SMU SET ARCHIVE 2 /ADD_POOL=HSM_POOL /DENSITY=COMP /MEDIA_TYPE=TA90E
$ SMU SET DEVICE $1$MUA20:, $1$MUA21:, $1$MUA22: /ARCHIVE=(1,2)
$ SMU SET DEVICE $1$MUA23: /DEDICATE=ALL /ARCHIVE=(1,2)
The following procedure defines a media type and two jukeboxes for TZ877 DLT loaders, defines 14 volumes with two volume pools, authorizes HSM to access those volume pools, and defines the appropriate archive classes and HSM devices for these volumes.
$ MTYPE_1 := TK85K
$ DENS_1 :=
$ DRIVES_1 := $1$MUA500:, $1$MUA600:
$ NODE := 'F$TRNLNM ("SYS$NODE")'
$ NODE = NODE - "::" - "_"
$!
$ TAPE_JUKEBOXES := "JUKEBOX1, JUKEBOX2"
$ JUKEBOX1 := "''NODE'::$1$DUA500:, ''NODE'::$1$MUA500:"
$ JUKEBOX2 := "''NODE'::$1$DUA600:, ''NODE'::$1$MUA600:"
$ STORAGE ADD VOLUME TZ0001/POOL=HSM_POOL1/MEDIA_TYPE=TK85K
...
$ STORAGE ADD VOLUME TZ0007/POOL=HSM_POOL1/MEDIA_TYPE=TK85K
$ STORAGE ADD VOLUME TZ0008/POOL=HSM_POOL2/MEDIA_TYPE=TK85K
...
$ STORAGE ADD VOLUME TZ0014/POOL=HSM_POOL2/MEDIA_TYPE=TK85K
$ STORAGE ADD MAGAZINE HSM001/SLOTS=7
$ STORAGE ADD MAGAZINE HSM002/SLOTS=7
$ STORAGE BIND TZ0001 HSM001/SLOT=0
...
$ STORAGE BIND TZ0007 HSM001/SLOT=6
$ STORAGE BIND TZ0008 HSM002/SLOT=0
...
$ STORAGE BIND TZ0014 HSM002/SLOT=6
$ STORAGE IMPORT MAGAZINE HSM001 JUKEBOX1
$ STORAGE IMPORT MAGAZINE HSM002 JUKEBOX2
$ SMU SET ARCHIVE 1 /MEDIA_TYPE=TK85K /ADD_POOL=HSM_POOL1
$ SMU SET ARCHIVE 2 /MEDIA_TYPE=TK85K /ADD_POOL=HSM_POOL2
Define the devices for shared use.
$ SMU SET DEVICE $1$MUA500:, $1$MUA600: /ARCHIVE=(1,2)
$ SMU SET SHELF /DEFAULT /ARCHIVE=(1,2) /RESTORE_ARCHIVE=(1,2)
The following procedure defines a media type and jukebox definition for a TL820 device on the local cluster, defines 50 volumes and adds them to a pool, authorizes HSM and other applications to access the volumes in this pool, and defines appropriate archive classes and devices for use. In this example, the TL820 is connected to an HSJ controller with a robot name (command disk name) of $1$DUA820:.
$ MTYPE_1 := TK85K
$ DENS_1 :=
$ DRIVES_1 := $1$MUA100:, $1$MUA200:, $1$MUA300:
$ NODE := 'F$TRNLNM ("SYS$NODE")'
$ NODE = NODE - "::" - "_"
$!
$ TAPE_JUKEBOXES := "TL820_1"
$ TL820_1 := "''NODE'::$1$DUA820:, ''NODE'::$1$MUA100:, -
''NODE'::$1$MUA200:, ''NODE'::$1$MUA300:"
$ STORAGE ADD VOLUME ACP001 /POOL=TL820_POOL /MEDIA_TYPE=TK85K
$ STORAGE ADD VOLUME ACP002 /POOL=TL820_POOL /MEDIA_TYPE=TK85K
...
$ STORAGE ADD VOLUME ACP050 /POOL=TL820_POOL /MEDIA_TYPE=TK85K
$ STORAGE IMPORT CARTRIDGE ACP001 TL820_1
$ STORAGE IMPORT CARTRIDGE ACP002 TL820_1
...
$ STORAGE IMPORT CARTRIDGE ACP050 TL820_1
$ SMU SET ARCHIVE 1 /ADD_POOL=TL820_POOL /MEDIA_TYPE=TK85K
$ SMU SET ARCHIVE 2 /ADD_POOL=TL820_POOL /MEDIA_TYPE=TK85K
$ SMU SET ARCHIVE 3 /ADD_POOL=TL820_POOL /MEDIA_TYPE=TK85K
$ SMU SET DEVICE $1$MUA100:, $1$MUA200: /ARCHIVE=(1,2,3)
$ SMU SET DEVICE $1$MUA300: /DEDICATE=ALL /ARCHIVE=1
The following procedure defines a configuration for an RDF-served TL820 device that resides on a remote node. This example also shows the client and server definitions for setting up both nodes so that a client HSM system can access the TL820.
The configuration for TAPESTART.COM on the client node (the one running HSM) is as follows:
$ PRI := BOSTON
$ DB_NODES := BOSTON
$ MTYPE_1 := TK85K
$ DENS_1 :=
$ DRIVES_1 := BOSTON::$1$MUA21:, BOSTON::$1$MUA22:, BOSTON::$1$MUA23:
$ TAPE_JUKEBOXES := "JUKE_TL820"
$ JUKE_TL820 := "BOSTON::$1$DUA100:, BOSTON::$1$MUA21:, -
BOSTON::$1$MUA22:, BOSTON::$1$MUA23:"
$ STORAGE ADD VOLUME APW201 /POOL=TL820_POOL /MEDIA_TYPE=TK85K
$ STORAGE ADD VOLUME APW202 /POOL=TL820_POOL /MEDIA_TYPE=TK85K
...
$ STORAGE ADD VOLUME APW220 /POOL=TL820_POOL /MEDIA_TYPE=TK85K
$ SMU SET ARCHIVE 1 /ADD_POOL=TL820_POOL /MEDIA_TYPE=TK85K
$ SMU SET ARCHIVE 2 /ADD_POOL=TL820_POOL /MEDIA_TYPE=TK85K
$ SMU SET DEVICE BOSTON::$1$MUA21: /REMOTE /ARCHIVE=(1,2)
$ SMU SET DEVICE BOSTON::$1$MUA22: /REMOTE /ARCHIVE=(1,2)
$ SMU SET DEVICE BOSTON::$1$MUA23: /REMOTE /ARCHIVE=(1,2)
$ SMU SET SHELF /DEFAULT /ARCHIVE=(1,2) /RESTORE_ARCHIVE=(1,2)
The configuration for TAPESTART.COM on the RDF-served node (the one containing the TL820 device) is as follows. Note that HSM does not have to be running on the RDF-served node, but MDMS or SLS must be installed and running. Also note that node BOSTON is the MDMS database node for the enterprise.
$ PRI := BOSTON
$ DB_NODES := BOSTON
$!
$ MTYPE_1 := TK85K
$ DENS_1 :=
$ DRIVES_1 := $1$MUA21:, $1$MUA22:, $1$MUA23:
$ NODE := 'F$TRNLNM ("SYS$NODE")'
$ NODE = NODE - "::" - "_"
$!
$ TAPE_JUKEBOXES := "JUKE_TL820"
$ JUKE_TL820 := "''NODE'::$1$DUA100:, ''NODE'::$1$MUA21:, -
''NODE'::$1$MUA22:, ''NODE'::$1$MUA23:"
$ STORAGE IMPORT CARTRIDGE APW201 JUKE_TL820
$ STORAGE IMPORT CARTRIDGE APW202 JUKE_TL820
...
$ STORAGE IMPORT CARTRIDGE APW220 JUKE_TL820
Because node BOSTON is the database node, the volume and pool authorization entered on the client node are also valid on the server node.
The following example shows how to configure the default shelf, archive classes, devices, and caches in HSM Basic mode for two different configurations: TZ877 magazine loaders used as nearline devices, and an RW500 optical jukebox used as a permanent cache.
These examples illustrate device definitions for HSM. They do not attempt to show all commands needed to use HSM. For example, the following additional actions may be necessary:
The following procedure defines archive classes 1 and 2 for HSM use. We will assign one TZ877 loader to each archive class. In this example, the magazine loaders are connected directly to a SCSI bus on node OMEGA; as such, they can be accessed only from this node.
$!
$ SMU SET ARCHIVE 1,2
$ SMU SET SHELF /DEFAULT /ARCHIVE=(1,2) /RESTORE_ARCHIVE=(1,2)
$ SMU SET DEVICE $1$MKA100: /ARCHIVE=1 /ROBOT_NAME=$1$GKA101:
$ SMU SET DEVICE $1$MKA200: /ARCHIVE=2 /ROBOT_NAME=$1$GKA201:
$ SMU SET FACILITY /SERVER=OMEGA
$!
$! Confirm Setup
$!
$ SMU SHOW ARCHIVE
HSM$ARCHIVE01 has not been used
Identifier: 1
Media type: CompacTape III, Loader
Label: HS0001
Position: 0
Device refs: 1
Shelf refs: 2
HSM$ARCHIVE02 has not been used
Identifier: 2
Media type: CompacTape III, Loader
Label: HS1001
Position: 0
Device refs: 1
Shelf refs: 2
Shelf HSM$DEFAULT_SHELF is enabled for Shelving and Unshelving
Catalog File: SYS$SYSDEVICE:[HSM$SERVER.CATALOG]HSM$CATALOG.SYS
Shelf History:
Created: 30-MAY-1996 13:16:33.79
Revised: 30-MAY-1996 13:56:00.27
Backup Verification: Off
Save Time: <none>
Updates Saved: All
Archive Classes:
Archive list:
HSM$ARCHIVE01 id: 1
HSM$ARCHIVE02 id: 2
Restore list:
HSM$ARCHIVE01 id: 1
HSM$ARCHIVE02 id: 2
HSM drive HSM$DEFAULT_DEVICE is enabled.
Shared access: < shelve, unshelve >
Drive status: Not configured
Media type: Unknown Type
Robot name: <none>
Enabled archives: <none>
HSM drive _$1$MKA100: is enabled.
Dedicated access: < shelve, unshelve >
Drive status: Configured
Media type: CompacTape III, Loader
Robot name: _$1$GKA101:
Enabled archives: HSM$ARCHIVE01 id: 1
HSM drive _$1$MKA200: is enabled.
Dedicated access: < shelve, unshelve >
Drive status: Configured
Media type: CompacTape III, Loader
Robot name: _$1$GKA201:
Enabled archives: HSM$ARCHIVE02 id: 2
In addition, the tape volumes in each TZ877 loader must be initialized before using HSM. Either manually (using the front panel), or by using the Media Robot Utility (MRU), load each volume and initialize it as follows:
$ ROBOT LOAD SLOT 0 ROBOT $1$GKA101
$ INITIALIZE $1$MKA100: HS0001
$ ROBOT LOAD SLOT 1 ROBOT $1$GKA101
$ INITIALIZE $1$MKA100: HS0002
...
$ ROBOT LOAD SLOT 6 ROBOT $1$GKA101
$ INITIALIZE $1$MKA100: HS0007
$ ROBOT LOAD SLOT 0 ROBOT $1$GKA201
$ INITIALIZE $1$MKA200: HS1001
$ ROBOT LOAD SLOT 1 ROBOT $1$GKA201
$ INITIALIZE $1$MKA200: HS1002
...
$ ROBOT LOAD SLOT 6 ROBOT $1$GKA201
$ INITIALIZE $1$MKA200: HS1007
In this example, we are configuring 8 out of 32 platters in an RW500 optical jukebox to be a permanent shelf repository for HSM. Note that the optical platters are set up as a permanent HSM cache, with no cache flushing and no specific block size restrictions. In addition, HSM shelving and unshelving operations must be specifically disabled on the cache devices and all other platters in the optical jukebox.
$ SMU SET CACHE $2$JBA0: /BLOCK=0 /NOINTERVAL /HIGHWATER=100 /NOHOLD
$ SMU SET CACHE $2$JBA1: /BLOCK=0 /NOINTERVAL /HIGHWATER=100 /NOHOLD
...
$ SMU SET CACHE $2$JBA7: /BLOCK=0 /NOINTERVAL /HIGHWATER=100 /NOHOLD
$!
$! Disable all shelving on ALL platters in the jukebox
$!
$ SMU SET VOLUME $2$JBA0: /DISABLE=ALL
$ SMU SET VOLUME $2$JBA1: /DISABLE=ALL
...
$ SMU SET VOLUME $2$JBA31: /DISABLE=ALL
Note that shelving must be disabled on all platters when any of the platters are being used as an HSM cache to avoid platter load thrashing.
HSM Version 3.0 supports file access to shelved files on client systems where access is through DFS, NFS and PATHWORKS. At installation, HSM sets up such access by default. However, you may want to review this access and change it as needed, because it can potentially affect all accesses.
File faulting (and therefore file events) works as expected, with the exception of CTRL-Y followed by EXIT. Typing CTRL-Y and then EXIT during a file fault has no effect: the client-side process waits until the file fault completes, and the file fault is not canceled.
In addition, with DFS one can determine the shelving state of a file just as if the disk was local (i.e. DIRECTORY/SHELVED and DIRECTORY/SELECT both work correctly).
The shelve and unshelve commands do not work on files on DFS-served devices. The commands do work, however, on the VMScluster that has local access to the devices.
The normal default faulting mechanism (fault on data access) does not work for NFS-served files. The behavior is as if the file were a zero-block sequential file. Performing "cat" (or similar commands) results in no output.
However, at installation time, HSM Version 3.0 enables such access by defining a logical name that allows file faults on an OPEN of a file by the NFS process. By default, the following system and cluster wide logical name is defined:
$ DEFINE/SYSTEM HSM$FAULT_ON_OPEN "NFS$SERVER"
This definition supports access to NFS-served files upon an OPEN of a file. If you do not want NFS access to shelved files, simply de-assign the logical name as follows:
$ DEASSIGN/SYSTEM HSM$FAULT_ON_OPEN
For a permanent change, this command should be placed in:
For NFS-served files, file events (device full and user quota exceeded) occur normally, with the triggering process being the NFS$SERVER process. The quota exceeded event occurs normally because any files extended by the client are charged to the client's proxy account, not to NFS$SERVER.
If the new logical name is defined for the NFS$SERVER process, the fault occurs on OPEN and appears transparent to the client, with the possible exception of messages such as the following:
% cat /usr/oreo/shelve_test.txt.2
NFS2 server oreo not responding still trying
NFS2 server oreo ok
The first message appears when the open doesn't complete immediately. The second message (ok) occurs when the open completes. The file contents, in the above example, are then displayed.
Typing CTRL-C during the file fault returns the user to the shell. Since the NFS server does not issue an IO$_CANCEL against the faulting I/O, the file fault is not canceled and the file will be unshelved eventually.
It is not possible from the NFS client to determine whether a given file is shelved. Further, like DFS devices, the shelve and unshelve commands are not available to NFS mounted devices.
Normal attempts to access a shelved file from a PATHWORKS client initiate a file fault on the server node. If the file is unshelved quickly enough (for example, from cache), the user sees only a delay in accessing the file. If the unshelve is not quick enough, an application-defined timeout occurs and a message window pops up indicating that the served disk is not responding. The timeout value varies with the application, and no retry is attempted. However, you can modify this behavior so that HSM responds to the file open with a file access conflict error, upon which most PC applications retry (or allow the user to retry) the operation after a delay. After a few retries, the file fault succeeds and the file can be accessed normally. To enable PATHWORKS access to shelved files using this retry mechanism, HSM defines the following logical name at installation:
$ DEFINE/SYSTEM HSM$FAULT_AFTER_OPEN "PCFS_SERVER, PWRK$LMSRV"
This definition supports access to PATHWORKS files upon an OPEN of a file. If you do not want PATHWORKS to access shelved files via retries, simply de-assign the logical name as follows:
$ DEASSIGN/SYSTEM HSM$FAULT_AFTER_OPEN
For a permanent change, this command should be placed in:
The decision on which access method to use depends upon the typical response time to access shelved files in your environment.
If the logical name is defined, HSM imposes a 3-second delay in responding to the OPEN request for PATHWORKS accesses only. During this time, the file may be unshelved; otherwise, a "background" unshelve is initiated, which results in a successful open after a delay and retries.
At this point, the file fault on the server node is under way and cannot be canceled.
The effect of the access on the PC environment varies according to the PC operating system. For Windows 3.1 and DOS, the computer waits until the file is unshelved. For Windows NT and Windows 95, only the Windows application itself waits.
File events (device full and user quota exceeded) occur normally, with the triggering process being the PATHWORKS server process. The quota exceeded event occurs normally because any files extended by the client are charged to the client's proxy account, not to the PATHWORKS server.
It is not possible from a PATHWORKS client to determine whether a file is shelved. In addition, there is no way to shelve or unshelve files explicitly (via shelve- or unshelve-like commands). There is also no way to cancel a file fault once it has been initiated.
Most PC applications are designed to handle "file sharing" conflicts. Thus, when HSM detects the PATHWORKS server has made an access request, it can initiate unshelving action, but return "file busy". The typical PC application will continue to retry the original open, or prompt the user to retry or cancel. Once the file is unshelved, the next OPEN succeeds without shelving interaction.
As just discussed, HSM supports two logical names that alter the behavior of opening a shelved file for NFS and PATHWORKS access: HSM$FAULT_ON_OPEN and HSM$FAULT_AFTER_OPEN.
The default behavior is to perform no file fault on Open; rather the file fault occurs upon a read or write to the file.
Each logical name can take a list of process names to alter the behavior of file faults on open. For example:
$ DEFINE/SYSTEM HSM$FAULT_ON_OPEN "NFS$SERVER, USER_SERVER, SMITH"
The HSM$FAULT_ON_OPEN can also be assigned to "HSM$ALL", which will cause a file fault on open for all processes. This option is not allowed for HSM$FAULT_AFTER_OPEN.
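For example, to cause a file fault on open for every process:
$ DEFINE/SYSTEM HSM$FAULT_ON_OPEN "HSM$ALL"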
As these logicals are defined to allow NFS and PATHWORKS access, they are not recommended for use with other processes, because they cause many more file faults than are actually needed in a normal OpenVMS environment. When used, the logical names must be system-wide, and they should be defined identically on all nodes in the VMScluster environment.
These logical name assignments or lack thereof take effect immediately without the need to restart HSM.
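To apply a definition to every node at once, you could use, for example, the SYSMAN utility rather than logging in to each node; the logical name value shown is the installation default:
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> DO DEFINE/SYSTEM HSM$FAULT_ON_OPEN "NFS$SERVER"
SYSMAN> EXIT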
This appendix contains a sample HSM Basic mode installation on a VAX system. Upon completion of the actual installation, this example runs the IVP to determine whether the installation was correct.
$ @SYS$UPDATE:VMSINSTAL HSM022 DISK$:[DIR]
OpenVMS VAX Software Product Installation Procedure V6.1
It is 10-JAN-1997 at 14:12.
Enter a question mark (?) at any time for help.
* Are you satisfied with the backup of your system disk [YES]?
The following products will be processed:
HSM V3.0
Beginning installation of HSM V3.0 at 14:13
%VMSINSTAL-I-RESTORE, Restoring product save set A ...
%VMSINSTAL-I-RELMOVED, Product's release notes have been moved to SYS$HELP.
****************************************************************
* Hierarchical Storage Management (HSM) *
* for OpenVMS V3.0 Installation *
* *
* Copyright(c) Compaq Computer Corporation 1994 - 1997. *
* All Rights Reserved. Unpublished rights reserved under the *
* copyright laws of the United States. *
****************************************************************
* Do you want to purge files replaced by this installation [YES]?
* Do you want to run the IVP after the installation [YES]?
Correct installation and operation of this software requires that one of the following Product Authorization Keys (PAKs) reflecting your software license be present on this system:
HSM-SERVER
HSM-USER
* Does this product have an authorization key registered and loaded [YES]?
With this version, HSM can operate in one of two possible modes:
BASIC - The standalone HSM product which supports a limited number of nearline and offline devices.
PLUS - The integrated HSM product, integrated with Media and Device Management Services (MDMS) which supports an expanded number of nearline and offline devices.
NOTE: MDMS or SLS V2.5B or newer must be installed before installing HSM PLUS mode. Also, once files are shelved in PLUS mode, you may *not* change back to BASIC mode.
Enter BASIC or PLUS to select the mode in which you want HSM to operate.
* Enter the mode to install [PLUS]: BASIC
Installing HSM V3.0 BASIC mode
*** HSM Account ***
A privileged account named HSM$SERVER will be created for use by HSM processes. This account will not allow interactive logins and the password will be automatically generated.
The installation procedure will not proceed until you enter a valid user identification code (UIC) for the HSM$SERVER account. The UIC must be unique and within the SYSTEM group.
Enter the UIC to be used for HSM$SERVER account [[1,22]]:
*** HSM Device ***
You will now be asked to enter a disk device specification to be used as a repository for HSM configuration databases and log files.
NOTE: *** This device must have at least 100000 free blocks and be available to all nodes in the cluster that will be running HSM ***
Enter the disk device to use for HSM files [SYS$SYSDEVICE:]:
HSM files will be placed at SYS$SYSDEVICE:[HSM$SERVER]
*Is this correct [YES]?
This installation creates an ACCOUNT named HSM$SERVER.
user record successfully added
identifier HSM$SERVER value [000001,000022] added to rights database
This installation updates an ACCOUNT named HSM$SERVER.
user record(s) updated
This installation updates an ACCOUNT named HSM$SERVER.
user record(s) updated
This installation updates an ACCOUNT named HSM$SERVER.
user record(s) updated
This product creates system disk directory SYS$SYSDEVICE:[HSM$SERVER].
This product creates system disk directory SYS$SYSDEVICE:[HSM$SERVER.LOG].
This product creates system disk directory SYS$SYSDEVICE:[HSM$SERVER.CATALOG].
This product creates system disk directory SYS$SYSDEVICE:[HSM$SERVER.MANAGER].
Checking for DISKQUOTAs on device SYS$SYSDEVICE:
Restoring product save set B ...
The file SYS$STARTUP:HSM$STARTUP.COM contains specific commands needed to start the HSM Software. This file should not be modified. To start the software at system startup time, add the line
$ @SYS$STARTUP:HSM$STARTUP.COM
to the system startup procedure SYS$MANAGER:SYSTARTUP_VMS.COM
The HSM Catalog and SMU Databases must be created before running HSM. This procedure can create the HSM Catalog and SMU Databases automatically for you as a post-installation task.
* Do you want to run the database creation procedure [YES]?
The file SYS$SYSTEM:SETFILENOSHELV.COM should be executed to set all system disk files as NON-SHELVABLE. This is important to preserve the integrity of your system disk.
This procedure can submit SETFILENOSHELV.COM to a batch execution queue for you as a post-installation task.
* Do you want to submit SETFILENOSHELV.COM [YES]?
*****************************************************************
* IMPORTANT *
*****************************************************************
* When you upgrade to a new version of VMS you should invoke *
* SYS$SYSTEM:SETFILENOSHELV.COM again. The installation of VMS *
* does not and will not automatically set your system disk *
* files to NON-SHELVABLE for you. *
*****************************************************************
No further questions will be asked during this installation.
The installation should take less than 10 minutes to complete.
This software is proprietary to and embodies the confidential technology of Compaq Computer Corporation. Possession, use, or copying of this software and media is authorized only pursuant to a valid written license from Compaq or an authorized sublicensor.
Restricted Rights: Use, duplication, or disclosure by the U.S. Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of DFARS 252.227-7013, or in FAR 52.227-19, or in FAR 52.227-14 Alt. III, as applicable.
Files will now be moved to their target directories...
executing HSM Post-Installation procedure
Creating HSM catalog
This command file creates the default HSM Catalog file.
The Catalog file is a directory of every file that was ever shelved.
It should be protected as such by controlling access to it.
You should only have to create the Catalog file once; that catalog will be used from that point on, and it contains a history of shelving activity and, more importantly, locates the offline data for the shelved files.
You should make sure that the Catalog file gets backed up on a regular basis since loss of this file could mean loss of the data for your shelved files.
HSM catalog created successfully
Creating HSM SMU database files
HSM SMU database files created successfully
setting HSM mode to BASIC
Job SETFILENOSHELV (queue HSM$POLICY_NODE, entry 2) holding until 10-JAN-1997 14:24
HSM post-installation procedure complete
HSM for OpenVMS V3.0 Installation Verification Procedure (IVP)
*** Copyright (c) Compaq Computer Corporation 1994 - 1997. ***
starting HSM shelving facility on node NODE
shelf handler process started 000000AE
policy execution process started 000000AF
Shelf Handler version - V2.x (BLxx), Oct 20 1997
Shelving Driver version - V2.x (BLxx), Oct 20 1997
Policy Execution version - V2.x (BLxx), Oct 20 1997
Shelf Management version - V2.x (BLxx), Oct 20 1997
HSM for OpenVMS is enabled for Shelving and Unshelving
Facility history:
Created: 10-JAN-1997 14:19:28.26
Revised: 10-JAN-1997 14:19:41.83
Designated servers: Any cluster member
Current server: NODE
Catalog server: Disabled
Event logging: Audit Error Exception
HSM mode: Basic
Remaining license: Unlimited
cache device _NODE$DKB300: created
shelf HSM$DEFAULT_SHELF updated
volume _NODE$DKB300: created
file SYS$COMMON:[SYSTEST.HSM]HSM_IVP1.TMP;1 shelved
file SYS$COMMON:[SYSTEST.HSM]HSM_IVP1.TMP;1 unshelved
file SYS$COMMON:[SYSTEST.HSM]HSM_IVP2.TMP;1 shelved
unshelving file NODE$DKB300:[SYSCOMMON.SYSTEST.HSM]HSM_IVP2.TMP;1
policy HSM$IVP_POLICY created
scheduled policy HSM$IVP_POLICY for volume _NODE$DKB300: was created on server NODE
Job _NODE$DKB300: (queue HSM$POLICY_NODE, entry 3) started on HSM$POLICY_NODE
waiting for HSM$IVP_POLICY to execute...
waiting for HSM$IVP_POLICY to execute...
file SYS$COMMON:[SYSTEST.HSM]HSM_IVP3.TMP;1 unshelved
shutting down HSM shelving facility on node NODE
waiting for HSM to shutdown...
waiting for HSM to shutdown...
*** The IVP for HSM V3.0 was successful! ***
Hierarchical Storage Management (HSM) for OpenVMS, Version V3.0
Copyright Compaq Computer Corporation 1994 - 1997. All Rights Reserved.
shelf handler process started 000000B2
policy execution process started 000000B3
HSM for OpenVMS is enabled for Shelving and Unshelving
Facility history:
Created: 10-JAN-1997 14:19:04.15
Revised: 10-JAN-1997 14:22:03.89
Designated servers: Any cluster member
Current server: NODE
Catalog server: Disabled
Event logging: Audit Error Exception
HSM mode: Basic
Remaining license: Unlimited
HSM Software has been successfully started
Installation of HSM V3.0 completed at 14:22
VMSINSTAL procedure done at 14:22
This appendix contains a sample HSM Plus mode installation on a VAX system. In this instance, HSM Version 3.0 is installed over an existing HSM environment. Upon completion of the actual installation, this example runs the IVP to determine whether the installation was correct.
$ @SYS$UPDATE:VMSINSTAL HSM022 DISK$:[DIR]
OpenVMS VAX Software Product Installation Procedure V6.1
It is 10-JAN-1997 at 14:40.
Enter a question mark (?) at any time for help.
* Are you satisfied with the backup of your system disk [YES]?
The following products will be processed:
HSM V3.0
Beginning installation of HSM V3.0 at 14:40
%VMSINSTAL-I-RESTORE, Restoring product save set A ...
%VMSINSTAL-I-RELMOVED, Product's release notes have been moved to SYS$HELP.
****************************************************************
* Hierarchical Storage Management (HSM) *
* for OpenVMS V3.0 Installation *
* *
* Copyright(c) Compaq Computer Corporation 1998,1999. *
* All Rights Reserved. Unpublished rights reserved under the *
* copyright laws of the United States. *
****************************************************************
* Do you want to purge files replaced by this installation [YES]?
* Do you want to run the IVP after the installation [YES]?
*** HSM License ***
Correct installation and operation of this software requires that one
of the following Product Authorization Keys (PAKs) reflecting your
software license be present on this system:
HSM-SERVER
HSM-USER
* Does this product have an authorization key registered and loaded [YES]?
*** HSM Mode ***
With this version, HSM can operate in one of two possible modes:
BASIC - The standalone HSM product which supports a limited number of nearline and offline devices.
PLUS - The integrated HSM product, integrated with Media and Device Management Services (MDMS) which supports an expanded number of nearline and offline devices.
NOTE: MDMS or SLS V2.5B or newer must be installed before installing HSM PLUS mode. Also, once files are shelved in PLUS mode, you may *not* change back to BASIC mode.
Enter BASIC or PLUS to select the mode in which you want HSM to operate.
* Enter the mode to install [PLUS]:
%HSM-I-MODE, Installing HSM V3.0 PLUS mode
%VMSINSTAL-I-ACCOUNT, This installation updates an ACCOUNT named HSM$SERVER.
%UAF-I-MDFYMSG, user record(s) updated
%VMSINSTAL-I-ACCOUNT, This installation updates an ACCOUNT named HSM$SERVER.
%UAF-I-MDFYMSG, user record(s) updated
%VMSINSTAL-I-ACCOUNT, This installation updates an ACCOUNT named HSM$SERVER.
%UAF-I-MDFYMSG, user record(s) updated
%HSM-I-CHKQUO, Checking for DISKQUOTAs on device SYS$SYSDEVICE:
%VMSINSTAL-I-RESTORE, Restoring product save set B ...
%HSM-I-CHKMRU, Checking version of Media Robot Utility (MRD$SHR.EXE)
%HSM-I-CHKHSD, Checking for updated version of HSDRIVER
The file SYS$STARTUP:HSM$STARTUP.COM contains specific commands needed to start the HSM Software. This file should not be modified. To start the software at system startup time, add the line
$ @SYS$STARTUP:HSM$STARTUP.COM
to the system startup procedure SYS$MANAGER:SYSTARTUP_VMS.COM
The file SYS$SYSTEM:SETFILENOSHELV.COM should be executed to set all system disk files as NON-SHELVABLE. This is important to preserve the integrity of your system disk. This procedure can submit SETFILENOSHELV.COM to a batch execution queue for you as a post-installation task.
* Do you want to submit SETFILENOSHELV.COM [YES]? NO
*****************************************************************
* IMPORTANT *
*****************************************************************
* When you upgrade to a new version of VMS you should invoke *
* SYS$SYSTEM:SETFILENOSHELV.COM again. The installation of VMS *
* does not and will not automatically set your system disk *
* files to NON-SHELVABLE for you. *
*****************************************************************
%HSM-I-DONEASK, No further questions will be asked during this installation.
-HSM-I-DONEASK, The installation should take less than 10 minutes to complete.
This software is proprietary to and embodies the confidential technology of Compaq Computer Corporation. Possession, use, or copying of this software and media is authorized only pursuant to a valid written license from Compaq or an authorized sublicensor.
Restricted Rights: Use, duplication, or disclosure by the U.S. Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of DFARS 252.227-7013, or in FAR 52.227-19, or in FAR 52.227-14 Alt. III, as applicable.
%VMSINSTAL-I-MOVEFILES, Files will now be moved to their target directories...
%HSMPOST-I-START, executing HSM Post-Installation procedure
%HSMPOST-I-SMUDBCONVERT, converting SMU databases
%SMUDBCONVERT-I-ARCHIVE, converting SMU archive database
%SMUDBCONVERT-I-CURRENT, SMU archive database conversion not required
%SMUDBCONVERT-I-CACHE, converting SMU cache database
%SMUDBCONVERT-I-CURRENT, SMU cache database conversion not required
%SMUDBCONVERT-I-CONFIG, converting SMU shelf database
%SMUDBCONVERT-S-CONFIG, SMU shelf database converted
%SMUDBCONVERT-I-DEVICE, converting SMU device database
%SMUDBCONVERT-S-DEVICE, SMU device database converted
%SMUDBCONVERT-I-POLICY, converting SMU policy database
%SMUDBCONVERT-S-POLICY, SMU policy database converted
%SMUDBCONVERT-I-VOLUME, converting SMU volume database
%SMUDBCONVERT-S-VOLUME, SMU volume database converted
%HSMPOST-S-SMUDBCONVERT, SMU databases successfully converted
%HSMPOST-I-CATCONVERT, converting default catalog
%HSMPOST-I-CATCURRENT, catalog conversion not required
%HSMPOST-I-CATATTRUPD, updating catalog file attributes
%HSMPOST-S-CATATTRUPD, catalog file attributes updated
%HSMPOST-I-SETMODE, setting HSM mode to PLUS
%HSMPOST-I-DONE, HSM post-installation procedure complete
*** HSM for OpenVMS V3.0 Installation Verification Procedure (IVP) ***
Copyright (c) Compaq Computer Corporation 1998, 1999
%HSM-I-IVPSTART, starting HSM shelving facility on node NODE
%SMU-S-SHP_STARTED, shelf handler process started 000000C3
%SMU-S-PEP_STARTED, policy execution process started 000000C4
HSM Shelf Handler version - V2.x (BLxx), Oct 20 1997
HSM Shelving Driver version - V2.x (BLxx), Oct 20 1997
HSM Policy Execution version - V2.x (BLxx), Oct 20 1997
HSM Shelf Management version - V2.x (BLxx), Oct 20 1997
HSM for OpenVMS is enabled for Shelving and Unshelving
Facility history:
Created: 10-JAN-1997 14:52:28.26
Revised: 10-JAN-1997 14:52:41.83
Designated servers: Any cluster member
Current server: NODE
Catalog server: Disabled
Event logging: Audit Error Exception
HSM mode: Plus
Remaining license: Unlimited
%SMU-I-CACHE_CREATED, cache device _NODE$DKB300: created
%SMU-I-SHELF_UPDATED, shelf HSM$DEFAULT_SHELF updated
%SMU-I-VOLUME_CREATED, volume _NODE$DKB300: created
%SHELVE-S-SHELVED, file SYS$COMMON:[SYSTEST.HSM]HSM_IVP1.TMP;1 shelved
%UNSHELVE-S-UNSHELVED, file SYS$COMMON:[SYSTEST.HSM]HSM_IVP1.TMP;1 unshelved
%SHELVE-S-SHELVED, file SYS$COMMON:[SYSTEST.HSM]HSM_IVP2.TMP;1 shelved
%HSM-I-UNSHLVPRG, unshelving file NODE$DKB300:[SYSCOMMON.SYSTEST.HSM]HSM_IVP2.TMP;1
%SMU-I-POLICY_CREATED, policy HSM$IVP_POLICY created
%SMU-I-SCHED_CREATED, scheduled policy HSM$IVP_POLICY for volume _NODE$DKB300: was created on server NODE
Job _NODE$DKB300: (queue HSM$POLICY_NODE, entry 5) started on HSM$POLICY_NODE
%HSM-I-IVPWAIT, waiting for HSM$IVP_POLICY to execute...
%HSM-I-IVPWAIT, waiting for HSM$IVP_POLICY to execute...
%UNSHELVE-S-UNSHELVED, file SYS$COMMON:[SYSTEST.HSM]HSM_IVP3.TMP;1 unshelved
%HSM-I-IVPSHUT, shutting down HSM shelving facility on node NODE
%HSM-I-IVPSHUTWAIT, waiting for HSM to shutdown...
%HSM-I-IVPSHUTWAIT, waiting for HSM to shutdown...
*** The IVP for HSM V3.0 was successful! ***
Hierarchical Storage Management (HSM) for OpenVMS, Version V3.0
Copyright Compaq Computer Corporation 1998 - 1999. All Rights Reserved.
%SMU-S-SHP_STARTED, shelf handler process started 00000103
%SMU-S-PEP_STARTED, policy execution process started 00000104
HSM for OpenVMS is enabled for Shelving and Unshelving
Facility history:
Created: 10-JAN-1997 14:51:04.15
Revised: 10-JAN-1997 14:54:03.89
Designated servers: Any cluster member
Current server: NODE
Catalog server: Disabled
Event logging: Audit Error Exception
HSM mode: Plus
Remaining license: Unlimited
HSM Software has been successfully started
Installation of HSM V3.0 completed at 14:55
VMSINSTAL procedure done at 14:55
The following is the list of logical names entered into the logical name tables when HSM software is installed. These names are defined by the product's startup file. They are automatically entered into these logical name tables whenever the system reboots or whenever the software is invoked.
(LNM$PROCESS_TABLE)
(LNM$JOB_8CE40840)
(LNM$GROUP_000107)
(LNM$SYSTEM_TABLE)
"HSM$CATALOG" = "DISK$USER1:[HSM$SERVER.CATALOG]"
"HSM$FAULT_AFTER_OPEN" = "PCFS_SERVER, PWRK$LMSRV"
"HSM$FAULT_ON_OPEN" = "NFS$SERVER"
"HSM$LOG" = "DISK$USER1:[HSM$SERVER.LOG]"
"HSM$MANAGER" = "DISK$USER1:[HSM$SERVER.MANAGER]"
"HSM$PEP_REQUEST" = "MBA454:"
"HSM$PEP_RESPONSE" = "MBA455:"
"HSM$PEP_TERMINATION" = "MBA427:"
"HSM$REPACK_DURATION" = "0"
"HSM$ROOT" = "DISK$AIM2:[HSM$ROOT.]"
"HSM$SHP_REQUEST" = "MBA451:"
"HSM$SHP_RESPONSE" = "MBA452:"
"HSM$SHP_URGENT" = "MBA450:"
(DECW$LOGICAL_NAMES)
The HSM installation procedure creates several files on your system. HSM Files Installed lists and describes the files installed on server and client nodes. File names preceded by an asterisk (*) are installed only on server nodes.
The HSM/MDMS installation procedure installs files and defines logical names on your system. This appendix lists the names of the files installed and the logical names that are added to the system logical name table. MDMS File Names lists the names of the files installed, and MDMS Logical Names lists the logical names that are added to the system logical name table.
MDMS Installed Files contains the names of all MDMS files created on the system after MDMS V3.0 is successfully installed.
When the MDMS installation procedure is complete, logical names are entered into the system logical name table and stored in the startup file, SYS$STARTUP:MDMS$SYSTARTUP.COM. They are automatically entered into the system logical name table whenever the system reboots or whenever MDMS is started with this command:
MDMS Logical Names describes the logical names in the system table.
This appendix explains MDMS user rights and privileges.
Every MDMS user (or potential user) is assigned zero or more rights in the SYSUAF file.
These rights are examined on a per-command basis to determine whether a user has sufficient privilege to issue a command. The command is accepted for processing only if the user has sufficient privilege; if the user has no applicable rights, the entire command is rejected.
Each right has a name of the form MDMS_right (for example, MDMS_SET_PROTECTED).
Rights are looked up on the client OpenVMS node that receives the request; as such, each user must have an account on the client node.
MDMS has the following rights:
These rights are designed for a specific kind of user, to support a typical MDMS installation, and to make the assignment of rights to users easy. The three high-level MDMS rights, the default right, the administrator right, and the additional right, are described below.
You can disable the mapping of SYSPRV to MDMS_ALL_RIGHTS using the SET DOMAIN command.
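For example, the mapping could be disabled as follows, using the /[NO]SYSPRV qualifier listed in the command summary at the end of this appendix:
SET DOMAIN /NOSYSPRV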
Each command or command option will be tagged with one or more low-level rights that are needed to perform the operation. Where more than one right is specified, the command indicates the appropriate combination of rights needed. The MDMS administrator can assign a set of low-level rights to each high-level right. The administrator can then simply assign the high-level right to the user.
MDMS translates the high-level right to respective low-level rights while processing a command. For additional flexibility, the user can be assigned a combination of high-level and low-level rights. The result will be a sum of all rights defined.
The default set of mapping of high-level to low-level rights will be assigned at installation (by default) and stored in the domain record. However, the MDMS administrator can change these assignments by using the SET DOMAIN command.
The low-level rights are designed to be applied to operations. A given command, with a given set of qualifiers or options, requires the sum of the rights needed for the command and all supplied options. In many cases some options require more privilege than the command, and that higher privilege will be applied to the entire command if those options are specified.
The following are usable low-level rights:
MDMS_ALL_RIGHTS - Enable all operations
MDMS can be defined to recognize ABS rights and map them to MDMS rights. This capability is disabled by default and can be enabled with a SET DOMAIN command. The exact mapping for ABS rights is as in Table
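For example, recognition of ABS rights could be enabled as follows, using the /ABS_RIGHTS qualifier listed in the command summary below:
SET DOMAIN /ABS_RIGHTS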
This section defines the default high to low-level mapping for each high-level right.
SET DOMAIN
/[NO]ABS_RIGHTS
/ADD
/[NO]APPLICATION_RIGHTS[=(right[,...])]
/[NO]DEFAULT_RIGHTS[=(right[,...])]
/[NO]OPERATOR_RIGHTS[=(right[,...])]
/REMOVE
/[NO]SYSPRV
/[NO]USER_RIGHTS[=(right[,...])]
SET DOMAIN /OPERATOR_RIGHTS=MDMS_SET_PROTECTED /ADD
This command adds the MDMS_SET_PROTECTED right to the operator rights list.