3 Configuring Storage Hosts and Devices

Carry out the storage configuration tasks outlined in this chapter before proceeding further with SAM-QFS installation and configuration. The chapter outlines the following topics:

Configuring Primary Storage

In a SAM-QFS file system, primary disk or solid-state disk devices store files that are being actively used and modified. Follow the guidelines below when configuring disk or solid-state disk devices for the cache.

Configure Devices for the Primary Cache

  1. Determine the total capacity required to hold the data that you plan to store in each file system.
  2. Allow an additional 10% of the storage capacity allocated for the file system to store file-system metadata.
  3. If you are preparing for a high-performance ma-type file system, configure one, hardware-controlled, four-disk, RAID 10 (1+0) volume group for each mm metadata device in the file-system configuration. Consider using solid-state disks for maximum performance.

    The characteristics of striped-mirror, RAID 10 arrays are ideal for storing SAM-QFS metadata. Storage hardware is highly redundant, so critical metadata is protected. Throughput is higher and latency is lower than in most other RAID configurations. An array that is controlled by dedicated controller hardware generally offers higher performance than an array controlled by software running on a shared, general-purpose processor.

    Solid-state devices are particularly useful for storing metadata that is, by its nature, frequently updated and frequently read.

  4. If you are using an external disk array for primary cache storage, configure 3+1 or 4+1 RAID 5 volume groups for each md or mr device in the file-system configuration. Configure one logical volume (LUN) on each volume group.

    For a given number of disks, smaller, 3+1 and 4+1 RAID 5 volume groups provide greater parallelism and thus higher input/output (I/O) performance than larger volume groups. The individual disk devices in RAID 5 volume groups do not operate independently—from an I/O perspective, each volume group acts much like a single device. So dividing a given number of disks into 3+1 and 4+1 volume groups creates more independent devices, better parallelism, and less I/O contention than otherwise equivalent, larger configurations.

    Smaller RAID groups offer less capacity, due to the higher ratio of parity to storage. But, for most users, this is more than offset by the performance gains. In an archiving file system, the small reduction in disk cache capacity is often completely offset by the comparatively unlimited capacity available in the archive.

    Configuring multiple logical volumes (LUNs) on a volume group makes I/O to the logically separate volumes contend for a set of resources that can service only one I/O at a time. This increases I/O-related overhead and reduces throughput.

  5. Next, start Configuring Archival Storage.
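
The capacity planning in steps 1 and 2 amounts to simple arithmetic. The sketch below adds the 10% metadata allowance to a hypothetical data capacity; the 100 TB figure is an example, not a recommendation:

```shell
# Hypothetical sizing check for one file system (example values only).
data_tb=100                          # planned data capacity, in TB
meta_tb=$(( (data_tb + 9) / 10 ))    # ~10% metadata allowance, rounded up
total_tb=$(( data_tb + meta_tb ))
echo "data=${data_tb}TB metadata=${meta_tb}TB total=${total_tb}TB"
```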

Configuring Archival Storage

Carry out the following tasks:

Zone SAN-attached Devices

  1. Zone the storage area network (SAN) to allow communication between the drive and the host bus adapter.
  2. Make sure that the host can see the devices on the SAN. Enter the Solaris configuration administration command cfgadm with the -al (attachment-points list) and -o show_SCSI_LUN options. Examine the output for the World Wide Name (WWN) of the drive port.

    The first column of the output displays the attachment-point ID (Ap_id), which consists of the controller number of the host bus adapter and the WWN, separated by colons. The -o show_SCSI_LUN option displays all LUNs on the node if the node is the bridged drive controlling a media changer via an ADI interface.

    root@solaris:~# cfgadm -al -o show_SCSI_LUN
    Ap_Id                  Type  Receptacle  Occupant      Condition
    c2::500104f000937528   tape  connected   configured    unknown
    c3::50060160082006e2,0 tape  connected   unconfigured  unknown
  3. If the drive’s WWN is not listed in the output of cfgadm -al -o show_SCSI_LUN, the drive is not visible. Something is wrong with the SAN configuration. So recheck SAN connections and the zoning configuration. Then repeat the preceding step.
  4. If the output of the cfgadm -al command shows that a drive is unconfigured, run the command again, this time using the -c (configure) switch.

    The command builds the necessary device files in /dev/rmt:

    root@solaris:~# cfgadm -al
    Ap_Id                  Type  Receptacle  Occupant      Condition
    c2::500104f000937528   tape  connected   configured    unknown
    c3::50060160082006e2,0 tape  connected   unconfigured  unknown
    root@solaris:~# cfgadm -c configure 50060160082006e2,0
  5. Verify the association between the device name and the World Wide Name. Use the command ls -al /dev/rmt | grep WWN, where WWN is the World Wide Name.
    root@solaris:~# ls -al /dev/rmt | grep 50060160082006e2,0
    lrwxrwxrwx 1 root root 94 May 20 05:05 3un -> \
    ../../devices/pci@1f,700000/SUNW,qlc@2/fp@0,0/st@w50060160082006e2,0:
  6. If you have the recommended minimum Solaris patch level, stop here.
  7. Otherwise, get the target ID for your device.
  8. Edit /kernel/drv/st.conf. Add the vendor-specified entry to the tape-config-list, specifying the target ID determined above.
  9. Force reload the st module. Use the command update_drv -f st.
    root@solaris:~# update_drv -f st
    root@solaris:~#
  10. Next, go to Configuring Archival Disk Storage.
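
The visibility and configuration checks in steps 2 through 4 can be scripted against saved cfgadm output. The helper below is a hypothetical sketch: it parses sample text that mirrors the listing above rather than calling cfgadm itself, so substitute live command output on a real Solaris host:

```shell
# Hypothetical parser: given saved `cfgadm -al` output, report the
# occupant state for a drive WWN, or "missing" if it is not visible.
check_wwn() {
    # $1 = WWN to look for; stdin = saved cfgadm -al output
    awk -v w="$1" '$1 ~ w { print $4; found=1 } END { if (!found) print "missing" }'
}
sample='c2::500104f000937528 tape connected configured unknown
c3::50060160082006e2,0 tape connected unconfigured unknown'
state=$(printf '%s\n' "$sample" | check_wwn "50060160082006e2")
echo "$state"
```

A "missing" result points at SAN zoning (step 3); "unconfigured" calls for cfgadm -c configure (step 4).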

Configuring Archival Disk Storage

You can use ZFS, UFS, QFS, or NFS file systems for the volumes in a disk archive. For best archiving and staging performance, configure file systems and underlying storage to maximize the bandwidth available for archiving and staging, while minimizing contention between archiving and staging jobs and between SAM-QFS and other applications. Observe the following guidelines:

  1. Use dedicated file systems, so that SAM-QFS does not contend with other applications and users for access to the file system.
  2. Configure one SAM-QFS archival disk volume per file system or ZFS data set and set a quota for the amount of storage space that the archival disk volume can occupy.

    When the storage space for an archive volume is dynamically allocated from a pool of shared disk devices, make sure that the underlying physical storage is not oversubscribed. Quotas help keep SAM-QFS archiving processes from trying to use more of the aggregate storage than is actually available.

  3. Size each file system between 10 and 20 terabytes, if possible.
  4. When the available disk resources allow, configure multiple file systems, so that individual SAM-QFS archiving and staging jobs do not contend with each other for access to the file system. A total of between fifteen and thirty archival file systems is optimal.
  5. Configure each file system on dedicated devices, so that individual archiving and staging jobs do not contend with each other for access to the same underlying hardware.

    Do not use the subdirectories of a single file system as separate archival volumes.

    Do not configure two or more file systems on LUNs that reside on the same physical drive or RAID group.

  6. Now go to Configuring Archival Tape Storage.
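
Guideline 2 above, one quota-capped archival volume per dedicated file system, might look like the following on ZFS. The pool and dataset names are hypothetical, and because the zfs commands need a real pool, the loop below only generates them for review:

```shell
# Hypothetical sketch: emit one `zfs create` command per archival disk
# volume, each capped at 15 TB so archiving cannot oversubscribe the
# shared pool. Pool/dataset names and the quota size are examples only.
cmds=$(for n in 1 2 3 4; do
    printf 'zfs create -o quota=15T archivepool/diskvol%02d\n' "$n"
done)
echo "$cmds"
```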

Configuring Archival Tape Storage

Carry out the following tasks:

Determine the Order in Which Drives are Installed in the Library

If your automated library contains more than one drive, the order of the drives in the mcf file must be the same as the order in which the drives are seen by the library controller. This order can be different from the order in which devices are seen on the host and reported in the host /var/adm/messages file.

For each SAM-QFS metadata server and datamover host, determine the drive order by carrying out the tasks listed below:

Gather Drive Information for the Library and the Solaris Host
  1. Consult the library documentation. Note how drives and targets are identified. If there is a local operator panel, see how it can be used to determine drive order.
  2. If the library has a local operator panel mounted on the library, use it to determine the order in which drives attach to the controller. Determine the SCSI target identifier or World Wide Name of each drive.
  3. Log in to the Solaris host as root.
    root@solaris:~# 
    
  4. List the Solaris logical device names in /dev/rmt/, redirecting the output to a text file.

    In the example, we redirect the listings for /dev/rmt/ to the file device-mappings.txt in the root user’s home directory:

    root@solaris:~# ls -l /dev/rmt/ > /root/device-mappings.txt
  5. Now, Map the Drives in a Direct-Attached Library to Solaris Device Names or Map the Drives in an ACSLS-Attached Library to Solaris Device Names.

Map the Drives in a Direct-Attached Library to Solaris Device Names

For each Solaris logical drive name listed in /dev/rmt/ and each drive that the library assigns to the SAM-QFS server host, carry out the following procedure:

  1. If you are not already logged in to the SAM-QFS Solaris host, log in as root.
    root@solaris:~# 
    
  2. In a text editor, open the device mappings file that you created in the procedure “Gather Drive Information for the Library and the Solaris Host”, and organize it into a simple table.

    You will need to refer to this information in subsequent steps. In the example, we are using the vi editor to delete the permissions, ownership, and date attributes from the /dev/rmt/ list, while adding headers and space for library device information:

    root@solaris:~# vi /root/device-mappings.txt
    LIBRARY SOLARIS      SOLARIS
    DEVICE  LOGICAL      PHYSICAL
    NUMBER  DEVICE       DEVICE
    ------- ----------   -------------------------------------------
            /dev/rmt/0 -> ../../devices/pci@1f,4000/scsi@2,1/st@2,0:
            /dev/rmt/1 -> ../../devices/pci@1f,4000/scsi@4,1/st@5,0:
            /dev/rmt/2 -> ../../devices/pci@1f,4000/scsi@4,1/st@6,0:
            /dev/rmt/3 -> ../../devices/pci@1f,4000/scsi@4/st@1,0:
            /dev/rmt/4 -> ../../devices/pci@1f,4000/scsi@4/st@2,0:
  3. On the library, make sure that all drives are empty.
  4. Load a tape into the first drive in the library that you have not yet mapped to a Solaris logical device name.

    For the purposes of the examples below, we load an LTO4 tape into an HP Ultrium LTO4 tape drive.

  5. Identify the Solaris /dev/rmt/ entry that corresponds to the drive that mounted the tape. Run the command mt -f /dev/rmt/number status, where number identifies the drive in /dev/rmt/, until you find the drive that holds the tape.

    In the example, the drive at /dev/rmt/0 is empty, but the drive at /dev/rmt/1 holds the tape. So the drive that the library identifies as drive 1 corresponds to Solaris /dev/rmt/1:

    root@solaris:~# mt -f /dev/rmt/0 status /dev/rmt/0: no tape loaded or drive offline root@solaris:~# mt -f /dev/rmt/1 status HP Ultrium LTO 4 tape drive: sense key(0x0)= No Additional Sense residual= 0 retries= 0 file no= 0 block no= 3
  6. Open the device-mappings file that you created in the previous procedure in a text editor.

    In the example, we use the vi editor:

    root@solaris:~# vi /root/device-mappings.txt
    LIBRARY SOLARIS      SOLARIS
    DEVICE  LOGICAL      PHYSICAL
    NUMBER  DEVICE       DEVICE
    ------- ----------   -------------------------------------------
            /dev/rmt/0 -> ../../devices/pci@1f,4000/scsi@2,1/st@2,0:
            /dev/rmt/1 -> ../../devices/pci@1f,4000/scsi@4,1/st@5,0:
            /dev/rmt/2 -> ../../devices/pci@1f,4000/scsi@4,1/st@6,0:
            /dev/rmt/3 -> ../../devices/pci@1f,4000/scsi@4/st@1,0:
  7. Locate the entry for the Solaris device that holds the tape, and enter the library’s device identifier in the space provided. Then save the file.

    In the example, enter 1 in the LIBRARY DEVICE NUMBER field of the row for /dev/rmt/1:

    root@solaris:~# vi /root/device-mappings.txt 
    LIBRARY SOLARIS      SOLARIS 
    DEVICE  LOGICAL      PHYSICAL
    NUMBER  DEVICE       DEVICE
    ------- ----------   -------------------------------------------
            /dev/rmt/0 -> ../../devices/pci@1f,4000/scsi@2,1/st@2,0:
       1    /dev/rmt/1 -> ../../devices/pci@1f,4000/scsi@4,1/st@5,0:
            /dev/rmt/2 -> ../../devices/pci@1f,4000/scsi@4,1/st@6,0:
            /dev/rmt/3 -> ../../devices/pci@1f,4000/scsi@4/st@1,0:
    :w
  8. Unload the tape.
  9. Repeat this procedure until the device-mappings file holds entries that map all devices that the library assigns to the SAM-QFS host to Solaris logical device names. Then save the file and close the editor.
    root@solaris:~# vi /root/device-mappings.txt 
    LIBRARY SOLARIS      SOLARIS 
    DEVICE  LOGICAL      PHYSICAL
    NUMBER  DEVICE       DEVICE
    ------- ----------   -------------------------------------------
       2    /dev/rmt/0 -> ../../devices/pci@1f,4000/scsi@2,1/st@2,0:
       1    /dev/rmt/1 -> ../../devices/pci@1f,4000/scsi@4,1/st@5,0:
       3    /dev/rmt/2 -> ../../devices/pci@1f,4000/scsi@4,1/st@6,0:
       4    /dev/rmt/3 -> ../../devices/pci@1f,4000/scsi@4/st@1,0:
    :wq
    root@solaris:~#
  10. Keep the mappings file. You will need the information for Configuring the Basic File System (Chapter 6), and you may wish to include it when Backing Up the SAM-QFS Configuration (Chapter 13).
  11. Next, go to “Configure a Direct-Attached Library”.
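
The probe in step 5 can be scripted. In this hypothetical sketch, mt is stubbed with the sample responses shown earlier so that the loop's logic is self-contained; on a real Solaris host, delete the stub so the native mt command is used:

```shell
# Sketch of the step-5 probe loop. The mt stub below substitutes the
# sample responses shown above; remove it on a real Solaris host.
mt() {  # stub: drive 1 holds the tape, all others are empty
    case "$2" in
        /dev/rmt/1) echo "HP Ultrium LTO 4 tape drive: sense key(0x0)= No Additional Sense file no= 0 block no= 3" ;;
        *)          echo "$2: no tape loaded or drive offline" ;;
    esac
}
loaded=""
for n in 0 1 2 3; do
    if mt -f /dev/rmt/$n status | grep -q 'file no='; then
        loaded="/dev/rmt/$n"
        break
    fi
done
echo "tape is in $loaded"
```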

Map the Drives in an ACSLS-Attached Library to Solaris Device Names
  1. If you are not already logged in to the SAM-QFS Solaris host, log in as root.
    root@solaris:~# 
    
  2. In a text editor, open the device mappings file that you created in the procedure “Gather Drive Information for the Library and the Solaris Host”, and organize it into a simple table.

    You will need to refer to this information in subsequent steps. In the example, we are using the vi editor to delete the permissions, ownership, and date attributes from the /dev/rmt/ list, while adding headers and space for library device information:

    root@solaris:~# vi /root/device-mappings.txt
    LOGICAL DEVICE  DEVICE SERIAL NUMBER  ACSLS DEVICE ADDRESS
    --------------  --------------------  ----------------------------------
    /dev/rmt/0
    /dev/rmt/1
    /dev/rmt/2
    /dev/rmt/3
  3. For each logical device name listed in /dev/rmt/, display the device serial number. Use the command luxadm display /dev/rmt/number, where number identifies the drive in /dev/rmt/.

    In the example, we obtain the serial number HU92K00200 for device /dev/rmt/0:

    root@solaris:~# luxadm display /dev/rmt/0
    DEVICE PROPERTIES for tape: /dev/rmt/0
      Vendor:       HP
      Product ID:   Ultrium 4-SCSI
      Revision:     G25W
      Serial Num:   HU92K00200
      ...
      Path status:  Ready
    root@solaris:~#
  4. Enter the serial number in the corresponding row of the device-mappings.txt file.

    In the example, we record the serial number of device /dev/rmt/0, HU92K00200, in the row for logical device /dev/rmt/0:

    root@solaris:~# vi /root/device-mappings.txt
    LOGICAL DEVICE  DEVICE SERIAL NUMBER  ACSLS DEVICE ADDRESS
    --------------  --------------------  ----------------------------------
    /dev/rmt/0      HU92K00200
    /dev/rmt/1
    /dev/rmt/2
    /dev/rmt/3
    :wq
    root@solaris:~#
  5. Repeat the two preceding steps until you have identified the device serial numbers for all logical devices listed in /dev/rmt/ and recorded the results in the device-mappings.txt file.

    In the example, there are four logical devices:

    root@solaris:~# vi /root/device-mappings.txt
    LOGICAL DEVICE  DEVICE SERIAL NUMBER  ACSLS DEVICE ADDRESS
    --------------  --------------------  ----------------------------------
    /dev/rmt/0      HU92K00200
    /dev/rmt/1      HU92K00208
    /dev/rmt/2      HU92K00339
    /dev/rmt/3      HU92K00289
    :w
    root@solaris:~#
  6. For each device serial number mapped to /dev/rmt/, obtain the corresponding ACSLS drive address. Use the ACSLS command display drive * -f serial_num.

    In the example, we obtain the ACSLS addresses of devices HU92K00200 (/dev/rmt/0), HU92K00208 (/dev/rmt/1), HU92K00339 (/dev/rmt/2), HU92K00289 (/dev/rmt/3):

    ACSSA> display drive * -f serial_num
    2014-03-29 10:49:12 Display Drive
    Acs Lsm Panel Drive Serial_num
    0   2   10    12    331000049255
    0   2   10    16    331002031352
    0   2   10    17    HU92K00200
    0   2   10    18    HU92K00208
    0   3   10    10    HU92K00339
    0   3   10    11    HU92K00189
    0   3   10    12    HU92K00289
  7. Record each ACSLS drive address in the corresponding row of the device-mappings.txt file. Save the file, and close the text editor.
    root@solaris:~# vi /root/device-mappings.txt
    LOGICAL DEVICE  DEVICE SERIAL NUMBER  ACSLS DEVICE ADDRESS
    --------------  --------------------  ----------------------------------
    /dev/rmt/0      HU92K00200            (acs=0, lsm=2, panel=10, drive=17)
    /dev/rmt/1      HU92K00208            (acs=0, lsm=2, panel=10, drive=18)
    /dev/rmt/2      HU92K00339            (acs=0, lsm=3, panel=10, drive=10)
    /dev/rmt/3      HU92K00289            (acs=0, lsm=3, panel=10, drive=12)
    :wq
  8. Keep the mappings file. You will need the information for Configuring the Basic File System (Chapter 6), and you may wish to include it when Backing Up the SAM-QFS Configuration (Chapter 13).
  9. You configure Oracle StorageTek ACSLS network-attached libraries when you configure archiving file systems. So, if you are planning a high-availability file system, go to “Configuring Storage for High-Availability File Systems”. Otherwise, go to “Installing SAM-QFS Software”.
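
The serial-number lookup in step 3 can be scripted against saved luxadm output. This hypothetical sketch parses sample text that mirrors the listing above; on a real host, pipe the live output of luxadm display instead:

```shell
# Hypothetical parser: extract the Serial Num field from luxadm-style
# output. The sample block stands in for `luxadm display /dev/rmt/0`.
sample='DEVICE PROPERTIES for tape: /dev/rmt/0
  Vendor:       HP
  Product ID:   Ultrium 4-SCSI
  Revision:     G25W
  Serial Num:   HU92K00200'
serial=$(printf '%s\n' "$sample" | awk '/Serial Num:/ {print $3}')
echo "$serial"
```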

Configure a Direct-Attached Library

By default, Solaris 10 update 6 and later versions of the operating system control robotic media libraries using the generic SCSI driver sgen. So, SAM-QFS Release 5.4 and later uses the default sgen driver in place of the legacy SAM-QFS samst driver.

  1. Physically connect the library and drives to the SAM-QFS server host.
  2. If you are installing SAM-QFS for the first time or upgrading a SAM-QFS configuration on Solaris 11, stop once the hardware has been physically connected.

    The installation software will use the sgen driver automatically and update driver aliases and any existing /etc/opt/SUNWsamfs/mcf as necessary.

  3. If you are installing SAM-QFS on a Solaris 10 system, log in to the server host as root, and find out which version of Solaris is installed. Use the command cat /etc/release.
    root@solaris:~# cat /etc/release
    Oracle Solaris 10 9/10 s10s_u9wos_14a SPARC
    Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.
    Assembled 11 August 2010
    root@solaris:~#
  4. See if one of the driver aliases in the list below is assigned to the sgen driver in your version of Solaris. Use the command grep scs.*,08 /etc/driver_aliases.

    Depending on the version of Solaris 10, the sgen driver may be assigned any of the following aliases:

    • scsa,08.bfcp and/or scsa,08.bvhci
    • scsiclass,08

    In the example, Solaris is using the scsiclass,08 alias for the sgen driver:

    root@solaris:~# grep scs.*,08 /etc/driver_aliases
    sgen "scsiclass,08"
    root@solaris:~#
  5. If the grep command returns sgen "alias", where alias is an alias in the list above, stop here.

    The sgen driver is installed and assigned to the alias.

  6. If the grep command returns some-driver "alias", where some-driver is some driver other than sgen and where alias is one of the aliases listed above, then the alias is already assigned to the other driver. So Create a Path-Oriented Alias for the sgen Driver.
  7. If the command grep scs.*,08 /etc/driver_aliases does not return any output, the sgen driver is not installed. So install it. Use the command add_drv -i scsiclass,08 sgen.

    In the example, the grep command does not return anything. So we install the sgen driver:

    root@solaris:~# grep scs.*,08 /etc/driver_aliases
    root@solaris:~# add_drv -i scsiclass,08 sgen
  8. If the command add_drv -i scsiclass,08 sgen returns the message Driver (sgen) is already installed, the driver is already installed but not attached. So attach it now. Use the command update_drv -a -i scsiclass,08 sgen.

    In the example, the add_drv command indicates that the driver is already installed. So we attach the driver:

    root@solaris:~# add_drv -i scsiclass,08 sgen
    Driver (sgen) is already installed.
    root@solaris:~# update_drv -a -i scsiclass,08 sgen
  9. If the command grep scs.*,08 /etc/driver_aliases shows that the alias scsiclass,08 is assigned to the sgen driver, stop here. The driver is properly configured.
    root@solaris:~# grep scs.*,08 /etc/driver_aliases
    sgen "scsiclass,08"
    root@solaris:~# 
    

    The library has now been configured using the sgen driver.

  10. If you are configuring a high-availability file system, see Configuring Storage for High-Availability File Systems.
  11. Otherwise, go to “Installing SAM-QFS Software”.
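
The decision tree in steps 4 through 8 can be summarized in a short sketch. The grep result is stubbed here with the sample output shown above; on a real host, examine the actual /etc/driver_aliases file:

```shell
# Hypothetical decision sketch for the sgen alias check. The variable
# below stands in for `grep 'scs.*,08' /etc/driver_aliases` output.
aliases='sgen "scsiclass,08"'
case "$aliases" in
    'sgen '*) action="done: sgen already owns the alias" ;;
    '')       action="install: add_drv -i scsiclass,08 sgen" ;;
    *)        action="conflict: create a path-oriented alias for sgen" ;;
esac
echo "$action"
```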

Create a Path-Oriented Alias for the sgen Driver

If the expected sgen alias is already assigned to another driver, you need to create a path-oriented alias that attaches the specified library using sgen, without interfering with existing driver assignments. Proceed as follows:

  1. Log in to the SAM-QFS server host as root.
    root@solaris:~# 
    
  2. Display the system configuration. Use the command cfgadm -vl.

    Note that cfgadm output is formatted using a two-row header and two rows per record:

    root@solaris:~# cfgadm -vl
    Ap_Id                Receptacle  Occupant    Condition  Information  When
    Type        Busy  Phys_Id
    c3                   connected   configured  unknown    unavailable
    scsi-sas    n     /devices/pci@0/pci@0/pci@2/scsi@0:scsi
    c5::500104f0008e6d78 connected   configured  unknown    unavailable
    med-changer y     /devices/pci@0/.../SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78
    ...
    root@solaris:~#
  3. In the output of cfgadm -vl, find the record for the library. Look for med-changer in the Type column of the second row of each record.

    In the example, we find the library in the second record:

    root@solaris:~# cfgadm -vl
    Ap_Id                Receptacle  Occupant     Condition Information  When
    Type        Busy  Phys_Id
    c3                   connected   configured   unknown   unavailable  
    scsi-sas    n     /devices/pci@0/pci@0/pci@2/scsi@0:scsi
    c5::500104f0008e6d78 connected   configured   unknown   unavailable  
    med-changer y     /devices/pci@0/.../SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78
    ...
    root@solaris:~#
  4. Get the physical path that will serve as the new path-oriented alias. Remove the substring /devices from the entry in the Phys_Id column in the output of cfgadm -vl.

    In the example, the Phys_Id column of the media changer record contains the path /devices/pci@0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78, so we select the portion of the string that follows /devices as the alias (note that the physical path shown below has been abbreviated to fit the space available):

    root@solaris:~# grep scsiclass,08 /etc/driver_aliases
    sdrv "scsiclass,08"
    root@solaris:~# cfgadm -vl
    Ap_Id                Receptacle  Occupant     Condition Information  When
    Type        Busy  Phys_Id
    c3                   connected   configured   unknown   unavailable  
    scsi-sas    n     /devices/pci@0/pci@0/pci@2/scsi@0:scsi
    c5::500104f0008e6d78 connected   configured   unknown   unavailable  
    med-changer y     /devices/pci@0/.../SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78
    ...
    root@solaris:~#
  5. Create the path-oriented alias and assign it to the sgen driver. Use the command update_drv -d -i '"/path-to-library"' sgen, where path-to-library is the path that you identified in the preceding step.

    In the example, we use the library path to create the path-oriented alias '"/pci@0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78"' (note the single and double quotation marks). The command is a single line, but has been formatted as two to fit the page layout:

    root@solaris:~# update_drv -d -i \
    '"/pci@0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78"' sgen
    root@solaris:~#

    The library has now been configured using the sgen driver.

  6. If you are configuring a high-availability file system, go to Configuring Storage for High-Availability File Systems.
  7. Otherwise, go to “Installing SAM-QFS Software”.
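
The string surgery in steps 4 and 5 reduces to plain shell parameter expansion. The Phys_Id value below is the sample from the text; the final update_drv command is shown as a comment because it must run on the real host:

```shell
# Sketch of steps 4-5: strip the leading /devices from the sample
# Phys_Id to form the path-oriented alias, then (on the real host)
# assign it to sgen with update_drv.
phys_id='/devices/pci@0/pci@0/pci@9/SUNW,qlc@0,1/fp@0,0:fc::500104f0008e6d78'
alias_path="${phys_id#/devices}"
echo "$alias_path"
# update_drv -d -i "\"$alias_path\"" sgen
```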

Configuring Storage for High-Availability File Systems

For optimal file system performance, the metadata and file data should be accessible through multiple interconnects and multiple disk controllers. In addition, plan to write file data to separate, redundant, highly available disk devices.

Plan to write your file system’s metadata to RAID-10 disks. You can write file data to either RAID-10 or RAID-5 disks.

If you want to configure a QFS shared file system on a cluster, you must provide highly available, redundant data paths and storage. To ensure redundant data paths, provide multiple host bus adapters (HBAs) configured from a single node, and configure Oracle Solaris I/O multipathing software (for more information, see the Oracle Solaris SAN Configuration and Multipathing Guide in the Oracle Solaris 11.1 Information Library, or see the stmsboot man page). You can provide redundant storage using either hardware or software RAID technology. Hardware-controlled RAID arrays can be configured as RAID-10 mirrors and/or RAID-5 volume groups. Software-controlled RAID-1 mirrors are configured using the multi-owner diskset feature of Oracle Solaris Cluster; no other software volume-management configurations are supported. See “Configure QFS Metadata Servers on SC-RAC Nodes Using Software RAID Storage” for details.

To determine redundancy, consult the hardware documentation for your disk controllers and disk devices. You need to know whether the disk controllers or disk devices that are reported by the cldevice show command are on redundant storage. For this information, see the storage controller vendor’s documentation and review the current controller configuration.