Hitachi Command Suite 8 User Guide

Tip: To display the most recent SplitTime in Device Manager after performing operations on a Copy-on-Write Snapshot or Thin Image copy pair, you need to refresh the storage system information.
Related concepts
• About replicating volumes (pair management) on page 307
Deleting command devices

If you decommission the pair management server, delete the command device. When you delete command devices, the communication channel between hosts and storage for replication commands is deleted.
Procedure
1. From the Actions menu, select Manage Replication to start Replication Manager.
2. From the Explorer menu, select Resources and then Storage Systems.
3. Expand the tree and select the desired storage system.
4. Click the Open link, and then on the Cmd Devs tab select command devices and click Delete Cmd Devices.
Result
The command devices you deleted no longer appear in the list of command devices.
8 Optimizing storage performance

This module describes how to improve storage performance.
□ About optimizing storage
□ About optimizing HBA configurations
□ About high temperature mode
□ Managing cache logical partitions
□ Data mobility
□ Data migration
About optimizing storage

Hitachi Command Suite allows you to manage storage by allocating volumes, expanding tiers and DP pools, and performing migration, based on information acquired from checking summaries, alerts, performance statistics, and the operating status of storage resources. Storage utilization can also be improved with effective management of HBA and cache (CLPR) resources.
Using HDT pools, you can manage performance and capacity to optimize storage by creating parity groups in the following dynamic tiers (an illustrative sketch follows the list):
• Tier 1 using parity groups for best performance
• Tier 2 using parity groups for next best performance
• Tier 3 using parity groups for capacity, independent of drive type or RAID level
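The tier layout can be pictured with a minimal sketch. This is not part of Hitachi Command Suite; the parity group names, drive types, and the data structure are hypothetical and only illustrate the three-tier idea described above.

# Hypothetical illustration of an HDT pool tier layout; not an HCS API.
# Tier 1 holds the fastest parity groups, Tier 2 the next fastest,
# and Tier 3 provides capacity regardless of drive type or RAID level.
hdt_pool_tiers = {
    1: [{"parity_group": "1-1", "drive": "SSD", "raid": "RAID5"}],    # best performance
    2: [{"parity_group": "1-2", "drive": "SAS", "raid": "RAID5"}],    # next best performance
    3: [{"parity_group": "1-3", "drive": "SAS", "raid": "RAID6"},     # capacity tier: mixed
        {"parity_group": "1-4", "drive": "SATA", "raid": "RAID6"}],   # drive types and RAID levels
}

for tier, groups in hdt_pool_tiers.items():
    names = ", ".join(g["parity_group"] for g in groups)
    print(f"Tier {tier}: parity groups {names}")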
Common storage optimization issues include:
• If the used capacity for a DP pool has reached or exceeded its threshold, or a volume that satisfies a capacity requirement cannot be created or assigned because of insufficient unused capacity, add DP pool volumes to increase the capacity of the DP pools. If the capacity of a specific drive is insufficient when using an HDT pool, increase the capacity by mixing different drive types or RAID levels in Tier 3. (A threshold-check sketch follows this list.)
    • If the usage rate of the file system has reached the threshold value, expand the file system to increase the capacity that can be allocated.
• If C/T delta values have degenerated and reached the threshold, use the Replication tab to confirm the degeneration factor and countermeasures, and use Device Manager, Replication Manager, or Storage Navigator to resolve the problem.
• If the performance of a DP pool has decreased and data I/O is slow, add more DP pool volumes to distribute loads within the DP pools. Another option is to perform volume migration to distribute I/O loads on the DP pools.
• When using the HDT pool, performance problems may occur at certain times. Ensure that monitoring occurs during periods when I/O loads are occurring. You can:
○ Start or stop the monitoring/relocation process manually in accordance with known times for load changes.
○ Cancel the monitoring process during periods of low activity.
• If HDT volume applications switch between online and batch processing, it can be helpful to save optimized volume data placements, by processing method, as profiles. By applying the corresponding profile before beginning processing, the data placement that fits the characteristics of the processing method is restored.
• When using HDT pools, you want to prioritize the data relocation of HDT volumes for which capacity and access patterns vary widely, but I/O operations decrease without relocating effectively. You can disable the relocation of HDT volumes for which the current data location presents no problems, to reduce the relocation load.
• When using HDT pools, important data can be allocated to a lower hardware tier because it has fewer I/O accesses. To prevent unwanted relocations, set a specific hardware tier for the HDT pool by configuring tiering (Tier 1, Tier 2, and Tier 3).
• When using HDT pools, use the flexibility of tiering to spread the data in a host volume across multiple layers of parity groups (high-speed, next highest speed, and low-speed) that are contained in a pool structured for this purpose.
• When using HDT pools, understand that if different drive types and/or RAID levels are mixed in a single tier, they will all be considered equal for data placement regardless of page access frequency. As a result, I/O performance will depend on the drive type characteristics and RAID level on which any given page resides.
• If the load on a volume is too high when volume data is backed up to tape, create a copy pair for the volume. Then do a tape backup by using the copied volume (as a secondary volume).
• If it is not possible to assign a high-performance volume to a host because all unassigned volumes are low performance, perform volume migration so that less frequently used data is migrated to a low-performance volume, and then assign the now-available high-performance volume to an appropriate host.
• If the usage frequency of an application increases, you can add an HBA and increase LUN paths to improve data transmission performance and meet throughput requirements.
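The DP pool capacity condition in the first item of this list can be illustrated with a minimal sketch. This is not Hitachi Command Suite logic; the helper name, the pool figures, and the 70% threshold are hypothetical example values only.

# Hypothetical DP pool capacity check; names and figures are illustrative only.
def needs_more_pool_volumes(total_gb: float, used_gb: float,
                            threshold_pct: float, requested_gb: float) -> bool:
    """Return True if DP pool volumes should be added: either the used capacity
    has reached the pool threshold, or the unused capacity is too small to
    satisfy a new volume request."""
    used_pct = used_gb / total_gb * 100
    unused_gb = total_gb - used_gb
    return used_pct >= threshold_pct or unused_gb < requested_gb

# Example: a 10 TB pool with 7.5 TB used, a 70% threshold, and a 1 TB request.
print(needs_more_pool_volumes(10240, 7680, 70.0, 1024))  # True -> add DP pool volumes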
Related concepts
• About data mobility on page 324
• About data migration on page 330
• About optimizing HBA configurations on page 315

Related tasks
• Creating a CLPR on page 318
About optimizing HBA configurations

Using HCS, optimize or maintain server HBA configurations in support of high availability and performance/throughput requirements.

The initial allocation of volumes to a server typically occurs with the allocate volumes dialog. LUN paths are established at this time. Host groups are used to control access to ports and volumes, meaning all the hosts in the group use the same ports to access the volumes allocated to those hosts.
Over time, heavily used servers might exhibit the need for improved high availability, and/or improved I/O and throughput performance. HCS provides for the optimization and maintenance of server HBA configurations, as follows:
• For optimizing a server HBA configuration, you can add one or more HBAs to a server, and simultaneously identify one or more HBAs (WWNs) in the host group for the purpose of inheriting existing LUN path information for the newly added HBAs. This provides a fast and easy way to add HBAs, for example increasing from one HBA to two HBAs, or two HBAs to four HBAs. (A sketch of this path inheritance appears after this list.)
• In terms of maintaining current performance levels, you can remove a failed HBA and add a new HBA, or you can add a new HBA and then remove the failed HBA after the new HBA is in service.

Note that redundant HBAs can provide improved high availability, performance, and throughput for a server, unless the server itself fails. The solution for server failure is clustering.
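As a rough illustration of the path-inheritance idea described above, the sketch below copies the LUN path entries recorded for an existing (model) HBA WWN to a newly added WWN. It is not an HCS interface; the data structure, function name, and WWN values are hypothetical.

# Hypothetical illustration of inheriting LUN path information from a model HBA.
# Each LUN path here is (storage port, host group, LUN) for a given HBA WWN.
lun_paths = {
    "10:00:00:00:c9:aa:bb:01": [  # existing HBA (model WWN)
        ("CL1-A", "hg_app01", 0),
        ("CL2-A", "hg_app01", 1),
    ],
}

def inherit_paths(paths: dict, model_wwn: str, new_wwn: str) -> None:
    """Give the newly added HBA the same LUN paths as the model HBA."""
    paths[new_wwn] = list(paths[model_wwn])

inherit_paths(lun_paths, "10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02")
print(lun_paths["10:00:00:00:c9:aa:bb:02"])  # same port/host group/LUN entries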
Related tasks
• Adding an HBA on page 316
Adding an HBA

Add an HBA to improve performance and throughput. When adding an HBA, specify the WWN of the new HBA and then select the WWN of an existing HBA from which to model paths.

Prerequisites
• Identify the new WWN for the HBA that is being added
• Identify the WWN from which to model paths
• Verify that the new HBA is physically connected
Procedure
1. On the Resources tab, select Hosts.
2. After selecting the target operating system, select the target host row and click More Actions > Add HBAs.
3. Enter the New WWN or select a WWN from the list.
4. Enter the WWN from which to model paths or select a WWN from the list.
5. Click Add.
6. In the WWN Pairs list, verify that the listed HBA WWN combinations are paired correctly.
Tip:
• If the WWN information is updated when the host is refreshed, the target WWN might not be displayed in the list. In this case, you need to manually enter the WWN of the HBA you are adding.
• To edit a WWN nickname from the list of WWN Pairs, click Edit WWN Nicknames.
7. Click Show Plan and confirm that the information in the plan summary is correct. If changes are required, click Back.
8. (Optional) Update the task name and provide a description.
9. (Optional) Expand Schedule to specify the task schedule.
You can schedule the task to run immediately or later. The default setting is Now. If the task is scheduled to run immediately, you can select View task status to monitor the task after it is submitted.
10. Click Submit.
If the task is scheduled to run immediately, the process begins.
11. (Optional) Check the progress and result of the task on the Tasks & Alerts tab. Click the task name to view details of the task.
Result
When the task completes, the new WWN is added and LUN path settings are established to the host.

Related tasks
• Editing LUN paths when exchanging a failed HBA on page 227
• Editing LUN paths when adding or exchanging an HBA on page 228
About high temperature mode

For Virtual Storage Platform G1000 storage systems, you can use Hitachi Command Suite to enable high temperature mode, which is a licensed feature that allows the VSP G1000 storage system to run at higher temperatures (60.8°F to 104°F / 16°C to 40°C), saving energy and cooling costs.

When high temperature mode is disabled, the VSP G1000 storage system runs at standard temperatures (60.8°F to 89.6°F / 16°C to 32°C).
Normal and high temperature alerts

When high temperature mode is disabled, an alert displays when the temperature in the storage system exceeds 89.6°F / 32°C.

When high temperature mode is enabled, an alert displays when the temperature in the storage system exceeds 104°F / 40°C.
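The two alert thresholds above can be summarized in a minimal sketch. This only restates the figures quoted in this section; it is not HCS behavior or an HCS interface, and the function names are hypothetical.

# Hypothetical summary of the alert thresholds described above.
def alert_threshold_c(high_temperature_mode: bool) -> int:
    """Return the storage system temperature (Celsius) above which an alert displays."""
    return 40 if high_temperature_mode else 32  # 104 F vs 89.6 F

def fahrenheit(celsius: float) -> float:
    return celsius * 9 / 5 + 32

for mode in (False, True):
    c = alert_threshold_c(mode)
    print(f"high temperature mode {'enabled' if mode else 'disabled'}: "
          f"alert above {c} C ({fahrenheit(c)} F)")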
Caution: Before you enable high temperature mode, review the Hitachi Virtual Storage Platform G1000 Hardware Guide for restrictions and important information.
    						
Related references
• Enabling high temperature mode for VSP G1000 storage systems on page 318
Enabling high temperature mode for VSP G1000 storage systems

For Virtual Storage Platform G1000 storage systems, you can enable high temperature mode.
Prerequisites
• You must install a valid license for this feature.

Caution: Before you enable high temperature mode, see the Hitachi Virtual Storage Platform G1000 Hardware Guide for important information.
Procedure
1. On the Resources tab, click Storage Systems, and then expand All Storage Systems and the target storage system.
2. Click Components.
3. Click Edit High Temperature Mode.
4. Click Enable (16-40 degrees C).
5. Click Finish.
6. In the Confirm window, verify the settings and enter a task name.
7. Click Apply to register the task. If the Go to tasks window for status check box is checked, the Task window opens.
Result
After the task completes, high temperature mode is enabled.

Related concepts
• About high temperature mode on page 317
Managing cache logical partitions

This module describes how to manage cache logical partitions (CLPR), including managing the assignment of resources to the CLPR.
Creating a CLPR

You can create partitioned cache as a means of providing predictable performance levels for server applications, and providing memory protection.
Caution: Creating CLPRs can significantly degrade host performance and should be performed during the initial installation and setup of the storage system or during a maintenance window. Before creating a CLPR, read Cautions and restrictions for Virtual Partition Manager.
If no CLPRs have been created, the entire cache is displayed as CLPR0. When the first CLPR is created, CLPR1 is added. Up to CLPR31 can be created.

The default cache capacity is 8 GB. CLPRs can be created by assigning the necessary capacity from CLPR0. If you use Cache Residency, the remaining cache capacity after subtracting the Cache Residency capacity from the cache capacity of CLPR0 must be at least 8 GB.
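A minimal sketch of the CLPR0 rule stated above follows. It is purely illustrative; the function name and example figures are hypothetical and are not an HCS interface.

# Hypothetical check of the rule described above: after subtracting the
# Cache Residency capacity, CLPR0 must keep at least 8 GB of cache.
def clpr0_capacity_ok(clpr0_cache_gb: float, cache_residency_gb: float) -> bool:
    return clpr0_cache_gb - cache_residency_gb >= 8

print(clpr0_capacity_ok(clpr0_cache_gb=64, cache_residency_gb=40))  # True (24 GB remain)
print(clpr0_capacity_ok(clpr0_cache_gb=64, cache_residency_gb=60))  # False (only 4 GB remain)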
Procedure
1. On the Resources tab, expand the Storage Systems tree, and select the target storage system.
2. Choose one of the following options:
• For Virtual Storage Platform G1000 storage systems:
Select Cache Partitions.
• For other available storage systems:
From the Actions list in the application pane, select Element Manager. Refer to the documentation for the native management tool for your storage system.
3. On the Cache Partitions tab, click Create CLPRs to open the Create CLPRs window. CLPR ID displays the first available CLPR number or a blank if no CLPR number is available.
4. In CLPR Name, enter a CLPR name (maximum 16 alphanumeric characters). Each CLPR name must be unique. You cannot use a CLPR name that is already reserved.
5. In Total Cache Size, select the cache capacity.
The default size is 8 GB. You can select 8 GB or a higher value in increments of 4 GB. The maximum value is 504 GB (obtained by subtracting 8 GB from the cache capacity of the storage system), but the maximum available capacity (obtained by subtracting the total usage capacity of other CLPRs from the total capacity of the storage system) is displayed as the upper limit value. (A sizing sketch follows the Result of this procedure.)
6. In Resident Cache Size, select the resident cache capacity.
The default is 0 GB, and you can select 0 GB or a higher value in increments of 0.5 GB. The maximum value is 504 GB (the cache residency capacity of the storage system), but the maximum available capacity (obtained by subtracting the total usage capacity of other CLPRs from the total capacity of the storage system) is displayed as the upper limit value.
7. In Number of Resident Extents, enter the number of resident cache extents.
The default is 0, and you can specify 0 to 16384. The maximum available capacity (obtained by subtracting the total usage capacity of other CLPRs from the total capacity of the storage system) is displayed as the upper limit value.
8. Click Add. The created CLPR is added to the Selected CLPRs table.
To delete a CLPR from the Selected CLPRs table, select the CLPR and click Remove. To change the settings of an existing CLPR, select the CLPR and click Change Settings to open the Change Settings window.
9. Click Finish.
10. Check the settings in the Confirm window, enter the task name in Task Name, and click Apply.
Result
The CLPR is created. A newly created CLPR has no resources (parity groups). To migrate resources to the new CLPR, see Migrating resources to and from a CLPR.
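The cache size limits quoted in steps 5 and 6 can be illustrated with a small sketch. This is not HCS logic; the helper name and example values are hypothetical, and only the 504 GB, 8 GB, and 4 GB increment figures come from the step descriptions above.

# Hypothetical computation of the upper limit offered for Total Cache Size,
# based on the rules quoted in step 5: 504 GB maximum, system cache minus 8 GB,
# and never more than the capacity still unused by other CLPRs.
def total_cache_size_upper_limit(system_cache_gb: int, other_clprs_usage_gb: int) -> int:
    limit = min(504, system_cache_gb - 8, system_cache_gb - other_clprs_usage_gb)
    return limit - (limit % 4)  # selectable values step in 4 GB increments

# Example: a 256 GB cache with 128 GB already assigned to other CLPRs.
print(total_cache_size_upper_limit(256, 128))  # 128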
Related tasks
• Migrating resources to and from a CLPR on page 320
• Editing the settings of an existing CLPR on page 321
• Adjusting the cache capacity of a CLPR on page 322
• Deleting a CLPR on page 323
Migrating resources to and from a CLPR

After creating a CLPR, you can migrate resources (parity groups) from existing CLPRs to the new CLPR. Before deleting a CLPR, you must first migrate resources that you want to keep to other CLPRs.
Caution: Migrating resources to and from CLPRs can significantly degrade host performance and should be performed during the initial installation and setup of the storage system or during a maintenance window.
When migrating resources to and from CLPRs (an eligibility-check sketch follows this list):
• You can migrate resources only within the same CU.
• All interleaved parity groups must be in the same CLPR.
• If a parity group contains one or more LDEVs that have defined Cache Residency Manager extents, you cannot migrate that parity group to another CLPR.
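As a rough illustration of these restrictions, the sketch below checks whether a parity group may be moved to another CLPR. It is not an HCS interface; the function name, attribute names, and values are hypothetical.

# Hypothetical eligibility check reflecting the restrictions listed above.
# Note: all parity groups in an interleaved group must end up in the same CLPR.
def can_migrate_parity_group(pg: dict, source_cu: str, target_cu: str) -> bool:
    """Return True only if the parity group may be moved to another CLPR."""
    if source_cu != target_cu:             # resources move only within the same CU
        return False
    if pg["has_cache_residency_extents"]:  # Cache Residency extents block migration
        return False
    return True

pg = {"name": "1-3", "has_cache_residency_extents": False}
print(can_migrate_parity_group(pg, source_cu="CU 00", target_cu="CU 00"))  # True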
Procedure
1. On the Resources tab, expand the Storage Systems tree, and select the target storage system.
2. Choose one of the following options:
• For Virtual Storage Platform G1000 storage systems:
Select Cache Partitions.
• For other available storage systems: