Preventive Service Planning
Abstract
This document lists the restrictions specific to SAN Volume Controller V4.3.x. Additional restrictions may be imposed by hardware attached to the SAN Volume Controller, such as switches and storage controllers.
Content
DS4000 Maintenance
Host Limitations
SAN Fibre Networks
SAN Routers and Fibre Channel Extenders
SAN Maintenance
SAN Volume Controller Concurrent Code Load (CCL)
Maximum Configurations
DS4000 Maintenance
SVC supports concurrent ESM firmware upgrades for the DS4000 models listed as such on the "Supported Hardware List" when they are running controller firmware 06.23.05.00 or later. Controllers running firmware levels earlier than 06.23.05.00 are not supported for concurrent ESM upgrades. Customers in this situation who wish to gain support for concurrent ESM upgrades must first upgrade the DS4000 controller firmware to 06.23.05.00. This is a controller firmware upgrade, not an ESM upgrade, and concurrent controller firmware upgrades are already supported in conjunction with SVC. Once the controller firmware is at 06.23.05.00 or later, the ESM firmware can be upgraded concurrently.
Note: The ESM firmware upgrade must be done on one disk expansion enclosure at a time. A 10-minute delay is required between completing the upgrade of one enclosure and starting the upgrade of the next. Confirm via the Storage Manager application's "Recovery Guru" that the DS4000 is in an optimal state before upgrading the next enclosure. If it is not, do not continue ESM firmware upgrades until the problem is resolved.
Host Limitations
Windows SAN Boot Clusters (MSCS):
It is possible to SAN Boot a Microsoft Cluster subject to the following restrictions imposed by Microsoft:
- Windows 2000 Server clusters require that the boot disk be on a different storage bus to the cluster server disks.
- On Windows 2003, clustered disks and boot disks can be presented on the same storage bus, but ONLY if the Storport driver is being used.
These restrictions and more are described in the Microsoft White Paper: "Microsoft Windows Clustering: Storage Area Networks".
We have not tested, and therefore do not support, modifying the registry key as suggested on page 28 (which would allow boot disks and clustered disks on the same storage bus on Windows 2003 without the Storport driver).
Oracle
Oracle Version and OS | Restrictions that apply |
Oracle RAC 10g on Windows | 1 |
Oracle RAC 10g on AIX | 1, 2 |
Oracle RAC 11g on AIX | 2 |
Oracle RAC 10g on HP-UX 11.31 | 1, 2 |
Oracle RAC 11g on HP-UX 11.31 | 1, 2 |
Oracle RAC 10g on HP-UX 11.23 | 1, 2 |
Oracle RAC 11g on HP-UX 11.23 | 1, 2 |
Oracle RAC 10g on Linux Host | 1, 3 |
Restriction 1: ASM cannot recognize the size change of an SVC disk when it is resized unless the disk is removed from ASM and included again.
Restriction 2: After an ASM disk group has successfully dropped a disk, the disk cannot be deleted from the OS. The workaround for this OS restriction is to bring down the ASM instance, delete the disk from the OS, and bring the ASM instance back up.
Restriction 3: For RHEL4, set the Oracle Clusterware 'misscount' parameter to a larger value to allow SDD to complete path failover first. The default misscount setting of 60 seconds is too short for SDD; we recommend setting it to 90 or 120 seconds.
Command to use: crsctl set css misscount 90
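As an illustrative sketch, the current value can be checked before and after the change (run as a suitably privileged user on a cluster node; exact privilege requirements vary by Clusterware release):

```shell
# Check the current CSS misscount (default 60 seconds)
crsctl get css misscount

# Raise it to 90 seconds so SDD can complete path failover first
crsctl set css misscount 90

# Confirm the new value took effect
crsctl get css misscount
```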
SLES10 on System z:
The FC transport class retains all outstanding I/O for a certain time (timeout value: default is 60 seconds) until it decides that the remote port will not come back. This causes a temporary I/O stall in the case of FC link incidents or when paths are varied offline. The timeout value can be set for each remote port via sysfs: /sys/class/fc_remote_ports/rport-0:0-0/dev_loss_tmo. Alternatively, this value can be preset for all remote ports at module load time of the scsi_transport_fc module via the dev_loss_tmo parameter.
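As a sketch of both methods (the rport name below is illustrative; list /sys/class/fc_remote_ports/ to find the remote ports present on your system):

```shell
# Inspect, then raise, dev_loss_tmo for one remote port at runtime
# (requires root; the rport instance name varies per system).
cat /sys/class/fc_remote_ports/rport-0:0-0/dev_loss_tmo
echo 90 > /sys/class/fc_remote_ports/rport-0:0-0/dev_loss_tmo

# Alternatively, preset the value for all remote ports at module load
# time, e.g. via a modprobe options line on SLES10:
#   options scsi_transport_fc dev_loss_tmo=90
```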
SAN Fibre Networks
Please refer to this document for details on how to configure a supported SAN:
IBM System Storage SAN Volume Controller V4.3.0 - Software Installation and Configuration Guide
SAN Routers and Fibre Channel Extenders
Fibre Channel Extender Technologies:
IBM will support any fibre channel extender technology provided that it is planned, installed and tested to meet the requirements specified in:
IBM System Storage SAN Volume Controller V4.3.0 - Software Installation and Configuration Guide
SAN Router Technologies:
There are distance restrictions imposed due to latency. The amount of latency which can be tolerated depends on the type of copy services being used (Metro Mirror or Global Mirror). Details of the maximum latencies supported can be found in:
IBM System Storage SAN Volume Controller V4.3.0 - Software Installation and Configuration Guide
SAN Maintenance
A number of maintenance operations in SAN fabrics have been observed to occasionally cause I/O errors for certain types of hosts. To avoid these errors, I/O on these hosts must be quiesced prior to doing any type of SAN reconfiguration activity, switch maintenance or SAN Volume Controller maintenance (see the later section for Concurrent Code Load restrictions).
- Linux RH EL 2.1 AS and 3 AS
- Linux RH EL 4 AS (ppc64 only)
SAN Volume Controller Concurrent Code Load (CCL)
I/O errors have occasionally been observed during CCL with hosts running the operating system levels listed below. All I/O should be quiesced on these systems before a software upgrade is started and should not be restarted until the code load is complete.
- Linux RH EL 2.1 AS and 3 AS
- Linux RH EL 4 AS (ppc64 only)
- Solaris 9 on SBus based systems
Prior to starting a software upgrade, the SAN Volume Controller error log must be checked and any error conditions must be resolved and marked as fixed. All host paths must be online, and the fabric must be fully redundant with no failed paths. If inter-cluster Remote Copy is being used, the same checks must be made on the remote cluster.
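These pre-upgrade checks can be sketched with the SVC CLI (run over ssh to the cluster; command names are from the 4.3-era CLI, but verify against the Command-Line Interface User's Guide for your level):

```shell
# Report the most serious unfixed error, if any; resolve and mark
# errors fixed before proceeding with the upgrade.
svctask finderr

# Confirm every node in the cluster shows status "online".
svcinfo lsnode

# List host objects; inspect each host's detailed view to confirm
# its ports are logged in before starting the code load.
svcinfo lshost
```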
Maximum Configurations
Ensure that you are familiar with the maximum configurations for SAN Volume Controller V4.3.x:
Objects | Maximum Number | Comments |
SVC Cluster | ||
Nodes per cluster | 8 | Arranged as four I/O groups |
Nodes per fabric | 32 | Maximum number of nodes that can be present on the same fabric, with visibility of each other |
I/O Groups per cluster | 4 | Each containing two nodes |
Fabrics per cluster | 4 | The number of counterpart SANs which are supported |
Managed Disks | ||
Managed disks (mdisks) | 4096 | The maximum number of logical units which can be managed by SVC. The number includes disks which have not been configured into managed disk groups |
Managed disk groups | 128 | |
Mdisks per mdisk group | 128 | |
Mdisk size | 2 TB | |
Total storage manageable per cluster | 8 PB | If maximum extent size of 2048 MB used |
Virtual Disks | ||
Virtual disks (vdisks) per cluster | 8192 | Includes managed-mode vdisks and image-mode vdisks. Maximum requires an 8 node cluster |
Vdisks per I/O group | 2048 | |
Vdisks per mdisk group | N/A | Cluster limit applies |
Vdisk size | 2 TB | |
Vdisks per host object | 512 | The limit may be different based on host operating system. See Host Attachment Guide for details |
SDD | 512 SAN Volume Controller vpaths per host | One vpath is created for each vdisk mapped to a host. Although the SAN Volume Controller permits 512 vdisks to be mapped to a host, operations that would cause the SDD vpath limit to be exceeded are unsupported for SDD |
SDDPCM (on AIX) | 12,000 vpaths per host | |
Vdisk-to-host mappings | 20,000 | |
Mirrored Virtual Disks | ||
Copies per vdisk | 2 | |
Copies per cluster | 8192 | Maximum number of VDisks copies in the system. Note that this means that the maximum number of VDisks in the system cannot all have the maximum number of copies |
Hosts / Servers | ||
Host IDs per cluster | 1024 (Cisco, Brocade and McDATA fabrics); 155 (CNT); 256 (QLogic) | A Host ID is a collection of worldwide port names (WWPNs) which represents a host. This is used to associate SCSI LUNs with vdisks. See also "Host IDs per I/O group" below. For Brocade support, please see Note 2 below this table |
Host ports per cluster | 2048 (Cisco, McDATA and Brocade fabrics); 310 (CNT); 512 (QLogic) | |
Host IDs per I/O group | 256 (Cisco, McDATA and Brocade fabrics); N/A (CNT); 64 (QLogic) | |
Host ports per I/O group | 512 (Cisco, McDATA and Brocade fabrics); N/A (CNT); 128 (QLogic) | |
Host ports per host ID | 512 | |
Copy Services | ||
Metro Mirror or Global Mirror relationships per cluster | 1024 | |
Metro Mirror or Global Mirror consistency groups | 256 | |
Metro Mirror and Global Mirror vdisk per I/O group | 1024 TB | There is a per I/O group limit of 1024TB on the quantity of Primary and Secondary vdisk address space which may participate in MetroMirror and Global Mirror relationships. This maximum configuration will consume all 512MB of bitmap space for the IO Group and allow no FlashCopy bitmap space. The default is 40TB. |
FlashCopy targets per source | 256 | |
FlashCopy mappings | 4096 | |
FlashCopy mappings per consistency group | 512 | |
FlashCopy consistency groups | 128 | |
FlashCopy vdisk capacity per I/O group | 1024 TB | This is a per-I/O-group limit on the quantity of FlashCopy mappings using bitmap space from a given I/O group. This maximum configuration consumes all 512 MB of bitmap space for the I/O group and allows no Metro Mirror or Global Mirror bitmap space. The default is 40 TB. |
SVC Nodes | ||
Concurrent SCSI tasks (commands) per node | 10,000 | |
Concurrent commands per FC port | 2048 | |
Logins per SVC Fibre Channel port | 512 | Includes logins from server HBAs, disk controller ports, SVC node ports within the same cluster and SVC node ports from remote clusters. |
Storage Controllers | ||
WWNNs | 64 | Some storage controllers have a separate WWNN per port e.g. Hitachi Thunder |
Storage controller WWPNs | 256 | |
LUNs per storage controller WWNN | 4096 | |
WWNNs per storage controller | 16 | The number of WWNNs per storage controller (Usually 1) |
WWPNs per WWNN | 16 | The maximum number of FC ports per worldwide node name |
Note 1: Fabric and Device Support
A statement of support for a particular fabric configuration here reflects the fact that SVC has been tested and is supported for attachment to that fabric configuration. Similarly, a statement that SVC supports attachment to a particular backend device or host type reflects the fact that SVC has been tested and is supported for that attachment. However, SVC is supported for attachment to particular devices in a given vendor's fabric only if both IBM and that fabric vendor support the attachment. It is the user's responsibility to verify that this is true for the particular configuration of interest, as it is impossible to list individual 'support' or 'no support' statements for every possible intermix of front-end and backend devices and fabric types.
Note 2: Support for Large fabrics
The following restrictions apply to support for fabrics with up to 1024 hosts with SVC 4.3.x.
1. All switches with more than 64 ports are supported as core switches, with the exception of the Brocade M12. Any supported switch may be used as an edge switch in this configuration. The SVC ports and backend storage must all be connected to the core switches.
2. The minimum supported firmware level for Brocade core switches is 5.1.0c.
3. Each SVC port must not see more than 512 N port logins. Error code 1800 is logged if this limit is exceeded on a Brocade fabric.
4. Each I/O group may not be associated with more than 256 host objects.
5. A host object may be associated with one or more I/O groups. If it is associated with more than one I/O group, it counts towards the 256 maximum in each of the I/O groups with which it is associated.
Note 3: Example bitmap usage
FlashCopy: Each I/O group supports a default 40TB of target vdisks. The target vdisks may be in any I/O group. For the purpose of this limit, vdisks are rounded up to a multiple of 8GB, so 512 vdisks of 24.1GB will use all the bitmap space even though 512*24.1GB is less than 40TB.
Example 1: You can make 1 copy of 40TB of vdisks in an I/O group to another 40TB of vdisks anywhere in the cluster
Example 2: You can make 1 copy of 160TB of vdisks (40TB per I/O group) to another 160TB of vdisks anywhere in the cluster
Example 3: You can make 10 copies of 4TB of vdisks in an I/O group onto another 40TB of vdisks anywhere in the cluster
Metro Mirror and Global Mirror: Each I/O group supports a default 40TB of primary+secondary vdisk (that is 40TB shared between primary and secondary vdisks, not 40TB each). The 40TB can be split in any ratio between primary and secondary vdisk. For the purpose of this limit vdisks are rounded up to a multiple of 8GB. Metro Mirror and Global Mirror share the same bitmap memory - therefore the sum of primary Metro Mirror vdisks + primary Global Mirror vdisks + secondary Metro Mirror vdisks + secondary Global Mirror vdisks per I/O group is limited to 40TB.
Example 1: You can use Metro Mirror or Global Mirror to copy 40TB of vdisks per I/O group to a secondary cluster for disaster recovery. You can also make a FlashCopy of your data at your primary and/or secondary cluster for backup
Example 2: You can use intra-cluster Metro Mirror or Global Mirror to copy 20TB of vdisks per I/O group to another 20TB of vdisks in the same I/O group.
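The 8 GB rounding grain described above can be illustrated with a small shell helper (the function name is ours; sizes are given in MB, with an 8 GB grain of 8192 MB):

```shell
# Round a vdisk size (in MB) up to the 8 GB (8192 MB) grain used when
# accounting FlashCopy and Metro/Global Mirror bitmap space.
round_up_mb() {
  echo $(( ( ($1 + 8191) / 8192 ) * 8192 ))
}

round_up_mb 24678   # a 24.1 GB vdisk is accounted as 32768 MB (32 GB)
round_up_mb 8192    # an exact multiple is unchanged: 8192 MB
```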
Document Information
Modified date:
17 June 2018
UID
ssg1S1003283