This article briefly discusses the hardware and software prerequisites and settings for a SAN deployment in a Microsoft Hyper-V environment. The aspects covered include NIC, switch-port and SAN-interface configuration, along with considerations about jumbo frames, flow control, VLANs, MPIO and port trunking (bonding) / LACP. The information provided is abridged and is intended to serve as a checklist.
Consider the whole path between the initiator and the target, including the server-side NIC(s), the switches (access, aggregation and core tiers), as well as the SAN NIC(s) and the cables.
- Cables: although for short distances (2-5 meters) a Cat 5e cable is considered sufficient, preferably use Cat 6 S/STP (screened shielded twisted pair) or S/FTP (screened foiled twisted pair) cabling
- NICs: use server-certified NICs of 1 Gb/s or higher that support jumbo frames, flow control and VLAN tagging. In a production environment at least two network ports are required. Trunking / network teaming is not supported for iSCSI traffic
- Switches: jumbo frame support, flow control and VLAN capability as well as RSTP (Rapid Spanning Tree Protocol) are required. Also consider the backplane bandwidth when attaching multiple hosts. If necessary, calculate the aggregate bandwidth of the hosts and provide sufficient uplink capacity when using a multi-tiered network (for instance 1 Gb/s access ports with a 10 Gb/s uplink). In a production environment switch redundancy, combined with MPIO, is recommended
- SAN: enterprise SAN solutions usually fulfill all these requirements. In a production environment using a single SAN, two (mirrored) controllers are recommended.
Hyper-V is a server role on Windows Server 2008 and 2008 R2, both of which ship with an integrated iSCSI initiator. Additionally, install the MPIO feature and check for updates.
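On Windows Server 2008 R2 the MPIO feature can also be added from an elevated PowerShell prompt instead of Server Manager (a sketch; the feature name is the one Server Manager lists):

```powershell
# Load the Server Manager cmdlets and add the Multipath I/O feature
Import-Module ServerManager
Add-WindowsFeature Multipath-IO

# Verify the installation state afterwards
Get-WindowsFeature Multipath-IO
```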
Start configuring the network path from the target to the initiator, in other words from the SAN appliance to the Hyper-V host.
Configure the SAN
Set the IP parameters of each port to a different subnet. If you are using a separate, secured network infrastructure for iSCSI you can omit CHAP authentication; in any other case it is strongly recommended. Flow control is usually enabled on SAN appliances – either there is no option to change it, or, if there is one, check that “generate” is selected. Teaming or trunking can be enabled if you are expecting multiple concurrent sessions to the same target IP. If implementing MPIO you will need multiple teamed interfaces with different subnet settings – this requires at least 4 NIC ports on the SAN controller. A widely supported trunking protocol is LACP. There are usually no VLAN configuration options on the appliance either – consider configuring the switch port as an end (access) port instead.
Configure the switch (using Cisco’s CLI as an example)
- To enable jumbo frames use either “system mtu jumbo 9000” (a global setting that requires a reload) on low-end Catalyst switches or “int gi1/1; mtu 9198” (per interface) on high-end Catalyst switches
- To set PortFast, presuming RSTP or STP is globally enabled, use “int gi1/1; spanning-tree portfast”; additionally, if BPDU guard is enabled on the port, disable it
- In case the switch was preconfigured with storm control (it is disabled by default), disable it on the appropriate switch ports
- To set a VLAN, presuming you are using a separate logical network for the iSCSI traffic, use “switchport mode access” and then “switchport access vlan #” (where # is the VLAN number)
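Put together, the per-port settings from the bullets above might look as follows (a sketch for a Catalyst switch; the interface name, MTU value and VLAN number are examples and depend on your hardware):

```
! Example: access port gi1/1 dedicated to iSCSI, VLAN 100
interface GigabitEthernet1/1
 description iSCSI port to SAN controller A
 switchport mode access
 switchport access vlan 100
 mtu 9198                          ! high-end Catalyst; on low-end models use the
                                   ! global "system mtu jumbo 9000" instead
 flowcontrol receive on            ! honor pause frames sent by NIC/SAN
 spanning-tree portfast            ! edge port, skip STP listening/learning
 no spanning-tree bpduguard enable
 no storm-control broadcast level  ! only needed if storm control was preconfigured
end
```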
Configure the server NIC
In a production environment a subset of the network ports is dedicated to iSCSI. The most common scenario is binding all LUNs on the host and either using them as data storage for multiple VHD files or assigning them directly as pass-through disks to a VM. In this case that subset of network interfaces should not be assigned to a Microsoft virtual switch (Hyper-V Virtual Network Manager). The network settings should be made directly in the adapter’s driver settings. For instance, using Intel’s advanced driver settings you select Jumbo Packet and choose the appropriate value from the drop-down menu. Keep in mind that, as shown in the figure below, the values are “hard coded”: if you select the 9014-byte setting on a low-end Catalyst supporting jumbo frames of only up to 9000 bytes, the network flow will be disrupted. An interim solution in this example is to select 4088 bytes. A substantial improvement is to use high-end switches.
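A quick way to verify that jumbo frames work end-to-end is a ping with the don’t-fragment flag from the host to the target (a sketch; the payload size 8972 assumes a 9000-byte MTU minus 20 bytes IP and 8 bytes ICMP header, and 10.0.0.10 is a placeholder for your SAN interface IP):

```
:: Send a non-fragmentable 8972-byte ICMP payload to the SAN interface
ping -f -l 8972 10.0.0.10

:: If any device in the path has a smaller MTU, Windows reports
:: "Packet needs to be fragmented but DF set." instead of a normal reply
```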
When configuring flow control on the hypervisor, check that “Generate and Respond” (on newer drivers “Rx & Tx Enabled”) is selected: this provides flow control in both directions, i.e. for both read and write I/Os.
Another possible network design – although rarely used – is to provide the iSCSI fabric to the guest VMs. Depending on your network card’s driver, certain hardware features can be passed on to Microsoft’s synthetic network driver of the virtual switch. In the example above, using an Intel 82574L, the jumbo frame setting is available, whereas flow control is missing:
Once again, keep in mind that this is a “virtual switch”, so enabling jumbo frames on it is only half of the configuration. Additionally, the NIC in the VM has to be reconfigured, this time from the Virtual Machine Bus Network Adapter Properties / Advanced tab (analogous to the figure above).
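To confirm that the guest actually picked up the larger MTU, you can list the per-interface MTUs inside the VM (a sketch; this command is available on Windows Server 2008 / 2008 R2 guests):

```
:: Shows the MTU per interface; the iSCSI-facing adapter
:: should report the configured jumbo value
netsh interface ipv4 show subinterfaces
```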
Since MPIO is extensively described on TechNet, we won’t cover the topic here again. You can refer to the screenshots and explanations on the following site instead: http://blogs.technet.com/b/migreene/archive/2009/08/29/3277914.aspx
The most important steps can be summarized as follows:
- When configuring the dedicated interface(s), enable only the necessary protocols; IPv4 is sufficient in most cases. Then configure only an IP address and subnet mask in the corresponding subnet. Do not configure a gateway, and disable DNS registration. Repeat this step for each iSCSI-dedicated NIC. Correction: if you are using Cluster Shared Volumes in a limited environment and the iSCSI-dedicated adapters are within the scope of the cluster networks (either as enabled or internal), the “Client for Microsoft Networks” and “File and Printer Sharing” protocols must additionally be enabled. See the link “Unable to access ClusterStorage folder on a passive node in a server 2008 R2 cluster” below
- Using the iSCSI Initiator control panel, add the first target IP and connect to the LUN
- After enabling the MPIO feature, click on the Discover Multi-Paths tab, check the box “Add support for iSCSI devices” and click Add. You will be prompted to restart the server. Keep in mind that the Discover Multi-Paths tab is greyed out until a target has been added
- Add further paths using Connect, “Enable multi-path”, Advanced, and selecting the initiator and target IPs of the remaining subnets
- Check the configuration by selecting a disk and clicking the MPIO button, where two or more paths should be listed
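The GUI steps above can also be performed or verified from the command line (a sketch for Windows Server 2008 R2; mpclaim.exe ships with the MPIO feature and iscsicli.exe with the OS, and the device string is the one documented for the Microsoft iSCSI initiator):

```
:: Add MPIO support for iSCSI devices (equivalent to checking
:: "Add support for iSCSI devices"; -r reboots the server automatically)
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: After the reboot: list MPIO-claimed disks and their load-balance policy
mpclaim -s -d

:: List the active iSCSI sessions - one per configured path
iscsicli SessionList
```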
- iSCSI Storage Solution Using Cisco Catalyst 4900 Series Switches and Dell EqualLogic PS Series SAN Arrays: http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps6021/white_paper_c11-563477.html
- Understanding How Flow Control Works: http://www.cisco.com/en/US/docs/switches/lan/catalyst5000/catos/4.5/configuration/guide/gigabit.html#wp20732
- Using Hyper-V and Failover Clustering (step-by-step guide): http://technet.microsoft.com/en-us/library/cc732181(WS.10).aspx
- Microsoft Multipath I/O Step-by-Step Guide: http://technet.microsoft.com/en-us/library/ee619778(WS.10).aspx
- Unable to access ClusterStorage folder on a passive node in a server 2008 R2 cluster: http://support.microsoft.com/kb/2008795