Networking
Networking in a Hyper-V Server cluster can be very confusing for newcomers. A clustered Hyper-V Server host requires a large number of distinct network connections compared to most other computing deployments. The traffic types were covered in Chapter 1, Hyper-V Cluster Orientation. For review, they are as follows:
- Management
- Live Migration
- Cluster and Cluster Shared Volumes
- Virtual machine
- Storage (unless using direct-attached)
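Once the cluster is formed, each of these traffic types maps onto a cluster network whose role controls what the cluster is allowed to use it for. The following PowerShell sketch is purely illustrative; the network names are hypothetical placeholders and will differ in your environment.

```powershell
# List the networks the cluster has detected, with their roles and subnets
Get-ClusterNetwork | Format-Table Name, Role, Address

# Role values: 0 = no cluster traffic (storage), 1 = cluster-only (CSV/heartbeat),
# 3 = cluster and client (management). The network names below are hypothetical.
(Get-ClusterNetwork -Name "Cluster Network 1").Role = 3   # management
(Get-ClusterNetwork -Name "Cluster Network 2").Role = 1   # cluster/CSV
(Get-ClusterNetwork -Name "Cluster Network 3").Role = 0   # iSCSI storage
```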
The recommended setup is to use at least one gigabit adapter for each of the roles above. If you won't be using Cluster Shared Volumes, you can skip a dedicated adapter for cluster communications. The pathways to iSCSI, SMB 3.0, and Fibre Channel benefit from the redundancy of multiple adapters, and most storage can leverage multi-path I/O (MPIO) to aggregate bandwidth. Be aware that it is common to overestimate the amount of bandwidth that storage systems actually require. Keep this in mind if you are considering using more than two adapters per host for storage.
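As a minimal sketch of the MPIO piece, assuming iSCSI storage and the standard Windows Server 2012 MPIO cmdlets, enabling multipath across two storage adapters might look like this; treat it as illustrative rather than a complete storage configuration.

```powershell
# Install the Multipath I/O feature (a restart is needed before devices are claimed)
Install-WindowsFeature -Name Multipath-IO

# Have the Microsoft DSM automatically claim iSCSI-attached disks
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Round-robin across available paths so both storage adapters carry traffic
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
```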
An important new feature in Windows Server 2012 and Hyper-V Server 2012 is native adapter teaming. What this gives you in a Hyper-V Server cluster environment is the ability to create a converged fabric. With this technology, multiple physical adapters are placed into a single large team and the various networking roles are then run on that one team. In the past, you might have teamed two physical adapters and assigned management traffic to them, then teamed two others and set those aside for Live Migration traffic, and so forth. The problem with that method was that every role required two adapters that sat mostly idle. With converged fabric, you no longer need to dedicate a unique failover adapter to each individual role. This has the benefit of providing improved redundancy and load-balancing with fewer adapters. Teaming and converged fabric will receive much more thorough coverage in the networking chapter.
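To make the converged fabric idea concrete, here is a minimal sketch using the native teaming and Hyper-V cmdlets in Windows Server 2012. The adapter, team, and switch names are placeholders, and the bandwidth weights are examples only; a real design would be tuned to your own traffic profile.

```powershell
# Team two (or more) physical adapters; the adapter names are hypothetical
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Build a Hyper-V virtual switch on the team with weight-based minimum bandwidth
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Create a virtual adapter in the management OS for each traffic role
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"

# Example minimum-bandwidth weights (relative shares, not hard caps)
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
```

Because the weights are relative minimums that only matter under contention, otherwise idle capacity is shared across all of the roles, which is exactly what keeps the adapters from sitting unused.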
If you will be using ten-gigabit adapters, the recommendation is to use at least two. One can provide adequate bandwidth for most scenarios, but two provide load-balancing and redundancy. As with gigabit adapters, these can be placed in a converged fabric so that all communication types can share and be balanced across the two adapters. Note that even though it is possible to create a converged fabric that mixes one-gigabit and ten-gigabit adapters, it is not recommended; you won't be able to meaningfully control how traffic is shaped across the physical adapters so that the ten-gigabit adapters have priority.
As you build your cluster, you'll notice that switch port connections add up quickly. Ensure that you have sufficient switching hardware to handle them all. If you will be employing teaming, consider your options carefully. If you connect all the adapters for a single host to a single switch, you have more options for link-aggregation technologies, but you are also introducing a potential single point of failure. You may choose to address this by distributing the connections for each host across multiple switches and using switch-independent teaming modes, or you can connect each host to its own switch to maximize aggregation possibilities and accept that a switch failure will completely isolate that host.
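In practical terms, that choice shows up as the teaming mode you select when you create the team; switch-dependent modes such as LACP require all member ports to terminate on the same switch (or switch stack), while switch-independent teams can span switches. The names below are placeholders, and each command is a standalone example.

```powershell
# Switch-independent: members may connect to different physical switches
New-NetLbfoTeam -Name "Team-SI" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

# LACP (switch-dependent): members must share a switch or stack, and the
# switch ports must be configured for link aggregation
New-NetLbfoTeam -Name "Team-LACP" -TeamMembers "NIC3","NIC4" -TeamingMode Lacp
```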
Isolating iSCSI onto its own network is always recommended. Using dedicated physical adapters and switches results in the best performance and security. If dedicated switches are not feasible, the next best option is to segregate the traffic onto iSCSI-specific VLANs. It is possible to include your iSCSI connections in your converged fabric, but overall this is less efficient than using MPIO.
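As an illustration of keeping iSCSI on its own network, the initiator can be pointed at the target portal strictly through the storage-facing adapters, giving MPIO one session per path. The addresses and IQN below are hypothetical.

```powershell
# Register the target portal, which is reachable only on the isolated storage subnet
New-IscsiTargetPortal -TargetPortalAddress "192.168.50.10"

# Connect one session per dedicated storage adapter so MPIO has two distinct paths
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.example:storage-target" `
    -TargetPortalAddress "192.168.50.10" -InitiatorPortalAddress "192.168.50.21" `
    -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.example:storage-target" `
    -TargetPortalAddress "192.168.50.10" -InitiatorPortalAddress "192.168.50.22" `
    -IsMultipathEnabled $true -IsPersistent $true
```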
As mentioned in the previous chapter, you'll need to define subnets for the Live Migration network, cluster/CSV network(s), and storage network(s). They will need to be separate from the subnet that the management adapter is part of. The management adapter can be in the same network you use for other servers. You may also consider using virtual LANs (VLANs). All of the networking components in Hyper-V Server 2012 and Windows Server 2012 can work with the 802.1Q standard for VLAN tagging. If you do not have a working knowledge of subnetting or VLAN technologies, work with your networking team or spend time learning about them before finalizing the networking portion of your planning.
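Carrying on from the converged-fabric sketch above, assigning a subnet and an 802.1Q VLAN ID to each management-OS virtual adapter might look like the following; the addresses and VLAN IDs are purely illustrative.

```powershell
# Tag each virtual adapter with its VLAN (802.1Q access mode); IDs are examples
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster"       -Access -VlanId 30

# Give each role its own subnet; the interface aliases follow the vEthernet naming pattern
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.20.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)"       -IPAddress 192.168.30.11 -PrefixLength 24
```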
Advanced networking hardware
Most organizations will find that standard gigabit will suit their needs perfectly. Most of the rest will get the extra capacity they need from ten-gigabit hardware. For a few, even that isn't enough. It can't be stressed enough that networking is one of the places where system designers commonly over-architect in anticipation of loads they won't actually have. Of course, some actually will need more. There are two types of solutions for those environments.
If speed is an issue that ten-gigabit Ethernet can't solve for you, there are faster technologies available. A few manufacturers provide forty-gigabit and faster solutions today, and one-hundred-gigabit solutions are on the way. Some have been released with support for Hyper-V Server and have been demonstrated to work with it. For these solutions, it is recommended that you work with a vendor that has expertise with such hardware.
If speed itself isn't as much of a concern as overall networking efficiency, hardware that is capable of data center bridging (DCB) can provide an answer. The Ethernet specification does not require that data loss be prevented. If a system's input buffer overflows, that system is allowed to silently drop any following data that won't fit. Unfortunately, this applies not only to endpoints but also to waypoints in the communication chain, such as switches and routers. The onus is on higher-level systems to detect these conditions and correct them if necessary. TCP, for instance, was designed specifically to overcome the inherent unreliability of complicated interconnects. Some protocols and endpoints don't anticipate this sort of loss and struggle on congested networks. A common example is Fibre Channel over Ethernet (FCoE).
Fibre Channel does not share Ethernet's tolerance for data loss, so mixing the two can cause problems in connected storage systems. Data center bridging is a group of IEEE standards that addresses these and other issues by shifting transmission reliability to the hardware. Hyper-V Server 2012 and Windows Server 2012 include specific support for working with hardware that implements these standards. Methods of working with these technologies will be explored in further networking chapters; for now, be aware that these options are available to you if you determine that they are necessary for your environment.
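In the meantime, as a rough sketch of what enabling DCB involves on a host with DCB-capable adapters, the following uses the built-in Data Center Bridging feature. Priority 3 is the value conventionally used for lossless storage traffic such as FCoE; the adapter name, the bandwidth percentage, and the question of whether the host or the switch owns the settings are all assumptions that depend on your network design.

```powershell
# Install the Data Center Bridging feature
Install-WindowsFeature -Name Data-Center-Bridging

# Let the host, rather than the switch, own the DCB configuration (design-dependent)
Set-NetQosDcbxSetting -Willing $false

# Reserve an ETS traffic class for storage traffic on priority 3 (example values)
New-NetQosTrafficClass -Name "Storage" -Priority 3 -Algorithm ETS -BandwidthPercentage 40

# Enable priority flow control (lossless behavior) for that priority only
Enable-NetQosFlowControl -Priority 3

# Apply the QoS/DCB settings to the DCB-capable physical adapter (name is hypothetical)
Enable-NetAdapterQos -Name "NIC1"
```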