This is part 5 of the following blog series:
- My first FlexPod! (Part 1 – Introduction)
- My first FlexPod! (Part 2 – Hardware Overview)
- My first FlexPod! (Part 3 – Hardware Configuration)
- My first FlexPod! (Part 4 – Quality of Services)
- My first FlexPod! (Part 5 – Hyper-V Cluster)
- My first FlexPod! (Part 6 – System Center)
I hope parts 1, 2, 3 and 4 were informative. In this part it is time to talk about an actual Hyper-V Cluster on UCS Blade Servers. Hosting a Hyper-V Cluster on UCS is not rocket science; the deployment is just as straightforward as with other brands. The only difference is that you are (more) flexible with your network configuration and you get the benefits of stateless computing, if that is what you want.
In our case we had eight physical UCS B200 M3 Blade Servers available, as specified in part 2. Once you have pre-configured UCS Manager with all the required pools and policies, you are ready to create a Server Profile. You can then assign the new Server Profile to a physical server (equipment), boot it and begin installing the Operating System.
On our FlexPod we started with a six-node Hyper-V Cluster based on Windows Server 2012 R2 and SCVMM (System Center Virtual Machine Manager) 2012 R2. I am not going to give you a step-by-step deployment or explain how it all fits together in SCVMM; Cisco has many validated designs that describe that in full detail. Instead we will focus on some details of a Server Profile and the network configuration within the OS.
Overview:
For this example we use a six-node Hyper-V Cluster:
Each Hyper-V Server has hostname ‘HVS0x‘, and the Failover Cluster has hostname ‘HVC01‘.
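To give you an idea of the end result, here is a minimal PowerShell sketch of how those six hosts could be formed into the cluster. The hostnames follow the naming above; the static cluster IP address is just an example value.

```powershell
# Validate the candidate nodes first
Test-Cluster -Node HVS01,HVS02,HVS03,HVS04,HVS05,HVS06

# Form the failover cluster (the static address is an example value)
New-Cluster -Name HVC01 -Node HVS01,HVS02,HVS03,HVS04,HVS05,HVS06 `
    -StaticAddress 10.0.1.10
```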
Remote Disks:
The Hyper-V Servers share the following remote disks, which are hosted on the NetApp Storage Array:
- QUORUM (Witness Disk)
- CSV01 (Cluster Shared Volume)
- CSV02 (Cluster Shared Volume)
- CSV03 (Cluster Shared Volume)
- CSV04 (Cluster Shared Volume)
Our NetApp Storage Array is configured in 7-Mode and is divided into two aggregates. We host ‘QUORUM’, ‘CSV01’ and ‘CSV03’ on the first aggregate, and ‘CSV02’ and ‘CSV04’ on the second aggregate.
NOTE: You might ask, why four CSVs instead of two? Well, that has to do with VM backups. There is a known issue with VM backups (snapshots) on Hyper-V when the VMs are stored on a CSV: in some scenarios, running multiple VM backups simultaneously can cause a CSV to go into a paused state. I have seen this on many Hyper-V environments, unless you use SMB3 or local storage. A rule of thumb is to have as many CSVs as the number of nodes, with a maximum of four CSVs. I don’t want to go into detail, but I can tell you it does matter.
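As a hedged PowerShell sketch, once the LUNs are presented to all nodes and added as cluster disks, the witness and CSVs could be configured roughly like this (the cluster disk resource names are assumed to match the names above):

```powershell
# Use the QUORUM disk as the witness disk
Set-ClusterQuorum -NodeAndDiskMajority "QUORUM"

# Promote the remaining cluster disks to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "CSV01","CSV02","CSV03","CSV04"
```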
Boot from SAN:
The Blade Servers we use have no local disks; instead they boot from SAN. On the NetApp Storage Array we created a unique LUN for each Hyper-V Server, and we mapped each LUN with ID 0 to be able to boot from SAN.
Optionally, you can store these LUNs on a single volume and enable deduplication on that volume. Of course, setting LUN ID 0 is not enough to boot from SAN; you still need to configure a Boot Policy and such in UCS Manager.
NOTE: I would have preferred FCoE (Fibre Channel over Ethernet) on the NetApp Storage Array. The fact is, at that time our NetApp devices were delivered with Ethernet-based NICs. It was a total surprise to me, because I expected to have FCoE. Long story short: the choice was FC or iSCSI only, and eventually we kept iSCSI. Although iSCSI works perfectly fine, I recommend using FC or FCoE for storage connectivity, especially if you are going to boot from SAN. Not because I prefer FC, but because on UCS it is more straightforward and has some advantages. Also, most network devices (like Nexus switches) already have QoS configured that classifies traffic as Ethernet or FC.
vNICs (virtual Network Interface Cards):
For the Hyper-V Cluster we needed the following vNICs:
- iSCSI-A
- iSCSI-B
- Management
- Cluster
- Live Migration
- VM-Ethernet-A
- VM-Ethernet-B
In the beginning we also had vNICs called ‘VM-iSCSI-A’ and ‘VM-iSCSI-B’. We added those vNICs to offer raw iSCSI connectivity within VMs (guest OS), but one year later we removed them because we did not use them anymore. I mention it because it shows how easy it is to add extra vNICs and keep that traffic separated, without having to share ‘iSCSI-A’ or ‘iSCSI-B’ with your VMs.
As mentioned in the previous parts, you don’t use NIC Teaming on UCS servers; UCS offers FF (Fabric Failover) instead. It is up to you how you distribute the vNICs between Fabric A and B, and whether you use FF or not. To illustrate, I configured the vNICs as follows:
vNICs (iSCSI for the host OS):
The vNICs ‘iSCSI-A’ and ‘iSCSI-B’ (iSCSI for the host OS) each connect to a different Fabric (A or B), without FF. FF is disabled because we use MPIO (Multipath I/O). Although FF can work for iSCSI, it is certainly not recommended; this is even stated in a Cisco/NetApp white paper.
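On the Windows side this translates to MPIO plus one iSCSI session per fabric. A minimal sketch, assuming example portal and initiator IP addresses and that the two iSCSI vNICs have static IPs in two subnets:

```powershell
# Install MPIO and let it claim iSCSI devices
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register the NetApp target portals, one reachable via each fabric (example addresses)
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.10
New-IscsiTargetPortal -TargetPortalAddress 10.0.11.10

# Connect one persistent, multipath-enabled session per fabric,
# bound to the IP of the matching iSCSI vNIC (example addresses)
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
    -TargetPortalAddress 10.0.10.10 -InitiatorPortalAddress 10.0.10.101   # via iSCSI-A
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true `
    -TargetPortalAddress 10.0.11.10 -InitiatorPortalAddress 10.0.11.101   # via iSCSI-B
```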
vNICs (Management for the host OS):
The vNICs ‘Management‘, ‘Cluster‘ and ‘Live Migration‘ (Management for the host OS) connect to Fabric A, with FF to Fabric B. Of course you can distribute them between Fabric A and B, but for simplicity we kept all three on Fabric A.
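Within the OS these three vNICs simply show up as separate network adapters. On the cluster side you can then steer the traffic per network. A sketch, assuming the cluster networks have been renamed after the vNICs and using an example Live Migration subnet:

```powershell
# Cluster network roles: 1 = cluster traffic only, 3 = cluster and client traffic
(Get-ClusterNetwork -Name "Live Migration").Role = 1
(Get-ClusterNetwork -Name "Cluster").Role = 1
(Get-ClusterNetwork -Name "Management").Role = 3

# Allow live migrations only over the Live Migration subnet (example subnet)
Enable-VMMigration
Add-VMMigrationNetwork 10.0.3.0/24
```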
vNICs (vSwitches for the guest OS):
The vNICs ‘VM-Ethernet-A’ and ‘VM-Ethernet-B’ (vSwitches for the guest OS) connect to Fabric A and Fabric B respectively, each with FF to the other Fabric. The reason we created two of them is that we are hosting a secure multi-tenancy environment. We wanted to distribute the tenants between the Fabrics to have as much bandwidth available as possible. We gave each tenant an ID number: the odd numbers connect to Fabric A, and the even numbers connect to Fabric B.
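A hedged sketch of the corresponding vSwitch configuration on each host, assuming the Windows adapter names match the vNIC names (the VM name and VLAN ID are just examples):

```powershell
# One external vSwitch per fabric-facing vNIC; the management OS does not
# need an interface on these switches, so AllowManagementOS is disabled
New-VMSwitch -Name "VM-Ethernet-A" -NetAdapterName "VM-Ethernet-A" -AllowManagementOS $false
New-VMSwitch -Name "VM-Ethernet-B" -NetAdapterName "VM-Ethernet-B" -AllowManagementOS $false

# Odd tenant IDs land on Fabric A, even tenant IDs on Fabric B (example VM and VLAN)
Connect-VMNetworkAdapter -VMName "TENANT07-VM01" -SwitchName "VM-Ethernet-A"
Set-VMNetworkAdapterVlan -VMName "TENANT07-VM01" -Access -VlanId 107
```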
To get the seven vNICs you simply add them to the Server Profile. In UCS Manager you cannot label them exactly as shown above; you have to give them short names, which cannot be renamed afterwards. So it is essential to keep them logical or to keep track of their purpose. In UCS Manager the vNICs are shown like this:
To give you a better understanding this is also another view within a UCS Server Profile:
You might ask, why do you see ‘iSCSI_Eth0’ and ‘iSCSI_Eth1’ twice? Well, to configure boot from iSCSI you need to configure a so-called overlay iSCSI vNIC on top of the regular vNIC. If you don’t boot from SAN (with iSCSI), you won’t need an overlay vNIC to support iSCSI connectivity.
When you add a vNIC you also have to configure the right policies and assign VLANs. For instance:
- You want to connect it to Fabric A and enable FF
- You want to connect it to certain VLANs
- You want to apply an MTU size of 1500 or 9000 (Jumbo Frames)
- You want to apply a certain Adapter Policy
- You want to apply a certain QoS Policy
- You might want to apply a certain VMQ Policy
The following is just an example of the vNIC properties:
Here is an example of an Adapter Policy (for an Ethernet adapter in Windows):
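From within Windows you can quickly verify that the UCS-side settings (MTU, RSS, VMQ) actually landed on the vNIC, for example (assuming the adapters have been renamed after the vNICs):

```powershell
# Jumbo frames, RSS and VMQ as seen by the OS
Get-NetAdapterAdvancedProperty -Name "iSCSI-A" -RegistryKeyword "*JumboPacket"
Get-NetAdapterRss -Name "Management"
Get-NetAdapterVmq -Name "VM-Ethernet-A"
```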
VMQ Policy:
If you use Hyper-V and have 10GbE vNICs available for vSwitches, you should definitely use VMQ. Most network interfaces come with a pre-defined number of queues; for example, an Intel X520/540 10GbE NIC offers 64 VMQs per port. If you remember from part 2, a Cisco VIC 1240 + Port Expander allows you to create 256 vNICs or vHBAs per server. This number also defines the number of RSS or VMQ queues you can assign to a server. In our case we added 7 vNICs (plus the 2 overlay iSCSI vNICs) to the Server Profile, which leaves 256 - 9 = 247 VMQs that can be assigned to vNICs with a VMQ Policy.
So if you add a vNIC that is going to be used for a vSwitch (like ‘VM-Ethernet-A’) and you want 32 VMQs available, you should configure a VMQ Policy with 33 VMQs and apply that policy to the vNIC. Here is an example of a VMQ Policy:
NOTE: One VMQ is always reserved for the system; that is why you add one more, which comes down to 33.
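On the Windows side you can check how many queues each vNIC actually received and spread them over the CPU cores, for example (the processor numbers are example values and depend on your core count and Hyper-Threading):

```powershell
# Show which adapters have VMQ enabled and how many receive queues they got
Get-NetAdapterVmq | Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues

# Spread the queues of the vSwitch vNICs over different cores (example values)
Set-NetAdapterVmq -Name "VM-Ethernet-A" -BaseProcessorNumber 2 -MaxProcessors 8
Set-NetAdapterVmq -Name "VM-Ethernet-B" -BaseProcessorNumber 18 -MaxProcessors 8
```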
Adapter Properties (Windows):
Unlike normal NICs (e.g. Intel or Broadcom), you will notice that the properties of a vNIC (hosted on a Cisco VIC) are somewhat limited:
In fact, it is not limited at all; it is just that everything is controlled by UCS. This way you can be sure UCS applies the optimal settings to the vNIC.
BIOS Policy:
For a Hyper-V Server there are some best practices for the BIOS Policy. Here are a few examples:
It is too much detail to show our entire configuration, but I think you get the idea.
Other than this it is just a normal Hyper-V Server and Hyper-V Cluster configuration/deployment, as you would with any type of hardware.
P.S.: I published this part only recently. I am going to review it and might add some more information later. Please be aware that Cisco has many validated designs that describe the entire deployment in detail.
Ok, that’s it for now. I hope this part was informative. Click on the link below to continue with the next part.