This is part 2 of the following blog series:
- My first FlexPod! (Part 1 – Introduction)
- My first FlexPod! (Part 2 – Hardware Overview)
- My first FlexPod! (Part 3 – Hardware Configuration)
- My first FlexPod! (Part 4 – Quality of Services)
- My first FlexPod! (Part 5 – Hyper-V Cluster)
- My first FlexPod! (Part 6 – System Center)
Hardware Overview:
If you have read part 1, you should now know a bit more about a Cisco FlexPod. As mentioned, a Cisco FlexPod is a reference architecture and can be combined with any hardware you want, which makes it very flexible and scalable. With the budget available at that time we started with the following hardware infrastructure:
Although this picture might look nice to you, it is quite useless if you don’t know what it is all about. So I have to give you some detailed information about the hardware and its capabilities. I am afraid this is going to be a long blog. But hey, let’s go for it!
Compute:
The compute platform shown above, which is what most of this blog is about, is completely based on Cisco UCS and in this setup contains the following components:
- 2x Cisco UCS 6248UP Fabric Interconnects
- 1x Cisco UCS 5108 Blade Chassis
- 2x Cisco UCS 2208XP I/O Modules
- 8x Cisco UCS B200 M3 Blade Servers
- 1x Cisco UCS C24 M3 Rack Server
Cisco UCS 6248UP Fabric Interconnects:
It all starts with two Fabric Interconnects, from now on referred to as FI’s. FI’s are based on NX-OS software. They play an extremely important role and can be considered the brain center of the entire UCS platform.
Cisco UCS 6200 Series Fabric Interconnects
http://www.cisco.com/en/US/products/ps11544/index.html
The two FI’s are configured as a highly available active/active cluster. They are interconnected with 2x 1GbE, which acts as the management plane. There is no data plane between the FI’s. The first FI is configured as the so-called ‘Fabric A’ and the second FI as ‘Fabric B’, which can be referred to as the ‘left side’ and ‘right side’. Once they are configured you can connect uplink switches, storage devices, UCS Fabric Extenders and UCS Servers.
Switching Modes:
Although an FI is not a switch, it can do L2 switching in two switching modes. The default switching mode is called ‘End Host Mode’. In a nutshell: with End Host Mode the FI’s can do local L2 switching for directly connected UCS servers and storage appliances, while presenting themselves to the uplink switches as a host instead of a switch. The uplink switches see the FI’s as big servers (edge network devices). One advantage of this architecture is that there is no spanning tree involved. The other switching mode is called ‘Switch Mode’, but Switch Mode is not commonly used and currently out-of-scope for this blog. If you want to know more about the switching modes, I highly recommend watching the following video from Brad Hedlund:
Cisco UCS Networking, Switching modes of the Fabric Interconnects
https://www.youtube.com/watch?v=kQ5Bu-Xx1s4
Unified Ports:
A Cisco UCS 6248UP FI has 32 Unified Ports and one expansion slot. Each Unified Port can be individually configured to support line-rate, low-latency, lossless 1/10Gb Ethernet, 2/4/8Gb Fibre Channel (FC) or 10Gb Fibre Channel over Ethernet (FCoE). Whatever the usage, each port is configured as either Ethernet or Fibre Channel. Specific to UCS, each individual Unified Port can be configured as one of the following port types:
- Ethernet:
  - Server Port
  - Uplink Port
  - FCoE Uplink Port
  - FCoE Appliance Port
  - Appliance Port
  - SPAN Port
- Fibre Channel:
  - FC Uplink Port
  - FC Appliance Port
  - SPAN Port
As you may understand, this is one of the reasons why so many network and storage devices can be connected, which makes UCS highly scalable. This doesn’t necessarily mean you have to connect your storage devices directly to the FI’s. It is still common (and probably best practice) to uplink other switches (such as Nexus or Catalyst switches) to the FI’s. In fact, that is exactly what we do; we have an uplink to two Nexus 5548UP switches, which act as both network and storage switches.
UCS Manager:
As described in part 1, Cisco UCS Manager runs on top of the FI’s and controls the entire compute platform. All configuration and even (remote) monitoring is done through UCS Manager. UCS Manager provides an intuitive GUI (web console) and a CLI. There is also a PowerShell Toolkit available, which can be quite useful for automation.
Cisco UCS Management
http://www.cisco.com/c/en/us/products/servers-unified-computing/cisco_ucs_management.html
So for instance, if you want to make changes on one or more UCS servers, such as BIOS settings, or add a VLAN to vNICs, you use the UCS Manager GUI. If you want to automate, you use PowerShell against UCS Manager. If you want SCOM (System Center Operations Manager) to monitor all compute devices, you let SCOM connect with UCS Manager. There is also an SCVMM add-in available. If you want to use a KVM on physical servers, you use the UCS Manager GUI. Simple and at the same time very powerful.
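To give you an idea of what that automation looks like, here is a minimal sketch using the Cisco UCS PowerTool module (the PowerShell Toolkit mentioned above). The management IP, credentials and VLAN name/ID below are placeholders, and module and property names may differ slightly between PowerTool versions:

```powershell
# Minimal sketch with Cisco UCS PowerTool; all values below are placeholders.
Import-Module Cisco.UCSManager

# Connect to UCS Manager via the cluster IP of the Fabric Interconnects
$handle = Connect-Ucs -Name 10.0.0.10 -Credential (Get-Credential)

# Inventory: list the discovered Blade Servers with CPU and memory details
Get-UcsBlade | Select-Object Dn, SlotId, NumOfCpus, TotalMemory

# Add a VLAN to the LAN cloud so it can later be assigned to vNICs
Get-UcsLanCloud | Add-UcsVlan -Name "VLAN-200-Tenant" -Id 200

Disconnect-Ucs -Ucs $handle
```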
UCS Domain:
A single FI cluster (or a single standalone FI) is what makes up a so-called ‘UCS Domain’. And a single Cisco UCS Domain currently supports a total of 20 Blade Chassis or 160 servers! These don’t necessarily have to be Blade Servers; it can be a combination of both. See the following combination examples:
- 160 servers = 20 Blade Chassis (with 8 half-width Blade Servers each)
- 160 servers = 10 Blade Chassis (with 8 half-width Blade Servers each) + 80 Rack Servers
- 160 servers = 10 Blade Chassis (with 4 full-width Blade Servers each) + 120 Rack Servers
- 160 servers = 160 Rack Servers
Of course, if you want to connect 160 Rack Servers you won’t have enough ports directly on the FI’s. A solution is to add Cisco UCS Fabric Extenders, which provide you with additional 10GbE Server Ports to connect more Rack Servers. The I/O Modules in a Blade Chassis already function as a Fabric Extender. The following example represents a single UCS Domain with 20 Blade Chassis:
Ain’t that cool or what? Of course the number of UCS servers you can connect (whether they are in Blade Chassis or are Rack Servers) depends on the number of Server Ports you use to connect them. You might ask, what are Server Ports? I will explain that in a moment.
NOTE: If you want to manage multiple UCS Domains in one GUI, for instance located in multiple datacenters, you have the option to implement UCS Central. But for this blog UCS Central is out-of-scope.
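As a side note, the PowerShell Toolkit also makes it easy to see how far along you are towards those domain limits. The following is a rough sketch; it assumes an existing Connect-Ucs session (as in the earlier example), and the maximums are simply the ones quoted above:

```powershell
# Count how many of the 20 chassis / 160 servers in this UCS Domain are in use.
# Assumes an active Connect-Ucs session.
$chassis   = @(Get-UcsChassis)
$blades    = @(Get-UcsBlade)
$rackUnits = @(Get-UcsRackUnit)

Write-Output ("Blade Chassis : {0} (max 20 per UCS Domain)" -f $chassis.Count)
Write-Output ("Blade Servers : {0}" -f $blades.Count)
Write-Output ("Rack Servers  : {0}" -f $rackUnits.Count)
Write-Output ("Total servers : {0} (max 160 per UCS Domain)" -f ($blades.Count + $rackUnits.Count))
```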
Server Ports:
Once you have configured two or more Unified Ports on the FI’s as Server Ports, you can connect UCS Blade Chassis or UCS Rack Servers. UCS Manager will automatically detect the hardware and treat it as compute resources. Those UCS-based compute resources are then fully controlled and managed by UCS Manager.
Cisco UCS 5108 Blade Chassis:
A Cisco UCS 5108 Blade Chassis has space for eight ‘half-width’ or four ‘full-width’ form factor Cisco UCS B-Series Blade Servers, or a combination of both. See the following example:
Cisco UCS 5100 Series Blade Server Chassis
http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-5100-series-blade-server-chassis/index.html
Power Supply Units & Cooling Fans:
The Blade Chassis we use has four 16A-2500W hot-swappable PSU’s (Power Supply Units) and eight Cooling Fans. These redundant PSU’s can be independently configured, shut down and put into standby mode. You can even set a maximum power cap (limit) to prevent the chassis from exceeding the power budget of your datacenter environment.
Cisco UCS 2208XP I/O Modules (Fabric Extenders):
Our Blade Chassis is equipped with two Cisco UCS 2208XP I/O Modules (also known as Fabric Extenders), which are positioned in the back of the Blade Chassis. See the following example:
These I/O Modules provide the following connectivity:
- Backplane Ports: Each I/O Module has four fixed 10GbE ‘Backplane Ports’ per blade slot, which are directly connected to the I/O Adapter of each Blade Server. With two I/O Modules, each half-width Blade Server effectively has a maximum of 8x 10GbE of backplane connectivity (depending on the type of I/O Adapter used in the Blade Server).
- Fabric Ports: Each I/O Module has eight 10GbE (SFP) ‘Fabric Ports’ available, which can be connected with 1x, 2x, 4x or 8x 10GbE to an FI. With two I/O Modules, each Blade Chassis effectively has a 2x, 4x, 8x or 16x 10GbE connection to the FI’s.
As the following example illustrates each Fabric Port (on the I/O Module) is connected to a Server Port (on the FI):
Each I/O module can be connected with 1x, 2x, 4x or 8x 10GbE SFP (per Fabric). So as mentioned, the number of Blade Chassis you can connect depends on the number of Server Ports (Fabric Ports) used to connect them. Based on workload requirements you have to decide how much bandwidth you want to have available on each Blade Chassis. Please refer to the Cisco documentation for more detailed information.
We have connected our Blade Chassis with 2x 10GbE on each Fabric, which is quite common. With both Fabrics (A and B) this results in 4x 10GbE of total bandwidth to the FI’s. When needed, we can connect more Fabric Ports to increase the bandwidth. Because it is a converged infrastructure, this does not require any modifications or re-configuration on the physical Blade Servers. vNIC’s and vHBA’s run on top of this connectivity.
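If you want to play with that bandwidth arithmetic yourself, here is a trivial PowerShell sketch; the numbers reflect our 2x 10GbE per Fabric cabling and can be adjusted to your own design:

```powershell
# Aggregate chassis uplink bandwidth to the FI's, based on how many
# Fabric Ports per I/O Module are cabled (1, 2, 4 or 8 per Fabric).
$linkSpeedGb    = 10   # each Fabric Port is 10GbE
$linksPerFabric = 2    # our setup: 2x 10GbE per I/O Module (per Fabric)
$fabrics        = 2    # Fabric A and Fabric B

$totalLinks = $linksPerFabric * $fabrics
$totalGbps  = $totalLinks * $linkSpeedGb
Write-Output ("Chassis uplink: {0}x 10GbE = {1} Gbps total" -f $totalLinks, $totalGbps)
```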
NOTE: The Cisco UCS 5108 Blade Chassis is future-ready for 40GbE standards, but that would require you to replace the I/O Modules and FI’s.
Cisco UCS B200 M3 Blade Servers:
Cisco has a full product line of B-Series Blade Servers available, which of course are configurable as desired. We have chosen to start with eight Cisco UCS B200 M3 Blade Servers, each with the following hardware specifications:
- Cisco UCS B200 M3 Blade Server:
  - 2x Intel Xeon E5-2650 (2.0GHz, 8 cores)
  - 128GB (16x 8GB DDR3-1600MHz RDIMM)
  - Cisco VIC 1240 + Port Expander (Mezzanine Adapter)
Stateless Computing (Boot from SAN):
Our Blade Servers have no local disks, because we implemented boot from SAN. This enables so-called ‘stateless computing’. With stateless computing you have the option to move the entire configuration of a UCS server to another server without re-installing the Operating System.
Cisco VIC 1240 + Port Expander:
All our Blade Servers are equipped with a Cisco VIC (Virtual Interface Card) 1240 and a Cisco Port Expander for VIC 1240 (Mezzanine Adapter).
Cisco UCS Virtual Interface Card 1240
http://www.cisco.com/en/US/products/ps12377/index.html
A Cisco VIC 1240 is a ‘CNA (Converged Network Adapter)‘ that supports both Ethernet and Fibre Channel (over Ethernet). A CNA allows you to create vNIC’s (virtual Network Interface Cards) and vHBA’s (virtual Host Bus Adapters). vNIC’s and vHBA’s are presented to the Operating System as PCI-Express adapters.
Personally, I am a very big fan of CNA’s, because they offer a lot of flexibility for Hyper-V servers. As you know, a Hyper-V Cluster has a few requirements in terms of network interfaces; you need quite a few. At the same time you want to utilize certain network optimization features (such as RSS, VMQ and maybe SR-IOV) and offer failover capabilities (such as Fabric Failover and MPIO). With a CNA you can have all that, with only a few cables.
A Cisco VIC 1240 + Port Expander allows you to create 256 vNIC’s/vHBA’s. And it supports many network optimization features, such as RSS (Receive Side Scaling), VMQ (Virtual Machine Queuing) and SR-IOV (Single Root I/O Virtualization). You might ask, why on earth do you need 256 vNIC’s? Well, there are scenarios where this is useful. For example, in VDI solutions where you want each VDI session to have its own vmNIC with SR-IOV enabled. These 256 interfaces also define the maximum number of VMQ queues that are available per Blade Server. When you configure eight vNIC’s you still have 248 queues available for VMQ, which you can spread across the vNIC’s that have a vSwitch configured. I am not going into detail about VMQ right now, but it is certainly a winner.
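To make that a bit more tangible, here is a small sketch using the built-in NetAdapter cmdlets on a Hyper-V host. The adapter names are placeholders for our vNIC’s and the processor values are purely illustrative; I will come back to the actual design in part 5:

```powershell
# Check which (v)NICs have VMQ enabled and how many receive queues they expose
Get-NetAdapterVmq | Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues

# Spread the VMQ queues of a vSwitch-facing vNIC across a range of CPU cores (illustrative values)
Set-NetAdapterVmq -Name "vNIC-VMTraffic-A" -BaseProcessorNumber 2 -MaxProcessors 8

# RSS settings can be inspected the same way for vNICs without a vSwitch
Get-NetAdapterRss -Name "vNIC-LiveMigration-A"
```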
Cisco UCS C24 M3L Rack Server:
Cisco has a full product line of C-Series Rack Servers available, which of course are configurable as desired. We started with one Cisco UCS C24 M3L Rack Server. It has the following hardware specifications:
- Cisco UCS C24 M3L Rack Server:
  - 1x Intel Xeon E5-2450 (2.5GHz, 8 cores)
  - 32GB (4x 8GB DDR3-1600MHz RDIMM)
  - 12x Seagate 3TB SAS (7.2K RPM 3.5-inch HDD)
  - LSI 6G MegaRAID 9240-8i (RAID 0/1/5/10)
  - Cisco VIC 1225 (Dual Port 10GbE SFP+)
  - 2x 650W Power Supply
This Rack Server functions as our Backup Server, hosting SCDPM (System Center Data Protection Manager). There are two enclosure versions available: 12x LFF (Large Form Factor) or 24x SFF (Small Form Factor) drive bays. The M3L is the LFF version.
Operational Modes:
A C-Series server can operate in either ‘Standalone Mode’ or ‘UCS Domain Mode’. In Standalone Mode the Rack Server is managed by the built-in software, known as the ‘CIMC (Cisco Integrated Management Controller)’. In UCS Domain Mode the Rack Server is fully integrated into the UCS Domain; the CIMC no longer manages the server. Instead it is managed by UCS Manager, and you can take advantage of all Cisco UCS features, like creating vNIC’s/vHBA’s, Fabric Failover, moving or modifying Service Profiles and much more.
As you can imagine, UCS Domain Mode is the preferred option because it offers much more flexibility. Our Rack Server is operating in UCS Domain Mode. The Service Profile configuration is not really special, just a bunch of disks and one vNIC with Fabric Failover enabled.
Cisco VIC 1225:
Our UCS Rack Server is equipped with a Cisco VIC (Virtual Interface Card) 1225. It’s a PCI-Express x16 adapter with two 10Gb (SFP) Unified Ports. These Unified Ports can be configured as Ethernet or Fibre Channel, and the card supports FCoE. A VIC 1225 supports 256 vNIC’s/vHBA’s.
NOTE: We have connected the VIC directly to the FI’s, one port to each Fabric. In the past you would need a Fabric Extender to connect a C-Series Rack Server, just like a Blade Chassis has its I/O Modules (Fabric Extenders). But since UCS software version 2.2 a feature called ‘UCS Direct Connect’ is supported, which allows you to directly connect a C-Series Rack Server to the FI’s.
Network:
Cisco UCS plays a big part in terms of networking for the compute platform. But you still need more network devices to implement an entire environment, such as an L3 switch and a firewall.
As mentioned, although the FI’s can do local L2 switching, they are certainly not switches. The FI’s are part of the compute system, which has to connect to your LAN (Local Area Network) and SAN (Storage Area Network). It is not uncommon to use separate switches, one for your LAN and one for your SAN. A Cisco FlexPod reference architecture includes two datacenter switches (from the Nexus family) that are used for both LAN and SAN. Of course you can select any type/model that suits your requirements.
In contrast to the previous overview, the following figure is an abstract connectivity overview of our (network) components:
As shown above our setup contains the following network components:
- 2x Cisco ASA 5515-X Firewall
- 2x Cisco Catalyst 3850-24T-E Core Switch
- 2x Cisco Nexus 5548UP Access Switch
Cisco Nexus 5548UP Switches:
As part of a Cisco FlexPod reference architecture we have implemented two Cisco Nexus 5548UP Switches. Nexus switches are based on NX-OS software. These switches are very powerful and we use them as Access Switches for both LAN and SAN.
Cisco Nexus 5000 Series Switches
http://www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html
Unified Ports:
A Cisco Nexus 5548UP has 32 Unified Ports (SFP) and one expansion slot. Each Unified Port can be individually configured to support line-rate, low-latency, lossless 1/10Gb Ethernet, 2/4/8Gb Fibre Channel (FC) or 10Gb Fibre Channel over Ethernet (FCoE). Whatever the usage, each port is configured as either Ethernet or Fibre Channel. There is a management plane and a data plane between the switches. We have connected our Fabric Interconnects, Core Switches and NetApp Storage Controllers to our Access Switches.
Active/Active:
Unlike the Catalyst family, the Nexus family switches are not stackable. They operate as active/active but standalone switches, each with its own configuration. They do support ‘vPC’s (virtual Port-Channels)’ and other advanced features, which make them very suitable for this role.
Layer 2:
When I was designing our infrastructure, we thought about using the Nexus switches as L3 Core Switches. But there are some reasons why we finally decided to use Cisco Catalyst switches instead. By default the Nexus switches are Layer 2 switches. We had the option to add a so-called ‘Daughter Card’ with a software license, which upgrades it to a Layer 3 switch. But we also needed VRF support, which requires an additional license. This would make them much more expensive. Furthermore, all Unified Ports are 10Gb (SFP), which of course is nice, but we needed to connect multiple 1GbE (RJ45) network devices, like our firewalls and many management interfaces. You need an SFP module for each RJ45 cable, which would make the price per port costly. We were also thinking about DCI (Data Center Interconnect) in the near future, which might require other L3 switches at a later stage, and you cannot re-use a daughter card without the switch itself. So we finally decided to implement Cisco Catalyst switches (with 10GbE uplink ports) as our L3 Core Switches. If one day we need to migrate those L3 switches, they are easily replaceable and reusable.
NOTE: Please keep in mind this may not be the right choice for everybody; this is just a decision that fits our needs.
Cisco Catalyst 3850-24T-E Switches:
We have implemented two Cisco Catalyst 3850-24T-E Switches. Catalyst switches are based on IOS software. We use those switches as Core Switches for our LAN.
Cisco Catalyst 3850 Series Switches
http://www.cisco.com/c/en/us/products/switches/catalyst-3850-series-switches/index.html
Gigabit Ethernet and Ten Gigabit Ethernet:
A Cisco Catalyst 3850-24T-E has 24x 1GbE (RJ45) and 4x 10GbE (SFP) ports. We use the 1GbE ports for connectivity with our firewalls, private lines and management interfaces. And we use the 10GbE ports for uplink connectivity with our Access Switches (Nexus).
Active/Active (StackWise):
The Core Switches are stacked using Cisco StackWise technology. They act as a single switch, with a single hostname and IOS configuration. This provides many benefits and significantly simplifies the configuration.
Layer 3 and VRF:
We have specifically chosen the E software version, which supports L3 switching with IP Services. IP Services includes a feature called ‘VRF (Virtual Routing and Forwarding)’. We use VRF for Secure Multi-Tenancy. You might ask, what is VRF? I will explain VRF in part 3.
Cisco ASA 5515-X Firewalls:
We have implemented two Cisco ASA 5515-X firewalls. These firewalls run the Cisco ASA software (with an IOS-like CLI). We use them as our Front-end Firewalls.
Cisco ASA 5500-X Series Next-Generation Firewalls
http://www.cisco.com/c/en/us/products/security/asa-5500-series-next-generation-firewalls/index.html
Active/Active (Multiple Context Mode):
These firewalls are security appliances that support three high-availability modes. Our security appliances are configured in so-called ‘Multiple Context Mode’. With Multiple Context Mode you can partition the appliances into multiple virtual firewall instances, known as ‘Security Contexts’. Each security context is an independent device, with its own security policy, network interfaces and administrators. Although Multiple Context Mode is described as an active/active solution, in reality it is an active/passive solution using load distribution, not load balancing: each security context runs active on one appliance and passive (standby) on the other. So for example, when you host three security contexts, you can have two active on the first appliance and one active on the second appliance, with failover capabilities between the appliances of course.
At the time of writing we host three security contexts and it works very well. I don’t want to go into too much detail because it is a bit out-of-scope for this blog. So let’s continue with the next part…
Storage:
Storage is not my main expertise, and given the length of this blog already, I don’t want to go into detail about the storage.
NEXT >>> My first FlexPod! (Part 3 – Hardware Configuration)