You cannot connect multiple NICs to a single Hyper-V vSwitch without teaming on the host


Recently I got a question on whether a Hyper-V virtual switch can be connected to multiple NICs without teaming. The answer is no. You cannot connect multiple NICs to a single Hyper-V vSwitch without teaming on the host.

This question makes sense as many people are interested in the ease of use and the great results of SMB Multichannel when it comes to aggregation and redundancy. But the answer lies in the name “SMB”: it’s only available for SMB traffic. Believe it or not, there is still a massive amount of network traffic that is not SMB, and all of that traffic has to pass through the Hyper-V vSwitch.
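As an aside, it’s easy to check whether SMB Multichannel is actually doing its thing for your SMB traffic. A quick sketch using the built-in SMB cmdlets (run against whatever file server connection you happen to have open):

```powershell
# Which client NICs are eligible for SMB Multichannel (RSS / RDMA capable)?
Get-SmbClientNetworkInterface

# Show the active SMB Multichannel connections; multiple entries per file
# server indicate the traffic is indeed being spread over multiple NICs
Get-SmbMultichannelConnection
```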

What can we do?

This means that any redundancy scenario that needs to support traffic other than SMB 3 requires a different solution than SMB Multichannel. Basically, this means using NIC teaming on the server. In the pre-Windows Server 2012 era that meant 3rd party products. Since Windows Server 2012 it means native LBFO (switch independent, static or LACP). In Windows Server 2016, Switch Embedded Teaming (SET) was added to your choice of options. SET only supports switch independent teaming (for now?).
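For reference, a minimal sketch of both approaches; the NIC names “NIC1” and “NIC2” are placeholders for your own physical adapters:

```powershell
# Classic LBFO: create a switch independent team, then bind the vSwitch to it
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "vSwitch-LBFO" -NetAdapterName "HostTeam" -AllowManagementOS $true

# Windows Server 2016 SET: the vSwitch itself teams the NICs (switch independent only)
New-VMSwitch -Name "vSwitch-SET" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true
```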

If redundancy at the vSwitch level is not an option you can use multiple vSwitches, each connected to a separate NIC and physical switch, with Windows native LBFO inside the guests. That works, but it’s a lot of extra work and overhead, so you only do this when it makes sense. One such example is SR-IOV, which isn’t exposed on top of an LBFO team.
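A rough sketch of that guest-teaming setup with SR-IOV, again with placeholder NIC, switch and VM names; the actual team is then created inside the guest OS with its own native LBFO:

```powershell
# Two separate vSwitches, each on its own physical NIC, with SR-IOV enabled
New-VMSwitch -Name "vSwitch-SRIOV-1" -NetAdapterName "NIC1" -EnableIov $true
New-VMSwitch -Name "vSwitch-SRIOV-2" -NetAdapterName "NIC2" -EnableIov $true

# Give the VM a vNIC on each vSwitch and allow them to use SR-IOV;
# inside the guest you then team GuestNIC1 and GuestNIC2
Add-VMNetworkAdapter -VMName "MyVM" -SwitchName "vSwitch-SRIOV-1" -Name "GuestNIC1"
Add-VMNetworkAdapter -VMName "MyVM" -SwitchName "vSwitch-SRIOV-2" -Name "GuestNIC2"
Set-VMNetworkAdapter -VMName "MyVM" -IovWeight 100
```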

A 1st look at Discrete Device Assignment in Hyper-V

Let’s take a first look at Discrete Device Assignment in Hyper-V.

Discrete Device Assignment (DDA) is the ability to take suitable PCI Express devices in a server and pass them through directly to a virtual machine.

This sounds a lot like the SR-IOV we got in Windows Server 2012. It might also make you think of virtual Fibre Channel in a VM, where you get access to the FC HBA on the host. The concept is similar in that it gives the capabilities of physical hardware to a virtual machine.

But this is about a new Windows Server 2016 capability that takes all of this a lot further. The biggest use cases for this seem to be GPUs and NVMe disks. In environments where absolute speed matters, the gains from doing DDA are worth the effort. Especially if this ever works well with live migration (it doesn’t yet as far as I can see right now, and it’s probably quite a challenge).
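To give you an idea of what it looks like, here’s a minimal sketch of assigning a device to a VM. The VM name and the PCIe location path below are placeholders; you need to look up the real location path for your device first (the survey script mentioned below will show it, or Device Manager will):

```powershell
# DDA requires the VM to be hard powered off on host shutdown
Set-VM -Name "DDA-VM" -AutomaticStopAction TurnOff

# PCIe location path of the device (placeholder value)
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# Take the device away from the host and hand it to the VM
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName "DDA-VM"

# And to give it back to the host later:
# Remove-VMAssignableDevice -LocationPath $locationPath -VMName "DDA-VM"
# Mount-VMHostAssignableDevice -LocationPath $locationPath
```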

There is a great series of blog posts on DDA.

Microsoft also has a survey script available to find potential DDA devices on your hosts. It will report on which of them meet the criteria for passing them through to a VM.

https://github.com/Microsoft/Virtualization-Documentation/tree/master/hyperv-samples/benarm-powershell/DDA
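If you’d rather poke around manually than run the full script, the PnP cmdlets expose the same kind of information. The filter below is just an illustration of listing PCI devices and the location path that DDA works with:

```powershell
# List present PCI devices with the location path DDA needs
Get-PnpDevice -PresentOnly |
    Where-Object { $_.InstanceId -like "PCI*" } |
    ForEach-Object {
        # The first entry in LocationPaths is the PCIROOT(...) style path
        $paths = ($_ | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths).Data
        [PSCustomObject]@{ Name = $_.FriendlyName; LocationPath = $paths[0] }
    }
```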

That script is a very nice educational tool by the way. When I look at one of my servers I find that I have a couple of devices that are potential candidates:

So I’m looking at the Mellanox NICs. I wondered whether this is the first step toward RDMA in the guest. Probably not. The way I see the network stack evolving in Windows Server 2016, that’s the place where this will be handled.


And the NVIDIA GRID K1, one of the poster children of DDA and needed to compete in the high-end VDI market.


And yes, the Emulex FC HBAs in the server. This is interesting and I’m curious about the vFC versus DDA-with-FC story, but I have no further info on this. vFC is still there in W2K16 and I don’t think DDA will replace it.


And finally it showed me the PERC controller as a potential candidate. Trying to get a PERC 730 exposed to a VM with DDA sounds like an experiment I might just spend some evening hours on, just to see where it leads. But only NVMe is supported. But hey, a boy can play, right?


So we have some lab work to do and we’ll see where we end up when we get TPv5 in our hands. What I also really need to get my hands on are some NVMe disks. But soon I’ll publish my findings on configuring this with an NVIDIA GRID K1 GPU.
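In the meantime, a hint of the extra step a GPU needs on top of the generic DDA commands above: GPUs typically require additional MMIO space to be reserved for the VM before the device is assigned. A hedged sketch, with a placeholder VM name and illustrative values you’d size for your own GPU:

```powershell
# Reserve extra MMIO space for the VM; the sizes below are illustrative only
Set-VM -Name "VDI-VM" -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3Gb -HighMemoryMappedIoSpace 33280Mb
```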