Setting a static MAC address on a guest NIC team in Hyper-V

Introduction

Before we talk about setting a static MAC address on a guest NIC team in Hyper-V, let's go back to Ubuntu Linux. Do you remember my blog post about configuring an interface bond in a Ubuntu Hyper-V guest? If not, please read it, as what I did there got me thinking about setting a static MAC address on a guest NIC team in Hyper-V.

Ubuntu network bond

As you have read by now in the blog post I linked to above, we need to enable MAC spoofing on both vNIC members of an interface bond in a Ubuntu virtual machine on Hyper-V. Only then will you have network connectivity and be able to get a DHCP address. On Ubuntu (or Linux in general), the bond interface has a generated MAC address assigned. It does not take one of the MAC addresses of the member vNICs. That is why we need MAC spoofing enabled on both member vNICs in the Hyper-V settings for this to work! In a Windows guest, you will find that the LBFO team gets one of the MAC addresses of its member vNICs assigned, so it does not require MAC spoofing. During failover, it will swap to the other one.

Setting a static MAC address on a guest NIC team in Hyper-V

In Ubuntu, you can set a chosen static MAC address on a bond and on the member interfaces inside the guest operating system. Would we be able to do the same with a NIC team in a Windows Server guest virtual machine? Well, yes! It sounds like a dirty hack inspired by Linux bonding, which might be way beyond anything resembling a supported configuration. But, if it is allowed for Linux, why not leverage the same technique in Windows?

Configuration walkthrough

We use a mix of MAC address spoofing on the member vNICs with “enable this network adapter to be part of a team in the guest operating system” checked (not actually needed in this case) and a hardcoded MAC address on the team NIC and both member NICs inside the virtual machine. The same MAC address!

The team interface and its members all get the same static MAC address in the guest

First, note the format of the MAC address: no dashes, dots, or colons. Also, that is a lot of clicking. Let's try to do this with PowerShell. Using Set-NetAdapter throws an error because it detects the duplicate MAC address. It protects you against what it thinks is a bad idea.

$TeamName = 'GUEST-TEAM'
Set-NetAdapter -Name $TeamName -MacAddress "14-52-AC-25-DF-74"
ForEach ($MemberNic in $TeamName){
#Get-NetAdapter (Get-NetLbfoTeamMember -Team $MemberNic).Name | Format-Table
Set-NetAdapter (Get-NetLbfoTeamMember -Team $MemberNic).Name  -MacAddress "14-52-AC-25-DF-74"
} 

Set-NetAdapter : The network address 1452AC25DF74 is already used on a network adapter with the name 'Guest-team-member-01'
At line:2 char:1
+ Set-NetAdapter -Name $TeamName -MacAddress "14-52-AC-25-DF-74"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (MSFT_NetAdapter…wisetech.corp"):ROOT/StandardCimv2/MSFT_NetAdapter) [Set-NetAdapter], CimException
    + FullyQualifiedErrorId : Windows System Error 87,Set-NetAdapter
Set-NetAdapter : The network address 1452AC25DF74 is already used on a network adapter with the name 'Guest-team-member-01'
At line:5 char:1
+ Set-NetAdapter (Get-NetLbfoTeamMember -Team $MemberNic).Name  -MacAdd …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (MSFT_NetAdapter…wisetech.corp"):ROOT/StandardCimv2/MSFT_NetAdapter) [Set-NetAdapter], CimException
    + FullyQualifiedErrorId : Windows System Error 87,Set-NetAdapter

You need to use Set-NetAdapterAdvancedProperty. Mind you, the MAC address property is called "MAC Address" for the team and "Network Address" for the team member NICs, just like in the GUI. Use the following code in the guest virtual machine.

$Team = Get-NetLbfoTeam -Name 'GUEST-TEAM'
$MACAddress = "1452AC25DF74"
$TeamName = $Team.Name
#Get-NetAdapterAdvancedProperty -Name $TeamName
Set-NetAdapterAdvancedProperty -Name $TeamName -DisplayName 'MAC Address' -DisplayValue $MACAddress

$TeamMemberNicNames = (Get-NetLbfoTeamMember -Team $TeamName).Name
foreach ($TeamMember in $TeamMemberNicNames){
    #Get-NetAdapterAdvancedProperty -Name $TeamMember
    Set-NetAdapterAdvancedProperty -Name $TeamMember -DisplayName 'Network Address' -DisplayValue $MACAddress
}

Let's check our handiwork with PowerShell.

Verify that the team interface and its members all have the same static MAC address in the guest
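
For reference, a minimal verification sketch along these lines, assuming the $TeamName and $TeamMemberNicNames variables from the snippet above are still in scope:

#Check that the team NIC and the member NICs report the hardcoded MAC address
Get-NetAdapterAdvancedProperty -Name $TeamName -DisplayName 'MAC Address' |
    Format-Table Name, DisplayName, DisplayValue

foreach ($TeamMember in $TeamMemberNicNames) {
    Get-NetAdapterAdvancedProperty -Name $TeamMember -DisplayName 'Network Address' |
        Format-Table Name, DisplayName, DisplayValue
}

#The effective MAC addresses as the guest operating system sees them
Get-NetAdapter | Format-Table Name, MacAddress, Status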

Last but not least, leave the dynamically assigned MAC addresses on the vNIC team members in the Hyper-V settings, but do enable MAC spoofing.

Enable MAC address spoofing
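
If you prefer PowerShell over the GUI for this, a small sketch run on the Hyper-V host, assuming a hypothetical virtual machine named 'GUEST-VM' (filter on the adapter name if you only want to touch the team members):

#Enable MAC address spoofing on the vNICs of the guest virtual machine
Get-VMNetworkAdapter -VMName 'GUEST-VM' |
    Set-VMNetworkAdapter -MacAddressSpoofing On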

Borrowing a trick from Linux for setting a static MAC address on a guest NIC team in Hyper-V

With this setup, we do not need separate virtual switches for each member vNIC for failover to work, but that is still very much advised if you want real failover. First, it sounds filthy, dirty, and rotten, but for lab and demo purposes, go on, be a devil. Secondly, can you use this in production? Yes, you can. Just mind the MAC addresses you assign to avoid conflicts. Now you can tie your backward software license key that depends on a fixed MAC address to a Windows LBFO team in a Hyper-V virtual machine. Why? Because we can. Finally, I should perhaps say that you should not do it, but Linux does, and so can Windows!

Configuring an interface bond in a Ubuntu Hyper-V guest

Introduction

In this post, we take a look at configuring an interface bond in a Ubuntu Hyper-V guest. But first, a quick word about NIC teaming and Hyper-V. In real life, teaming is most often done on physical hardware. But in the lab, or for some edge production cases, you might want to use it in virtual machines. The use case here is virtual machines used for testing and knowledge transfer. We are teaching about creating Veeam Backup & Replication hardened repositories with XFS and immutability. In that lab, we are emulating a NIC team on hardware servers.

When you need redundant, highly available networking for your Hyper-V guests, you normally create a NIC team on the host. You then use that NIC team to make your vSwitch. You can use a traditional LBFO team (deprecated) or a SET (Switch Embedded Teaming) switch. The latter is the current technology and the way forward. But in this lab scenario, I am using LBFO, the native Windows NIC teaming.
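
For reference, both host-side options can be created with PowerShell; a rough sketch only, with hypothetical physical NIC and switch names:

#Option 1: a (deprecated) LBFO team on the host, used as the uplink of a virtual switch
New-NetLbfoTeam -Name 'HostTeam' -TeamMembers 'pNIC1', 'pNIC2' -TeamingMode SwitchIndependent
New-VMSwitch -Name 'vSwitch-LBFO' -NetAdapterName 'HostTeam' -AllowManagementOS $true

#Option 2: a SET switch, the current way forward, teaming the physical NICs directly
New-VMSwitch -Name 'vSwitch-SET' -NetAdapterName 'pNIC1', 'pNIC2' -EnableEmbeddedTeaming $true -AllowManagementOS $true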

99.9% of all use cases will use teaming on the Hyper-V host

Host teaming provides bandwidth aggregation, redundancy, and failover. Typically, you do not mess around with NIC teaming in the guest; host teaming covers 99.99% of cases. Below we see a figure showing guest teaming. You need two physical NICs for genuine redundancy, each with its own virtual switch and uplinked to separate physical switches. Beware that only switch independent teaming is supported in the guest OS, so configure the switches and switch ports accordingly.

Hyper-V in guest NIC teaming

In-guest teaming is rarely used for production workloads, bar some exceptions with SR-IOV, but that is another discussion. However, you might have a valid reason to use NIC teaming for lab work, testing, documenting configurations, teaching, etc. Luckily, that is easy to do. Hyper-V has a setting for your vNICs that enables them to be functional members of a NIC team in a Windows guest OS, as long as that OS supports native teaming. That is the case for Windows Server 2012 and later.

NIC teaming inside a Hyper-V Guest

For each vNIC member of the NIC team in the guest, you must put a checkmark next to "enable this network adapter to be part of a team in the guest operating system". There is nothing more to it. The big caveat here is that each member must reside on a different external vSwitch for failover to work correctly. Otherwise, you will see a "The virtual switch lacks external connectivity" error on the remaining member when failing over, along with packet loss.

Enable NIC teaming on the vNICs that are going to be team members in the Hyper-V settings
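
That checkbox corresponds to the vNIC's AllowTeaming setting, so you can also set it from the host with PowerShell; a sketch, assuming a hypothetical virtual machine named 'GUEST-VM' with vNICs named 'Team-Member-01' and 'Team-Member-02':

#Allow both member vNICs to participate in a team inside the guest OS (run on the Hyper-V host)
Set-VMNetworkAdapter -VMName 'GUEST-VM' -Name 'Team-Member-01' -AllowTeaming On
Set-VMNetworkAdapter -VMName 'GUEST-VM' -Name 'Team-Member-02' -AllowTeaming On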

There is nothing more you need to do to make it work perfectly in a Windows guest VM. As you can see in the image below, both my LAN NIC and the team NIC get an address from the DHCP server.

Functional team in the virtual machine. Do test failover to make sure you got it right.

That's great. But sometimes, I need to have a NIC team inside a Linux guest virtual machine. For example, recently, on Ubuntu 20.04, I went through my typical motions to get in-guest NIC teaming, or bonding in Linux speak. But, much to my surprise, I did not get an IP address from my DHCP server on my Ubuntu 20.04 guest bond. So, what could be the cause?

Configuring an interface bond in a Ubuntu Hyper-V guest

In Ubuntu, we use netplan to configure our networking, and in the image below you can see a sample configuration.

A minimal bond configuration in Ubuntu
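
The screenshot is simply a netplan YAML file; a rough sketch of what it contains (the interface names and bonding mode match the description below, the bond name bond0 is an assumption) would look like this:

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0: {}
    eth1: {}
  bonds:
    bond0:
      interfaces: [eth0, eth1]
      parameters:
        mode: balance-rr
      dhcp4: true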

I have created a bond using eth0 and eth1, and we should get an IP address from DHCP. The bonding mode is balance-rr. But why am I not getting an IP address? I did check the option "Enable this network adapter to be part of a team in the guest operating system" on both member vNICs.

Well, let's look at the NIC interfaces and the bond. There we see something exciting.

Note that the bond and its member interfaces have the same MAC address, which does not come from the Hyper-V host pool

Note that the bond has the same MAC address as both member interfaces. Also note that this MAC address does not come from the Hyper-V host MAC address pool and is not what Hyper-V assigned to the vNICs, as you can see in the image below! That is the big secret.

With MAC addresses unknown to the hypervisor, this smells of something that requires MAC spoofing, doesn't it? So, I enabled it, and guess what? Bingo!

So what is the difference with Windows when configuring an interface bond in a Ubuntu Hyper-V guest?

The difference with Windows is that an interface bond in an Ubuntu Hyper-V guest requires MAC address spoofing. You have to enable MAC spoofing on both vNIC members of the Ubuntu virtual machine bond. The moment you do that, you will see you get a DHCP address on the bond and get network connectivity. But why is this needed? In Ubuntu (or Linux in general), the bond interface and its members have a generated MAC address assigned. It does not take one of the MAC addresses of the member vNICs. So, we need MAC spoofing enabled on both member vNICs in the Hyper-V settings for this to work! In a Windows guest, the LBFO team gets one of the MAC addresses of its member vNICs assigned. As such, this does not require MAC spoofing.

With Ubuntu (Linux), you don't even have to check "enable this network adapter to be part of a team in the guest operating system" on the member vNICs. Note that a guest Linux bond does not need every member interface on a separate vSwitch for failover to work, not even if you enable "enable this network adapter to be part of a team in the guest operating system." However, putting both members on a single vSwitch is still ill-advised when you want real redundancy and failover.

Virtual switch QoS mode during migrations

Introduction

In Shared nothing live migration with a virtual switch change and VLAN ID configuration, I published a sample script. The script works well, but there are two areas of improvement. The first one I covered in Checkpoint references a non-existent virtual switch. This post is about the second one. Here I show that I also need to check the virtual switch QoS mode during migrations. A couple of the virtual machines on the source nodes had absolute minimum and/or maximum bandwidth set. On the target nodes, all the virtual switches are created by PowerShell. This defaults to weight mode for QoS, which is the more sensible option, albeit not always the easiest or most practical one for people to use.

Virtual switch QoS mode during migrations

First, a quick recap of what we are doing. The challenge was to shared nothing live migrate virtual machines to a host with different virtual switch names and VLAN IDs. We did so by adding dummy virtual switches to the target host. This made shared nothing live migration possible. On arrival of the virtual machine on the target host, we immediately connect the virtual network adapters to the final virtual switch and set the correct VLAN IDs. That works very well. You drop one or at most two pings; this is as good as it gets.

This goes wrong under the following conditions:

  • The source virtual switch has QoS mode absolute.
  • A virtual network adapter connected to the source virtual switch has MinimumBandwidthAbsolute and/or MaximumBandwidth set.
  • The target virtual switch has QoS mode weighted.

This will cause connectivity loss, as you cannot set absolute values on a virtual network adapter attached to a weighted virtual switch. So connecting the virtual network adapter to the new virtual switch just fails, and you lose connectivity. Remember that the virtual machine is connected to a dummy virtual switch just to make the live migration work, and we need to swap it over immediately. The VLAN ID does get set correctly, actually. Let's deal with this.
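
A quick way to confirm you are in this situation is to compare the QoS mode of the source and target virtual switches before you migrate; a small sketch using the host and switch names from the script further down:

#Check the bandwidth reservation (QoS) mode of the source and target virtual switches
(Get-VMSwitch -Name 'vSwitch-VLAN500' -ComputerName 'NODE-A').BandwidthReservationMode
(Get-VMSwitch -Name 'ConvergedVirtualSwitch' -ComputerName 'ZULU').BandwidthReservationMode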

Steps to fix this issue

First of all, we adapt the script to check the QoS mode on the source virtual switches. If it is set to absolute, we know we need to check for any MinimumBandwidthAbsolute and MaximumBandwidth settings on the virtual network adapters connected to those virtual switches. These changes are shown in the demo code below.

Secondly, we adapt the script to check every virtual network adapter for its bandwidth management settings. If we find configured MinimumBandwidthAbsolute and MaximumBandwidth values, we set these to 0 and as such disable the bandwidth settings. This makes sure that connecting the virtual network adapters to the new virtual switch with QoS mode weighted will succeed. These changes are also shown in the demo code below.

Finally, the complete script


#The source Hyper-V host
$SourceNode = 'NODE-A'
#The LUN where you want to storage migrate your VMs away from
$SourceRootPath = "C:\ClusterStorage\Volume1*"

#The target Hyper-V host
$TargetNode = 'ZULU'
#The storage path where you want to storage migrate your VMs to
$TargetRootPath = "C:\ClusterStorage\Volume1"

$OldVirtualSwitch01 = 'vSwitch-VLAN500'
$OldVirtualSwitch02 = 'vSwitch-VLAN600'
$NewVirtualSwitch = 'ConvergedVirtualSwitch'
$VlanId01 = 500
$VlanId02 = 600
  
if ((Get-VMSwitch -name $OldVirtualSwitch01 ).BandwidthReservationMode -eq 'Absolute') { 
    $OldVirtualSwitch01QoSMode = 'Absolute'
}
if ((Get-VMSwitch -name $OldVirtualSwitch02 ).BandwidthReservationMode -eq 'Absolute') { 
    $OldVirtualSwitch02QoSMode = 'Absolute'
}
    
#Grab all the VMs we find that have virtual disks on the source CSV - WARNING for W2K12 you'll need to loop through all cluster nodes.
$AllVMsOnRootPath = Get-VM -ComputerName $SourceNode | where-object { $_.HardDrives.Path -like $SourceRootPath }

#We loop through all VMs we find on our SourceRootPath
ForEach ($VM in $AllVMsOnRootPath) {
    #We generate the final VM destination path
    $TargetVMPath = $TargetRootPath + "\" + ($VM.Name).ToUpper()
    #Grab the VM name
    $VMName = $VM.Name
    $VM.VMid
    $VMName

    #If the VM is still clustered, get it removed from the cluster as shared nothing live migration will otherwise fail.
    if ($VM.isclustered -eq $True) {
        write-Host -ForegroundColor Magenta $VM.Name "is clustered and is being removed from cluster"
        Remove-ClusterGroup -VMId $VM.VMid -Force -RemoveResources
        Do { Start-Sleep -seconds 1 } While ($VM.isclustered -eq $True)
        write-Host -ForegroundColor Yellow $VM.Name "has been removed from cluster"
    }
    #If the VM has checkpoints, notify the user of the script as this will cause issues after switching to the new virtual
    #switch on the target node. Live migration will fail between cluster nodes if the checkpoints reference 1 or more
    #non-existent virtual switches. These must be removed prior to or after completing the shared nothing migration.
    #The script does this after the migration automatically, not before, as I want it to be untouched if the shared nothing
    #migration fails.

    $checkpoints = get-vmcheckpoint -VMName $VM.Name

    if ($Null -ne $checkpoints) {
        write-host -foregroundcolor yellow "This VM has checkpoints"
        write-host -foregroundcolor yellow "This VM will be migrated to the new host"
        write-host -foregroundcolor yellow "Only after a succesfull migration will ALL the checpoints be removed"
    }
    
    #Do the actual storage migration of the VM; $TargetVMPath creates the default subfolder structure
    #for the virtual machine config, snapshots, smartpaging & virtual hard disk files.
    Move-VM -Name $VMName -ComputerName $VM.ComputerName -IncludeStorage -DestinationStoragePath $TargetVMPath -DestinationHost $TargetNode
    
    $MovedVM = Get-VM -ComputerName $TargetNode -Name $VMName

    $vNICOnOldvSwitch01 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch01
    if ($Null -ne $vNICOnOldvSwitch01) {
        foreach ($VMNetworkAdapter in $vNICOnOldvSwitch01) {
            if ($OldVirtualSwitch01QoSMode -eq 'Absolute') {
                if (0 -ne $VMNetworkAdapter.BandwidthSetting.MaximumBandwidth) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MaximumBandwidth will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MaximumBandwidth 0
                }
                if (0 -ne $VMNetworkAdapter.BandwidthSetting.MinimumBandwidthAbsolute) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MinimumBandwidthAbsolute will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MinimumBandwidthAbsolute 0
                }
            }

            write-host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $VMNetworkAdapter -SwitchName $NewVirtualSwitch
            write-host "Setting VLAN $VlanId01"
            Set-VMNetworkAdapterVlan -VMNetworkAdapter $VMNetworkAdapter -Access -VlanId $VlanId01
        }
    }

    $vNICsOnOldvSwitch02 = Get-VMNetworkAdapter -ComputerName $TargetNode -VMName $MovedVM.VMName | where-object SwitchName -eq $OldVirtualSwitch02
    if ($NULL -ne $vNICsOnOldvSwitch02) {
        foreach ($VMNetworkAdapter in $vNICsOnOldvSwitch02) {
            if ($OldVirtualSwitch02QoSMode -eq 'Absolute') {
                if ($Null -ne $VMNetworkAdapter.BandwidthSetting.MaximumBandwidth) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MaximumBandwidth will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MaximumBandwidth 0
                }
                if ($Null -ne $VMNetworkAdapter.BandwidthSetting.MinimumBandwidthAbsolute) {
                    write-host -foregroundcolor cyan "Network adapter $($VMNetworkAdapter.Name) of VM $VMName MinimumBandwidthAbsolute will be reset to 0."
                    Set-VMNetworkAdapter -Name $VMNetworkAdapter.Name -VMName $MovedVM.Name -ComputerName $TargetNode -MinimumBandwidthAbsolute 0
                }
            }
            write-host 'Moving to correct vSwitch'
            Connect-VMNetworkAdapter -VMNetworkAdapter $VMNetworkAdapter -SwitchName $NewVirtualSwitch
            write-host "Setting VLAN $VlanId02"
            Set-VMNetworkAdapterVlan -VMNetworkAdapter $VMNetworkAdapter -Access -VlanId $VlanId02
        }
    }

    #If the VM has checkpoints, this is when we remove them.
    $checkpoints = get-vmcheckpoint -ComputerName $TargetNode -VMName $MovedVM.VMName

    if ($Null -ne $checkpoints) {
        write-host -foregroundcolor yellow "This VM has checkpoints and they will ALL be removed"
        $CheckPoints | Remove-VMCheckpoint 
    }
}

Below is the output of a VM where we had to change the switch name, enable a VLAN ID, deal with absolute QoS settings, and remove checkpoints. All this without causing downtime. Nor did we change the original virtual machine, in case the shared nothing migration failed.

Some observations

The fact that we are using PowerShell is great, as you can only set weighted bandwidth limits via PowerShell. The GUI only handles absolute values, and it will throw an error if you try to use it when the virtual switch is configured for weighted mode.

This means you can embed setting the weights in your script if you so desire. If you do, read up on how to handle this best. Trying to juggle the weight settings so they add up to 100 in a dynamic environment is a bit of a challenge. So use the default flow pool and keep the number of virtual network adapters with unique settings to a minimum.
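
If you do decide to script the weights, a minimal sketch (with a hypothetical VM name and percentages picked purely for illustration) could look like this:

#Reserve a relative share for the default flow pool, used by all vNICs without an explicit weight
Set-VMSwitch -Name 'ConvergedVirtualSwitch' -DefaultFlowMinimumBandwidthWeight 50
#Give one specific vNIC its own relative minimum bandwidth weight
Set-VMNetworkAdapter -VMName 'DEMO-VM' -Name 'Network Adapter' -MinimumBandwidthWeight 10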

Conclusion

To avoid downtime, we removed all the configured minimum and maximum bandwidth settings on any virtual network adapter. By doing so, we ensured that the swap to the new virtual switch right after the successful shared nothing live migration would succeed. If you want, you can set weights on the virtual network adapters afterward. But as the bandwidth on these new hosts is now a redundant 25 Gbps, the need was no longer there. As a result, we just left them without. This can always be configured later if it turns out to be needed.

Warning: this is a demo script. It lacks error handling and logging. It can also contain mistakes. But hey, you get it for free to adapt and use. Test and adapt it in a lab. You are responsible for what you do in your environments. Running scripts downloaded from the internet without any validation makes you a certified nut case. That is not my wrongdoing.

I hope this helps some of you. Thanks for reading.

DELL released Replay Manager 8.0


On September 4th, 2019, DELL released Replay Manager 8.0 for Microsoft Servers. This brings us official Windows Server 2019 support. You can download it here: https://www.dell.com/support/home/us/en/04/product-support/product/storage-sc7020/drivers and the Dell Replay Manager Version 8.0 Administrator's Guide and release notes are here: https://www.dell.com/support/home/us/en/04/product-support/product/storage-sc7020/docs

Replay Manager 8.0.0.13 was released early September 2019

I have Replay Manager 8.0 up and running in the lab and in production. The upgrade went fast and easy, and everything kept working as expected. The good news is that the Replay Manager 8 service is compatible with the Replay Manager 7.8 manager and vice versa. This means there was no rush to upgrade everything asap. We could do smoke testing at a relaxed pace before we upgraded all hosts.

Replay Manager 8.0 adds official support for Windows Server 2019 and Exchange Server 2019. I have tested Windows Server 2019 with Replay Manager 7.8 as well for many months. I was taking snapshots every 30 minutes for months with very few issues, actually. But now we have official support. Replay Manager 8.0 also introduces support for SCOS 7.4.

No improvements with Hyper-V backups

Now, we don't have SCOS 7.4 running yet. It will take another few weeks for it to reach general availability. But for now, with both Windows Server 2016 and 2019 hosts, we noticed the following disappointing behaviour with Hyper-V workloads. Replay Manager 8 still acts as a Windows Server 2012 R2 requestor (backup software) and hence isn't as fast and effective as it could be. I actually do not expect SCOS version 7.4 to make a difference in this. If you leverage the hardware VSS provider with backup software that does support the Windows Server 2016/2019 backup mechanisms for Hyper-V, this is not an issue. For that, I mostly leverage Veeam Backup & Replication.

It is a missed opportunity, unless I am missing something here, that after so many years the Replay Manager requestor still does not support the native Windows Server 2016/2019 Hyper-V backup capabilities. And once again, I didn't even mention the fact that the individual Hyper-V VM backups need modernization in Replay Manager to deal with VM mobility. See https://blog.workinghardinit.work/2017/06/02/testing-compellent-replay-manager-7-8/. I also won't mention Live Volumes as I did then. As I leverage Replay Manager as a secondary backup method, not a primary one, I can live with this. But it could be so much better. I really need a chat with the PM for Replay Manager. Maybe at Dell Technologies World 2020, if I can find a sponsor for the long haul flight.