Copy Cluster Roles Hyper-V Cluster Migration Fails at Final Step with error Virtual Machine Configuration ‘VM01’ failed to register the virtual machine with the virtual machine service

I was working on a migration of a nice two-node Windows Server 2012 Hyper-V cluster to Windows Server 2012 R2. The cluster consists of 2 DELL R610 servers and a DELL MD3200 shared SAS disk array for the shared storage. It runs all the virtual machines with infrastructure roles etc. It’s a Cluster-in-a-Box-like setup. This has been doing just fine for 18 months, but the need for the features in Windows Server 2012 R2 became too much to resist. As the hardware needs to be reused and we had a maintenance window, we used the Copy Cluster Roles scenario that we have used so many times before with great success. It’s the “Perform an in-place migration involving only two servers” scenario documented on TechNet and described in one of my previous blogs, Migrating a Hyper-V Cluster to Windows 2012 R2, for your convenience.

Virtual Machine Configuration ‘VM01’ failed to register the virtual machine with the virtual machine service

As the source host was running Windows Server 2012 we could have done the live migration scenario, but the downtime would be minimal anyway and there was a maintenance window, so we chose this path.

So we performed a good health check of the source cluster and made sure we had no snapshots left hanging around. Yes, they are supported in this migration scenario now, but I like to have as few moving parts as possible during a migration.
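
For reference, that sanity check can be done quickly from PowerShell on one of the source nodes. A minimal sketch using the standard FailoverClusters and Hyper-V cmdlets (the node names are placeholders):

Test-Cluster                                        # run cluster validation against the source cluster
Get-VM -ComputerName Node1, Node2 | Get-VMSnapshot  # verify no snapshots/checkpoints are left hanging around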

It all went smooth as silk. After shutting down the VMs on the source cluster node and bringing the CSV offline (and un-presenting the LUN from the source node for good measure), we presented that LUN to the target host. We brought the CSV online, and when that completed successfully we were ready to bring the virtual machines online. And that failed …
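
The storage hand-over itself can also be scripted. A rough sketch, assuming a single CSV whose disk resource is named “Cluster Disk 1” (cluster, node and resource names are placeholders, and re-zoning the LUN happens outside PowerShell):

Get-VM -ComputerName SourceNode | Stop-VM                         # shut down the VMs on the source node
Stop-ClusterResource -Name "Cluster Disk 1" -Cluster OldCluster   # take the CSV offline on the source cluster
# ...un-present the LUN from the source node and present it to the target host...
Start-ClusterResource -Name "Cluster Disk 1" -Cluster NewCluster  # bring the CSV online on the target cluster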

Log Name:      Microsoft-Windows-Hyper-V-High-Availability-Admin
Source:        Microsoft-Windows-Hyper-V-High-Availability
Date:          4/02/2014 19:26:41
Event ID:      21102
Task Category: None
Level:         Error
Keywords:     
User:          SYSTEM
Computer:      VM01.domain.be
Description:
‘Virtual Machine Configuration VM01’ failed to register the virtual machine with the virtual machine management service.


Let’s dive into the other event logs. On the host, the application, security and system event logs are squeaky clean. The Hyper-V event logs are pretty empty or clean too, except for these events in the Hyper-V-VMMS Admin log.

Log Name:      Microsoft-Windows-Hyper-V-VMMS-Admin
Source:        Microsoft-Windows-Hyper-V-VMMS
Date:          4/02/2014 19:26:40
Event ID:      13000
Task Category: None
Level:         Error
Keywords:     
User:          SYSTEM
Computer:      VM01.domain.be
Description:
User ‘NT AUTHORITY\SYSTEM’ failed to create external configuration store at ‘C:\ClusterStorage\HyperVStorage\VM01’: The trust relationship between this workstation and the primary domain failed. (0x800706FD)

 


Bingo. It must be the fact that no domain controller is available. It’s a completely self-contained cluster and both domain controller virtual machines are highly available and reside on the CSV. Now, the CSV does come online without a DC since Windows Server 2012, so that’s not the issue. It’s the process of registering the VMs that fails without a DC in an Active Directory environment.

Getting past this issue

There are multiple ways to resolve this and move ahead with our cluster migration. As the environment was still fully functional on the source cluster, I just removed a DC virtual machine from high availability on that cluster. I shut it down and exported it. I then copied it over to the node of the new cluster (we’re going to nuke the source host afterwards and install W2K12R2, so we moved it to the new host where it could stay), where I put it on local storage and imported it. For this I used the “Register the virtual machine in-place” option. I did not make it highly available.
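
For reference, the same steps can be scripted. This is only a sketch of what we did by hand; the VM name (DC01), paths and host name are placeholders, and Import-VM without -Copy registers the VM in place from the location you point it at:

# On the source cluster: take the DC out of high availability, shut it down and export it.
Get-ClusterGroup -Name "DC01" | Remove-ClusterGroup -RemoveResources -Force   # removes the clustered role, not the VM itself
Stop-VM -Name "DC01"
Export-VM -Name "DC01" -Path "D:\Export"
# Copy the export to local storage on the new host and register it in place there.
Copy-Item -Path "D:\Export\DC01" -Destination "\\NewHost\D$\LocalVMs" -Recurse
Import-VM -Path "D:\LocalVMs\DC01\Virtual Machines\<GUID>.xml"                # run on NewHost; <GUID> is the exported configuration file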


After verifying that we could ping the DC and that it was up and running well, we tried the final phase of the migration again. It went as smoothly as we have come to expect!

Other options would have been to host the DC virtual machine on a laptop or another server. If you can no longer get to the DC for an export & import, even a shared nothing migration can, depending on your environment, help you out of this pickle. A restore from backup would also work. But here, in this two-node all-in-one cluster, our approach was fast and efficient.

So there you go. A tip to remember: virtualizing domain controllers is fully supported, no worries there, but you need to make sure that if you have a dependency on a DC, you don’t have the DC depending on that same dependency. It’s a chicken and egg thing.

Checking Host Integration Services Version on all Nodes of A Windows Server 2012 Hyper-V Cluster With PowerShell

It’s important to keep our Hyper-V cluster hosts and the virtual machines running on them up to date. Whilst we have great and free solutions to achieve this, there are some things missing, like centralized reporting on the Integration Services component version running on all of the nodes in a cluster and a way to upgrade all the virtual machines to the version running on the host. This post deals with the first issue.

Before we upgrade the Integration Services components in the virtual machines we always check whether all nodes in the cluster are on the same version themselves. Sure, this should not happen if you manage them right, but my world isn’t perfect. So trust but verify. With cluster sizes now up to 64 nodes it’s ever more important to keep an eye on them. But even for smaller clusters, the task of determining the Integration Services component version manually via the GUI, event viewer and/or registry is rather tedious. Out-of-sync Integration Services components can be troublesome and cause many issues, and if you have out-of-sync virtual machines, imagine the extra mess you’ll be in when even the cluster nodes are running different versions.

To make life easier I threw a little PowerShell script together to check the host Integration Services component version on all nodes of a Windows Server 2012 Hyper-V cluster. I’m far from a PowerShell guru, but you’ll see that you can get a lot of things done even if you’re not. I’m sharing it here for you to use, adapt for your own needs and get some inspiration from. It basically allows you to optionally pass an expected version of the IS components and a cluster name like this:

CheckHyperVClusterHostsICVersion -ExpectedISCVersion 6.2.9200.16433 -Cluster "MyClusterName"

It does the following:

  • It will list, per Integration Services component version found on the cluster nodes, on which nodes that version was found. This gives you a nice overview. I hope this never becomes too long a list in your clusters.
  • If you don’t specify a cluster it will try to connect to the cluster to which the host you’re running on belongs, if any.
  • If the host does not belong to a cluster it will just provide feedback on the IS version of that Hyper-V host you’re running the script on.

Here’s a screenshot of what you get when you run this on a non-clustered host, without Hyper-V installed:


This is the result of running it, without any parameters, against a well-maintained cluster that has been updated with KB2770917:


The same, but now with the expected version and cluster name passed as parameters:


So, there you go, I hope you find it useful.

#===========================================================
# Microsoft PowerShell Source File
#
# NAME:     CheckISCOnNodesOfHyperVCluster.ps1
# VERSION:  1.0.0.0
# AUTHOR:   Didier Van Hoye
# DATE:     17/11/2012
#
# COMMENT:  This script is intended to be run against
#           Windows Server 2012 and assumes the use of
#           PowerShell 3.0. The parameters are optional,
#           but if you leave some out the remainder
#           should be named.
#===========================================================
 
cls
$ErrorActionPreference = "Stop"

 
function CheckHyperVClusterHostsICVersion
{
    Param
    (
        #Optional: the Integration Services component version you expect on the hosts
        [Version]
        $ExpectedISCVersion,
        #Optional: the name of the cluster to check (defaults to the local node's cluster)
        [String]
        $Cluster
    )

    Write-Host "This script will check the IS components on all nodes of a cluster." -ForegroundColor Green
 
    If ($ExpectedISCVersion) {Write-Host "You specified the expected IS component version to be $ExpectedISCVersion" -ForegroundColor Green}
    Else {Write-host "You did not specify an expected IS component version." -ForegroundColor Green}
    
    If ($Cluster)
    {
        Try
        {
            $ClusterObject= Get-Cluster -Name $Cluster
        }
        Catch
        {     
            Write-Host "We cannot contact the cluster you specified"
        }
    }
    Else
    {    
        write-Host "`n`n"
        Write-host "You did not specify a cluster to connect to. We'll use the cluster to which the node this script is running on belongs if any." -ForegroundColor Yellow
        write-Host "`n`n"
  
        Try
        {
            $ClusterObject = Get-Cluster
        }
  
        Catch
        {
            $LocalHost = $env:computername
            Write-Host
            Write-Host "The current node ($LocalHost) is not a member of a cluster. As a courtsey to you we'll check the IS components for current host" -foregroundcolor Magenta
            Write-Host
        }
 
    }
  
    If ($ClusterObject) {$ToCheck= "the nodes of cluster $ClusterObject"} Else { $ToCheck = "server $env:computername"}
 
    write-Host "Attempting to running Integration Components version check on" $ToCheck -ForegroundColor Green
    Write-Host


    If ($ClusterObject)
    {

        $ClusterNodes = Get-Clusternode -cluster $ClusterObject.Name
        
        #Declare a hashtable to hold all host/IS version values. The hosts are the keys here.
        $HostISVersions = @{}
 
        foreach ($ClusterNode in $ClusterNodes)
        {
            Try
            {
                #Read the node's registry remotely; this assumes PowerShell remoting is enabled on all nodes.
                $HostISVersions[$ClusterNode.Name] = Invoke-Command -ComputerName $ClusterNode.Name -ScriptBlock {
                    Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestInstaller\Version" |
                        Select-Object -ExpandProperty Microsoft-Hyper-V-Guest-Installer
                }
            }
            Catch
            {
                Write-Host "We could not determine the version of the Integration Services on node $($ClusterNode.Name), probably due to this not being a Hyper-V host" -ForegroundColor Yellow
                Write-Host "We'll check this for you right now" -ForegroundColor Yellow
                $HyperVFeature = Get-WindowsFeature Hyper-V -ComputerName $ClusterNode.Name
                If ($HyperVFeature.InstallState -eq "Installed")
                {
                    Write-Host "Hyper-V seems to be installed on this node. Something else is wrong." -ForegroundColor Red
                }
                Else
                {
                    Write-Host "Hyper-V is indeed not installed on this node." -ForegroundColor Yellow
                }
            }
        }
         #Use GetEnumerator or this sorting doesn't work out well on a hash table :-)
        $UniqueIcVersions = $HostISVersions.GetEnumerator() | Sort-Object -Property Value -Unique
 
        Write-Host "We've found " $UniqueIcVersions.count "versions on the" $HostISVersions.count "nodes of your cluster" $ClusterObject.Name
 
        ForEach ($IcVersion in $UniqueIcVersions )
        {
            $Counter = 1
            $IcVersionValue = $IcVersion.value
            "IC version " + $IcVersion.value + " is found in:"
            foreach ($Key in ($HostISVersions.GetEnumerator()| Where-Object { $_.value -eq $IcVersionValue}))
            {
                "`t" + "$Counter : " + $Key.Name
                $Counter= $Counter + 1
            }
 
            If ($ExpectedISCVersion)
            {
               
                $CompareVersions = ([Version]$IcVersion.Value).CompareTo([Version]$ExpectedISCVersion)
                        
                switch ($CompareVersions)
                {
                    0 {Write-Host "This version ($IcVersionValue) is equal to the expected version ($ExpectedISCVersion)." -ForegroundColor Green}
                    1 {Write-Host "This version ($IcVersionValue) is higher than the expected version ($ExpectedISCVersion). Please ensure all hosts run the same IC version level." -ForegroundColor Yellow}
                    -1 {Write-Host "This version ($IcVersionValue) is lower than the expected version ($ExpectedISCVersion). Please ensure all hosts run the same IC version level." -ForegroundColor Red}
                }
            }

        }
    }

    Else
    {
        Try
        {
            $HostIcVersion = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestInstaller\Version" | Select-Object -ExpandProperty Microsoft-Hyper-V-Guest-Installer
            Write-Host "The IS component version on server $env:COMPUTERNAME is $HostIcVersion"
            If ($ExpectedISCVersion)
            {
               
                   $CompareVersions = ([Version]$HostIcVersion).CompareTo([Version]$ExpectedISCVersion)
                        
                switch ($CompareVersions)
                {
                0 {Write-Host "This version ($HostIcVersion) is equal to the expected version ($ExpectedISCVersion)." -ForegroundColor Green}
                1 {Write-Host "This version ($HostIcVersion) is higher than the expected version ($ExpectedISCVersion). Please check if you need to downgrade your host or if the expected version is correct." -ForegroundColor Yellow}
                -1 {Write-Host "This version ($HostIcVersion) is lower than the expected version ($ExpectedISCVersion). Please check if you need to upgrade your host or if the expected version is correct." -ForegroundColor Red}
                }
            }
        }
        Catch
        {
            Write-Host "We could not determine the version of the Integration Services on this host, probably due to this not being a Hyper-V host" -ForegroundColor yellow
            Write-Host "We'll check this for you right now" -ForegroundColor yellow
            $HyperVFeature = Get-WindowsFeature Hyper-V
            If ($HyperVFeature.Installstate -eq "Installed")
            {
                Write-Host "Hyper-V seems to be installed on this node. Something else is wrong." -ForegroundColor Red
            }
            Else
            {
                Write-Host "Hyper-V is indeed not installed on this node." -ForegroundColor yellow
            }
        }
    }
}
 
CheckHyperVClusterHostsICVersion -ExpectedISCVersion 6.2.9200.16433 -Cluster "MyClusterName"

Hyper-V, KEMP LoadMaster & DFS Replication Provide FTP Solutions For Surveyors Network

Remember the blog entry A Hardware Load Balancing Exercise With A Kemp Loadmaster 2200, about using a KEMP Loadmaster to provide redundancy for a surveyor’s GPS network? Well, we got commissioned to come up with a redundant FTP solution for their needs last month, and this blog is about what we came up with. The aim was to make do with what was already available.

FTP 7.5 in Windows 2008 R2

We use the FTP server available in Windows 2008 R2, which provides us with all the functionality we need: user isolation and FTP over SSL.

The data from all the GPS stations is sent to the FTP server for safekeeping and is used to overcome certain issues customers might have with missing data from the surveying solutions. This data is not made available to customers by default; it’s only for special cases & purposes. So we need to collect the data in its own folder, named after its account, so we can configure user isolation. This also prevents the GPS stations from writing in locations where they shouldn’t.

As every GPS station logs in with the “Station” account, it ends up in the “Station” folder as its FTP root folder and can’t read or write outside of that folder. The survey solution service desk can FTP into that folder and access any data they want.

The data that’s being provided by the software solution (LanSurvey01 and LanSurvey02) is sent to its own folder, “Data”, which is also set up with user isolation to prevent the application from reading or writing anywhere else on the file system.

This data should be publicly available to the customers, and for this we created a separate FTP site called “Public” that is configured for anonymous access to the same Data folder, but with read permissions only. This way the customers can get all the data they need but only have read access to the required data and nothing more.
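
For reference, the folder layout that FTP 7.5 user isolation expects can be pre-created with a few lines of PowerShell. This is just a sketch with assumed paths and local account names: with “User name directory” isolation every account is jailed in <FTP root>\LocalUser\<account name>:

$FtpRoot  = 'D:\FTPRoot'        # assumed physical path of the isolated FTP site
$Accounts = 'Station', 'Data'   # assumed local accounts for the GPS stations and the LanSurvey servers

foreach ($Account in $Accounts) {
    # Each account gets its own home folder under LocalUser\<account name>.
    New-Item -ItemType Directory -Path (Join-Path $FtpRoot "LocalUser\$Account") -Force | Out-Null
}
# The separate, anonymous "Public" FTP site simply uses the Data folder as its physical path,
# with the FTP authorization rule and NTFS permissions limited to read.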

For more information on setting up FTP 7.5 and using FTP over SSL you can take a look at http://learn.iis.net/page.aspx/304/using-ftp-over-ssl/ and read my blog post on FTP over SSL, Pollution of the Gene Pool a Real Life “FTP over SSL” Story.

High Availability

In the section above we’ve taken care of the FTP needs. Now we still need redundancy. We could have used Windows NLB, but this network already uses a KEMP Loadmaster because the surveyor’s software has some limitations in its configuration capabilities that don’t allow Windows Network Load Balancing to be used.

We want both the GPS stations and the surveyor’s application servers to be able to send FTP data when one of the receiving FTP servers is down for some reason (updates, upgrades, maintenance or failure). So we set up a VIP for FTP on the KEMP Loadmaster. This VIP is what is used by the GPS stations and the application to write, and by the customers to read, the FTP data.

DFS-R to complete the solution

But up until now, we’ve been ignoring an issue. When using NLB to push data to hosts, we need to ensure that all the data is available on all the nodes all of the time. You could opt to have the users access the FTP service only via an NLB VIP address and push the data to both nodes without using NLB. The latter might be done at the source, but then you have twice the amount of data to push out. It also means extra work to configure and maintain the solution. We could copy the data to one FTP node and copy it from there. That works, but it leaves you very vulnerable to a service outage when the node that gets the original copy is down: no new data will be available. Another issue is the fact that you need a rock-solid way to copy the data and have it done in a timely manner, even after downtime of one or more of the nodes.

As you read above, we provide an NLB VIP as a target for the surveyor’s application and the GPS stations to send their data to. This means the data will be sent to the FTP NLB array even if one of the nodes is down for some reason. To get the data that arrives from the 2 application servers and the 40 GPS stations synchronized and up to date on both NLB nodes, we use the Distributed File System Replication (DFS-R) that is built into Windows 2008 R2. We have no need for a DFS namespace here, so we only use the replication feature. This is easy and fast to set up (add the DFS Replication service from the File Server role) and it doesn’t require any service downtime (no reboot required). The fact that both FTP nodes are members of a Windows 2008 R2 domain does help with making this easy. To make sure we have replication in all directions we set it up as a full mesh, and the replication schedule is 24/7, no days off 🙂 Since we chose to replicate the FTP root folder, we have both the Data and the Station folders covered, as well as the folder structure needed for FTP user isolation to function.
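
We set all of this up through the DFS Management console on Windows 2008 R2. For reference, on servers that have the DFSR PowerShell module (Windows Server 2012 R2 and later) an equivalent full mesh setup could be sketched like this; the group name, server names and content path are placeholders:

New-DfsReplicationGroup -GroupName "FTP-Replication"
Add-DfsrMember -GroupName "FTP-Replication" -ComputerName "FTP01", "FTP02"
# Add-DfsrConnection creates the connection in both directions, which is a full mesh for two members.
Add-DfsrConnection -GroupName "FTP-Replication" -SourceComputerName "FTP01" -DestinationComputerName "FTP02"
New-DfsReplicatedFolder -GroupName "FTP-Replication" -FolderName "FTPRoot"
# Point both members at the FTP root folder; the default schedule already replicates 24/7 at full bandwidth.
Set-DfsrMembership -GroupName "FTP-Replication" -FolderName "FTPRoot" -ComputerName "FTP01" -ContentPath "D:\FTPRoot" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "FTP-Replication" -FolderName "FTPRoot" -ComputerName "FTP02" -ContentPath "D:\FTPRoot" -Force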

This solution was built fast and easily using Windows 2008 R2 out-of-the-box functionality: FTP(S) with user isolation and DFS-R. The servers are running as Hyper-V guests in a Hyper-V cluster, providing high availability through Live Migration.

Windows 2008 R2 SP1 – RemoteFX Hardware To Get The Needed GPU Performance

When the first information about RemoteFX in Windows 2008 R2 SP1 Beta became available, a lot of people busy with VDI solutions found this pretty cool and good news. It is a very much needed addition in this arena. After that first happy reaction, the question soon arises how the host will provide all that GPU power to serve a rich GUI experience to those virtual machines. In VDI solutions you’re dealing with at least dozens and often hundreds of VMs. It’s clear, when you think about it, that just the onboard GPU won’t hack it. And how many high-performance GPUs can you put into a server? Not many, or even none, depending on the model. So where do the VDI hosts in a cluster get the GPU resources? Well, there are some servers that can contain a lot of GPUs, but in most cases you just add GPU units to the rack and attach them to the supported server models. Such units exist for both rack servers and blade servers. Dell has some info up on this over here. The specs of the PowerEdge C410x, a 3U external PCIe expansion chassis by DELL, can be found by following this link: C410x. It’s just like with external DAS disk bays: you can attach one or more 1U/2U servers to a chassis with up to 16 GPUs. They also have solutions for blade servers. So that’s what building a RemoteFX enabled VDI farm will look like. Unlike some of the early pictures showing a huge server chassis in order to make room to stuff all those GPU cards, the reality will be the use of one or more external GPU chassis, depending on the requirements.
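
As an aside, on Windows 2008 R2 SP1 itself RemoteFX is configured through Hyper-V Manager, but the Hyper-V PowerShell module that shipped later (Windows Server 2012 and up) lets you check which GPUs a host exposes for RemoteFX and add a RemoteFX 3D adapter to a VM. A minimal sketch, with “VDIDesktop01” as a placeholder VM name:

Get-VMRemoteFXPhysicalVideoAdapter                    # list the GPUs the host can use for RemoteFX
Add-VMRemoteFx3dVideoAdapter -VMName "VDIDesktop01"   # give a VM a RemoteFX 3D video adapter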