PowerShell script to maintain Azure Public DNS zone conditional forwarders

Introduction

I recently wrote a PowerShell script to maintain Azure Public DNS zone conditional forwarders. If you look at the list of Azure public DNS zones used for private endpoints, it is quite long. Adding these manually is tedious and error-prone. Sure, you might only need a few, but hey, I think and prepare long term.

Some background on DNS and private endpoints

When using private endpoints in Azure, correct DNS name resolution is essential. While Azure can do a lot of things for you under the hood, it is important to wrap your head around name resolution in Azure, for all your public, private, and custom DNS requirements. In the end, you need a DNS solution that is maintainable and works for current and future use cases. Your peaceful IT existence will fall apart fast without the ability to correctly resolve the private endpoint IP addresses to their fully qualified domain name (FQDN).

That in itself is a big subject I will not dive into right now. I will say that host files (if applicable) are OK for testing but not a maintainable solution, except for the smallest environments. In Azure, you can link virtual networks to Private DNS zones to resolve DNS queries for private endpoints. As an alternative, you can use your own custom DNS server(s) with a forwarder to Azure’s VIP 168.63.129.16 and, at least on-premises, conditional forwarders.

The latter is a requirement for resolving DNS queries for Azure resources with private endpoints from on-premises. At least until Azure DNS Private Resolver becomes generally available. That will be the way forward if you otherwise have no need for custom DNS servers.

Please note that name resolution for private endpoints uses the public DNS zones. This allows existing Microsoft Azure services with DNS configurations for a public endpoint to keep functioning when accessed from the internet. Azure will intercept queries that originate from Azure or connected on-premises locations and reply with the private IP address of private endpoints. This configuration must be overridden to connect using your private endpoint.

On-premises DNS Servers

While your custom DNS servers in Azure can forward queries they are not authoritative for to the Azure VIP 168.63.129.16, on-premises DNS servers cannot reach that IP address. They need to send DNS queries for private Azure resources to a custom DNS server in Azure via conditional forwarding. The Azure custom DNS server then forwards the query to 168.63.129.16.
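As a quick illustration (a minimal sketch, not the script from this post; the zone name is just one of the many privatelink zones, and 10.10.10.4 and 10.10.10.5 are placeholder IP addresses for the custom DNS servers in Azure), a single conditional forwarder on an on-premises Windows DNS server looks like this:

# Sketch: on the on-premises DNS server, forward the blob privatelink zone to the
# custom DNS servers in Azure (placeholder IPs), which in turn forward to 168.63.129.16.
Add-DnsServerConditionalForwarderZone -Name 'privatelink.blob.core.windows.net' `
    -MasterServers 10.10.10.4, 10.10.10.5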

Example on-prem / Azure ADDS environment with Azure FW DNS proxy

PowerShell script to maintain Azure Public DNS zone conditional forwarders

Below you will find the code for the script. I created a CSV file with all the Azure public DNS conditional forwarder zones. Zones with placeholders for regions, partitions, or SQL instances will be generated; for that, you need to provide the correct parameters. If you do not, these zones are ignored.

Adding all Azure public DNS conditional forwarders to an on-premises DNS server

Another point of attention is that you can opt to store the zones in Active Directory or not. If you do, you can specify which built-in or custom partition to use.
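For example (again a sketch with placeholder values; a custom partition must already exist, for instance one created with Add-DnsServerDirectoryPartition), storing a conditional forwarder in Active Directory in a custom partition could look like this:

# Sketch: AD-integrated conditional forwarder stored in a custom directory partition
# ('AzureDnsForwarders.contoso.com' and the master server IPs are placeholders).
Add-DnsServerConditionalForwarderZone -Name 'privatelink.database.windows.net' `
    -MasterServers 10.10.10.4, 10.10.10.5 `
    -ReplicationScope Custom `
    -DirectoryPartitionName 'AzureDnsForwarders.contoso.com'
# Use -ReplicationScope Forest or Domain for the built-in partitions.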

There are examples in the script TestAzurePublicDNSZoneForwardersScript.ps1 on how to use it. You will need at least one playground DNS server or, better, two AD-integrated DC/DNS servers for testing.

You can find the script at WorkingHardInIT/AzurePublicDnsZoneForwarders (github.com)

Conclusion

That’s it. I can extend the PowerShell script to maintain Azure Public DNS zone conditional forwarders with extra options when adding or updating conditional forwarders. Right now, for its current role, it does what I need. I do not plan to add an option to update the “store this conditional forwarder in Active Directory” setting as this has a bug.

See Bug when changing the “store this conditional forwarder in active directory” setting for more info. The gist is that changing the setting causes DNS queries for the conditional forwarder to fail. We avoid that issue by removing and re-adding the conditional forwarders. In many (most?) use cases so far, the default setting of not storing the conditional forwarder in Active Directory is what I need, so the script has no option to change that default setting until I might need it.

Code
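The full script is in the GitHub repository linked above. Purely as a hedged sketch of the core idea, and not the actual script (the CSV path, the ZoneName column, and the master server IPs are assumptions), it boils down to something like this:

# Sketch only: read the CSV with the Azure public DNS zones and add every
# conditional forwarder that does not exist yet on this DNS server.
$MasterServers = '10.10.10.4', '10.10.10.5'   # placeholder: custom DNS servers in Azure
$Zones = Import-Csv -Path .\AzurePublicDnsZones.csv

foreach ($Zone in $Zones) {
    if (-not (Get-DnsServerZone -Name $Zone.ZoneName -ErrorAction SilentlyContinue)) {
        Add-DnsServerConditionalForwarderZone -Name $Zone.ZoneName -MasterServers $MasterServers
    }
}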

Failing compilation with Azure Automation State Configuration: Cannot connect to CIM server. The specified service does not exist as an installed service

Introduction

You can compile Desired State Configuration (DSC) configurations in Azure Automation State Configuration, which functions as a pull server. Next to doing this via the Azure portal, you can also use PowerShell. The latter allows for easy integration in DevOps pipelines and provides the flexibility to deal with complex parameter constructs. So, this is my preferred option. Of course, you can also push DSC configurations to Azure virtual machines via ARM templates. But I like the pull mechanisms for life cycle management just a bit more as we can update the DSC config and push it out when needed. So, that’s all good, but under certain conditions, you can get the following error: Cannot connect to CIM server. The specified service does not exist as an installed service.

When can you get into this pickle?

DSC itself is PowerShell, and that comes in quite handy. Sometimes, the logic you use inside DSC blocks is insufficient to get the job done as needed. With PowerShell, we can leverage the power of scripting to get the information and build the logic we need. One such example is formatting data disks. Configuring network interfaces would be another. A disk number is not always reliable and consistent, leading to failed DSC configurations.
For example, the block below is a classic way to wait for a disk, and when it shows up, initialize, format, and assign a drive letter to it.

xWaitforDisk NTDSDisk {
    # Wait up to 30 x 20 seconds for disk number 2 to show up
    DiskNumber       = 2
    RetryIntervalSec = 20
    RetryCount       = 30
}
xDisk ADDataDisk {
    # Initialize and format disk number 2 and assign drive letter N
    DiskNumber  = 2
    DriveLetter = "N"
    DependsOn   = "[xWaitForDisk]NTDSDisk"
}

The disk number may vary depending on whether your Azure virtual machine has a temp disk or not, and whether you use disk encryption or not; both can trip up disk numbering. No worries, DSC has more up its sleeve and allows you to use the disk id instead of the disk number. That id is truly unique and consistent. You can quickly grab a disk’s unique id with PowerShell like below.
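Here is a quick sketch of that, assuming for illustration that the data disk shows up as disk number 2 on a test VM:

# Sketch: read the unique id of the data disk so it can be passed into the
# DSC configuration as a parameter ($NTDSDiskUniqueId).
$NTDSDiskUniqueId = (Get-Disk -Number 2).UniqueId
$NTDSDiskUniqueId

The DSC resources then reference that unique id instead of the disk number: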

xWaitforDisk NTDSDisk {
    DiskIdType       = 'UniqueID'
    DiskId           = $NTDSDiskUniqueId #'1223' #GetScript #$NTDSDisk.UniqueID
    RetryIntervalSec = 20
    RetryCount       = 30
}

xDisk ADDataDisk {
    DiskIdType  = 'UniqueID'
    DiskId      = $NTDSDiskUniqueId #GetScript #$NTDSDisk.UniqueID
    DriveLetter = "N"
    DependsOn   = "[xWaitForDisk]NTDSDisk"
}

PowerShell running into a compilation error

So we upload and compile this DSC configuration with the below script.

$params = @{
    AutomationAccountName = 'MyScriptLibrary'
    ResourceGroupName     = 'WorkingHardInIT-RG'
    SourcePath            = 'C:\Users\WorkingHardInIT\OneDrive\AzureAutomation\AD-extension-To-Azure\InfAsCode\Up\App\PowerShell\ADDSServer.ps1'
    Published             = $true
    Force                 = $true
}

$UploadDscConfiguration = Import-AzAutomationDscConfiguration @params

while ($null -eq $UploadDscConfiguration.EndTime -and $null -eq $UploadDscConfiguration.Exception) {
    $UploadDscConfiguration = $UploadDscConfiguration | Get-AzAutomationDscCompilationJob
    write-Host -foregroundcolor Yellow "Uploading DSC configuration"
    Start-Sleep -Seconds 2
}
$UploadDscConfiguration | Get-AzAutomationDscCompilationJobOutput -Stream Any
Write-Host -ForegroundColor Green "Uploading done:"
$UploadDscConfiguration


$params = @{
    AutomationAccountName = 'MyScriptLibrary'
    ResourceGroupName     = 'WorkingHardInIT-RG'
    ConfigurationName     = 'ADDSServer'
}

$CompilationJob = Start-AzAutomationDscCompilationJob @params 
while ($null -eq $CompilationJob.EndTime -and $null -eq $CompilationJob.Exception) {
    $CompilationJob = $CompilationJob | Get-AzAutomationDscCompilationJob
    Start-Sleep -Seconds 2
    Write-Host -ForegroundColor cyan "Compiling"
}
$CompilationJob | Get-AzAutomationDscCompilationJobOutput -Stream Any
Write-Host -ForegroundColor green "Compiling done:"
$CompilationJob

So, life is good, right? Yes, until you try and compile that (DSC) configuration in Azure Automation State Configuration. Then, you will get a nasty compile error.

Cannot connect to CIM server. The specified service does not exist as an installed service

“Exception: The running command stopped because the preference variable “ErrorActionPreference” or common parameter is set to Stop: Cannot connect to CIM server. The specified service does not exist as an installed service.”

Or in the Azure Portal:

Cannot connect to CIM server. The specified service does not exist as an installed service

The Azure compiler wants to validate the code, and as it cannot get access to the host, compilation fails. The configs compile on the Azure Automation server, not on the target node (which might not even exist yet) or on localhost. I find this odd. When I compile code in C#, C++, or VB.NET, it will not fail because it cannot connect to a server to validate my code by grabbing disk or interface information at compile time. The DSC code only needs to be correct and valid. I wish Microsoft would fix this behavior.

Workarounds

Compile DSC locally and upload

Yes, I know you can pre-compile the DSC locally and upload it to the automation account. However, the beauty of using the automation account is that you don’t have to bother with all that. I like to keep the flow as easy-going and straightforward as possible for automation. Unfortunately, compiling locally and uploading doesn’t fit into that concept nicely.
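For reference, a minimal sketch of that local-compile flow, reusing the Automation account and configuration names from the script above; the file paths and the localhost node name are illustrative assumptions:

# Sketch: compile the DSC configuration locally and upload the resulting node configuration (.mof).
. .\ADDSServer.ps1                    # dot source the file that contains the ADDSServer configuration
ADDSServer -OutputPath .\ADDSServer   # compiles to .\ADDSServer\<nodename>.mof

$params = @{
    AutomationAccountName = 'MyScriptLibrary'
    ResourceGroupName     = 'WorkingHardInIT-RG'
    ConfigurationName     = 'ADDSServer'
    Path                  = '.\ADDSServer\localhost.mof'
    Force                 = $true
}
Import-AzAutomationDscNodeConfiguration @params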

Upload a PowerShell script to a storage container in a storage account

We can store a PowerShell script in an Azure storage account. In our example, that script can do what we want, find, initialize, and format a disk.

Get-Disk | Where-Object { $_.NumberOfPartitions -lt 1 -and $_.PartitionStyle -eq "RAW" -and $_.Location -match "LUN 0" } |
Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter "N" -UseMaximumSize |
Format-Volume -FileSystem NTFS -NewFileSystemLabel "NTDS-DISK" -Confirm:$false

From that storage account, we download it to the Azure VM when DSC is running. This can be achieved in a script block.

$BlobUri = 'https://scriptlibrary.blob.core.windows.net/scripts/DSC/InitialiseNTDSDisk.ps1' #Get-AutomationVariable -Name 'addcInitialiseNTDSDiskScritpBlobUri'
$SasToken = '?sv=2021-10-04&se=2022-05-22T14%3A04%8S67QZ&cd=c&lk=r&sig=TaeIfYI63NTgoftSeVaj%2FRPfeU5gXdEn%2Few%2F24F6sA%3D'
$CompleteUri = "$BlobUri$SasToken"
$OutputPath = 'C:\Temp\InitialiseNTDSDisk.ps1'

Script FormatAzureDataDisks {
    SetScript  = {

        Invoke-WebRequest -Method Get -uri $using:CompleteUri -OutFile $using:OutputPath
        . $using:OutputPath
    }

    TestScript = {
        Test-Path $using:OutputPath
    }

    GetScript  = {
        @{Result = (Get-Content $using:OutputPath) }
    }
} 

But we need to set up a storage account and upload a PowerShell script to a blob. We also need a SAS token to download that script or allow public access to it. Instead of hardcoding this information in the DSC script, we can also store it in Automation variables. We could even abuse Automation credentials to store the SAS token securely. All that is possible, but it requires more infrastructure, maintenance, and security work while integrating this into the DevOps flow.
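To give an idea of what that extra infrastructure looks like, here is a sketch of the storage side (the account, container, and blob names match the blob URI used above; the token expiry is a placeholder, and you would still need to get the SAS token into the DSC configuration somehow):

# Sketch: upload the helper script to a blob container and generate a read-only SAS token.
$Key     = (Get-AzStorageAccountKey -ResourceGroupName 'WorkingHardInIT-RG' -Name 'scriptlibrary')[0].Value
$Context = New-AzStorageContext -StorageAccountName 'scriptlibrary' -StorageAccountKey $Key

Set-AzStorageBlobContent -File .\InitialiseNTDSDisk.ps1 -Container 'scripts' `
    -Blob 'DSC/InitialiseNTDSDisk.ps1' -Context $Context -Force

$SasToken = New-AzStorageBlobSASToken -Container 'scripts' -Blob 'DSC/InitialiseNTDSDisk.ps1' `
    -Permission r -ExpiryTime (Get-Date).AddDays(30) -Context $Context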

PowerShell to generate a PowerShell script

The least convoluted workaround that I found is to generate a PowerShell script in the Script block of the DSC configuration and save that to the Azure VM when DSC is running. In our example, this becomes the below script block in DSC.

Script FormatAzureDataDisks {
    SetScript  = {
        $PoshToExecute = 'Get-Disk | Where-Object { $_.NumberOfPartitions -lt 1 -and $_.PartitionStyle -eq "RAW" -and $_.Location -match "LUN 0" } | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -DriveLetter "N" -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "NTDS-DISK" -Confirm:$false'
        $PoshToExecute | Out-File $using:OutputPath
        . $using:OutputPath
    }
    TestScript = {
        Test-Path $using:OutputPath 
    }
    GetScript  = {
        @{Result = (Get-Content $using:OutputPath) }
    }
}

So, in SetScript, we build the actual PowerShell command we want to execute on the host as a string. Then, we persist it to file using the $OutputPath variable, which we can access inside the Script block via $using:OutputPath. Finally, we execute the persisted script by dot sourcing it with “. $using:OutputPath”. In TestScript, we test for the existence of the file, and GetScript returns its content; we ignore that output, but the block needs to be there. The maintenance is easy. You edit the string variable holding the PowerShell to save in the DSC configuration file, which we upload and compile. That’s it.

To be fair, this will not work in all situations; you might, for example, need to download protected files. In that case, the storage account approach above will help out.

Conclusion

Creating a PowerShell script in the DSC configuration file requires less effort and infrastructure maintenance than uploading such a script to a storage account. So that’s the pragmatic trick I use. I wish the compilation in an Automation account would succeed as-is, but it doesn’t. So, this is the next best thing. I hope this helps someone out there facing the same issue to work around the error: Cannot connect to CIM server. The specified service does not exist as an installed service.

Allow or block specific FIDO2 security keys in Azure


There might be situations where you want to allow or block specific FIDO2 security keys in Azure. For example, a policy mandating biometric FIDO2 keys means you need to enforce the use of specific, biometric-capable FIDO2 security keys. This blog post provides an example of how to achieve this in Azure.

Allowing only a specific type of security key in Azure

In my example, I enforce the use of one particular biometric key, meaning that other, non-biometric FIDO2 security keys are blocked. In the lab, I only have a biometric key and a non-biometric key. I want to allow only my FEITIAN BioPass K26 security key and block the use of any other type.

We can achieve this surprisingly quickly in Azure. The capability to do so leverages the Authenticator Attestation GUID (AAGUID). During attestation of the security key, the AAGUID comes into play for looking up the device’s metadata in the FIDO Alliance Metadata Service. As the AAGUID uniquely identifies a type of key from a specific vendor, we can use it to allow or block particular types of keys.

Note that a “type” of key does not mean a unique form factor by default. Keys from a vendor with the same capabilities and functionality but with different interfaces can have the same AAGUID. For example, the FEITIAN BioPass security keys come in multiple interface variants (USB-A, USB-C, Bluetooth, NFC). The K26 has a USB-C interface, and the K27 has a USB-A interface. Yet both have the same AAGUID. So, when I allow a security key with this AAGUID in Azure, both models of the same type will be allowed. The eiPass, a touch-only device with a USB-C and a Lightning interface, will be blocked, as we did not put it in our allow list.

How do you find out the AAGUID?

Perhaps the easiest way of finding out the AAGUID of your security key is to look it up in Azure if you have registered the key there. That is feasible because you will have been testing the security key or keys you want to allow. Now, when you want to block specific keys, you might not have added them. You might not even have them. Then you will need to find the AAGUID online or from the vendor.

There is also a Python script (in the  Python-FIDO2 library provided by Yubico) you can use to find out the AAGUID. But, again, you need to have the device to do this.

Now, some vendors publish a list of AAGUID values for their devices. Here are the AAGUID lists from Yubico and TrustKey. Of course, you can always reach out to your vendor to get them.

Setting FIDO2 security key restrictions

First of all, make sure that you have enabled the FIDO2 Security Key authentication method. You do this in the Azure portal by navigating to Azure Active Directory > Security > Authentication methods.

Secondly, under Policies, click on FIDO2 Security Key to enter its settings. Under Basics, set ENABLE to Yes and set TARGET to All users or a selection of users. If you choose the latter, add users or a group of users.

In the FIDO2 Security Key settings under Configure, you find two sections GENERAL and KEY RESTRICTION POLICY.

Under GENERAL

You will generally have Allow self-service setup enabled and Enforce attestation set to Yes

Under KEY RESTRICTION POLICY

Set Enforce key restrictions to Yes

Set Restrict specific keys to Allow

Add the AAGUID of the K26 FEITIAN BioPass FIDO2 security key:
77010bd7-212a-4fc9-b236-d2ca5e9d4084

Click Save to activate the policy.

Here, I work with an allow list, so only security keys with their AAGUID in that list are allowed to register and will work. If you use a block list instead, all keys are allowed except those you explicitly put in that list.
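As an aside that goes beyond the portal walkthrough in this post: if you prefer to script this, the same key restriction settings can also be managed via the Microsoft Graph authentication methods policy. The sketch below is my own illustration, so double-check the permission scope and request body against the current Graph documentation before relying on it.

# Sketch: set the FIDO2 key restriction policy (allow list with one AAGUID) via Microsoft Graph.
Connect-MgGraph -Scopes 'Policy.ReadWrite.AuthenticationMethod'

$Body = @{
    '@odata.type'   = '#microsoft.graph.fido2AuthenticationMethodConfiguration'
    keyRestrictions = @{
        isEnforced      = $true
        enforcementType = 'allow'
        aaGuids         = @('77010bd7-212a-4fc9-b236-d2ca5e9d4084')
    }
} | ConvertTo-Json -Depth 5

Invoke-MgGraphRequest -Method PATCH `
    -Uri 'https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy/authenticationMethodConfigurations/Fido2' `
    -Body $Body -ContentType 'application/json'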

The effects of FIDO2 security key restrictions

So, let’s look at what happens when an end-user has a security key that is not explicitly allowed, or is explicitly blocked, and tries to register it. First, we allowed self-service setup so that users can register their keys themselves. They do this in the Security info section under My Profile or My Sign-Ins. I tested this with the FEITIAN eiPass USB-C/Lightning FIDO2 security key, which has no biometrics, and hence we do not allow it.

The user can complete the workflow right up to naming their security key, but when they want to apply the settings, it throws the below error.

That’s cool. But what happens to users who have already registered a security key type we now block or don’t allow? Does that still work or not? Let’s find out! I tried to log on with a security key that was previously allowed but is now blocked. All goes well up to the point where I swipe my fingerprint. Then it informs me that I cannot log in using this method and advises me to sign in via a different method and remove this security key. That is what we expect.

Finally, what happens when someone changes the policy while a user is still logged in? It either throws the same message as above while navigating, or it throws a “something went wrong” message in your browser. When you click “View more,” it becomes evident that a policy is blocking your FIDO2 security key.

All in all, Azure offers straightforward, effective, and efficient ways of managing what keys to allow or block. Going passwordless when you have played with the FIDO2 security keys seems a lot less complicated and scary than you might think. So please test it out and go for it. A better, safer, and easier authentication method is within grasp for everyone!

LDAP_ALTERNATE_LOGINID_ATTRIBUTE is a gem

Introduction

The registry value LDAP_ALTERNATE_LOGINID_ATTRIBUTE is a gem. It is found in the registry under HKLM\SOFTWARE\Microsoft\AzureMfa. It plays a critical part in getting the NPS extension for Azure MFA to work in real-life scenarios.


For the NPS extension for Azure MFA to work, we need to have a match between the User Principal Name (UPN) in the on-premises Active Directory and in Azure Active Directory (Azure AD). The mapping between those two values is not always one-to-one. You can have Azure AD Connect use a different attribute to populate the Azure Active Directory UPN than the on-premises UPN.

There are many reasons why you might need to do so, and it happens a lot in real-world environments. Changing a UPN is possible but not always in the manner one wants. Sometimes these reasons are technical, political, or process-driven. In the end, you don’t want to break other processes, confuse your users, or upset the powers that be. No matter what the reason, what can you do when you cannot change the UPNs to make them match up?

LDAP_ALTERNATE_LOGINID_ATTRIBUTE is a gem

When you have installed the NPS extension for Azure MFA, you will find part of its configuration in the registry. In there, you can add values or leverage existing ones. One of those is LDAP_ALTERNATE_LOGINID_ATTRIBUTE. It allows using the NPS extension for Azure MFA despite the fact that the users’ UPN does not match between on-premises Active Directory and Azure Active Directory.

What it does is, instead of sending the on-premises UPN to Azure AD, it uses an alternate value. The trick is to select the attribute that was used to populate the Azure AD UPN in scenarios where these do not match. In our example, that is the mail attribute.

Azure AD Connect uses the mail attribute to populate the Azure AD UPN for our users. So we have [email protected] there.

AD DS mail attribute set to a different value than the UPN.

In our example here we assume that we cannot add an alternate UPN suffix to our Active Directory and change the users to that. Even if we could, the dots in the user name would require a change there. That could get messy, confuse people, break stuff etc. So that remains at [email protected].

Our AD DS UPN is set to the domain name suffix and the account name has no dots.

When we have the NPS extension for Azure MFA set up correctly and functioning, we can set LDAP_ALTERNATE_LOGINID_ATTRIBUTE to “mail” and it will use that attribute to validate the user in Azure AD and send an MFA challenge.
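A quick sketch of setting that value with PowerShell on the NPS server (the registry path and value name are the ones mentioned above; restarting the Network Policy Server service afterwards so the extension picks up the change is my assumption, so verify in your environment):

# Sketch: point the NPS extension for Azure MFA at the AD DS mail attribute.
$RegPath = 'HKLM:\SOFTWARE\Microsoft\AzureMfa'
Set-ItemProperty -Path $RegPath -Name 'LDAP_ALTERNATE_LOGINID_ATTRIBUTE' -Value 'mail'
# Restart the Network Policy Server service (service name IAS) to apply the change.
Restart-Service -Name IAS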

LDAP_ALTERNATE_LOGINID_ATTRIBUTE to the rescue

Need help configuring the NPS extension for Azure MFA ?

By the way, if you need help configuring the NPS extension for Azure MFA, you can read these two articles for inspiration.

Conclusion

There are a lot of moving parts to getting an RD Gateway deployment with the NPS extension for Azure MFA to work. It would be a pity to conclude that a potentially disruptive change to a UPN, whether on-premises and/or in Azure, is required for it to work. Luckily, there is some flexibility in how you configure the NPS extension for Azure MFA via its registry values. In that respect, LDAP_ALTERNATE_LOGINID_ATTRIBUTE is a gem!