Connect to an Azure VM via Bastion with native RDP using only Azure PowerShell

Connecting to an Azure VM via Bastion with native RDP using only Azure PowerShell requires a custom solution. By default, the user must leverage Azure CLI. It also requires the user to know the Bastion subscription and the resource ID of the virtual machine. That is all fine for an IT Pro or developer, but it is a bit much to handle for a knowledge worker.

That is why I wanted to automate things for those users and hide the complexity from them. One requirement was to ensure the solution would work on a Windows client on which the user has no administrative rights. For those use cases, I wrote a PowerShell script that takes care of everything for the end user. Hence, we chose to leverage the Azure PowerShell modules, which can be installed for the current user without administrative rights if needed. Great idea, but that left us with two challenges to deal with. These I will discuss below.

A custom PowerShell Script

The user must have the rights to connect to their virtual machine in Azure over the (central) Bastion deployment. The required roles are listed below; a sketch for assigning them follows the list. See Connect to a VM using Bastion – Windows native client for more information.

  • Reader role on the virtual machine.
  • Reader role on the NIC with private IP of the virtual machine.
  • Reader role on the Azure Bastion resource.
  • Optionally, the Virtual Machine Administrator Login or Virtual Machine User Login role.
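
For whoever sets up those permissions, the roles can be assigned with Az PowerShell. A minimal sketch, assuming $UserUpn, $VmResourceId, $NicResourceId, and $BastionResourceId have been looked up beforehand (those variable names are placeholders, not part of the published script):

# Placeholder variables: resolve the user and the three resource IDs first
New-AzRoleAssignment -SignInName $UserUpn -RoleDefinitionName 'Reader' -Scope $VmResourceId
New-AzRoleAssignment -SignInName $UserUpn -RoleDefinitionName 'Reader' -Scope $NicResourceId
New-AzRoleAssignment -SignInName $UserUpn -RoleDefinitionName 'Reader' -Scope $BastionResourceId
# Optional, for Microsoft Entra ID login to the VM itself
New-AzRoleAssignment -SignInName $UserUpn -RoleDefinitionName 'Virtual Machine User Login' -Scope $VmResourceId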

When these rights are in place, the script generates an RDP file for the user on the desktop and launches the RDP session, to which they authenticate via Azure MFA to the Bastion host and via their VM credentials to the virtual machine. The script removes the RDP file after they close the RDP session. The complete sample code can be found here on GitHub.

I don’t want to rely on Azure CLI

Microsoft uses Azure CLI to connect to an Azure VM via Bastion with native RDP. We do not control what gets installed on those clients, and if an installation requires administrative rights, that can be an issue. There are tricks with Python to get Azure CLI installed for a user, but again, we are dealing with non-technical profiles here.

So, is there a way to get around the requirement to use Azure CLI? Yes, there is! Let's dive into the Azure CLI code and see what it does. As it turns out, it is all Python! We need to dive into the extension for Bastion, and after sniffing around and wrapping my brain around it, I concluded that these lines contain the magic needed to create a PowerShell-only solution.

See for yourself over here: azure-cli-extensions/src/bastion/azext_bastion/custom.py at d3bc6dc03bb8e9d42df8c70334b2d7e9a2e38db0 · Azure/azure-cli-extensions · GitHub

In PowerShell, that translates into the code below. One thing to note: for this code to work with Windows PowerShell 5.1, we cannot use "keep-alive" for the connection header. PowerShell (Core) 7 does support that setting, but it is not installed on Windows by default.

# Connect & authenticate to the correct tenant and to the Bastion subscription
Connect-AzAccount -Tenant $TenantId -Subscription $BastionSubscriptionId | Out-Null

 #Grab the Azure Access token
    $AccessToken = (Get-AzAccessToken).Token
    If (!([string]::IsNullOrEmpty($AccessToken))) {
        #Grab your centralized bastion host
        try {
            $Bastion = Get-AzBastion -ResourceGroupName $BastionResoureGroup -Name $BastionHostName
            if ($Null -ne $Bastion ) {
                write-host -ForegroundColor Cyan "Connected to Bastion $($Bastion.Name)"
                write-host -ForegroundColor yellow "Generating RDP file for you to desktop..."
                $target_resource_id = $VmResourceId
                $enable_mfa = "true" #"true"
                $bastion_endpoint = $Bastion.DnsName
                $resource_port = "3389"

                $url = "https://$($bastion_endpoint)/api/rdpfile?resourceId=$($target_resource_id)&format=rdp&rdpport=$($resource_port)&enablerdsaad=$($enable_mfa)"

                $headers = @{
                    "Authorization"   = "Bearer $($AccessToken)"
                    "Accept"          = "*/*"
                    "Accept-Encoding" = "gzip, deflate, br"
                    #"Connection" = "keep-alive" #keep-alive and close not supported with PoSh 5.1 
                    "Content-Type"    = "application/json"
                }

                $DesktopPath = [Environment]::GetFolderPath("Desktop")
                $DateStamp = Get-Date -Format yyyy-MM-dd
                $TimeStamp = Get-Date -Format HHmmss
                $DateAndTimeStamp = $DateStamp + '@' + $TimeStamp 
                $RdpPathAndFileName = "$DesktopPath\$AzureVmName-$DateAndTimeStamp.rdp"
                $progressPreference = 'SilentlyContinue'
            }
            else {
                write-host -ForegroundColor Red  "We could not connect to the Azure bastion host"
            }
        }
        catch {
            <#Do this if a terminating exception happens#>
        }
        finally {
            <#Do this after the try block regardless of whether an exception occurred or not#>
        }

Finding the resource id for the Azure VM by looping through subscriptions is slow

As I build a solution for a Windows client, I am not considering leveraging a tunnel connection (see Connect to a VM using Bastion – Windows native client). I “merely” want to create a functional RDP file the user can leverage to connect to an Azure VM via Bastion with native RDP.

Therefore, to make life as easy as possible for the user, we want to hide any complexity from them as much as possible. Hence, I can only expect them to know the virtual machine's name in Azure. And if required, we can even put that in the script for them.

But no matter what, we need to find the virtual machine’s resource ID.

Azure Resource Graph to the rescue! We can leverage the code below, and even when you have to search across hundreds of subscriptions, it is far more performant than Azure PowerShell's Get-AzVM, which needs to loop through all subscriptions. This leads to less waiting and a better experience for your users. The Az.ResourceGraph module can also be installed without administrative rights for the current user.

$VMToConnectTo = Search-AzGraph -Query "Resources | where type == 'microsoft.compute/virtualmachines' and name == '$AzureVmName'" -UseTenantScope

Note the use of -UseTenantScope, which ensures we search the entire tenant even if some subscription filtering is in effect.
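
From that query result, we grab the resource ID that the rest of the script needs. A minimal sketch, assuming the VM name is unique in the tenant (if it is not, filter the results down to one first):

# Search-AzGraph returns the queried Resources columns; id holds the full resource ID
$VmResourceId = $VMToConnectTo.id
if ([string]::IsNullOrEmpty($VmResourceId)) {
    Write-Host -ForegroundColor Red "Could not find a virtual machine named $AzureVmName in this tenant."
    Exit
}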

Creating the RDP file to connect to an Azure Virtual Machine over the bastion host

Next, I create the RDP file via a web request and write the result to a file on the desktop, from where we launch it. The user can then authenticate to the Bastion host (with MFA) and to the virtual machine with the appropriate credentials.

        try {
            $progressPreference = 'SilentlyContinue'
            Invoke-WebRequest $url -Method Get -Headers $headers -OutFile $RdpPathAndFileName -UseBasicParsing
            $progressPreference = 'Continue'

            if (Test-Path $RdpPathAndFileName -PathType Leaf) {
                Start-Process $RdpPathAndFileName -Wait
                Write-Host -ForegroundColor Magenta "Deleting the RDP file after use."
                Remove-Item $RdpPathAndFileName
                Write-Host -ForegroundColor Magenta "Deleted $RdpPathAndFileName."
            }
            else {
                Write-Host -ForegroundColor Red "The RDP file was not found on your desktop and, hence, could not be deleted."
            }
        }
        catch {
            Write-Host -ForegroundColor Red "An error occurred during the creation of the RDP file."
            $Error[0]
        }
        finally {
            $progressPreference = 'Continue'
        }

Finally, when the user is done, the file is deleted. A new one will be created the next time the script is run. This protects against stale tokens and such.

Pretty it up for the user

I create a shortcut and rename it to something sensible for the user. Next, I change the icon to the provided one, which helps visually distinguish the shortcut from any other PowerShell script shortcut. They can copy that shortcut wherever suits them or pin it to the taskbar.
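
Creating that shortcut can itself be scripted. A minimal sketch using the WScript.Shell COM object; the shortcut name, $ScriptPath, and $IconPath are assumptions for illustration, not part of the published script:

# Placeholder paths: point $ScriptPath at the connection script and $IconPath at the icon file
$WshShell = New-Object -ComObject WScript.Shell
$Shortcut = $WshShell.CreateShortcut("$([Environment]::GetFolderPath('Desktop'))\Connect to my Azure VM.lnk")
$Shortcut.TargetPath = 'powershell.exe'
$Shortcut.Arguments = "-ExecutionPolicy Bypass -File `"$ScriptPath`""
$Shortcut.IconLocation = $IconPath
$Shortcut.Save()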

What is AzureArcSysTray.exe doing on my Windows Server?

Introduction to AzureArcSysTray.exe

After installing the October 2023 updates for Windows Server 2022, I noticed a new systray icon, AzureArcSysTray.exe.

It encourages me to launch Azure Arc Setup, which I hope takes a bit more planning than following a systray link. But that's just me, an old-school IT Pro.

Get rid of the systray entry

Delete the AzureArcSysTray.exe value from the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run registry key. Better yet, use GPO or another form of automation to get this done wholesale. I used Computer Configuration GPO Preferences in the lab; see the screenshot below from my home lab. It is self-explanatory.

The added benefit of a GPO is that it will deal with the value again if Microsoft pushes it in the next update cycle.
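
If you prefer scripting the registry change on a single server, a minimal sketch is below. It assumes the Run value is named AzureArcSysTray; check the exact value name in the Run key on your build before deploying this at scale.

# Run elevated; the value name is an assumption, verify it in the Run key first
Remove-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Run' -Name 'AzureArcSysTray' -ErrorAction SilentlyContinue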

Uninstall the feature

Using DISM or Server Manager, you can uninstall the feature altogether. Do note that this requires a reboot!

Disable-WindowsOptionalFeature -Online -FeatureName AzureArcSetup
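
For completeness, the classic dism.exe equivalent should look like the line below; I am assuming the same feature name as in the cmdlet above, so verify it with dism /online /get-features if in doubt.

dism /online /disable-feature /featurename:AzureArcSetup /norestart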

Or use the Remove Roles and Features wizard in Server Manager.

Note: This removes the systray exe and Azure Arc Setup. If someone has already set it up and configured it, the Azure Connected Machine agent is still there, up and running, and needs more attention. Does anyone really onboard servers into Azure Arc like this?

Bad Timing

Well, this was one thing I could have done without on the day I was deploying the October 2023 updates expeditiously. Why expedited? Well, these updates addressed about 104 CVEs (of which 12 were critical remote code execution issues and 3 were zero-days) and included a Hyper-V RCT fix that we had been waiting for (for the past five – yes, 5 – years). Needless to say, testing and rollout were swift. That AzureArcSysTray.exe delayed us, as we had to explain and mitigate it.

Is it documented?

Yes, it is, right here: https://learn.microsoft.com/en-us/azure/azure-arc/servers/onboard-windows-server. It was incomplete on Tuesday night, but they added to it quickly.

Judging from some social media, Reddit, and Slack channels, not too many people were amused by all this. See https://www.reddit.com/r/sysadmin/comments/174ncwy/patch_tuesday_megathread_20231010/k4cdqsj/

https://www.reddit.com/r/sysadmin/comments/1763a7o/azure_arc/

Conclusion

I had to explain what it was and our options to eliminate it, all while we were asked to deploy the updates as soon as possible. Finding AzureArcSysTray.exe and Azure Arc Setup installed was not part of the plan late Tuesday night in the lab.

Please, Microsoft, don’t do this. We all know Azure Arc is high on Microsoft’s agenda. It is all the local Microsoft employees have been talking about for weeks now. We get it. But nagging us with systray icons is cheesy at best, very annoying, and, for many customers, nearly unacceptable.

ConvertFrom-Json is not serializable

Introduction

While writing Bicep recently, I was stumped by the fact that my deployment kept failing. I spent a lot of time troubleshooting many possible ideas about what might be causing it. As JSON is involved and I am far from a JSON syntax guru, I first focused on that. Later, I moved on to how I use JSON in Bicep and PowerShell, before finally understanding that the problem was that the output of ConvertFrom-Json is not serializable.

Parameters with Bicep

When deploying resources in Azure with Bicep, I always need to consider who has to deliver or maintain the code and the parameters. It has to be somewhat structured, readable, and understandable. It can’t be one gigantic listing that confuses people to the point they are lost. Simplicity and ease of use rule my actions here. I know when it comes to IaC, this can be a challenge. So, when it comes to parameters, what are our options here?

  • I avoid hard-coding parameters in Bicep. It’s OK for testing while writing the code, but beyond that, it is a bad idea for maintainability.
  • You can use parameter files. That is a considerable improvement, but it has its limitations.
  • I have chosen the path of leveraging PowerShell to create and maintain parameters and pass those via objects to the main bicep file for deployment. That is a flexible and maintainable approach. Sure, it is not perfect either, but neither am I.

Regarding Bicep and PowerShell, we can also put parameters in separate files and read those to create parameters. Whether this is a good idea depends on the situation. My rule of thumb is that it is worth doing when things become easier to read and maintain while reducing the places where you have to edit your IaC files. In the case of Azure Firewall Policy Rules Collection Groups, Rules collections, and Rules, it can make sense.

Bicep and JSON files

You can read file content in Bicep using loadTextContent(). With the json() function, you can tell Bicep that this content is JSON. So far, so good. The below is perfectly fine and works, and we can loop through that variable in a resource deployment.

var firewallChildRGCs = [
  json(loadTextContent('./AFW/Policies/RGSsAfwChild01.json'))
  json(loadTextContent('./AFW/Policies/RGSsAfwChild02.json'))
  json(loadTextContent('./AFW/Policies/RGSsAfwChild03.json'))
]

However, I am not entirely happy with this. While I like it in some respects, it conflicts with my desire to avoid editing a working Bicep file once it is in use. So what do I like about it?

It keeps the Bicep clean and concise and limits the looping to iterating over the Rule Collection Groups, thus avoiding nested looping for Rule Collections and Rules. Why is that? Because I can do this:

@batchSize(1)
resource firewallChildPolicyWEUColGroups 'Microsoft.Network/firewallPolicies/ruleCollectionGroups@2022-07-01' = [for (childrcg, index) in firewallChildRGCs: {
  parent: firewallChildPolicyWEU
  name: childrcg.name
  dependsOn: [firewallParentPolicyWEUColGroups]
  properties: childrcg.properties
}]

As you can see, I loop through the variable and pass the JSON into the properties. That way, I create all Rule Collections and Rules without needing to do any nested looping via “helper” modules to get this done.

The drawback, however, is that the loadTextContent function in Bicep cannot use dynamic parameters or variables. As a result, the paths to the files need to be hard-coded into the Bicep file, which is something we want to avoid. For now, though, it is a hard restriction: parameters are evaluated at runtime (bicep deployment), whereas loadTextContent runs at compile time (bicep build). In contrast to the early previews of Bicep, where you “transpiled” the Bicep manually, that compilation is now done for you automatically before the deployment. So you might think passing a parameter to loadTextContent can work, but it does not.

PowerShell and JSON files

As mentioned above, I chose to use PowerShell to create and maintain parameters, and I want to read my JSON files there. It saves me from creating large, long, and complex-to-maintain PowerShell objects with nested arrays. Editing those is not straightforward for everyone. On top of that, it leads to the need for nested looping in Bicep via “helper” modules. While that works, and I use it, I find it more tedious with deeply nested structures and many parameters to supply. Hence, I split things out into easier-to-maintain separate JSON files.

Here is what I do in PowerShell to build my array to pass via an Object parameter. First, I read the JSON files from my folder.

$ChildFilePath = "../bicep/nested/AfwChildPoliciesAndRules/*"
$Files = Get-ChildItem -File $ChildFilePath -Include '*.json' -Exclude 'DONOTUSE*.json'
$Files
# We use this with ConvertFrom-Json to validate that the JSON file is OK, but we cannot use it to pass as a param to Bicep
$AfwChildCollectionGroupsValidate = @()
$AfwChildCollectionGroups = @()
foreach ($File in $Files) {
    try {
        $AfwChildCollectionGroupsValidate += (Get-Content $File.FullName -Raw) | ConvertFrom-Json
        # DO NOT put the converted JSON in here - the PSCustomObject is not serializable and the param passed to Bicep will be empty!
        $AfwChildCollectionGroups += (Get-Content $File.FullName -Raw) # A string is serializable!
    }
    catch {
        Write-Host -ForegroundColor Red "ConvertFrom-Json threw an error. Check your JSON in the RCG/RC/R files."
        Exit
    }
}

I can then use this to roll out the resources, as in the below example.

// Roll out the child Rule Collection Group(s)
var ChildRCGs = [for (rulecol, index) in firewallChildpolicy.RuleCollectionGroups: {
  name: json(rulecol).name
  properties: json(rulecol).properties
}]

Initially, the idea was that, by using ConvertFrom-Json, I would pass the JSON to Bicep as a parameter directly.

$AfwChildCollectionGroups += (Get-Content $File.FullName -Raw) | ConvertFrom-Json

So not only would I not need to load the files in Bicep with a hard-coded path, I would also not need to use the json() function in Bicep.

// Roll out the child Rule Collection Group(s)
var ChildRCGs = [for (rulecol, index) in firewallChildpolicy.RuleCollectionGroups: {
  name: rulecol.name
  properties: rulecol.properties
}]

However, this failed on me time and time again with properties not being found and whatnot. Below is an example of such an error.

Line |
  30 |          New-AzResourceGroupDeployment @params -DeploymentDebugLogLeve …
     |          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
     | 2:46:20 PM - Error: Code=InvalidTemplate; Message=Deployment template validation failed: 'The template variable 'ChildRCGs' is not valid: The language expression property 'name' doesn't exist, available properties are ''.. Please see    
     | https://aka.ms/arm-functions for usage details.'.

It did not make sense at all. That was until a dev buddy asked if the object was serializable at all. And guess what? ConvertFrom-Json creates a PSCustomObject that is NOT serializable.

You can check this quickly yourself.

((Get-Content $File.FullName -Raw) | ConvertFrom-Json).gettype().IsSerializable

Will print False

While

((Get-Content $File.FullName -Raw)).gettype().IsSerializable

Will print True

With some more testing and the use of outputs, I can even visualize that the parameter remained empty! The array contains three empty {} where I expected the JSON.

I usually do not have any issues with this in my pure PowerShell scripting. But here, I pass the object from PowerShell to Bicep, and guess what? For that to work, it has to be serializable. When I do this, there are no warnings or errors; it just seems to work, until you use the parameter and get errors that, at first, make no sense. The root cause is that, in Bicep, the parameter remained empty. Needless to say, I wasted many hours trying to fix this before I finally understood the root cause!

As you can see in the code, I still use ConvertFrom-Json to test if my JSON files contain any errors, but I do not pass that JSON to Bicep as that will not work. So instead, I pass the string and still use the json() function in Bicep.
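
To close the loop, this is roughly how the string array then travels to Bicep. A minimal sketch, where the parameter name afwChildCollectionGroups and the template file name are hypothetical (template parameters surface as dynamic parameters on the cmdlet):

# Placeholder names: match the array parameter declared in your main Bicep file
$params = @{
    ResourceGroupName        = 'rg-firewall-weu'
    TemplateFile             = './main.bicep'
    afwChildCollectionGroups = $AfwChildCollectionGroups
}
New-AzResourceGroupDeployment @params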

Hence, this blog post is to help others not make the mistake I made. It will also help me remember ConvertFrom-Json is not serializable.

Use DNS Application Directory Partitions with conditional forwarders to resolve Azure private endpoints

Before I explain how to use DNS Application Directory Partitions with conditional forwarders, we need to set the stage. We will shortly revisit how DNS name resolution is set up and configured for hybrid Azure environments. That will help you understand why DNS Application Directory Partitions are useful in such scenarios.

Active Directory Domain Services extended to Azure

In the context of this article, an Azure hybrid environment is one where you have Active Directory Domain Services (ADDS) extended to Azure. I.e., you have at least one AD site on-premises and at least one AD site in Azure, with connectivity between the two set up via ExpressRoute or a site-to-site VPN, the firewall configured, etc. In most cases, the DNS servers for the ADDS environment are AD-integrated, which is what we want for this scenario.

You must have DNS name resolution sorted out effectively and efficiently in such an environment. Queries from on-premises to Azure need to be resolved, as do queries from Azure to on-premises.

Regarding resolving private endpoints in Azure, and potentially other private DNS zones in Azure, we need to leverage conditional forwarders. That means we must create all the relevant public DNS zones as conditional forwarders on the on-premises domain controllers. We point these at our custom DNS servers in Azure, or at an Azure Firewall DNS proxy that points to our custom DNS servers in Azure; the latter is the better option if it is available. Those custom DNS servers will most likely be our AD/DNS servers in Azure. These forward the queries to the Azure VIP 168.63.129.16, which lets Azure DNS handle the actual name resolution.

Conditional forwarders on-premises

There are two attention points in this scenario. First, the conditional forwarders should only exist on the on-premises DC/DNS servers. That is normal. The DC/DNS servers in Azure can just forward the queries to the Azure VIP, which will have the Azure recursive DNS service query the private DNS zones and provide an answer. That is why we forward the on-prem queries to them directly or via the firewall DNS proxy.

Secondly, less frequently, but more often than you might think, ADDS on-premises does not translate into a single Azure tenant or deployment. You can have multiple AD sites in Azure for the same on-prem ADDS environment. That happens due to different business units, mergers and acquisitions, politics, life, or whatever.

Example on-prem / Azure ADDS environment with Azure FW DNS proxy.

Both attention points mean that we must ensure that the on-prem conditional forwarders only live on the DC/DNS servers that forward to the correct custom DNS services in Azure. Some DC/DNS servers on-premises might send their queries to Azure AD site 1 and others to Azure AD site 2, which might live in separate tenants or Azure deployments.

How do we achieve this?

One option is to create the conditional forwarders without storing and replicating them to all DC/DNS servers in the forest or the domain. That works quite well, but it leaves you the burden of configuring them on every DC/DNS server where they are required. That’s OK; PowerShell is your friend. See PowerShell script to maintain Azure Public DNS zone conditional forwarders – Working Hard In IT for a script that does that for you.
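
For that per-server approach, the zone is simply created without AD integration. A minimal sketch, with the zone name and forwarder IP as placeholder values:

# Without -ReplicationScope, the conditional forwarder stays local to this DNS server
Add-DnsServerConditionalForwarderZone -Name 'blob.core.windows.net' -MasterServers 10.10.10.4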

But another option might be handy if you have, say, 10 on-premises Active Directory sites and 5 Azure Active Directory sites. That option is DNS application directory partitioning. You can create your own application directory partitions and add the desired DC/DNS servers to them. You can then create your conditional forwarders, store them in Active Directory, and replicate them to their respective custom partitions. That offers the flexibility to keep the conditional forwarders off the Azure DC/DNS servers and enables different conditional forwarder configurations per on-premises Active Directory site.

DNS Application Directory Partitions

To create the DNS application directory partitions, you can use PowerShell or the ‘dnscmd’ command-line tool. I will use PowerShell here. If you want to use the command line, look at How to create and apply a custom application directory partition on an Active Directory-integrated DNS zone on how to do this.

Add-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE"

The command above does two things: it creates the application directory partition and enlists the DNS server on which you ran it. You can test that with Get-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE", or Get-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE" -ComputerName 'DC01' if you want to specify the computer name.

By the way, if you run just Get-DnsServerDirectoryPartition, you will see all partition info for the current node or the node you specify.

Register the second DC/DNS server in this partition with the following command.

Register-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE" -ComputerName 'DC02'

This returns nothing by default, or an error in case something is wrong. Check your handiwork with the command below.

Get-DnsServerDirectoryPartition -Name "OP-BLUE-ADDS-SITE" -ComputerName 'DC02'

Note that Get-DnsServerDirectoryPartition only shows the registered DNS server for the node you are running it on or the one you specify. You do not get a list of all registered servers.

Now go ahead and store some zones in Active Directory, replicated to the BLUE partition, on one of the DC/DNS servers, and you will see the ZoneCount go up on both. Just wait for replication to do its job, or force replication to happen.

Storing the conditional forwarder in your DNS application directory partition

It is easy to store conditional forwarders in your custom DNS application directory partition. You can do this by adding or editing a conditional forwarding zone. However, be aware of the bug I wrote about in Bug when changing the “store this conditional forwarder in active directory” setting.

When you create the conditional forwarder zones for the private endpoints, you can store them in Active Directory and replicate them to their respective partitions. Just select the correct partition in the drop-down list. You will only see the partition for which your DC/DNS server has been registered, not every existing partition.

Select the correct DNS application directory partition in the drop-down list
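
The same can be done in PowerShell when you prefer automation over the GUI. A minimal sketch, with a placeholder zone name and forwarder IP:

# Store the conditional forwarder in the custom partition created earlier
Add-DnsServerConditionalForwarderZone -Name 'blob.core.windows.net' -MasterServers 10.100.0.132 -ReplicationScope 'Custom' -DirectoryPartitionName 'OP-BLUE-ADDS-SITE'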

When done, the properties for the conditional forwarder will show that the zone is stored in Active Directory and replicated to all domain controllers in a user-defined scope.

Conditional forwarder replicated to all DCs in a user-defined scope

Create a partition for every collection of DC/DNS servers whose Azure private endpoint DNS queries must go to a specific Azure deployment. Depending on your situation, that might be one or more partitions.

As I mentioned, a custom partition will not even be offered on any DC/DNS server that has not been enlisted in it. This protects against people selecting the wrong custom partition for their environment.

Conclusion

That’s it. You now have one more option on your tool belt when configuring on-premises to Azure name resolution in hybrid scenarios. The fun thing is that I have never seen more people learn about using DNS Application Directory Partitions with conditional forwarders than now that they have to design and configure DNS for hybrid on-premises/Azure ADDS environments. Maybe you learned something new today. If so, I am happy you did.