Automate those pretty graphs (Azure portal dashboards)
- Joe Parr
- Aug 17, 2022
- 5 min read
Updated: Aug 24, 2022
So, I'm sure for most people, having the ability to view nice pretty graphs showcasing the resources associated with a Log Analytics workspace is pretty cool, right?
Couple that with the ability to automate this process for multiple business units, or different clients, and it's even better, right?
In this blog I am going to showcase the way I recently automated the deployment of Azure portal dashboards, and drop in some useful nuggets on the best ways I found to do this.
The automation model was defined with parameters stored in a JSON file. As I delved deeper into the automation of the dashboards, it became quite apparent that automating them through a traditional ARM template was a lot cleaner than trying to crowbar them into Bicep, hence the solution that follows and the blog posted prior to this.
So to begin…
But first...
Nugget One!
Go into the Azure portal and manually create your dashboard first. That way you can mess around with the design of your dashboard before trying to automate the process.
You can create charts by pinning Azure Monitor or log queries (Log Analytics workspace) to your dashboard. I am not going to go into how to create your own dashboard, but here is a little link if you need a bit of a refresher: https://docs.microsoft.com/en-us/azure/azure-portal/azure-portal-dashboards
Just get the dashboard into your preferred state. If you need more than one dashboard, so be it; that is what we have automation for!
Here is one I created earlier:

Apologies for the lack of data, this was taken just after trashing resources as I was running out of Azure credits!
Lettuce begin..
So, now that we have the manual creation out of the way and we have a dashboard we are happy with, you want to go and grab the JSON file of this dashboard.
You can do this from the screen where you view the dashboard: simply press the Export button, then Download.
This will download a "<dashboardname>.json" file.
Once you have this file, open it in a code editor.
Your file should look similar to this:
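In case the screenshot doesn't come through, the exported file has roughly this top-level shape (a trimmed sketch; the names and values shown here are illustrative, your export will contain your own dashboard's details):

```json
{
  "properties": {
    "lenses": {
      "0": {
        "order": 0,
        "parts": {
          "0": { "...": "your pinned chart definitions live in here" }
        }
      }
    },
    "metadata": { "model": {} }
  },
  "name": "MyDashboard",
  "type": "Microsoft.Portal/dashboards",
  "location": "uksouth",
  "tags": { "hidden-title": "My Dashboard" }
}
```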

What we really want to focus on here is the "parts" section. This is where all of the charts you manually defined live.
These look similar to the below:

As you can see, we have defined the resource ID of the log analytics workspace of which we will pull the data from.
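For reference, a Log Analytics workspace resource ID follows this shape (placeholders shown, not real values):

```
/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>
```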
Once you have your ARM template in the correct format, you can then leverage an existing parameters file that contains the variables required for the dashboard deployment.
And as we are using PowerShell to deploy this, we can grab other values too and pass them in, such as the subscription ID of the context we are running the PowerShell in.
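The exact shape of that parameters file is up to you; the property names simply need to match whatever your deployment script reads. A minimal sketch, assuming the same property paths the PowerShell script in this post uses (the resource names here are made up):

```json
{
  "subscriptions": [ "00000000-0000-0000-0000-000000000000" ],
  "workspaces": {
    "uksouth": [
      {
        "name": "law-contoso-uksouth",
        "resourceGroup": { "name": "rg-monitoring-uksouth" }
      }
    ]
  },
  "dashboards": {
    "resourceGroup": { "name": "rg-dashboards-uksouth" }
  }
}
```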
Effectively, we will follow the instructions within this article
The second Nugget!
When trying to pass in resource IDs of the Log Analytics workspaces, I had to define ARM template parameters to craft the resource ID for the Log Analytics workspace; these included my subscription ID, resource group name and Log Analytics workspace name. The string in the template looked like this:
"name": "Scope",
"value": {
"resourceIds": [
"[concat('/subscriptions/', parameters('subscriptionId'), '/resourceGroups/', parameters('workspaceResourceGroup'),'/providers/Microsoft.OperationalInsights/workspaces/', parameters('workspaceName'))]"
]
What you will see when you build out your ARM template is a set of parameters that you need to set to ensure your dashboard is linked to the correct Log Analytics workspace and subscriptions. These are the resources we will get and set within the PowerShell script.
Now onto the good bit..
So, now that we have our deployable ARM template, let's create the PowerShell script and utilise an existing parameters file, with resources defined, to deploy a portal dashboard.
Just to recap first though: you should now have a dashboard ARM template, a parameters file with existing resources defined, and a brand new PowerShell script ready for your input.
My dashboard template parameters and variables looked like this when complete:
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"logAnalyticsWorkspaceId": {
"type": "string"
},
"subscriptionId": {
"type": "string"
},
"location": {
"type": "string"
},
"subscriptionIds": {
"type": "array"
},
"customer": {
"type": "string"
},
"workspaceResourceGroup": {
"type": "string"
},
"workspaceName":{
"type": "string"
}
},
"variables": {
"dashboardName": "[concat('Virtual-Machine-Overview-', parameters('location'))]",
"dashboardSecretName": "[concat('Virtual Machine Overview ', parameters('location'))]",
"dashboardLocation": "parameters('location')"
},
Where an example of a dashboard part looked like this:
"0": {
"position": {
"x": 0,
"y": 0,
"colSpan": 12,
"rowSpan": 4
},
"metadata": {
"inputs": [
{
"name": "queryParams",
"value": {
"chartPartQueryParamsType": 0,
"timeRange": {
"options": {},
"relative": {
"duration": 3600000
}
},
"metricQueryId": "top-n-disk"
}
},
{
"name": "bladeName",
"value": "AtScaleVmInsightsBladeViewModel"
},
{
"name": "bladeParams",
"value": {
"scopeSelections": {
"solutionType": "azure",
"scopes": {
"azure": {
"subscription": {
"id": "[concat ('/subscriptions/', parameters('subscriptionId'))]",
"authorizationSource": "RoleBased",
"subscriptionId": "parameters('subscriptionId')",
"displayName": "parameters('customer')",
"state": "Enabled",
"subscriptionPolicies": {
"locationPlacementId": "Public_2014-09-01",
"quotaId": "EnterpriseAgreement_2014-09-01",
"spendingLimit": "Off"
}
},
"resourceGroup": null,
"resourceType": null,
"resource": null
},
"hybrid": {
"workspace": null,
"computerGroup": null,
"computer": null
}
},
"timeRange": {
"performanceTab": {
"options": {},
"relative": {
"duration": 3600000
}
}
}
},
"selectedTab": 1,
"perfViewSelectedTab": 0
}
},
{
"name": "defaultOptionPicks",
"value": [
{
"id": "Avg",
"displayName": "Avg",
"isSelected": false
},
{
"id": "Min",
"displayName": "Min",
"isSelected": false
},
{
"id": "P50",
"displayName": "50th",
"isSelected": false
},
{
"id": "P90",
"displayName": "90th",
"isSelected": false
},
{
"id": "P95",
"displayName": "95th",
"isSelected": true
},
{
"id": "Max",
"displayName": "Max",
"isSelected": false
}
]
},
{
"name": "showOptionPicker",
"value": false
}
],
"type": "Extension/Microsoft_Azure_WorkloadInsights/PartType/ChartPart",
"partHeader": {
"title": "Logical Disk Space Used",
"subtitle": "Virtual Machines"
}
}
},
And then finally, the naming of the dashboard looked like this:
"name": "[variables('dashboardName')]",
"type": "Microsoft.Portal/dashboards",
"location": "[parameters('location')]",
"tags": {
"hidden-title": "[variables('dashboardSecretName')]"
},
"apiVersion": "2015-08-01-preview"
As you can see, this utilises all the variables and parameters we defined at the top of the template.
So, let's build out our PowerShell script to deploy this shi*****
We start with the parameters, specifically a location parameter. As you can see from the template parameters, we have laid these out per region, so we shall do the same here to ensure we get the right data in our dashboard and the correct naming of said dashboard.
Within the parameters for this scenario, we also required a business unit parameter; this relates to the business unit's parameters folder that I had also developed.
In effect, when we have a multi-region deployment we will have a portal-dashboard-region per region. Obviously, we could restructure this to have one dashboard; this was just the situation I had to solve, so come at me!
Anyway, I digress; onto the engine room of the PowerShell script:
Param
(
    [string] $Location,
    [string] $Customer
)

Import-Module Az

Try
{
    # Load the deployment parameters for this business unit and capture the current context
    Try {
        $deploymentParameters = Get-Content -Path "..\Parameters\$Customer\dashboards.json" | ConvertFrom-Json -Depth 30
        $currentContext = Get-AzContext
        $currentSubId = $currentContext.Subscription.Id
        $subs = $deploymentParameters.subscriptions
    }
    Catch {
        Write-Output "Cannot get all relevant parameters"
        break
    }

    # Resolve the Log Analytics workspaces defined for this region
    Try {
        $laws = foreach ($law in $deploymentParameters.workspaces.$Location) {
            Get-AzOperationalInsightsWorkspace `
                -ResourceGroupName $law.resourceGroup.name `
                -Name $law.name
        }
    }
    Catch {
        Write-Output "Cannot get Log Analytics workspaces"
        break
    }

    Try {
        $dashboardTemplatePath = "..\Templates\Dashboards\virtualMachines.json"
    }
    Catch {
        Write-Output "Cannot get dashboard template"
        break
    }

    # Deploy the dashboard, passing the template parameters we defined earlier
    New-AzResourceGroupDeployment `
        -ResourceGroupName $deploymentParameters.dashboards.resourceGroup.name `
        -Name "VirtualMachineDashDeploy" `
        -TemplateFile $dashboardTemplatePath `
        -logAnalyticsWorkspaceId $laws.ResourceId `
        -subscriptionId $currentContext.Subscription.Id `
        -location $Location `
        -subscriptionIds $subs `
        -customer $Customer `
        -workspaceName $laws.Name `
        -workspaceResourceGroup $laws.ResourceGroupName `
        -Verbose
}
Catch
{
    Write-ErrorLogMessage -LogMessage "Error deploying VM Dashboard: [$($_.Exception.Message)]"
}
What we have here is a couple of try/catch blocks for error handling. As you can see, we utilise Get-Content to get our deployment parameters (workspaces, resource group, etc.), and we also grab a few other bits and pieces to get the IDs we need; remember the parameters we need to fulfil in the dashboard ARM template?
Now, all we need to do is run the PowerShell script, passing our location for deployment and resources, and our business unit (obviously this isn't needed and can be developed out of the automation if you have one client).
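For example, assuming you saved the script as Deploy-VmDashboard.ps1 (the script name, customer and region here are placeholders; swap in your own):

```powershell
# Deploy the VM overview dashboard for the Contoso business unit in UK South
.\Deploy-VmDashboard.ps1 -Location 'uksouth' -Customer 'Contoso'
```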
You should now have a portal dashboard in a resource group you already have in your subscription. Go ahead, click it, view the pretty graphs, and even share it with your colleagues via RBAC. Simples, right?
To conclude...
Hopefully, y'all found this informative and it may help you in some way. As with all my blogs, I'll share what I have found out in my daily work or personal exploration, and hopefully you'll find something useful here!
{
  "SignOff": {
    "Cheers": "THF"
  }
}