I’m assuming you already have:
1. A Log Analytics workspace.
2. An Automation Account (a Hybrid Worker group is not required).
At the subscription level there are currently no diagnostic settings that can be linked to Log Analytics to track quotas and usage. Instead, we’ll create a PowerShell script as an Automation Account Runbook that gets the current quotas and usage and saves them in a custom Log Analytics table. Then we’ll create a schedule in the Automation Account and link it to the Runbook, so the data is collected at the desired frequency rather than just once. Afterwards we’ll create alerts in Azure Monitor, so we get notified whenever the desired usage threshold is exceeded. This alone gives you valuable insight: it can help you review and plan for better cost savings, and it tells you when it’s time to open a ticket with Microsoft to increase the quotas.
1. Go to your Automation Account and create a new PowerShell Runbook.


Paste the following code, which gets the current usage and quotas and imports the data into a custom Log Analytics table:
Param(
    [string]$omsWorkspaceId,
    [string]$omsSharedKey,
    [string]$locations
)
# To test outside of Azure Automation, replace this block with Login-AzureRmAccount
$connectionName = "AzureRunAsConnection"
try
{
    # Get the connection "AzureRunAsConnection"
    $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName
    "Logging in to Azure..."
    Add-AzureRmAccount `
        -ServicePrincipal `
        -TenantId $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
}
catch {
    if (!$servicePrincipalConnection)
    {
        $ErrorMessage = "Connection $connectionName not found."
        throw $ErrorMessage
    } else {
        Write-Error -Message $_.Exception
        throw $_.Exception
    }
}
# Name of the custom log type; Log Analytics appends _CL, so records land in the AzureQuota_CL table
$LogType = "AzureQuota"
$json = ''
# Credit: s_lapointe https://gallery.technet.microsoft.com/scriptcenter/Easily-obtain-AccessToken-3ba6e593
function Get-AzureRmCachedAccessToken()
{
    $ErrorActionPreference = 'Stop'
    if(-not (Get-Module AzureRm.Profile)) {
        Import-Module AzureRm.Profile
    }
    $azureRmProfileModuleVersion = (Get-Module AzureRm.Profile).Version
    # refactoring performed in AzureRm.Profile v3.0 or later
    if($azureRmProfileModuleVersion.Major -ge 3) {
        $azureRmProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
        if(-not $azureRmProfile.Accounts.Count) {
            Write-Error "Ensure you have logged in before calling this function."
        }
    } else {
        # AzureRm.Profile < v3.0
        $azureRmProfile = [Microsoft.WindowsAzure.Commands.Common.AzureRmProfileProvider]::Instance.Profile
        if(-not $azureRmProfile.Context.Account.Count) {
            Write-Error "Ensure you have logged in before calling this function."
        }
    }
    $currentAzureContext = Get-AzureRmContext
    $profileClient = New-Object Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient($azureRmProfile)
    Write-Debug ("Getting access token for tenant " + $currentAzureContext.Subscription.TenantId)
    $token = $profileClient.AcquireAccessToken($currentAzureContext.Subscription.TenantId)
    $token.AccessToken
}
# Network usage is not currently exposed through PowerShell, so we need to call the REST API
function Get-AzureRmNetworkUsage($location)
{
    $token = Get-AzureRmCachedAccessToken
    $authHeader = @{
        'Content-Type'='application/json'
        'Authorization'="Bearer $token"
    }
    $azureContext = Get-AzureRmContext
    $subscriptionId = $azureContext.Subscription.SubscriptionId
    $result = Invoke-RestMethod -Uri "https://management.azure.com/subscriptions/$subscriptionId/providers/Microsoft.Network/locations/$location/usages?api-version=2017-03-01" -Method Get -Headers $authHeader
    return $result.value
}
# $locations is passed as a comma-separated string (e.g. "westeurope,northeurope"), so split and trim it
$locationList = $locations -split ',' | ForEach-Object { $_.Trim() }
# Get VM quotas
foreach ($location in $locationList)
{
    $vmQuotas = Get-AzureRmVMUsage -Location $location
    foreach ($vmQuota in $vmQuotas)
    {
        $usage = 0
        if ($vmQuota.Limit -gt 0) { $usage = $vmQuota.CurrentValue / $vmQuota.Limit }
        $json += @"
{ "Name":"$($vmQuota.Name.LocalizedValue)", "Category":"Compute", "Location":"$location", "CurrentValue":$($vmQuota.CurrentValue), "Limit":$($vmQuota.Limit),"Usage":$usage },
"@
    }
}
# Get network quotas
foreach ($location in $locationList)
{
    $networkQuotas = Get-AzureRmNetworkUsage -location $location
    foreach ($networkQuota in $networkQuotas)
    {
        $usage = 0
        if ($networkQuota.limit -gt 0) { $usage = $networkQuota.currentValue / $networkQuota.limit }
        $json += @"
{ "Name":"$($networkQuota.name.localizedValue)", "Category":"Network", "Location":"$location", "CurrentValue":$($networkQuota.currentValue), "Limit":$($networkQuota.limit),"Usage":$usage },
"@
    }
}
# Get the storage quota (subscription-wide, so no location)
$storageQuota = Get-AzureRmStorageUsage
$usage = 0
if ($storageQuota.Limit -gt 0) { $usage = $storageQuota.CurrentValue / $storageQuota.Limit }
$json += @"
{ "Name":"$($storageQuota.LocalizedName)", "Location":"", "Category":"Storage", "CurrentValue":$($storageQuota.CurrentValue), "Limit":$($storageQuota.Limit),"Usage":$usage }
"@
# Wrap in an array
$json = "[$json]"
# Create the function to create the authorization signature
Function Build-Signature ($omsWorkspaceId, $omsSharedKey, $date, $contentLength, $method, $contentType, $resource)
{
    $xHeaders = "x-ms-date:" + $date
    $stringToHash = $method + "`n" + $contentLength + "`n" + $contentType + "`n" + $xHeaders + "`n" + $resource
    $bytesToHash = [Text.Encoding]::UTF8.GetBytes($stringToHash)
    $keyBytes = [Convert]::FromBase64String($omsSharedKey)
    $sha256 = New-Object System.Security.Cryptography.HMACSHA256
    $sha256.Key = $keyBytes
    $calculatedHash = $sha256.ComputeHash($bytesToHash)
    $encodedHash = [Convert]::ToBase64String($calculatedHash)
    $authorization = 'SharedKey {0}:{1}' -f $omsWorkspaceId,$encodedHash
    return $authorization
}
# Create the function to create and post the request
Function Post-OMSData($omsWorkspaceId, $omsSharedKey, $body, $logType)
{
    $method = "POST"
    $contentType = "application/json"
    $resource = "/api/logs"
    $rfc1123date = [DateTime]::UtcNow.ToString("r")
    $contentLength = $body.Length
    $signature = Build-Signature `
        -omsWorkspaceId $omsWorkspaceId `
        -omsSharedKey $omsSharedKey `
        -date $rfc1123date `
        -contentLength $contentLength `
        -method $method `
        -contentType $contentType `
        -resource $resource
    $uri = "https://" + $omsWorkspaceId + ".ods.opinsights.azure.com" + $resource + "?api-version=2016-04-01"
    $headers = @{
        "Authorization" = $signature;
        "Log-Type" = $logType;
        "x-ms-date" = $rfc1123date;
    }
    $response = Invoke-WebRequest -Uri $uri -Method $method -ContentType $contentType -Headers $headers -Body $body -UseBasicParsing
    return $response.StatusCode
}
# Submit the data to the API endpoint
Post-OMSData -omsWorkspaceId $omsWorkspaceId -omsSharedKey $omsSharedKey -body ([System.Text.Encoding]::UTF8.GetBytes($json)) -logType $LogType
And publish the Runbook.
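If you want to verify that data lands in the workspace before wiring up a schedule, you can trigger a one-off run from PowerShell. A minimal sketch, assuming hypothetical resource group, Automation Account and Runbook names (replace them with your own):
# Hedged example: start a manual run of the published Runbook with test parameters.
# "rg-monitoring", "aa-quotas" and "Get-AzureQuota" are placeholders - use your own names.
$params = @{
    omsWorkspaceId = "<your Log Analytics Workspace ID>"
    omsSharedKey   = "<your primary or secondary key>"
    locations      = "westeurope,northeurope"
}
Start-AzureRmAutomationRunbook `
    -ResourceGroupName "rg-monitoring" `
    -AutomationAccountName "aa-quotas" `
    -Name "Get-AzureQuota" `
    -Parameters $params
Keep in mind the custom AzureQuota_CL table may take a little while to appear after the first successful post.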
2. Go to the Automation Account and create a new schedule.


Note that the minimum recurrence for Automation Account schedules is one hour, but if you want to run every 30 minutes you can create two schedules: one running every hour at XX:30 and another running every hour at XX:00. So basically the Runbook will run every time the clock reads “XX:30” or “XX:00”, where XX is the hour.
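If you prefer scripting over the portal, here is a rough sketch of those two offset hourly schedules, again with placeholder resource group and Automation Account names:
# Hedged sketch: two hourly schedules offset by 30 minutes (placeholder names).
$rg = "rg-monitoring"
$aa = "aa-quotas"
# Runs at the top of every hour (the start time must be at least 5 minutes in the future)
New-AzureRmAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $aa `
    -Name "QuotaCheck-OnTheHour" -StartTime ((Get-Date -Minute 0 -Second 0).AddHours(1)) -HourInterval 1
# Runs at half past every hour
New-AzureRmAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $aa `
    -Name "QuotaCheck-HalfPast" -StartTime ((Get-Date -Minute 30 -Second 0).AddHours(1)) -HourInterval 1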
And then go to your published runbook and add the schedule:


In “Link a schedule to your runbook”, select the schedule we’ve just created.
In “Configure parameters and run settings”, paste the Log Analytics Workspace ID and key, and specify the location(s) for which you want to track quotas and usage.
Where do you find the Log Analytics ID (OMSWORKSPACEID) and key (OMSSHAREDKEY)? Go to your Log Analytics workspace > Settings > Agents management and copy the Workspace ID and the primary or secondary key.
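The same linking can be done with PowerShell if you created the schedules from the sketch above; a hedged example that registers both schedules against the Runbook and passes the three parameters (all names are placeholders):
# Hedged sketch: link both schedules to the Runbook and supply its parameters.
$runbookParams = @{
    omsWorkspaceId = "<Workspace ID>"
    omsSharedKey   = "<primary or secondary key>"
    locations      = "westeurope,northeurope"
}
foreach ($scheduleName in @("QuotaCheck-OnTheHour", "QuotaCheck-HalfPast")) {
    Register-AzureRmAutomationScheduledRunbook -ResourceGroupName "rg-monitoring" `
        -AutomationAccountName "aa-quotas" -RunbookName "Get-AzureQuota" `
        -ScheduleName $scheduleName -Parameters $runbookParams
}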

3. Once you start getting logs, you can review them in Log Analytics under the “Custom Logs > AzureQuota_CL” table. The quota names should look something like this (AzureQuota_CL | distinct Name_s):
Availability Sets
Total Regional vCPUs
Virtual Machines
Virtual Machine Scale Sets
Standard LSv2 Family vCPUs
Standard FSv2 Family vCPUs
Standard ESv3 Family vCPUs
Standard DSv2 Family vCPUs
Standard FS Family vCPUs
Standard DASv4 Family vCPUs
Virtual Networks
Static Public IP Addresses
Network Security Groups
Public IP Addresses – Basic
Public Ip Prefixes
Nat Gateways
Network Interfaces
Private Endpoints
Private Endpoint Redirect Maps
Load Balancers
Private Link Services
Application Gateways
Route Tables
Route Filters
Network Watchers
Packet Captures
Application Security Groups
DDoS Protection Plans
DDoS customized policies
Service Endpoint Policies
Network Intent Policies
Standard Sku Load Balancers
Public IP Addresses – Standard
DNS servers per Virtual Network
Custom DNS servers per P2SVpnGateway
Subnets per Virtual Network
IP Configurations per Virtual Network
Peerings per Virtual Network
Security rules per Network Security Group
Security rules per Network Intent Policy
Routes per Network Intent Policy
Security rules addresses or ports per Network Security Group
Inbound Rules per Load Balancer
Frontend IP Configurations per Load Balancer
Outbound Rules per Load Balancer
Routes per Route Table
Routes with service tag per Route Table
Secondary IP Configurations per Network Interface
Inbound rules per Network Interface
Route filter rules per Route Filter
Route filters per Express route BGP Peering
Min Public Ip InterNetwork Prefix Length
Storage Accounts
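The script also tags every record with a Category (Compute, Network or Storage), so you can get a quick overview of how many quota types are collected per category. A small sketch, relying on the _s suffix Log Analytics adds to string fields on ingestion:
AzureQuota_CL
| summarize QuotaTypes = dcount(Name_s) by Category_s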
Let’s say that, compute-wise, we’re using some vCPU families we want to track, and we want to see whether their usage has gone above 75% of the current quotas over the past 7 days. We can do something like this:
AzureQuota_CL
| where TimeGenerated > ago(7d)
| where Name_s has_any ("LSv2","FSv2","EASv4","DSv3","DSv2","DASv4","Total")
| where Usage_d > 0.75

This gives us valuable insight into when usage exceeded the desired percentage, which vCPU family was affected, the exact usage and quota, and the location.
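If you’d rather watch how usage trends over time than just list the breaches, here is a hedged variation that charts the maximum hourly usage per tracked family:
AzureQuota_CL
| where TimeGenerated > ago(7d)
| where Name_s has_any ("LSv2","FSv2","EASv4","DSv3","DSv2","DASv4","Total")
| summarize MaxUsage = max(Usage_d) by bin(TimeGenerated, 1h), Name_s
| render timechart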
4. Now that you store logs and have an example of how to query them, let’s create some alerts in Azure Monitor, so you’ll be notified every time the condition above occurs without having to run the query yourself. Go to Azure Monitor > Alerts > New Alert Rule:

And it’ll look like this:

- For “Resource” select the Log Analytics Workspace.
- For “Condition”, if we take the example from above and you’ve scheduled the logging every 30 minutes, you can set it up like this (see the query sketch after this list):

- For “Actions” select the action group that contains the desired email addresses, or if you don’t have one, click “Manage Actions Group” and create a new one.
- For “Alert Name Details” enter an alert name and description that will be easily identifiable for you.
- Once you create the rule, you can make some customizations from “Customization actions”, for instance changing the email subject to something like “Azure Quotas threshold 75% alert” so it’s more easily identifiable.
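For reference, the condition in this example is just a log query alert. A hedged sketch of the query, with the alert threshold set to “number of results greater than 0” and both the period and frequency set to 30 minutes to match the schedule:
AzureQuota_CL
| where Name_s has_any ("LSv2","FSv2","EASv4","DSv3","DSv2","DASv4","Total")
| where Usage_d > 0.75
| project TimeGenerated, Name_s, Category_s, Location_s, CurrentValue_d, Limit_d, Usage_d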
5. Now, every time one of the usages exceeds 75%, you’ll receive an email notification that looks like this:

Hope this functionality helps until Microsoft adds a built-in feature to track this in real time.
Stay Awesome,
Ivelin