Advanced Compliance Reporting Machine
Here is how you can improve your Intune Device Compliance Reporting using Microsoft Graph, PowerShell and KQL.
Good reporting requires context
Dashboards are everywhere. Almost no application comes without reporting, which is good. Management likes reporting, and IT managers like compliance reporting. Compliance metrics are a must-have in modern operations. Intune makes it easy to verify that devices comply with certain policies.
However, if you're running the thing and are responsible for improving the percentage of compliant devices, you may have noticed that Intune's reporting features have some limitations in terms of filtering, drill-down capabilities and quantification options. It also has no history, which means: no context. Where are you coming from? Is it getting better, is it getting worse? Did I break something? I think that good reporting should help you set priorities.
If you're running the thing you also want to know which settings are failing, and on which device types. You want to quantify that and take action on your now-identified main issues. You may also want to see progress over time, to check whether your actions have been successful. In my opinion this is a must-have, as reporting is a very asynchronous process.
The good thing is: Intune already has the data. We only need to get it and make it accessible and queryable (not sure if that's a proper English word). So today I'm showing you the Advanced Compliance Reporting Machine.
Get the data
The goal of this machine is to report every compliance setting of a device managed by Intune on a daily basis, regardless of which compliance policy the item is assigned in. The machine will consist of three building blocks:
- Fuel: Microsoft Graph Data
- Engine: Data transformed by PowerShell, executed by an Azure Automation Account
- Motion: Data in Log Analytics Workspace that moves your reporting forward
In the end we will have a script that retrieves compliance settings from Microsoft Graph and pumps them into Azure Log Analytics, where you can do KQL magic with them. With Azure Automation you can execute it according to your needs, allowing you to retrieve the status over time.
Gimme the script
The script and the according dashboard are available on GitHub. I'd like to give credit to Mike van der Sluis, who added the batch processing and tested the script with over 20,000 devices! Thank you!
TL;DR: Let's go through each building block.
Compliance Policy Setting States
The first thing we need to check is what we want to report. Looking at Intune, we see a certain hierarchy inside device compliance. The root is the device; it can be either compliant or not. A device has one or more compliance policies assigned, and a compliance policy contains multiple compliance policy setting states. These settings can be seen if we drill down from the device in Intune. I will call them compliance items.
The challenge of compliance is that, no matter how many items are evaluated on a device, one failure is enough to set the device to noncompliant. Intune has a report with all the settings available, but it has limited filtering capabilities. Fortunately, we can retrieve the data from the Graph.
Building the reporting machine
Microsoft Graph
To get an overview of the data we're working with, I recommend trying the queries in the Graph Explorer. The Graph Explorer app will require this permission:
- DeviceManagementConfiguration.Read.All
First we need to retrieve the list of assigned `deviceCompliancePolicySettingStates`. The Graph API reference describes how the Device Compliance Policy Setting State Summary can be retrieved. Simply call

https://graph.microsoft.com/v1.0/deviceManagement/deviceCompliancePolicySettingStateSummaries/

It will return JSON containing all applied compliance settings.
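One item of the response looks roughly like this (the counts are illustrative; the shape follows the `deviceCompliancePolicySettingStateSummary` resource):

```json
{
  "id": "Windows10CompliancePolicy.AntiSpywareRequired",
  "setting": "Windows10CompliancePolicy.AntiSpywareRequired",
  "settingName": "AntiSpywareRequired",
  "platformType": "windows10AndLater",
  "unknownDeviceCount": 0,
  "notApplicableDeviceCount": 0,
  "compliantDeviceCount": 250,
  "remediatedDeviceCount": 0,
  "nonCompliantDeviceCount": 12,
  "errorDeviceCount": 1,
  "conflictDeviceCount": 0
}
```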
Later we will filter for Windows devices only, by appending `?$filter=platformType eq 'windows10AndLater'` to the query (note that OData filters use `eq`, not PowerShell's `-eq`).
Later in the process we will need the `id`. We can use it to list the Device Compliance Setting States: the list of devices the setting applies to, with each device's status.
In our example we will call

https://graph.microsoft.com/v1.0/deviceManagement/deviceCompliancePolicySettingStateSummaries/Windows10CompliancePolicy.AntiSpywareRequired/

This will return a list of all devices that must have the Anti Spyware Required setting enabled, with the according status.
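A single device entry in the response looks roughly like this (trimmed, values illustrative; the shape follows the `deviceComplianceSettingState` resource):

```json
{
  "setting": "Windows10CompliancePolicy.AntiSpywareRequired",
  "settingName": "AntiSpywareRequired",
  "deviceId": "00000000-0000-0000-0000-000000000000",
  "deviceName": "CLIENT-01",
  "userPrincipalName": "user@contoso.com",
  "deviceModel": "Surface Laptop 4",
  "state": "nonCompliant",
  "complianceGracePeriodExpirationDateTime": "2022-09-30T00:00:00Z"
}
```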
It contains the `deviceId`, the `deviceName` and the `state`. This is everything we need. We can now get the `deviceCompliancePolicySettingStateSummary` list, take every setting state ID that is assigned to a Windows device, and iterate through all the associated `deviceComplianceSettingStates`.
Prerequisites
To run the script in Azure we need a few prerequisites:
- App Registration, to access the Graph API
- Log Analytics Workspace
- Azure Automation Account
App Registration
App Registrations are very well documented in the Microsoft Docs. Our app needs DeviceManagementConfiguration.Read.All permissions. For authentication, you need to create a secret.
For the script we will need:
- Tenant ID
- Application (Client) ID
- Client Secret
Log Analytics Workspace
You can use an existing workspace or create a new one. I recommend using the one you use for your Intune logs, because it eases joining the device tables. The script will create a new table. From the workspace we will need the Workspace ID and Primary Key. Both can be obtained from the Agents page under Log Analytics agent instructions.
Azure Automation Account
The Azure Automation Account will hold the environment to run the script on a daily basis. After creating the account, we need to prepare a few things to actually run the code.
Import Module
There is no Import-Module in an Automation Account. You need to tell the Automation Account which modules to import before it spins up your PowerShell instance. We will need the MSAL.PS module. To add it, go to Shared Resources > Modules, click Add Module, and search for MSAL.PS. Once it has been added successfully, it appears in the module list.
Store Variables
Don't store authentication variables, especially API keys, in your code. Your Automation Account has a place where credentials can be securely stored and retrieved in the script. Go to Shared Resources > Variables and add the variables accordingly.
I got feedback that a managed identity is the more secure way to do this. I will keep that for later; if you have a good resource to add, let me know.
Store the script
We will need to store the script, available on GitHub, as a runbook. Go to Runbooks > Create a runbook. It needs to be PowerShell 5.1.
Click Create. In the next window you can paste the script and save it. To ensure that the script works fine, start it in the test pane. If your test succeeds, publish it.
Create Schedule
Create your schedule to run the script according to your needs. A daily execution is sufficient.
PowerShell
In PowerShell we will do the following:
- Authenticate against the graph
- Get all the setting states of a certain platform (or all platforms)
- Get the devices assigned to each retrieved setting state with their compliance status, and store them in a JSON array
- Ship data to Log Analytics.
We will go through all the building blocks now.
Authenticate against the graph
As mentioned before, we will authenticate against the Graph using the MSAL library, which must be imported in the Automation Account. We retrieve the required information previously stored in the variables using `Get-AutomationVariable -Name *yourname*`. At the end we need the `$requestBody` variable to authenticate the API calls.
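A minimal sketch of this step, assuming the Automation variables are named TenantId, ClientId and ClientSecret (use whatever names you chose):

```powershell
# Read the values stored under Shared Resources > Variables
$tenantId = Get-AutomationVariable -Name 'TenantId'
$clientId = Get-AutomationVariable -Name 'ClientId'
$secret   = ConvertTo-SecureString -String (Get-AutomationVariable -Name 'ClientSecret') -AsPlainText -Force

# Acquire a token for Microsoft Graph with the MSAL.PS module
$token = Get-MsalToken -TenantId $tenantId -ClientId $clientId -ClientSecret $secret

# Header used to authenticate all subsequent Graph calls
$requestBody = @{ Authorization = "Bearer $($token.AccessToken)" }
```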
Get the setting states of windows devices
Next we get the available setting states. We filter by platform in the query to limit the data retrieved.
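A sketch of the request, reusing the `$requestBody` header from the authentication step (note the backtick escaping `$filter` inside the double-quoted string):

```powershell
# Only setting states that apply to Windows devices
$uri = "https://graph.microsoft.com/v1.0/deviceManagement/deviceCompliancePolicySettingStateSummaries" +
       "?`$filter=platformType eq 'windows10AndLater'"

$settingStates = (Invoke-RestMethod -Uri $uri -Headers $requestBody -Method Get).value
```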
Get the assigned devices of each setting
We will loop through the retrieved settings with a foreach, use the id to get the devices, and store the results in an array.
When retrieving the devices, we use a do-while loop to handle pagination. What is pagination? Think of a search result that is split across multiple pages. The Graph does the same, so if you're dealing with more than a page of results you need to take care of this, because Microsoft may change the pagination thresholds at any time.
The good thing is: Graph provides the link to the next page, if there is one. The field to look for is `@odata.nextLink`. As long as there is a next link, we query the next page, and that's why a do-while loop is perfect.
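Put together, the loop could look like this (a sketch: it queries the deviceComplianceSettingStates relationship of each setting state summary and follows `@odata.nextLink` until no page is left):

```powershell
$results = @()
foreach ($setting in $settingStates) {
    $uri = "https://graph.microsoft.com/v1.0/deviceManagement/" +
           "deviceCompliancePolicySettingStateSummaries/$($setting.id)/deviceComplianceSettingStates"
    do {
        $response = Invoke-RestMethod -Uri $uri -Headers $requestBody -Method Get
        $results += $response.value
        # Graph returns @odata.nextLink only if another page exists
        $uri = $response.'@odata.nextLink'
    } while ($null -ne $uri)
}
```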
Ship data to log analytics
To ship the data I use functions provided by Microsoft for the Log Analytics HTTP Data Collector API. I understand what they do, but I am barely able to write them myself; I copied them and changed variable names to match the current wording. You will find them in the script on GitHub. We still need to convert the data to JSON and trigger the functions. The `logType` at the end defines the name of the table in Log Analytics.
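A sketch of this final step, assuming the Post-LogAnalyticsData helper from Microsoft's Data Collector API sample and that `$workspaceId` and `$workspaceKey` were read from the Automation variables:

```powershell
# Convert the collected setting states to JSON
$json = $results | ConvertTo-Json -Depth 10

# The logType becomes the table name; Log Analytics appends the _CL suffix
$logType = "DeviceComplianceOverview"

Post-LogAnalyticsData -customerId $workspaceId -sharedKey $workspaceKey `
    -body ([System.Text.Encoding]::UTF8.GetBytes($json)) -logType $logType
```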
That’s it. Run this script on a daily basis and you will have data to query in Log Analytics.
Log Analytics
The data lands in Log Analytics as one row per setting state and device. Every column name ends with a suffix such as `_s`, which refers to the data type used; in most cases it's `_s` for string. You will also see that there is a lot of duplicate data. If you need to save some money, you can tidy it up.
To query the data, it's useful to remove the setting prefix, e.g. `Windows10CompliancePolicy`. I'm using the `parse` operator. It throws away everything matched by `*`, including `CompliancePolicy.`
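For example (the table and column names are assumptions: a logType of DeviceComplianceOverview would land as DeviceComplianceOverview_CL with a Setting_s column):

```kql
DeviceComplianceOverview_CL
| parse Setting_s with * "CompliancePolicy." SettingName
```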
With that data in place, we can start asking questions. In this example I'm filtering for all devices with a noncompliant setting and using that as the basis for a summarize: tell me how many devices have a noncompliant setting, per setting.
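A sketch of such a query, again with assumed table and column names:

```kql
DeviceComplianceOverview_CL
| where State_s == "nonCompliant"
| parse Setting_s with * "CompliancePolicy." SettingName
| summarize NonCompliantDevices = dcount(DeviceName_s) by SettingName
| order by NonCompliantDevices desc
```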
Another useful insight is the per-device view: is there a device that's completely broken?
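For example, counting the distinct noncompliant settings per device (assumed table and column names):

```kql
DeviceComplianceOverview_CL
| where State_s == "nonCompliant"
| summarize NonCompliantSettings = dcount(Setting_s) by DeviceName_s
| order by NonCompliantSettings desc
```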
And one of the most powerful queries on earth (IMHO) to analyze and improve compliance: reporting over time, rendered as a timechart. `bin()` groups all reports of one day into one bucket.
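A sketch, bucketing by day and rendering a timechart (assumed table and column names):

```kql
DeviceComplianceOverview_CL
| where State_s == "nonCompliant"
| parse Setting_s with * "CompliancePolicy." SettingName
| summarize NonCompliantDevices = dcount(DeviceName_s) by bin(TimeGenerated, 1d), SettingName
| render timechart
```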
You can ask that data a lot of questions. If you run the script daily you also have historic data which can give you valuable insights.
Dashboard
I built a reporting dashboard, which is also available on GitHub.
It’s interactive. You can use any of the graphs or tables to drill through and explore your data.
This is a writeup of a session that I gave at WP Ninja Summit 2022