Querying Microsoft Graph with Powershell, the easy way

Microsoft Graph is a very powerful tool for querying organization data, and Graph Explorer makes ad-hoc queries really easy, but Graph Explorer isn't built for automation.
While the concept I’m presenting in this blogpost isn’t something entirely new, I believe my take on it is more elegant and efficient than what I’ve seen other people use.

So, what am I bringing to the table?

  • Zero dependencies on Azure modules, plus .NET Core & Linux compatibility!
  • Recursive/paging processing of Graph data (without the need for FollowRelLink, currently only available in PowerShell 6.0)
  • Authenticates using an Azure AD application/service principal
  • REST compatible (Get/Put/Post/Patch/Delete)
  • Supports json-batch jobs
  • Supports automatic token refresh, useful for extremely long paging jobs
  • Accepts the Application ID & secret as a PSCredential object, which allows the use of credential stores in Azure Automation, or Get-Credential, instead of writing credentials in plaintext

Sounds great, but what do I need to do in order to query the Graph API?

First things first: create an Azure AD application, register a service principal and delegate Microsoft Graph/Graph API permissions.
Plenty of people have done this, so I won't provide an in-depth guide. Instead we're going to walk through how to use the functions line by line.

Once we have an Azure AD application, we need to build a credential object from the service principal AppId and secret.
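A minimal sketch of that step (the AppId and secret are placeholders for your own):

```powershell
# Build a PSCredential: AppId as the username, application secret as the password
$appId      = '00000000-0000-0000-0000-000000000000'
$secret     = Read-Host -Prompt 'Application secret' -AsSecureString
$credential = New-Object System.Management.Automation.PSCredential ($appId, $secret)
```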

Then we acquire a token. Here we need a TenantId in order to let Azure know the context of the authorization token request.
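Under the hood this is a client-credentials call to the Azure AD token endpoint; a rough sketch ($credential from above, tenant name is a placeholder):

```powershell
$tenantId = 'contoso.onmicrosoft.com'
$body = @{
    grant_type    = 'client_credentials'
    resource      = 'https://graph.microsoft.com'
    client_id     = $credential.UserName
    client_secret = $credential.GetNetworkCredential().Password
}
# The access_token property is what we pass to Graph in later calls
$token = (Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" `
    -Body $body).access_token
```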

Once a token is acquired, we are ready to call the Graph API. So let's list all users in the organization.
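Stripped of the function's conveniences, the raw call looks something like this:

```powershell
$headers  = @{ Authorization = "Bearer $token" }
$response = Invoke-RestMethod -Uri 'https://graph.microsoft.com/v1.0/users' -Headers $headers
$response.value | Select-Object displayName, userPrincipalName
```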

In the response, we see a value property which contains the first 100 users in the organization.
At this point some of you might ask: why only 100? Well, that's the default page size on Graph queries, but it can be raised with a $top query parameter on the URI, which allows you to fetch up to 999 users per call.

The cool thing with my function is that it detects if your query doesn't return all the data (the response carries a follow link) and gives a warning in the console.

So, we just add $top=999 and use the recursive parameter to get them all!
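Done by hand, the paging a recursive run performs boils down to following @odata.nextLink until it disappears, roughly like this:

```powershell
$uri   = 'https://graph.microsoft.com/v1.0/users?$top=999'
$users = @()
do {
    $response = Invoke-RestMethod -Uri $uri -Headers $headers
    $users   += $response.value
    $uri      = $response.'@odata.nextLink'   # $null on the last page
} while ($uri)
```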

What if I want to get $top=1 (wat?) users, but recursive? Surely my token will expire after 15 minutes of querying?

Well, yes. That’s why we can pass a tokenrefresh and credentials right into the function and never worry about tokens expiring!

What if I want to delete a user?

That works as well. Simply change the method (Default = GET) to DELETE and go!
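For example, a raw delete of a single user could look like this (the UPN is a placeholder):

```powershell
Invoke-RestMethod -Method Delete -Headers $headers `
    -Uri 'https://graph.microsoft.com/v1.0/users/jane.doe@contoso.onmicrosoft.com'
```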

Deleting users is fun and all, but how do we create a user?

Define the user details in the body and use the POST method.
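A minimal create-user body, per the Graph users documentation (all values are placeholders):

```powershell
$newUser = @{
    accountEnabled    = $true
    displayName       = 'Jane Doe'
    mailNickname      = 'jane.doe'
    userPrincipalName = 'jane.doe@contoso.onmicrosoft.com'
    passwordProfile   = @{
        forceChangePasswordNextSignIn = $true
        password                      = 'S0me-Str0ng-P@ss'
    }
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri 'https://graph.microsoft.com/v1.0/users' `
    -Headers $headers -Body $newUser -ContentType 'application/json'
```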

What about json-batching, and why is that important?

Json-batching is basically up to 20 unique queries in a single call. Many organizations have thousands, if not hundreds of thousands, of users, and that adds up, since many of the queries need to be run against individual users. And that takes time. Jobs that used to take 1 hour now take about 3 minutes with json-batching; 8-hour jobs now take about 24 minutes. If you're not already sold on json-batching, then I have no idea why you're still reading this post.

This can be used statically, by creating a body with embedded queries, or dynamically, as in the example below. We have all users flat in a $users variable. We determine how many times we need to run the loop, build a $body json object with 20 requests per call, run each call against the $batch operation with the POST method, and collect the results in a $responses array. Tada! We've made the querying of Graph 20x more efficient.
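A sketch of that loop (the per-user query here, memberOf, is just an example):

```powershell
$responses = @()
for ($i = 0; $i -lt $users.Count; $i += 20) {
    # Build up to 20 requests, each with a unique id to match responses back
    $slice = $users[$i..([math]::Min($i + 19, $users.Count - 1))]
    $id = 0
    $requests = foreach ($user in $slice) {
        $id++
        @{ id = "$id"; method = 'GET'; url = "/users/$($user.id)/memberOf" }
    }
    $body = @{ requests = @($requests) } | ConvertTo-Json -Depth 4

    $responses += Invoke-RestMethod -Method Post -Headers $headers `
        -Uri 'https://graph.microsoft.com/v1.0/$batch' `
        -Body $body -ContentType 'application/json'
}
```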

Sounds cool, what more can I do?

Almost anything related to the Office 365 suite. Check out the technical resources and documentation for more information; Microsoft is constantly updating and expanding the API functionality. Scroll down for the functions, which should work on PowerShell 4 and up!

Technical resources:

Creating an Azure AD application
https://www.google.com/search?q=create+azure+ad+application

Graph API
https://docs.microsoft.com/en-gb/graph/use-the-api

About batch requests
https://docs.microsoft.com/en-gb/graph/json-batching

Known issues with Graph API
https://docs.microsoft.com/en-gb/graph/known-issues

Thanks to:
https://blogs.technet.microsoft.com/cloudlojik/2018/06/29/connecting-to-microsoft-graph-with-a-native-app-using-powershell/
https://medium.com/@mauridb/calling-azure-rest-api-via-curl-eb10a06127

Functions



Automating delimiter selection when working with Csv cmdlets

Recently I walked past a colleague with an error on his PowerShell console, which piqued my interest. He quickly noted that he had chosen the wrong delimiter for the CSV he imported, which resulted in errors in the code. I then jokingly said “Why don’t you just use the ‘Get-CsvDelimiter’ cmdlet?” and we had a quick laugh.
30 minutes later the “Get-CsvDelimiter” function was born. But first,

Let’s dive into how Import-Csv, ConvertFrom-Csv & delimiters really work

When used correctly, Import-Csv & ConvertFrom-Csv create an array in which each row is a PSCustomObject with properties named after the headers.

Import-Csv has two parameter sets: one with Path as the only required parameter, and one requiring both Path & Delimiter.
If you omit the Delimiter parameter, the Path set is used. This set has a parameter that is required (according to the documentation) but auto-filled: instead of Delimiter we can use UseCulture, which takes the current culture's list separator as input.

“To find the list separator for a culture, use the following command: (Get-Culture).TextInfo.ListSeparator.” –docs.microsoft

If we take the documentation literally this is a required parameter, but it is not really required: we can pass only the -Path parameter and the delimiter will be filled in automatically.

Assume we’re using the following CSV as input ($File)

the command “Import-Csv $File” will output a valid object. In my opinion it should fail: my culture specifies semicolon (;) as the default list separator, yet forcing “-UseCulture” can't produce a correct csv object. This basically means that UseCulture is not a required parameter, and that the default, unless specified, is always a comma (,).

The exact same goes for ConvertFrom-Csv.
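To see the behavior for yourself on a machine with a non-comma culture (sv-SE in my case; the file path is illustrative):

```powershell
# On sv-SE the list separator is a semicolon
(Get-Culture).TextInfo.ListSeparator          # ;

# A comma-separated file...
@'
Name,Title
Anna,Engineer
'@ | Set-Content -Path C:\Temp\test.csv

Import-Csv -Path C:\Temp\test.csv              # parses fine: default is always ','
Import-Csv -Path C:\Temp\test.csv -UseCulture  # one mangled property per row
```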

Presenting Get-CsvDelimiter

The function (listed in full at the end of this post) searches for and returns the most probable delimiter used in a Csv file.
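As a rough sketch of the idea, not the full function, detection can be as simple as counting candidate characters in the header row:

```powershell
function Get-CsvDelimiter {
    param([Parameter(Mandatory)][string]$Path)

    # The header row alone is enough for a first guess
    $header = Get-Content -Path $Path -TotalCount 1

    # Count each candidate and return the most frequent one
    $candidates = ',', ';', "`t", '|', '>'
    $counts = foreach ($char in $candidates) {
        [pscustomobject]@{
            Delimiter = $char
            Count     = [regex]::Matches($header, [regex]::Escape($char)).Count
        }
    }
    ($counts | Sort-Object Count -Descending | Select-Object -First 1).Delimiter
}
```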

Let’s see how it works.

$File is a csv file with the below data.

If we run Import-Csv $File we'll get an incorrect table object: every row will have a single property containing the whole row data.

If we instead specify a delimiter as shown below, we’ll calculate the most probable delimiter and use that to produce a correct CSV table object without having to inspect the csv culture or assume anything.
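Something like this, assuming the sketch's -Path parameter:

```powershell
Import-Csv -Path $File -Delimiter (Get-CsvDelimiter -Path $File)
```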

And to prove it works, the below code outputs only the Name column in the Csv
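For example (same hypothetical parameter names as above):

```powershell
Import-Csv -Path $File -Delimiter (Get-CsvDelimiter -Path $File) |
    Select-Object -ExpandProperty Name
```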

If we have a string object and need to convert it, that is also possible.
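Assuming the full function can also take raw CSV text (the sketch above only accepts a path; the -InputObject parameter here is hypothetical):

```powershell
$data = $csvString | ConvertFrom-Csv -Delimiter (Get-CsvDelimiter -InputObject $csvString)
```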

With this method we can use virtually any delimiter we want, assuming the Csv is formatted correctly. Again we're using the same csv input, but we change the delimiter to a “greater than” (>) symbol and see how the function performs.

Now when we run the function standalone, we get a greater-than symbol as the return value.

And finally, when running the previous code we get the same output as when we were using a semicolon as the delimiter.


Get-CsvDelimiter



Datetime and RFC3339 compliance in powershell – a deepdive

A colleague of mine asked if there is a way to output an RFC compliant datetime (https://www.ietf.org/rfc/rfc3339.txt) in PowerShell without manually formatting in the “T” and “Z” in the middle and at the end to comply with the ISO standard and imply UTC +-00:00.


Before I start with the how, I'd like to address the why.

If you've ever done some coding you're sure to have encountered issues with datetime, and possibly errors and incidents due to the format of a datetime string.
For example, in the US time is commonly written in month-day-year format, which during the first 12 days of each month is indistinguishable from the European day-month-year format.
The same problem appears in code. In PowerShell my locale is Swedish, and the “Get-Date” cmdlet returns “den 1 augusti 2018 16:35:24”, which is easy and readable for a human.
However, if I convert it to a string it comes out in US format, even though my culture settings in PowerShell are set to Swedish.
In my opinion this behavior is wrong: I expect an ISO standard universal format, or at least a culture-appropriate one. Instead I am given a US format.
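A quick illustration of the mismatch (string conversion in PowerShell uses the invariant culture):

```powershell
Get-Date            # den 1 augusti 2018 16:35:24   (sv-SE)
[string](Get-Date)  # 08/01/2018 16:35:24           (US-style invariant format)
```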

With that said, when developing automation and tools for global customers, a standard format is much needed when we write to logs.

The How

After a short time on Google it seemed no one had done this properly in PowerShell. I also found out that XML datetime serialization is RFC compliant.

How did I do it?
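By leaning on .NET's XML serializer, which emits RFC3339 datetimes in a single call:

```powershell
[System.Xml.XmlConvert]::ToString((Get-Date), [System.Xml.XmlDateTimeSerializationMode]::Utc)
# 2018-08-01T14:35:24Z
```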

The return value is a clean RFC3339 UTC string, with no manual string surgery required.

Great! Now let’s put it into some real code.

Example 1: Writing current date into a logfile

The output becomes an RFC compliant string and gets stored in the $now variable for use in an Out-File log operation.
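A sketch of what that looks like (the message and log path are illustrative):

```powershell
$now = [System.Xml.XmlConvert]::ToString((Get-Date), [System.Xml.XmlDateTimeSerializationMode]::Utc)
"$now Job started" | Out-File -FilePath C:\Logs\job.log -Append
```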

Example 2: Writing a job deadline datetime

Here we create a datetime object, add 20 hours, convert it to an RFC compliant datetime string and store it in the $RFCDeadline variable.
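Roughly:

```powershell
$RFCDeadline = [System.Xml.XmlConvert]::ToString(
    (Get-Date).AddHours(20),
    [System.Xml.XmlDateTimeSerializationMode]::Utc
)
```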

Hope this helps someone!



Sending CSS formatted tables in Outlook

If you've ever used PowerShell to send HTML tables containing CSS in Outlook, you've probably been disappointed with the outcome.
There is some archived documentation for Outlook 2007 that is still viable for Outlook 365 (https://msdn.microsoft.com/en-us/library/aa338201(v=office.12).aspx).

Basically, the function accepts a csv and a css file, hard-codes the css into the table, and outputs a formatted HTML table that is compatible with Outlook.

Example table sent using the function and Send-MailMessage.
The css has odd/even striping for readability, bolded columns 1 and 4, and red text for column 3.
This is impossible to achieve in Outlook by default using just css.

Commandline
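The real command line is in the function section below; a hypothetical invocation (the function and parameter names are illustrative, not the real ones) might look like:

```powershell
# Hypothetical names - see the actual function at the end of the post
$html = ConvertTo-OutlookHtmlTable -CsvPath .\report.csv -CssPath .\style.css
Send-MailMessage -From 'reports@contoso.com' -To 'team@contoso.com' `
    -Subject 'Daily report' -Body ($html | Out-String) -BodyAsHtml `
    -SmtpServer 'smtp.contoso.com'
```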

HTML output

CSS

Since the CSS support is not perfect, the imported style.css file needs some specific configuration:

  • Classes have a specific name structure:
    • Columns are named .coln
      • n is the column number, starting at 1: .col1, .col2 and so on
    • One whitespace is required between the class name and the curly bracket.
      • The opening curly bracket must be on the same row as the class name
      • The closing curly bracket must be on a separate line
    • Each declaration must be on its own row
  • Odd/even css is the only tr code handled.
    • It must be named exactly:
      • tbody tr:nth-child(odd) {
      • tbody tr:nth-child(even) {

Example style.CSS
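An illustrative style.css that follows the rules above (matching the example table: bold columns 1 & 4, red column 3, striped rows), written out via a here-string:

```powershell
@'
.col1 {
font-weight: bold;
}
.col3 {
color: red;
}
.col4 {
font-weight: bold;
}
tbody tr:nth-child(odd) {
background-color: #ffffff;
}
tbody tr:nth-child(even) {
background-color: #dddddd;
}
'@ | Set-Content -Path .\style.css
```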

Function




Azure Automation – Running scripts locally on VM through runbooks

I was tasked with creating a PowerShell script to run on a schedule on an Azure VM. Normally this would be a scheduled task on the VM, but seeing as we're working with an Azure VM, and scheduled tasks are legacy, I wanted to explore the possibility of running both the schedule and the script in Azure, to keep the VM clean and the configuration scalable.

After some research, the best option turned out to be running the PowerShell script as a CustomScriptExtension on the VM, with the schedule handled by a Process Automation runbook (using Automation Accounts).

What I ended up with is the script below. It’s fairly easy to configure and contains almost all the required configuration in the parameters.
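The core of such a runbook, sketched with the AzureRM cmdlets of the time (resource names and the script URI are placeholders):

```powershell
# Authenticate with the Automation account's Run As connection
$connection = Get-AutomationConnection -Name 'AzureRunAsConnection'
Add-AzureRmAccount -ServicePrincipal `
    -TenantId $connection.TenantId `
    -ApplicationId $connection.ApplicationId `
    -CertificateThumbprint $connection.CertificateThumbprint

# Push the script to the VM and run it as a CustomScriptExtension
Set-AzureRmVMCustomScriptExtension `
    -ResourceGroupName 'MyResourceGroup' `
    -VMName 'MyVM' `
    -Location 'westeurope' `
    -Name 'ScheduledScript' `
    -FileUri 'https://mystorage.blob.core.windows.net/scripts/task.ps1' `
    -Run 'task.ps1'
```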



Recursively search Azure Ad group members

When working with on-premises Active Directory, an administrator often has to search AD groups recursively. This is easy using the ActiveDirectory module cmdlet “Get-AdGroupMember <Group> -Recursive”.
For the AzureAD equivalent this is no longer an option; the cmdlet Get-AzureADGroupMember has only three parameters.

PARAMETERS
-All <Boolean>
If true, return all group members. If false, return the number of objects specified by the Top parameter
-ObjectId <String>
Specifies the ID of a group in Azure AD.
-Top <Int32>
Specifies the maximum number of records to return.
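None of these recurse into nested groups. A minimal sketch of a recursive wrapper (recursing whenever a member is itself a group):

```powershell
function Get-AzureADGroupMemberRecursive {
    param([Parameter(Mandatory)][string]$ObjectId)

    Get-AzureADGroupMember -ObjectId $ObjectId -All $true | ForEach-Object {
        if ($_.ObjectType -eq 'Group') {
            # Nested group: descend into it
            Get-AzureADGroupMemberRecursive -ObjectId $_.ObjectId
        }
        else {
            $_
        }
    }
}

# Usage (group name illustrative); note that a user who is a member of
# several nested groups will appear once per occurrence
$group = Get-AzureADGroup -SearchString 'All Employees'
Get-AzureADGroupMemberRecursive -ObjectId $group.ObjectId
```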



Just enough Administration & RDS

The Problem?

Microsoft RDS has limitations when it comes to delegating access; in fact, there is no built-in access delegation whatsoever!

The solution? Powershell!

A Just enough Administration (JEA) endpoint, also known as a Constrained Powershell Endpoint.

I've created a simple PowerShell app to list and log off users: it connects to connection brokers and searches users, all without delegating the operators access to any RDS servers.

  1. Connection broker/Endpoint connection
  2. Collection field, used to search and filter. Multiple collections can be selected at once
  3. Wildcard search field, can be blank (list all users)
  4. User information & selection area
  5. Press to logoff all selected users

How does this work? Constrained Powershell Endpoint.

The endpoint is restricted to only listing collections, listing users and, subsequently, forcing logoff of users. The endpoint is also configured to only accept users from a certain AD group.

To configure the endpoint, run the following code on all connection brokers. The script is somewhat generalized for easy adaptability. The main parts are in the parameters, where we configure who can connect, which account the session runs as, and which cmdlets are available in the constrained session.
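The essential steps, sketched (the endpoint name and visible cmdlets are placeholders for the real script's parameters):

```powershell
# Describe a restricted session that only exposes the needed RDS cmdlets
$configPath = "$env:TEMP\RDSUserAdmin.pssc"
New-PSSessionConfigurationFile -Path $configPath `
    -SessionType RestrictedRemoteServer `
    -RunAsVirtualAccount `
    -ModulesToImport 'RemoteDesktop' `
    -VisibleCmdlets 'Get-RDSessionCollection', 'Get-RDUserSession', 'Invoke-RDUserLogoff'

# Register the endpoint, then restrict access to the delegation AD group
Register-PSSessionConfiguration -Name 'RDSUserAdmin' -Path $configPath
Set-PSSessionConfiguration -Name 'RDSUserAdmin' -ShowSecurityDescriptorUI
```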

The frontend application code is not posted in the blog.



OS deployment using same MAC address for multiple clients

OS deployment using the same staging dock for multiple clients is a bit of an issue. There are many different solutions to the problem, but all have their downsides.

I did a quick search and found two probable candidates to tackle.

  1. UUID staging
    The idea is to exclude staging docks using a registry value (new in 1610) and stage using only the UUID. This way the MAC address is totally irrelevant and OSD is handled using only the UUID. The downside is that many prestage tools use the MAC address, and the UUID GUID is longer and therefore more prone to mistakes.
  2. Ethernet MAC release/Wifi renew
    This method prompts the user at the end of the task sequence to disconnect the ethernet adapter, then resumes using Wifi. The downside here is that the task sequence is no longer unattended and can therefore take longer. It also requires extra filtering in task sequences, and thus extra maintenance.

With that said, I came to the following solution:
On the site, create a SQL job that runs every 30/60 minutes and removes the MAC address association and clears the PXE flag.

This allows users to stage their clients using the same ethernet adapter (USB or docking station) without changing the current PXE routine/application or task sequence.
As with all staging dock solutions, some consideration must be given to when the job is scheduled to run, so that the MAC address association is not wiped before PXE is initiated.
If staging is done once a day/week, the schedule can be configured as a daily job, and the aforementioned consideration becomes a non-issue.

UUID staging
https://blogs.technet.microsoft.com/system_center_configuration_manager_operating_system_deployment_support_blog/2015/08/27/reusing-the-same-nic-for-multiple-pxe-initiated-deployments-in-system-center-configuration-manger-osd/

Ethernet MAC Release/Wifi renew
https://gallery.technet.microsoft.com/scriptcenter/Enable-multiple-SCCM-OS-861e42bb



Convert SCCM schedule to readable format

Reading SCCM maintenance windows is not an easy thing to do from WMI, as they are stored as 16-character hex values.

To get the schedule hex values, simply run the line below; the result might not be easy to understand.
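One way to pull them from a client's local policy:

```powershell
# CCM_ServiceWindow holds the maintenance windows the client has received;
# the Schedules property carries the raw 16-char hex schedule token
Get-WmiObject -Namespace 'root\ccm\policy\machine\actualconfig' `
    -Class 'CCM_ServiceWindow' |
    Select-Object ServiceWindowID, ServiceWindowType, Schedules
```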

A server might return the following values.

Now, how do we read this? The easiest way is to run Convert-CMSchedule, but that unfortunately only works if the Configuration Manager console is installed.
Another way is to invoke a WMI method on a site server, using the SMS_ScheduleMethods class.
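For example (the site code and schedule string are placeholders):

```powershell
$siteCode       = 'P01'
$scheduleString = '...'   # a 16-char hex value from the query above
Invoke-WmiMethod -Namespace "root\sms\site_$siteCode" `
    -Class 'SMS_ScheduleMethods' -Name 'ReadFromString' `
    -ArgumentList $scheduleString
```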

None of these are portable, so I created my own function to decode the values. The function below returns the same data as Convert-CMSchedule, but is free from any module dependency, as it converts the hex string to a readable format itself.

The function also accepts ValueFromPipeline.


The script is reverse engineered from https://msdn.microsoft.com/en-us/library/cc143505.aspx



Applocker eventlog audit report

AppLocker is a great resource for avoiding malicious code and applications; however, it's not always easy to inventory the applications in your environment.
To solve this, AppLocker can be configured in audit-only mode for a time, and clients can upload their logs to a server, where PowerShell can distill them into an easy-to-filter report.

First a GPO must be configured with enforce or audit only rules.

Then an event subscription manager server needs to be configured (details at the end of the post).
Once the server is configured, the subscriptions need to be set up: create one subscription per AppLocker policy type.


The logs should now be collected by the server and presented in the Forwarded Events log.

However, the data is only available in XML view, and sorting through hundreds of logs manually is often not a viable approach. This is where PowerShell comes to the rescue.

The script below can be run on the collector server or remotely, and outputs the valuable data to a gridview report. The gridview can in turn be copied into an Excel sheet for further processing.
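The gist of it, sketched for EXE/DLL audit (8003) and block (8004) events (the XML field names follow the AppLocker event schema):

```powershell
$events = Get-WinEvent -FilterHashtable @{ LogName = 'ForwardedEvents'; Id = 8003, 8004 }
$report = foreach ($event in $events) {
    # Each event carries its details in the UserData XML payload
    $xml = [xml]$event.ToXml()
    [pscustomobject]@{
        Time     = $event.TimeCreated
        Computer = $xml.Event.System.Computer
        File     = $xml.Event.UserData.RuleAndFileData.FilePath
        Policy   = $xml.Event.UserData.RuleAndFileData.PolicyName
        UserSid  = $xml.Event.UserData.RuleAndFileData.TargetUser
    }
}
$report | Out-GridView -Title 'AppLocker audit report'
```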


Configure Source initiated subscription:
https://msdn.microsoft.com/en-us/library/bb870973(v=vs.85).aspx