Category: Azure

Querying Microsoft Graph with PowerShell, the easy way

Microsoft Graph is a very powerful tool for querying organization data, and it's really easy to do using Graph Explorer, but Graph Explorer isn't built for automation.
While the concept I'm presenting in this blog post isn't entirely new, I believe my take on it is more elegant and efficient than what I've seen other people use.

So, what am I bringing to the table?

  • Zero dependencies on Azure modules; .NET Core and Linux compatibility!
  • Recursive/paging processing of Graph data (without the need for FollowRelLink, currently only available in PowerShell 6.0)
  • Authenticates using an Azure AD application/service principal
  • REST compatible (GET/PUT/POST/PATCH/DELETE)
  • Supports JSON batch jobs
  • Supports automatic token refresh, used for extremely long paging jobs
  • Accepts the application ID and secret as a PSCredential object, which allows the use of credential stores in Azure Automation or Get-Credential instead of writing credentials in plaintext

Sounds great, but what do I need to do in order to query the Graph API?

First things first: create an Azure AD application, register a service principal, and delegate Microsoft Graph/Graph API permissions.
Plenty of people have covered this, so I won't provide an in-depth guide. Instead, we're going to walk through how to use the functions line by line.

Once we have an Azure AD application, we need to build a credential object using the service principal application ID and secret.
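As a sketch (the variable and credential names here are mine, not from the original functions), building the credential object might look like this:

```powershell
# Enter the application (client) ID as the username and the secret as the
# password. Prompting with Get-Credential avoids plaintext secrets in scripts.
$GraphCredential = Get-Credential -Message "Application ID as username, secret as password"

# Or, in Azure Automation, fetch it from the credential store instead
# (asset name 'GraphApp' is illustrative):
# $GraphCredential = Get-AutomationPSCredential -Name 'GraphApp'
```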

Then we acquire a token. Here we need a tenant ID in order to let Azure know the context of the authorization token request.
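Under the hood this is a client-credentials request against the Azure AD token endpoint. A minimal stand-alone sketch of what the function wraps (variable names are mine):

```powershell
$TenantID = 'contoso.onmicrosoft.com'   # or the tenant GUID

# Pull the app ID and secret out of the PSCredential object
$Body = @{
    client_id     = $GraphCredential.UserName
    client_secret = $GraphCredential.GetNetworkCredential().Password
    scope         = 'https://graph.microsoft.com/.default'
    grant_type    = 'client_credentials'
}
$TokenResponse = Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$TenantID/oauth2/v2.0/token" `
    -Body $Body
$Token = $TokenResponse.access_token
```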

Once a token is acquired, we are ready to call the Graph API. So let's list all users in the organization.
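Stripped of the wrapper function, the raw call is just a GET with the bearer token in the Authorization header:

```powershell
$Response = Invoke-RestMethod -Method Get `
    -Uri 'https://graph.microsoft.com/v1.0/users' `
    -Headers @{ Authorization = "Bearer $Token" }

# The users are returned in the value property
$Response.value | Select-Object displayName, userPrincipalName
```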

In the response, we see a value property which contains the first 100 users in the organization.
At this point some of you might ask: why only 100? Well, that's the default page size on Graph queries, but it can be expanded by adding a $top parameter to the URI, which allows you to query up to 999 users at a time.

The cool thing with my function is that it detects if your query doesn’t return all the data (has a follow link) and gives a warning in the console.

So, we just add $top=999 and use the recursive parameter to get them all!
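Without the function, the same recursive behavior is a loop that follows @odata.nextLink until Graph stops returning one (a sketch; variable names are mine):

```powershell
# Single quotes so PowerShell doesn't try to expand $top
$Uri = 'https://graph.microsoft.com/v1.0/users?$top=999'
$AllUsers = @()
do {
    $Response = Invoke-RestMethod -Method Get -Uri $Uri `
        -Headers @{ Authorization = "Bearer $Token" }
    $AllUsers += $Response.value
    # Graph includes @odata.nextLink on every page except the last
    $Uri = $Response.'@odata.nextLink'
} while ($Uri)
```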

What if I want to get $top=1 (wat?) users, but recursively? Surely my token will expire long before such a query finishes?

Well, yes. That's why we can pass a token refresh switch and credentials right into the function and never worry about tokens expiring!

What if I want to delete a user?

That works as well. Simply change the method (Default = GET) to DELETE and go!
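As a raw REST sketch (the UPN is a made-up example; deleting requires the User.ReadWrite.All application permission):

```powershell
# Object ID or userPrincipalName of the user to delete (hypothetical value)
$UserId = 'jane.doe@contoso.com'

Invoke-RestMethod -Method Delete `
    -Uri "https://graph.microsoft.com/v1.0/users/$UserId" `
    -Headers @{ Authorization = "Bearer $Token" }
```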

Deleting users is fun and all, but how do we create a user?

Define the user details in the body and use the POST method.
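A sketch of the body and the POST (all user details here are made-up examples; the property names follow the Graph v1.0 user resource):

```powershell
$NewUser = @{
    accountEnabled    = $true
    displayName       = 'Jane Doe'
    mailNickname      = 'jane.doe'
    userPrincipalName = 'jane.doe@contoso.com'
    passwordProfile   = @{
        forceChangePasswordNextSignIn = $true
        password                      = 'S0me-Initial-Pass!'
    }
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri 'https://graph.microsoft.com/v1.0/users' `
    -Headers @{ Authorization = "Bearer $Token" } `
    -ContentType 'application/json' `
    -Body $NewUser
```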

What about JSON batching, and why is it important?

JSON batching combines up to 20 unique queries into a single call. Many organizations have thousands of users, if not hundreds of thousands, and that adds up, since many of the queries need to be run against individual users. And that takes time. Jobs that used to take 1 hour now take about 3 minutes with JSON batching; 8-hour jobs now take about 24 minutes. If you're not already sold on JSON batching, I have no idea why you're still reading this post.

This can be used statically by creating a body with embedded queries, or dynamically as in the example below. We have all users flat in a $users variable. We determine how many times we need to run the loop, build a $body JSON object with 20 requests per call, run the query against the $batch endpoint with the POST method, and collect the results in a $responses array. Tada! We've made the querying of Graph 20x more efficient.
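The loop above can be sketched like this (I'm using each user's memberOf as the per-user sub-query purely as an illustration; the $batch endpoint and request shape are per the Graph batching docs):

```powershell
# $Users holds all users, flat; process them in chunks of 20
$Responses = @()
for ($i = 0; $i -lt $Users.Count; $i += 20) {
    $Chunk = $Users[$i..([Math]::Min($i + 19, $Users.Count - 1))]

    # Build up to 20 requests, each with a unique id
    $Requests = for ($j = 0; $j -lt $Chunk.Count; $j++) {
        @{
            id     = "$($j + 1)"
            method = 'GET'
            url    = "/users/$($Chunk[$j].id)/memberOf"
        }
    }
    $Body = @{ requests = @($Requests) } | ConvertTo-Json -Depth 4

    # Single quotes so PowerShell doesn't expand $batch
    $Responses += Invoke-RestMethod -Method Post `
        -Uri 'https://graph.microsoft.com/v1.0/$batch' `
        -Headers @{ Authorization = "Bearer $Token" } `
        -ContentType 'application/json' `
        -Body $Body
}
```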

Sounds cool, what more can I do?

Almost anything related to the Office 365 suite. Check out the technical resources and documentation for more information. Microsoft is constantly updating and expanding the API functionality. Scroll down for the functions; they should work on PowerShell 4 and up!

Technical resources:

Creating an Azure AD application
https://www.google.com/search?q=create+azure+ad+application

Graph API
https://docs.microsoft.com/en-gb/graph/use-the-api

About batch requests
https://docs.microsoft.com/en-gb/graph/json-batching

Known issues with Graph API
https://docs.microsoft.com/en-gb/graph/known-issues

Thanks to:
https://blogs.technet.microsoft.com/cloudlojik/2018/06/29/connecting-to-microsoft-graph-with-a-native-app-using-powershell/
https://medium.com/@mauridb/calling-azure-rest-api-via-curl-eb10a06127

Functions



New features in Azure Blueprints

The past couple of weeks I have seen new features being released for Azure Blueprints. In this short post I will cover the updates to Definition location and Lock assignment.

New to Azure Blueprints?

Azure Blueprints allows you to define a repeatable set of Azure resources that follows your organization's standards, patterns and requirements. This allows for more rapid deployment of new environments while making it easy to keep your compliance at the desired level.

Artifacts:

An Azure Blueprint is a package or container used to achieve organizational standards and patterns for the implementation of Azure cloud services. To achieve this, we use artifacts.

Artifacts available today are:

  • Role Assignments
  • Policy Assignments
  • Resource Groups
  • ARM Templates

The public preview of Blueprints was released during Ignite in September last year, and it's still in preview.

Read more about the basics of Azure Blueprints here

Definition location

This is where in your hierarchy you place the Blueprint. We think of it as a hierarchy because, after creation, the blueprint can be assigned at the current level or below in the hierarchy. Until now, the only option for definition location has been management groups. With the newly released support for subscription-level definitions, you can now start using Blueprints even if you haven't adopted management groups yet.

Note that you need Contributor permissions to be able to save your definition to a subscription.

If you are new to management groups, I recommend you take a look at them, since they're a great way to control and apply your governance across multiple subscriptions.

Read more about Management groups here

Definition location for Blueprints

Lock Assignment

During assignment of a Blueprint we are given the option to lock the assignment.

Up until recently we only had Lock or Don't lock. If we chose to lock the assignment, all resources were locked and could not be modified or removed, not even by a subscription owner.

Now we have the option to set the assignment to:

  • Don't Lock – The resources are not protected by Blueprints and can be deleted and modified.
  • Read Only – The resources can't be changed in any way and can't be deleted.
  • Do Not Delete – This is a new option that gives us the flexibility to lock our resources from deletion while still letting us change them.

Lock assignment during assignment of Blueprint

Removing lock states

If you need to modify or remove your lock assignments, you can either:

  • Change the assignment lock to Don't Lock
  • Delete the blueprint assignment.

Note that there is a cache, so changes might take up to 30 minutes before they become active.

You can read more about resource locking here

Summary

With "Do Not Delete" I think we will see better use of lock assignment, since we get the flexibility to make changes to our resources without the possibility of deleting them. And with the definition location set to subscription, we can start using Blueprints without management groups, which might be useful in environments where management groups have not been introduced.

Good luck with your blueprinting!

You can reach me at Tobias.Vuorenmaa@xenit.se if you have any questions.



Create Azure Policies based on Resource Graph queries

If you have used Resource Graph to query resources, you might have realized it comes in very handy when creating Azure Policies. For example, you might check the SKU of virtual machines before you create a policy to audit specific sizes of virtual machines, or even prevent creation of them. (If you haven't yet used Azure Resource Graph, check out my previous post – https://tech.xenit.se/azure-resource-graph/)

Let’s take it further and actually create a Policy based on our Resource Graph query.

In my example below I query all storage accounts that allow connections from all virtual networks and where the environment tag is set to Prod.

I am running all commands in Cloud Shell and the CLI, but you could just as well use PowerShell.

CLI

The query is looking for the setting below; it can be found under Firewalls and virtual networks on your storage accounts.
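A rough PowerShell equivalent of that query (via Search-AzGraph from the Az.ResourceGraph module; the Kusto property paths are my assumption and may need adjusting against your resources):

```powershell
# Requires: Install-Module -Name Az.ResourceGraph
$Query = @'
where type =~ 'microsoft.storage/storageaccounts'
  and tostring(properties.networkAcls.defaultAction) =~ 'Allow'
  and tostring(tags.environment) =~ 'Prod'
| project name, resourceGroup, subscriptionId
'@
Search-AzGraph -Query $Query
```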

Creating the policy

To create the Policy, I am using the tool GraphToPolicy. The tool and instructions can be found here http://aka.ms/graph2policy

Follow the instructions for the tool and when you have the tool imported to your cloud shell environment you are ready to go.

I am using the same query as before and create a policy to audit all storage accounts that allow connections from all virtual networks and have the environment tag set to Prod.

CLI

Output:

CLI

Same policy as above but query in variable

After creation, the policy is ready for assignment. I assigned it to my test subscription, and as you can see in my example, one of my storage accounts shows as non-compliant.

Summary

Resource Graph is a handy tool, and as you might have understood, it's very useful when looking for specific properties or anomalies in your resources. Together with GraphToPolicy, it's easy to create Azure Policies based on your Resource Graph queries.

Credit for the tool goes to robinchapas https://github.com/robinchapas/ConvertToPolicy

If you have any questions you can reach me at tobias.vuorenmaa@xenit.se



Azure Resource Graph

During Ignite 2018, Microsoft released a couple of new services and features in public preview for Azure. I will try to cover the governance parts in upcoming posts.

Let's start with Resource Graph.

If you have been working with Azure Resource Manager, you might have realized its limitations for accessing resource properties. The resource fields we have been able to work with are resource name, ID, type, resource group, subscription, and location. If we want to find other properties, we need to query each resource separately, and we might end up with quite complicated scripts to complete what started as simple tasks.

This is where Resource Graph comes in. Resource Graph is designed to extend Azure Resource Management with a query language based on the Azure Data Explorer (Kusto) query language.

With Resource Graph it's now easy to query all resources across different subscriptions, as well as get properties of all resources without advanced scripts that query each resource separately. I'll show how in the examples below.

All Resources

The new "All resources" view in the portal is based on Resource Graph, and if you haven't tried it out yet, go check it out. It's still in preview, so you have to opt in to try it.

Get started

To get started with Resource Graph, you can use the CLI, PowerShell or the Azure portal.

In the examples below I am using Cloud Shell and Bash, but you could just as well use PowerShell:

#Add Resource Graph Extension, needs to be added first time.

#Displays all virtual machines, OS and versions

Example output from above query

# Display all virtual machines whose names start with "AZ" and end with a number.

# Display all storage accounts that have the option to “Allow Access from all networks”

# Display linux VMs with OS version 16.04
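The PowerShell equivalents of the first two queries above might look like this (Search-AzGraph comes from the Az.ResourceGraph module; the property paths and regex are my assumptions, so verify them against your own resources):

```powershell
# One-time setup: the module that provides Search-AzGraph
# (the CLI equivalent is: az extension add --name resource-graph)
Install-Module -Name Az.ResourceGraph

# All virtual machines, projecting the OS image offer and SKU
Search-AzGraph -Query 'where type =~ "microsoft.compute/virtualmachines" | project name, properties.storageProfile.imageReference.offer, properties.storageProfile.imageReference.sku'

# Virtual machines whose names start with "AZ" and end with a number
Search-AzGraph -Query 'where type =~ "microsoft.compute/virtualmachines" and name matches regex "^AZ.*[0-9]$"'
```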

For more info about the query language check this site:
https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language

If you have any specific scenario feel free to contact me and we can try to query your specific needs.

You can reach me at tobias.vuorenmaa@xenit.se if you have any questions.



Deploy CoreOS with VSTS Agent container using ARM template

In this blog post, I’ll describe how to deploy CoreOS using an ARM Template and auto start the Docker service as well as create four services for the VSTS Agent container.

Container Linux by CoreOS (now part of the Red Hat family) is a Linux distribution that comes with the minimal functionality required to deploy containers. One feature that is really handy when deploying CoreOS in Azure is Ignition, a provisioning utility built for the distribution. This utility makes it possible, for example, to configure services to auto start from an Azure Resource Manager (ARM) template.

Before we begin, you can download everything I describe in this post here.

First off, we need to describe the service:

Note: VSTS_ACCOUNT and VSTS_TOKEN will be dynamic in the ARM template, defined using parameters passed to the Ignition configuration at deployment. I'm using a static pool name, ubuntu-16-04-docker-17-12-0-ce-standard.
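The service description is a systemd unit along these lines (a sketch: the unit name, volume mount and image tag are my assumptions based on the pool name above, so verify them against the microsoft/vsts-agent image documentation):

```ini
# /etc/systemd/system/vsts-agent-1.service (one of the four agent services)
[Unit]
Description=VSTS Agent 1
After=docker.service
Requires=docker.service

[Service]
Restart=always
ExecStart=/usr/bin/docker run --rm --name vsts-agent-1 \
  -e VSTS_ACCOUNT=<account> \
  -e VSTS_TOKEN=<pat-token> \
  -e VSTS_POOL=ubuntu-16-04-docker-17-12-0-ce-standard \
  -v /var/run/docker.sock:/var/run/docker.sock \
  microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard

[Install]
WantedBy=multi-user.target
```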

When we know that the service works, we add it to the Ignition configuration:

Note: In the same way as in the service description, we will be dynamically adding the VSTS_ACCOUNT and VSTS_TOKEN during the deployment.

Now that we have the Ignition configuration, it's just a matter of adding it to the ARM template. One thing to note is that you will need to escape backslashes, turning \n into \\n in the template.

The ARM Template can look like this: (note that variable coreosIgnitionConfig is a concatenated version of the json above)

Note: I’ve also created a parameter file which can be modified for your environment. See more info here.

After deployment, you’ll have a simple VM with four containers running – and four agents in the agent pool:



Azure AD Connect and .NET Framework 4.7.2

Introduction

Last week a discussion erupted on the Microsoft forums regarding Azure AD Connect, due to its monitoring agent using all free CPU resources on servers. These issues were caused by a .NET Framework update, and a lot of administrators spent time uninstalling and blocking these patches to resolve the CPU usage issues on their servers. On Saturday, Microsoft released an update (KB4340558) which contains a collection of several patches, one of which was the earlier mentioned .NET Framework update. For more information, see this link.

Microsoft has recently published an article regarding this issue. In addition, Microsoft also published a new version of the health agent in which they state that the issue is resolved; it can be downloaded from here. The new health agent version is set to be included in the next version of Azure AD Connect, which will be published for Automatic Upgrade (Auto Upgrade). The following patches have been identified as causing Azure AD Connect's monitoring agent to use huge amounts of CPU:

Auto Upgrade

In version 1.1.105.0 of Azure AD Connect, Microsoft introduced Auto Upgrade. However, not all updates are published for Automatic Upgrade. Whether a version is eligible for automatic download and installation is announced on Microsoft's version-history page for Azure AD Connect.

You can verify whether your Azure AD Connect installation has Auto Upgrade enabled by either using PowerShell or viewing your configuration in its GUI.


Graphical User Interface of Azure AD Connect
PowerShell-command for determining whether Auto Upgrade is enabled or not.

This command will return either Enabled, Disabled or Suspended, whereas the Suspended state can only be set by the system itself. Newer installations of Azure AD Connect enable Auto Upgrade by default, provided your installation meets Microsoft's recommendations. For more information, see this link.
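On the Azure AD Connect server, the check is a one-liner from the ADSync module that ships with the product:

```powershell
# Returns Enabled, Disabled or Suspended
Get-ADSyncAutoUpgrade
```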

Enabling Auto Upgrade

If you have an installation of Azure AD Connect older than 1.1.105.0 (February 2016), Auto Upgrade will be disabled unless you've enabled it manually. Enabling this function can be done with the PowerShell command below, if desired.
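A sketch of the enabling command, run on the Azure AD Connect server (cmdlet from the same ADSync module):

```powershell
Set-ADSyncAutoUpgrade -AutoUpgradeState Enabled
```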

If you have any questions, feel free to email me at robert.skyllberg@xenit.se



Change OS disk on server using Managed disk in Azure

Recently a new capability was released for Azure Virtual Machines using Managed disks.

We have been missing the possibility to change the OS disk of VMs using managed disks; until now, that has only been possible for unmanaged disks. Before the release of this feature, we were forced to recreate the virtual machine if we wanted to use a snapshot with a managed disk.

This feature comes in handy when performing updates or changes to the OS or applications, where you might want to roll back to a previous state on the existing VM.

As of today, Azure Backup only supports restoring to a new VM. With this capability we can hope to see that change in the future. But for now, we can use PowerShell to change the OS disk of a VM and restore an older version of that OS disk on the existing VM.

In the example below we are:

  • Initiating a snapshot
  • Creating a managed disk from the snapshot, using the same name as the original disk with the creation date appended
  • Stopping the VM – the server must be in a stopped (deallocated) state
  • Swapping the OS disk of the existing VM
  • Starting the VM
Source: https://azure.microsoft.com/en-us/blog/os-disk-swap-managed-disks/
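The steps above can be sketched with the Az module like this (resource group, VM and disk names are illustrative):

```powershell
$rg = 'rg-prod'; $vmName = 'vm01'

# 1. Snapshot the current OS disk
$vm = Get-AzVM -ResourceGroupName $rg -Name $vmName
$snapConfig = New-AzSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id `
    -Location $vm.Location -CreateOption Copy
$snapshot = New-AzSnapshot -ResourceGroupName $rg -SnapshotName "$vmName-os-snap" -Snapshot $snapConfig

# 2. Create a managed disk from the snapshot, original name plus creation date
$diskName = '{0}-{1}' -f $vm.StorageProfile.OsDisk.Name, (Get-Date -Format 'yyyyMMdd')
$diskConfig = New-AzDiskConfig -SourceResourceId $snapshot.Id -Location $vm.Location -CreateOption Copy
$disk = New-AzDisk -ResourceGroupName $rg -DiskName $diskName -Disk $diskConfig

# 3. The VM must be stopped (deallocated) before the swap
Stop-AzVM -ResourceGroupName $rg -Name $vmName -Force

# 4. Swap the OS disk and update the VM
Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -Name $disk.Name
Update-AzVM -ResourceGroupName $rg -VM $vm

# 5. Start the VM again
Start-AzVM -ResourceGroupName $rg -Name $vmName
```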



How to join a Windows 10 computer to your Azure Active Directory

Introduction

Some of the benefits of having your Windows 10 devices in your Azure AD are that your users can join the computer to your Azure AD without any extra administrator privileges, assuming you have configured this in your Azure AD, and that they can log in to the computer the first time without being connected to a specific company network, as long as they have an internet connection. You can also manage your Windows 10 devices wherever they may be in the world.



Windows 10 Subscription Activation for Hybrid Azure AD Joined devices

In a migration phase to Windows 10, we wanted to benefit from the fairly new Windows 10 Subscription Activation method in the existing environment. One of the requirements for us was that we could do this with Hybrid Azure AD Joined devices. In this post I will try to guide you through the settings and steps needed for the setup to work properly.

In this scenario the environment looked like this from the beginning:

 

Domain functional level: Windows Server 2012 R2
Windows 7 machines ready to be upgraded to Windows 10
All Windows clients domain-joined to an on-premises domain
An active Office 365 tenant existed
Azure AD Connect was configured with password synchronization only
An active Azure AD Premium P1 subscription existed

 

Now that we have the background information about the environment, let's list the things we needed to do before we could successfully make Windows 10 Subscription Activation work for the new Windows 10 devices.

  1. Configure a service connection point
  2. Enable device writeback in Azure AD Connect
  3. Sync computer accounts via Azure AD Connect
  4. Create a GPO so domain-joined computers automatically and silently register as devices with Azure Active Directory
  5. Upgrade an existing computer or install a new one with Windows 10 Pro 1709 and on-premises domain-join the device
  6. Verify that the Windows 10 computer registers as a Hybrid Azure AD Joined device in the Azure Active Directory admin center
  7. Assign a Windows 10 E3/E5 license to a user in Office 365 Admin Center
  8. Log onto the computer with the user you assigned the license to
  9. Confirm that the Windows 10 Pro 1709 computer steps up to Enterprise

 

Now I will describe most of the steps in more detail so it’s easier for you to understand what needs to be done.

 

To configure a service connection point, follow the steps below:

In newer versions of Azure AD Connect and when running Express settings, this SCP is created automatically here:

You can also retrieve the setting with PowerShell:
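A sketch of that retrieval (the well-known SCP object GUID comes from Microsoft's Hybrid Azure AD join documentation; replace the domain components with your own forest's):

```powershell
$scp = New-Object System.DirectoryServices.DirectoryEntry
$scp.Path = "LDAP://CN=62a0ff2e-97b9-4513-943f-0d221bd30080,CN=Device Registration Configuration,CN=Services,CN=Configuration,DC=contoso,DC=com"
$scp.Keywords   # should list the azureADName and azureADId values
```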

In this case it had not been created, probably because an older version of Azure AD Connect that did not perform this step was installed. Run the commands below as admin from the Microsoft Azure Active Directory Module for Windows PowerShell on the Azure AD Connect server, which also needs RSAT-ADDS installed to create the SCP. Make sure you have version 1.1.166 of the module installed.
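A hedged sketch of those commands (AdSyncPrep.psm1 ships with Azure AD Connect; the connector account name is a placeholder you must replace):

```powershell
Import-Module -Name "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1"
$aadAdminCred = Get-Credential   # a global administrator in the Azure AD tenant
Initialize-ADSyncDomainJoinedComputerSync -AdConnectorAccount '<AD connector account>' -AzureADCredentials $aadAdminCred
```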

Verify that the SCP has been created with the retrieve PowerShell command above.

To enable device writeback in Azure AD Connect and sync computer accounts, follow the steps below:

This is done from the Azure AD Connect server.

To create the GPO for domain-joined computers to automatically and silently register as devices with Azure Active Directory, follow the steps below:

To verify that the Windows 10 computer registers as a Hybrid Azure AD Joined device in the Azure Active Directory admin center, follow the steps below:

You should also see msDS-Device records in the RegisteredDevices OU in Active Directory.

To assign a Windows 10 E3 or E5 license to a user in Office 365 Admin Center, follow the steps below:

In your Office 365 admin portal, find the user who should log onto the Windows 10 Pro computer and assign the Windows 10 Enterprise license that you bought beforehand. This license can be purchased as a separate license or via the Microsoft 365 E3 or E5 license bundle.

To verify that the computer has been activated through Windows 10 Subscription Activation, follow the steps below:

After logging onto the Windows 10 Pro computer, verify that the Enterprise version has been activated.

 

Please note that you need to have a Windows 10 Pro license activated to get this to work. If you have a Windows 7 Pro licensed computer today and you have bought the Windows 10 E3/E5 or Microsoft 365 E3/E5 license, you can upgrade your existing Windows 7 Pro computer to Windows 10 Pro using your existing Windows 7 Pro key. This will give you a valid Windows 10 Pro license that can be used in this scenario.

A good-to-know command in this hybrid scenario is dsregcmd.exe /status. It will give you the status of your local computer, such as whether the device is Azure AD joined or whether the user is in Azure AD.

If you have any questions, feel free to email me at tobias.sandberg@xenit.se.

You can find Microsoft's documentation here.



Azure Archive Storage – Manage access tier on all blobs in a container

Last week, Archive blob storage went into general availability. If you haven't checked it out, you can find some info here: Announcing general availability of Azure Archive Storage

After some testing, we realized that you can't change the access tier for an entire container or storage account from the portal. The access tier has to be set blob by blob, as shown in the picture.

Here is an easy way to set the access tier with PowerShell on all blobs in a specific container. This can be helpful if you have a lot of blobs that could benefit from the new Archive access tier.
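A sketch with the Az.Storage module (the resource group, account and container names are illustrative):

```powershell
$ctx = (Get-AzStorageAccount -ResourceGroupName 'rg-storage' -Name 'mystorageacct').Context
$blobs = Get-AzStorageBlob -Container 'archive-container' -Context $ctx

foreach ($blob in $blobs) {
    # SetStandardBlobTier accepts Hot, Cool or Archive
    $blob.ICloudBlob.SetStandardBlobTier('Archive')
}
```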

After successfully running the code above, we could see that all our blobs had changed access tier to "Archive".

Our example is very simple, and with some imagination you can take it further, for example changing the access tier of certain files with certain properties.