Category: Azure

Office Cloud Policy Service – Preview Feature

Earlier this year Microsoft announced a new cloud-based service that allows administrators to create and manage policies for Office ProPlus users in their tenant. The service is called "Office Cloud Policy Service", or "OCPS" for short. Policies are created and managed via an internet-based portal and are then enforced on the members of an Azure Active Directory security group.

The settings that you can apply in your OCPS policies include many of the same settings that you find among the traditional user-based settings in Group Policy. The best thing about OCPS is that it doesn't require any on-premises or MDM infrastructure to work: it's all cloud based!
Even though it's completely cloud based, you shouldn't see OCPS as a replacement for Group Policy, but rather as an extension. That's because OCPS policies apply to devices even if they aren't domain joined or MDM enrolled (where Group Policy can't be applied); they apply to every device where the user signs in to Office ProPlus. Note that OCPS only applies user-based settings, not machine-based settings like Group Policy does.

What are the requirements for getting started?

The requirements for getting this to work are few and simple:

  • The minimum version of Office ProPlus must be 1808
  • Users must sign in to Office ProPlus with an Azure AD account. The account can be either synced or cloud-only.
  • Security groups in Azure AD that contain the users you want to apply a policy to. The groups can be synced or cloud-only as well (see the sketch after this list).
  • To manage OCPS you must be a Global Administrator, Security Administrator or Desktop Analytics Administrator
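If you don't already have a suitable group, a minimal sketch for creating a cloud-only one with the AzureAD PowerShell module could look like this (the group name and description are just placeholders):

    # Requires the AzureAD module: Install-Module AzureAD
    Connect-AzureAD

    # Create a cloud-only security group to target with an OCPS policy
    New-AzureADGroup -DisplayName "OCPS-OfficeProPlus-Users" `
        -Description "Users targeted by the Office Cloud Policy Service policy" `
        -SecurityEnabled $true -MailEnabled $false -MailNickName "NotSet"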

 

How to create your first policy

Creating a policy in the internet-based portal is simple and straightforward.

1: Start by signing in to the OCPS portal at https://config.office.com/officesettings and choose "Go to Office policy management"

2: Click on “Create”

3: You will now be met with the following fields that need to be specified

4: After you've specified a name for your policy, click the "Select group" button to specify the group this policy will apply to. You can search for a specific group in the search box or just choose a group from the list. Note that you can only select one group per policy.

5: After you've selected your group, click the "Configure Policies" button to start applying settings for the policy

6: There is a search function to easily find the settings you are interested in. For example, I've searched for "Outlook" and I'm interested in preventing the attachment preview functionality, so I click on the setting to start configuring it

When we click on a setting we get a description that will tell us what the setting controls and what will happen if we change the configuration of the setting.

7: After I've configured all the settings I want to be part of my policy, I click "Create" at the top of the policy wizard

Managing a policy

After you’ve created a policy it will show up in a list so you can easily edit, delete, copy or reorder its priority.

If you edit a policy you can see which settings have been configured by filtering the "Status" column to "Configured"

This will show you only the settings that are currently configured in this policy, so you can easily modify the configuration and also verify which settings actually apply to your users.
In my example I only have the attachment preview setting we configured earlier

Note that the status is set to “Configured”

So what is OCPS good for and when should it be used?

As I mentioned earlier, OCPS is a way for administrators to control the behavior and configuration of Office ProPlus on all devices a user signs in to. It doesn't have to be a domain-joined or MDM-enrolled device. The policies are applied once a user signs in and activates Office ProPlus.

I believe that OCPS is a very good feature primarily for cloud-only organizations that don't have, or don't need, an on-premises server structure with Active Directory and Group Policy management, but still want the ability to secure and control their users' Office ProPlus installations.
It's also a good tool for organizations that already have Group Policy in place but want to apply similar configuration on devices that are neither domain joined nor MDM enrolled.
For example, if a CEO signs in to Office on several corporate and private devices, we might want to enforce some settings for the Office applications on those devices that would normally be out of our control. The CEO might be targeted by different types of harmful attacks, for example macro-enabled Word documents.
This could be prevented on the CEO's personal devices as well by setting up an OCPS policy that disables macros in Word.

 

If you have any questions or would like to know more about Office Cloud Policy Service (OCPS)
feel free to contact me at oliwer.sjoberg@xenit.se or by leaving a comment.



New baseline policies available in Conditional Access

Last week Microsoft started rolling out three new baseline policies in Conditional Access.

  • Baseline policy: Block legacy authentication (Preview)
  • Baseline policy: Require MFA for Service Management (Preview)
  • Baseline policy: End user protection (Preview)

Baseline policies in Conditional Access are part of baseline protection in Azure Active Directory (Azure AD), and the goal of these policies is to ensure that you have at least a baseline level of security enabled in Azure AD.

Conditional Access is normally part of a premium SKU (P1 or P2) for Azure AD, but baseline protection is available for all editions of Azure AD, including Free.

Here is a walk-through of all the available baseline policies that Microsoft offers and how they protect your organization.

Require MFA for admins

This policy requires Multi-Factor Authentication (MFA) for accounts that are members of directory roles with more privileges than a normal account. The policy also blocks legacy authentication, such as POP, IMAP and older Office desktop clients.

The directory roles that are covered by this policy are:

  • Global administrator
  • SharePoint administrator
  • Exchange administrator
  • Conditional access administrator
  • Security administrator
  • Helpdesk administrator / Password administrator
  • Billing administrator
  • User administrator

Block legacy authentication

This policy blocks all sign-ins using legacy authentication protocols that don't support Multi-Factor Authentication, such as:

  • IMAP, POP, SMTP
  • Office 2013 (without registry keys for Modern Authentication)
  • Office 2010
  • Thunderbird client
  • Legacy Skype for Business
  • Native Android mail client

However, this policy does not block Exchange ActiveSync.

Require MFA for Service Management

This policy requires users logging into services that rely on the Azure Resource Manager API to perform multi-factor authentication (MFA). Services requiring MFA include:

  • Azure Portal
  • Azure Command Line Interface (CLI)
  • Azure PowerShell Module

End user protection

This policy protects users by requiring multi-factor authentication (MFA) during risky sign-in attempts to all applications. Users with leaked credentials are blocked from signing in until a password reset.

Once the policy is enabled, users are required to register for MFA within 14 days of their first login attempt. The default method of MFA registration is the Microsoft Authenticator App.

Recommendations

Here are a few recommendations before you enable these policies:

  • If you have privileged accounts that are used in scripts or Azure Automation, you should replace them with managed identities for Azure resources or service principals with certificates (see the sketch after this list). You could also exclude specific user accounts from the baseline policy, but that should only be a temporary workaround.
  • Make sure you exclude your emergency-access / break-glass account(s) from these policies
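As a rough illustration of the service principal approach, a script can authenticate with a certificate instead of a username and password. This is only a minimal sketch; the tenant ID, application ID and thumbprint below are placeholders:

    # Sign in as a service principal using a certificate from the local certificate store
    Connect-AzAccount -ServicePrincipal `
        -Tenant "00000000-0000-0000-0000-000000000000" `
        -ApplicationId "11111111-1111-1111-1111-111111111111" `
        -CertificateThumbprint "ABCDEF1234567890ABCDEF1234567890ABCDEF12"

    # The script can then call Azure cmdlets without an interactive MFA prompt
    Get-AzSubscription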

Read more about baseline protection and baseline policies on docs.microsoft.com



Building native multi-platform tools with ReasonML, esy and Azure Pipelines

In this post we will go over how I implemented multi-platform packages for Reason with the help of esy and Azure Pipelines, and how they are published to npm. The high-level overview of the process is this:

  1. Create an npm release for each platform via esy release
  2. Copy all the different releases into a folder
  3. Create a package.json for the whole package and a postinstall.js file
  4. Copy README and LICENSE
  5. Create a placeholder file that we can replace in postinstall
  6. Package up and release to npm

If you just want to get this setup for free you can clone and use hello-reason as a base, or use the excellent esy-pesy tool to bootstrap your project.

What is esy?

esy is a package management tool that has a similar workflow to npm but is built from the ground up to be fast, handle project isolation and work with native code. It is currently optimized for Reason and OCaml but can in theory be used for any language; there are multiple examples of C/C++ projects packaged as esy packages.

I previously wrote a short post about ReasonML and esy that you can find here.

What is Azure Pipelines?

Azure Pipelines is part of Microsoft's DevOps offering. It provides both CI and CD via builds and releases. Using different "agents" it's possible to build on the three major platforms, so there is no need for multiple CI platforms. It's free for open source and gives you about 30 hours of free build time for private projects, and you can always host your own agent to get unlimited free builds.

Pipelines have a concept of jobs, which can be used to run tasks that don't depend on each other in parallel and, more interestingly for this use case, to build on multiple platforms in a single pipeline. Jobs can also depend on each other, and one job can provide input to the next.

There is also a concept of artifacts: something produced by the continuous integration pipeline that can be consumed by another job, used in a release, or downloaded. Another useful feature is that it's possible to split up the build definition into multiple files using templates. A template can almost be seen as a reusable function: you can break out some part of a job into a template and then pass parameters to it.

So how does the setup really work?

The rest of this blog post will go over the ins and outs of the setup I created for reenv, a dotenv-cli and dotenv-safe replacement written in Reason and compiled natively for Windows, macOS and Linux. It's installable via npm, but because it's compiled to a native binary it's about 10-20 times faster than the node equivalent.

This is the first part of the build definition. We start off by giving it a name and declaring when it should trigger a build; in this case any push to master, a tag or a PR will trigger a build. Then we use the template feature to declare the same job three times, once for each platform.
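The definition itself was shown as an image in the original post. As a rough sketch of what that part can look like (the agent image names and the template path and parameter names are assumptions), it could be something like this:

    name: $(Date:yyyyMMdd)$(Rev:.r)

    trigger:
      branches:
        include:
          - master
          - refs/tags/*

    jobs:
      # The same template is declared three times, once per platform
      - template: .ci/build-platform.yml
        parameters:
          platform: Linux
          vmImage: ubuntu-16.04
      - template: .ci/build-platform.yml
        parameters:
          platform: macOS
          vmImage: macOS-10.14
      - template: .ci/build-platform.yml
        parameters:
          platform: Windows
          vmImage: vs2017-win2016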

Building the package

Let’s go over the template that we use to build the different platforms.

First we declare default parameters; in this case we just use macOS as the default. Then we set the name of the current job and the agent OS from the parameters.

The steps we go through to build each platform are: make sure the agent has a recent node version installed and install esy via npm; install all dependencies with esy; run esy pesy to make sure the build config is correct; and then build with the command specified in package.json. We then create docs, then copy and publish the generated artifacts. We will go over the testing template in more detail next. The last step is to create the npm release for the current platform and upload the artifact. The release is done via the esy release command, which bundles the whole sandbox that esy has for the project and creates relocatable dlls and executables.
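A condensed sketch of such a template follows. The parameter names, artifact name and release output folder are assumptions, while the esy commands are the ones described above:

    parameters:
      platform: macOS
      vmImage: macOS-10.14

    jobs:
      - job: ${{ parameters.platform }}
        pool:
          vmImage: ${{ parameters.vmImage }}
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '10.x'
          - script: npm install -g esy
            displayName: Install esy
          - script: esy install
            displayName: Install dependencies
          - script: esy pesy
            displayName: Validate build config
          - script: esy build
            displayName: Build (runs the command from package.json)
          - script: esy release
            displayName: Create the npm release for this platform
          - task: PublishBuildArtifacts@1
            inputs:
              PathtoPublish: _release
              ArtifactName: platform-${{ parameters.platform }}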

Running tests

We run esy test, which runs the command that we have declared in package.json, and tell it to continue even if there is an error. Then we save the path to the junit.xml in a variable that we can use to publish a test report. Sadly, there is a difference in where the report ends up on macOS/Linux versus Windows, so there are two different publish steps: one for when it's not Windows and one for when it is.
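A minimal sketch of the test and publish steps, with the OS condition handling the different report locations (the variable names are assumptions):

    steps:
      - script: esy test
        displayName: Run tests
        continueOnError: true
      - task: PublishTestResults@2
        condition: ne(variables['Agent.OS'], 'Windows_NT')
        inputs:
          testResultsFormat: JUnit
          testResultsFiles: $(junitFile)
      - task: PublishTestResults@2
        condition: eq(variables['Agent.OS'], 'Windows_NT')
        inputs:
          testResultsFormat: JUnit
          testResultsFiles: $(junitFileWindows)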

It will then generate a beautiful report like this, and yes, the tests are really that fast.

Creating the release

If this was a PR we would be done, since everything builds and we don't have to create a release for it. But if this was either a push to master or a tag, we want to build a release that anyone can install. I use tags to create actual releases to npm; master will generate the package but it will not be pushed anywhere.

This job is also declared in the main file that was shown in the first image. We run it on an Ubuntu machine and it has a condition to not run on pull requests. It also depends on the previous jobs, as we need the released artifact that we created for each platform to build a combined package.

First we make sure we have a recent node version installed. Then we create a folder to put all the releases in.

Then for each platform we download the artifact that was created, create a separate folder inside the previously created release folder and copy the needed files into the newly created folder.

We then run a node script that creates a package.json without dependencies by reading the project's package.json. It also copies your LICENSE and README.md if they exist, and copies a postinstall.js file. This script figures out which platform the package is installed on, copies the correct files into place and runs the esy-generated postinstall script for the correct platform. The script that esy generates replaces placeholder paths so that they become actual paths on the consumer's machine. The last step is to create a placeholder file and make it executable.

This script is interesting on its own as it's both bash and batch at the same time; to learn more about it you can read this. The placeholder just echoes "You need to have postinstall enabled". As it's both bash and batch at the same time, it's runnable on all three platforms without modification. At first I had a js file, but npm was "smart" and decided that my package should be run with node, which broke the binary.

Testing the release before releasing it to npm

If the build was triggered by a tag we want to test that our release works on all the platforms before releasing it on npm. This is done for all the platforms.
First we make sure we have a recent version of node and then we download the packaged release artifact. We then install the package from the tarball and run the binary; the pipeline will fail if the command exits with a non-zero exit code.
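Roughly, the per-platform test steps look along these lines (the artifact name, tarball pattern and example invocation are assumptions):

    steps:
      - task: NodeTool@0
        inputs:
          versionSpec: '10.x'
      - task: DownloadBuildArtifacts@0
        inputs:
          artifactName: npm-package
          downloadPath: $(System.ArtifactsDirectory)
      # On Windows the exact tarball file name would be spelled out instead of a glob
      - script: |
          npm install -g $(System.ArtifactsDirectory)/npm-package/*.tgz
          reenv --help
        displayName: Install from tarball and run the binary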

Releasing to npm

At the moment of writing this, Azure Pipelines doesn't support the release workflow in YAML, which includes release gates and other powerful features. I have a setup that I created through the UI that takes the package and basically runs npm publish on it. I will write a follow-up to this post when the YAML feature is released, but getting it set up in the UI is pretty straightforward.



Resource Graph Explorer and common use cases

This is my third post about Resource Graph and this time I will cover the new Explorer in the Azure portal and some use cases where I have found Resource Graph really helpful.

You can find my previous posts here:

Azure Resource Graph – Get started

Azure Resource Graph – Azure Policy

Resource Graph Explorer

The new Resource Graph Explorer gives us the opportunity to create, save and pin the queries we make in Resource Graph. In the Explorer you use the Kusto query language directly, so there is no need to use PowerShell or the CLI.

More info about Kusto can be found here

In the explorer we can build queries by just browsing and clicking the resources and properties we are looking for.

In the example below I first added virtual machines and then vmSize under hardwareProfile. Then I simply added the size I was looking for, in my case Standard_B1ls.
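The same query can of course also be run outside the portal. A minimal sketch with the Az.ResourceGraph PowerShell module (the projected columns are just examples):

    # Requires the Az.ResourceGraph module: Install-Module Az.ResourceGraph
    Search-AzGraph -Query "where type =~ 'microsoft.compute/virtualmachines'
        | where properties.hardwareProfile.vmSize =~ 'Standard_B1ls'
        | project name, resourceGroup, subscriptionId"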

It also gives us an easy way to save and reuse our queries, with the save and open query options shown in the picture above. No need to memorize queries or save them elsewhere.

Another nice feature is that you can pin your results to visualize it on your dashboard.

View from dashboard

Use cases

Housekeeping

It's so easy to create resources in Azure today, which is great! But when we clean our environments of retired resources, we tend to focus on the ones that generate the most cost. This leaves us with orphaned resources that might no longer be in use. Let's take availability sets as an example: we create them to keep our SLAs up for virtual machines, but after the virtual machines are retired the availability sets might still be there.

Let's use Resource Graph Explorer to find all availability sets and show their virtual machines property.

This can be done through, for example, PowerShell, but those scripts tend to become quite advanced just to get a property of resources across subscriptions; that is not the case with Resource Graph.
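As a rough sketch, a query like the following lists each availability set together with the number of virtual machines it contains, so sets with zero machines stand out as cleanup candidates (the exact projection is an assumption):

    Search-AzGraph -Query "where type =~ 'microsoft.compute/availabilitysets'
        | project name, resourceGroup, subscriptionId,
                  vmCount = array_length(properties.virtualMachines)"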

Evaluate Policy Impact

Perhaps one of the most common use cases for Resource Graph is to evaluate policy impact before even creating the policy. We can query our resources the same way our policy would evaluate them. Take storage accounts as an example: perhaps we would like to deploy a policy that denies creation of storage accounts that allow connections from "All networks". Before doing this, we can run a query in Resource Graph and get the result for already created resources.
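A minimal sketch of such a query, assuming that a default network ACL action of Allow corresponds to the "All networks" setting:

    Search-AzGraph -Query "where type =~ 'microsoft.storage/storageaccounts'
        | where properties.networkAcls.defaultAction =~ 'Allow'
        | project name, resourceGroup, subscriptionId"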

If you are using PowerShell or the CLI it's easy to create a policy based on your Resource Graph query; take a look at the post I linked to at the start of this post.

Get Resource changes

A recently released preview feature gives you the possibility to find changes made to your resources. To do this we can use the REST API, the Activity log or the compliance view in Azure Policy. All of the approaches below to get change history are provided through Resource Graph. A change is triggered when a property of the resource is added, removed or modified.

Activity log

View change history is available as a preview in the Activity log.

From the Activity log you can drill down to an event and then click “Change History (Preview)”.

In my example below I changed the setting on a storage account to allow access from selected networks.

Azure Policy

From the compliance view in Azure Policy, go to your initiative or definition and then select the resource you would like to view change history for. From here we get a similar experience to the one described above for the Activity log; we can easily determine which properties have changed.

Choose detection time to show exactly what changed for the resource.

API

It's also possible to get resource changes through the API. I won't cover the details in this post, but follow the link below to get started.

Through the API the process can be broken down into three steps.

  1. Enter an interval and a resource ID (find when changes were detected; a rough sketch of this call follows below).
  2. Use the change ID to get what properties changed (see what properties changed).
  3. The response is JSON with two configurations, one snapshot from before the change and one from after. Compare the two to determine what properties changed for the given change ID.
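As an illustration only, the first step could look roughly like this; the provider endpoint, API version and body shape are assumptions based on the preview documentation, and $token is a bearer token acquired elsewhere:

    # Find when changes were detected for a resource (step 1); all IDs below are placeholders
    $body = @{
        resourceId = "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<name>"
        interval   = @{ start = "2019-04-01T00:00:00Z"; end = "2019-04-10T00:00:00Z" }
    } | ConvertTo-Json

    Invoke-RestMethod -Method Post `
        -Uri "https://management.azure.com/providers/Microsoft.ResourceGraph/resourceChanges?api-version=2018-09-01-preview" `
        -Headers @{ Authorization = "Bearer $token" } `
        -ContentType "application/json" -Body $body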

To get a better understanding of how to use the API for resource changes, take a look here

Summary

Change History

I think the new resource change view is great, and it gives a simple way to see how your resources have been modified. The possibility to do so both through the Activity log and the API makes it flexible and useful for most common scenarios, and through the compliance view it's a great way to track changes that might have affected your compliance level.

Resource Graph Explorer

With the new Explorer we get an experience similar to the one we are familiar with from Log Analytics, and the language is Kusto based just like in Log Analytics. Build a library of commonly used queries and pin them to your dashboard to keep track of the resources and properties that are important in your environment.

If you have any questions or scenarios you would like to discuss you can reach me at

Tobias.Vuorenmaa@Xenit.se



Virtual attendance to Microsoft Build 2019

New features and cool stuff – Microsoft 365, Office 365, Azure, Edge, Windows 10, and everything else Microsoft

There are so many cool things you can do with new types of disruptive technology that were not even imaginable a decade ago. Impressive progress has been made across several disciplines within IT, and it doesn't look like it will slow down at all. Automation, augmented reality and analytics, AI-driven development, and digital twins are just a few areas that come to mind as examples of groundbreaking new tech trends – thanks to Gartner's report Top 10 Strategic Technology Trends for 2019. All of these new technology trends are possible thanks to extremely talented researchers, mathematicians, and developers, to name a few. A lot of this new tech is built on or with technology from Microsoft – that's why Microsoft Build is such an interesting conference.

Even though my daily work is around project management, end user computing and the operations side of digital infrastructure, I'm always curious about what's to come and try to find the next big thing or cool features that can improve the EUC experience for all of our current and future customers.

One impressive technology, albeit a rather old one, is virtual presence and live online streaming. That's something I'm very thankful for on a day like this. Last evening and night was the first day of Microsoft's annual developer conference Build in Seattle, WA, and I was able to watch a few hours of presentations from my couch instead of having to go to the US. Even though attending in person would have been a bit more exciting and fun, my couch is much better than nothing at all. 😃

Being able to listen to Microsoft's vision and plans for the future, and also learn about the latest new features, from the couch might not sound reasonable to everyone, but it's a completely logical move to me.

After a good night's sleep I have been trying to come up with a list of the most interesting parts from the presentations I saw last night, from an EUC standpoint. Obviously, there will be lots and lots more neat new features and updates to products presented during the conference, but that might be for another blog post.

Microsoft Edge Chromium

Three major updates were announced for Microsoft Edge last night. Thanks to Microsoft's decision to move to a fork of the open source browser Chromium, my bet is that we will see a lot more news around the browser in the months to come.

If you would like to try the new public version of the Edge Chromium browser you can do so here!

  1. IE Mode

This is a big one for EUC enthusiasts like myself. There has always been a push-pull struggle to decide which browser to use for end users in an enterprise environment, and that usually, though not always, has to do with compatibility.

Microsoft's announcement last night hopefully means that we won't have to trade off new features in modern browsers against being able to work effectively in old and legacy LOB applications. I think we can all agree on the fact that most bigger enterprises have a handful of "extremely important" old web apps that won't disappear in the foreseeable future.

What Microsoft announced is the possibility for Edge Chromium to load an old web app straight into the new browser, but with the old Internet Explorer rendering engine. Previously Edge started a separate IE process and users had to switch between the two browsers; this news means that you can have IE tabs and Edge Chromium tabs within the same browser. Really neat.

IE Enterprise Mode works well, but I think this will be much much better. We’ll see!

  2. Collections

Another cool feature presented during the keynote was Collections. In summary, I'd say that it is the next generation of the old favorites feature. You will be able to create collections of links, pictures, text, and other information within the browser.

If you want to, it's possible to export or share a collection with your co-workers via Excel or Word. The Edge Chromium browser generates good-looking files with headers, aligned pictures, and URLs/sources.

  3. Privacy

You will be able to select one of three predefined privacy configurations: unrestricted, balanced or strict. The strict mode blocks most trackers, but sites might break. The unrestricted mode is the complete opposite, and the balanced mode is what we Swedes call lagom – not too much, not too little tracking.

The World’s Computer (Azure)

It’s no surprise to see that there’s a lot of focus on Microsoft Azure during the conference. Some interesting news that might be of extra interest for the EUC community I’d say are these:

Of course, there are loads of other new features, but I found these to stand out.

To see all Microsoft Azure announcements, check out this link.

Windows Terminal

WOW! Finally, the old terminal will be replaced with something new! The new terminal will support shells like Command Prompt, PowerShell, and WSL.

To get a glimpse of the amazing future of Windows Terminal, check this out.

The new console is open source and you can build, run and test the app right now. The repo can be found here.

Key features according to Microsoft are:

  • Multiple tabs
  • Beautiful text (GPU accelerated DirectWrite/DirectX-based rendering, emojis, powerline, icons, etc.)
  • Lots and lots of configuration possibilities (profiles, tabs, blur/transparency/fonts… you name it)

So, get a new graphics card and get started working in the new terminal 😃

Office 365 Fluid Framework

A new framework called the Fluid Framework was announced. The new framework will make it seem like users are working together in real time; charts and presentations will be updated in an instant, and translations into loads of languages will be live.

During the keynote, the presenter wrote in a document at the same time as others did, and it really looked like there was no latency. The live translation part was really cool, and I recommend you watch it in action to see why this is something that might be of real interest for your business.

Watch it in action here.

Windows Hello, FIDO2 certification

Windows Hello is now FIDO2 certified. What does that mean?

Without digging into the details, the new certification hopefully means that more websites and online services will be able to allow other forms of authentication than just username/password. Passwordless authentication has proven to be secure, and with Microsoft adhering to the new specification it will be easier to allow user-friendly authentication methods like fingerprint and face recognition.

FIDO2 is the overarching term for FIDO Alliance’s newest set of specifications. FIDO2 enables users to leverage common devices to easily authenticate to online services in both mobile and desktop environments. The FIDO2 specifications are the World Wide Web Consortium’s (W3C) Web Authentication (WebAuthn) specification and FIDO Alliance’s corresponding Client-to-Authenticator Protocol (CTAP).

Windows Subsystem for Linux 2 (WSL2)

The new version of WSL will run on a completely open source Linux kernel that Microsoft builds themselves. There are probably hundreds of reasons why Microsoft is doing this, but one of them is performance. The kernel version will be 4.19, which is the same version that is used in Azure.

The new WSL version will make it possible to run containers natively, which means that locally hosted virtual machines won't be necessary anymore.

Like before, there won't be any userspace binaries within WSL, which means that we will still be able to select which flavor we want to run.

The first public versions of WSL2 will be available sometime this summer.

Honorable mentions or too cool not to mention

  • Mixed reality services within Teams and Hololens, for example, the live Spatial meetings using AR
  • Hololens 2 and the Mittel presentation
  • Cortana updates where the AI Bot is integrated and helps even further with scheduling and assisting you during your workday
  • All news regarding containers, Docker, and Kubernetes/AKS
  • Microsoft's new Fluent Design System
  • Xbox Live for new devices (Android and iPhone) and new collaborations with game studios
  • Some kind of Minecraft AR game for mobile phones being released on May 17

Psst. Did you know that you can watch loads of presentations and also the keynote here?

What do you think? Have I missed anything obvious?



Add your own local admin users on Azure AD devices

Do you have issues when trying to add an account as local admin on your Azure AD Joined device? Maybe you have specific requirements regarding which accounts should be admins on your client machines and the Azure AD solution (additional local administrators on Azure AD joined devices) is not enough to satisfy your needs.

There are a couple of alternatives out there, for example the use of RestrictedGroups policy (minimum version 1803) where you can define which users should be members of your local groups via a policy. Unfortunately, this is not a great solution if you want to set different users for each computer.

So how do we solve this?

We developed a PowerShell script that will help you automate this process. It can add multiple users to different local groups on your Azure AD joined devices. It's based on the Add-LocalGroupMember command, which gives you the opportunity to add users from multiple sources (including Azure AD). Just copy the script, make it fit your environment, verify functionality, upload it in the PowerShell scripts section of the Intune portal and deploy it to the users/devices of your choice.
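The full script isn't reproduced here, but the core of the approach is a one-liner per user and group. A minimal sketch (the user principal name below is a placeholder):

    # Add an Azure AD user to the local Administrators group on an Azure AD joined device
    Add-LocalGroupMember -Group "Administrators" -Member "AzureAD\jane.doe@contoso.com"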

The script is highly adaptable and can be changed in a lot of ways to fit your environment, so feel free to use it as you want.

If you have any questions, feel free to email me at tobias.sandberg@xenit.se or comment down below. I will try to answer you as soon as possible.

 



Querying Microsoft Graph with Powershell, the easy way

Microsoft Graph is a very powerful tool for querying organization data, and it's also really easy to do using Graph Explorer, but that's not built for automation.
While the concept I'm presenting in this blog post isn't entirely new, I believe my take on it is more elegant and efficient than what I've seen other people use.

So, what am I bringing to the table?

  • Zero dependencies on Azure modules; .NET Core & Linux compatibility!
  • Recursive/paging processing of Graph data (without the need for FollowRelLink, currently only available in PowerShell 6.0)
  • Authenticates using an Azure AD application/service principal
  • REST compatible (Get/Put/Post/Patch/Delete)
  • Supports json-batch jobs
  • Supports automatic token refresh, used for extremely long paging jobs
  • Accepts Application ID & secret as a PSCredential object, which allows the use of credential stores in Azure Automation or Get-Credential instead of writing credentials in plaintext

Sounds great, but what do I need to do in order to query the Graph API?

First things first: create an Azure AD application, register a service principal and delegate Microsoft Graph/Graph API permissions.
Plenty of people have covered this, so I won't provide an in-depth guide. Instead we're going to walk through how to use the functions line by line.

When we have an Azure AD Application we need to build a credential object using the service principal appid and secret.

Then we acquire a token; here we need a tenant ID in order to let Azure know the context of the authorization token request.
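The post's own helper functions aren't reproduced in this text, but as a rough sketch of what the credential object and token request boil down to (using the v1.0 client credentials endpoint; the IDs are placeholders):

    # Application ID + secret as a PSCredential (or use Get-Credential / an Automation credential)
    $appId  = "11111111-1111-1111-1111-111111111111"   # placeholder
    $secret = Read-Host -AsSecureString -Prompt "Client secret"
    $credential = New-Object System.Management.Automation.PSCredential ($appId, $secret)

    # Acquire a token for Microsoft Graph using the client credentials flow
    $tenantId = "contoso.onmicrosoft.com"              # placeholder
    $tokenResponse = Invoke-RestMethod -Method Post `
        -Uri "https://login.microsoftonline.com/$tenantId/oauth2/token" `
        -Body @{
            grant_type    = "client_credentials"
            resource      = "https://graph.microsoft.com"
            client_id     = $credential.UserName
            client_secret = $credential.GetNetworkCredential().Password
        }
    $token = $tokenResponse.access_token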

Once a token is acquired, we are ready to call the Graph API. So let's list all users in the organization.

In the response, we see a value property which contains the first 100 users in the organization.
At this point some of you might ask: why only 100? Well, that's the default limit on Graph queries, but it can be expanded by using a $top filter on the URI, which allows you to query up to 999 users at a time.

The cool thing with my function is that it detects if your query doesn’t return all the data (has a follow link) and gives a warning in the console.

So, we just add $top=999 and use the recursive parameter to get them all!
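Without the post's helper functions, the same thing can be sketched with plain Invoke-RestMethod and a loop over the @odata.nextLink follow links:

    # List all users, 999 at a time, following the paging links
    $uri = "https://graph.microsoft.com/v1.0/users?`$top=999"
    $users = @()
    while ($uri) {
        $response = Invoke-RestMethod -Method Get -Uri $uri `
            -Headers @{ Authorization = "Bearer $token" }
        $users += $response.value
        $uri = $response.'@odata.nextLink'   # $null when there are no more pages
    }
    $users.Count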

What if I want to get $top=1 (wat?) users, but recursive? Surely my token will expire after 15 minutes of querying?

Well, yes. That’s why we can pass a tokenrefresh and credentials right into the function and never worry about tokens expiring!

What if I want to delete a user?

That works as well. Simply change the method (Default = GET) to DELETE and go!

Deleting users is fun and all, but how do we create a user?

Define the user details in the body and use the POST method.
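As a rough sketch of that request (all user details below are placeholders):

    $newUser = @{
        accountEnabled    = $true
        displayName       = "Jane Doe"
        mailNickname      = "jane.doe"
        userPrincipalName = "jane.doe@contoso.onmicrosoft.com"
        passwordProfile   = @{
            password                      = "S0meInitialP@ssw0rd"
            forceChangePasswordNextSignIn = $true
        }
    } | ConvertTo-Json

    Invoke-RestMethod -Method Post -Uri "https://graph.microsoft.com/v1.0/users" `
        -Headers @{ Authorization = "Bearer $token" } `
        -ContentType "application/json" -Body $newUser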

What about json-batching, and why is that important?

Json-batching is basically up to 20 unique queries in a single call. Many organizations have thousands of users, if not hundreds of thousands, and that adds up, since many of the queries need to be run against individual users. And that takes time. Jobs executed with json-batching that used to take 1 hour now take about 3 minutes to run. 8-hour jobs now take about 24 minutes. If you're not already sold on json-batching, then I have no idea why you're still reading this post.

This can be used statically by creating a body with embedded queries, or, as in the example below, dynamically. We have all users flat in a $users variable. Then we determine how many times we need to run the loop, build a $body json object with 20 requests in a single query, run the query using the $batch operation and the POST method, put the results into a $responses array, and tada! We've made the querying of Graph 20x more efficient.
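A stripped-down sketch of that dynamic batching; here each request just fetches the group memberships of one user, and the body follows the Graph $batch format:

    $responses = @()
    for ($i = 0; $i -lt $users.Count; $i += 20) {
        # Build up to 20 requests per batch call
        $requests = @()
        $chunk = $users[$i..([Math]::Min($i + 19, $users.Count - 1))]
        foreach ($user in $chunk) {
            $requests += @{
                id     = "$($user.id)"
                method = "GET"
                url    = "/users/$($user.id)/memberOf"
            }
        }
        $body = @{ requests = $requests } | ConvertTo-Json -Depth 5

        $result = Invoke-RestMethod -Method Post `
            -Uri "https://graph.microsoft.com/v1.0/`$batch" `
            -Headers @{ Authorization = "Bearer $token" } `
            -ContentType "application/json" -Body $body
        $responses += $result.responses
    }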

Sounds cool, what more can I do?

Almost anything related to the Office 365 suite. Check out the technical resources and documentation for more information. Microsoft is constantly updating and expanding the API functionality. Scroll down for the functions; they should work on PowerShell 4 and up!

Technical resources:

Creating an Azure AD application
https://www.google.com/search?q=create+azure+ad+application

Graph API
https://docs.microsoft.com/en-gb/graph/use-the-api

About batch requests
https://docs.microsoft.com/en-gb/graph/json-batching

Known issues with Graph API
https://docs.microsoft.com/en-gb/graph/known-issues

Thanks to:
https://blogs.technet.microsoft.com/cloudlojik/2018/06/29/connecting-to-microsoft-graph-with-a-native-app-using-powershell/
https://medium.com/@mauridb/calling-azure-rest-api-via-curl-eb10a06127

Functions



New features in Azure Blueprints

Over the past couple of weeks I have seen new features being released for Azure Blueprints. In this short post I will write about the updates to definition location and lock assignment.

New to Azure Blueprints?

Azure Blueprints allows you to define a repeatable set of Azure resources that follows your organization's standards, patterns and requirements. This allows for more rapid deployment of new environments while making it easy to keep your compliance at the desired level.

Artifacts:

A blueprint is a package or container used to achieve organizational standards and patterns for the implementation of Azure cloud services. To achieve this, we use artifacts.

Artifacts available today are:

  • Role Assignments
  • Policy Assignments
  • Resource Groups
  • ARM Templates

The public preview of Blueprints was released during Ignite in September last year, and it's still in preview.

Read more about the basics of Azure Blueprints here

Definition location

This is where in your hierarchy you place the blueprint, and we think of it as a hierarchy because, after creation, the blueprint can be assigned at the current level or below in the hierarchy. Until now the only option for definition location has been management groups. With the newly released support for the subscription level, you can now start using Blueprints even if you have not adopted management groups yet.

Note you need contributor permissions to be able to save your definition to a subscription.

If you are new to management groups, I recommend you take a look at it since it’s a great way to control and apply your governance across multiple subscriptions.

Read more about Management groups here

Definition location for Blueprints

Lock Assignment

During assignment of a Blueprint we are given the option to lock the assignment.

Up until recently we only had Lock or Don't lock. If we chose to lock the assignment, all resources were locked and could not be modified or removed, not even by a subscription owner.

Now we have the option to set the assignment to:

  • Don't Lock – The resources are not protected by Blueprints and can be deleted and modified.
  • Read Only – The resources can't be changed in any way and can't be deleted.
  • Do Not Delete – This is a new option that gives us the flexibility to lock our resources from deletion but still gives us the option to change the resources.

Lock assignment during assignment of Blueprint

Removing lock states

If you need to modify or remove your lock assignments, you can either:

  • Change the assignment lock to Don't Lock
  • Delete the blueprint assignment.

Note that there is a cache so changes might take up to 30 minutes before they become active.

You can read more about resource locking here

Summary

With "Do not delete" I think we will see better use of the lock assignment, since we get the flexibility to make changes to our resources without the possibility of deleting them. And with the definition location set to a subscription, we can start using Blueprints without management groups, which might be useful in environments where management groups have not been introduced.

Good luck with your blueprinting!

You can reach me at Tobias.Vuorenmaa@xenit.se if you have any questions.



Create Azure Policies based on Resource Graph queries

If you have used Resource Graph to query resources, you might have realized that it comes in very handy when creating Azure Policies; for example, you might check the SKUs of virtual machines before you create a policy to audit specific sizes of virtual machines, or even prevent creation of them. (If you haven't used Azure Resource Graph yet, you can check out my previous post – https://tech.xenit.se/azure-resource-graph/)

Let’s take it further and actually create a Policy based on our Resource Graph query.

In my example below I query all storage accounts that allow connections from all virtual networks and where the environment tag is set to Prod.

I am running all commands in Cloud Shell with the CLI, but you could just as well use PowerShell.

CLI

The query looks for the setting below; it can be found under Firewalls and virtual networks on your storage accounts.
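The CLI blocks from the original post aren't reproduced in this text; a rough equivalent query with the Azure CLI resource-graph extension (assuming the tag key is named environment) could be:

    az graph query -q "where type =~ 'microsoft.storage/storageaccounts'
        | where properties.networkAcls.defaultAction =~ 'Allow'
        | where tags.environment =~ 'Prod'
        | project name, resourceGroup, subscriptionId"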

Creating the policy

To create the policy, I am using the tool GraphToPolicy. The tool and instructions can be found here: http://aka.ms/graph2policy

Follow the instructions for the tool and when you have the tool imported to your cloud shell environment you are ready to go.

I am using the same query as before and create a policy to audit all storage accounts that allow connections from all virtual networks and have the environment tag set to Prod.

CLI

Output:

CLI

Same policy as above but query in variable

After creation the policy is ready for assignment. I assigned it to my test subscription, and as you can see in my example it shows that one of my storage accounts is non-compliant.

Summary

Resource Graph is a handy tool and, as you might have understood, it's very useful when looking for specific properties or anomalies in your resources. Together with GraphToPolicy it's easy to create Azure Policies based on your Resource Graph queries.

Credit for the tool goes to robinchapas https://github.com/robinchapas/ConvertToPolicy

If you have any questions you can reach me at tobias.vuorenmaa@xenit.se



Azure Resource Graph

During Ignite 2018 Microsoft released a couple of new Azure services and features in public preview. I will try to cover the governance parts in upcoming posts.

Let's start with Resource Graph.

If you have been working with Azure Resource Manager, you might have realized its limitations for accessing resource properties. The resource fields we have been able to work with are resource name, ID, type, resource group, subscription, and location. If we want to find other properties, we need to query each resource separately, and you might end up with quite complicated scripts to complete what started as a simple task.

This is where Resource Graph comes in: it is designed to extend Azure Resource Management with a query language based on Azure Data Explorer.

With Resource Graph it's now easy to query all resources across different subscriptions, as well as get properties of all resources, without more advanced scripts that query each resource separately. I'll show how in the examples below.

All Resources

The new “All resources” view in the portal is based on Resource Graph and if you haven’t tried it out yet go check it out. It’s still in preview so you have to “opt-in” to try it.

Get started

To get started with Resource Graph you can use the CLI, PowerShell or the Azure portal.

In the examples below I am using Cloud Shell and Bash, but you could just as well use PowerShell:

#Add Resource Graph Extension, needs to be added first time.
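The command itself isn't included in this text; with the Azure CLI it is along these lines:

    az extension add --name resource-graph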

#Displays all virtual machines, OS and versions
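The original query block isn't included here; a rough equivalent with the resource-graph extension (which properties hold the OS name and version depends on the image, so the projection below is an assumption) could be:

    az graph query -q "where type =~ 'Microsoft.Compute/virtualMachines'
        | project name, os = properties.storageProfile.imageReference.offer,
                  version = properties.storageProfile.imageReference.sku"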

Example output from above query

# Display all virtual machines whose names start with "AZ" and end with a number.
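Again only as a sketch, the name filter can be expressed with a regular expression:

    az graph query -q "where type =~ 'Microsoft.Compute/virtualMachines'
        | where name matches regex @'^AZ(.*)[0-9]+$'
        | project name, resourceGroup"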

# Display all storage accounts that have the option to “Allow Access from all networks”
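Roughly, with the "Allow access from all networks" option corresponding to a default network ACL action of Allow:

    az graph query -q "where type =~ 'microsoft.storage/storageaccounts'
        | where properties.networkAcls.defaultAction =~ 'Allow'
        | project name, resourceGroup"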

# Display linux VMs with OS version 16.04
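As a sketch only; exactly where the version shows up depends on the image, and here it is assumed to be part of the image reference SKU:

    az graph query -q "where type =~ 'Microsoft.Compute/virtualMachines'
        | where properties.storageProfile.osDisk.osType =~ 'Linux'
        | where properties.storageProfile.imageReference.sku contains '16.04'
        | project name, resourceGroup"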

For more info about the query language check this site:
https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language

If you have any specific scenario feel free to contact me and we can try to query your specific needs.

You can reach me at tobias.vuorenmaa@xenit.se if you have any questions.