
Palo Alto VM-Series with active/passive HA support in Azure

With the latest release of Palo Alto Networks PAN-OS 9.0.0, the VM-Series firewall now supports the VM-Series plugin, a built-in plugin architecture for integration with public clouds and private cloud hypervisors. With the plugin you can now configure VM-Series firewalls for active/passive high availability (HA) in Azure. Below I will briefly cover some of the requirements needed to set up HA in Azure.



What is FSLogix Cloud Cache?

Background

Last year FSLogix released its award-winning (at Citrix Synergy) technology Cloud Cache, and I for one was very curious about what this meant and what I could use it for. The fact that it was included in the license for Office 365 Container and Profile Container was a really nice surprise, but I was somewhat confused about what it actually does. Had FSLogix developed their own cloud service? It sure sounds like it, but that was not the case. In short, this is a technology that makes your profiles or Outlook cache easily available cross-platform, with a kind of built-in high availability, so you don't have to build or maintain a failover file cluster. There are, however, some things you should take into consideration before implementing it in your environment. But first, let me explain what Cloud Cache really is and what the intended benefits are!

What is Cloud Cache, really?

As I mentioned, you might think it has something to do with the cloud or cloud services. That is wrong, at least as far as the technology goes. Cloud Cache primarily consists of three features:

  1. Automatic Replication
  2. Cache of “hot” data from your container
  3. Use of Azure blob storage as VHD location

Automatic Replication

Before Cloud Cache, you could set multiple paths for the VHD files in FSLogix, and it would automatically check the second path if the first was unavailable. The problem was that you had to set up replication between the two file locations yourself, which was complicated since the VHD disks are locked while in use. Incremental copies were hard because the data changes reside inside the VHD file, so replication could take a lot of time and load the network considerably.

With Cloud Cache that issue is solved; replication is now built into the product, and data is automatically copied between the locations. The neat part of the solution is that replication begins when the user logs on to the environment and copies the incremental changes of the container while it is open, all automatically. As you can imagine, this is also a great way to migrate your containers to a new location: just add a new location, wait a couple of days and then remove the old path. Really smooth, no hassle, no downtime, no late-night service windows.
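
For reference, Cloud Cache locations are configured with the CCDLocations registry value instead of the usual VHDLocations. A minimal sketch with two SMB locations follows; the paths are placeholders, and you should verify the value names and format against the FSLogix documentation for your version.

    # Hedged example: enable Profile Container with Cloud Cache and two SMB locations.
    # Replication between the locations is handled by Cloud Cache itself.
    $Path = 'HKLM:\SOFTWARE\FSLogix\Profiles'

    Set-ItemProperty -Path $Path -Name Enabled -Value 1 -Type DWord
    Set-ItemProperty -Path $Path -Name CCDLocations -Type String `
        -Value 'type=smb,connectionString=\\fileserver1\fslogix;type=smb,connectionString=\\fileserver2\fslogix'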

Cache of hot data from your container

It is well known that FSLogix solves the high CPU issue (on the file server) you would normally see when redirecting the OST file to a file share, but it still demands quite fast disks and generates some network load. With FSLogix Cloud Cache you can now place your containers in Microsoft Azure, which is cool, but there are two fundamental issues with that approach: 1. Azure bills on consumption, and 2. there is high latency when accessing the data. FSLogix solves this by caching the hottest data from the containers on the actual server or client where the session runs, which minimizes the Azure cost and the network load. This is ideal if you use your FSLogix container on different platforms (on your client and in a VDI solution), or in a VDI environment where the cache is preserved and does not have to be downloaded again.

Client profile management

Before Cloud Cache, managing client profiles with FSLogix was problematic, since it required the client to be online all the time. Fortunately, with Cloud Cache you are no longer affected by offline sessions: the client continues to work with the cached data, and as soon as it is online again, the original VHD is updated with the changes that happened offline.

 

What to consider before using Cloud Cache

Now that you know what Cloud Cache is and what makes it good, you should also know what to consider in certain scenarios. The first thing to consider is the cached data: how much will it cache? That is a good question, and one I have not yet received an answer to. From what I gather, this cannot be specified, meaning you cannot control the amount of data that is cached, and therefore you cannot control the size of the cache on, for example, a Citrix server. In some environments this can be a really risky approach. Below are some examples where you really need to weigh the value against the risk before using Cloud Cache:

Citrix Provisioning Services with Citrix Virtual Apps and Desktops

When using Cloud Cache in this setup you will run into issues. The cache is supposed to be persistent in the location where you are, which it will not be when using PVS with Citrix Virtual Apps and Desktops. With this setup, the cache is downloaded every time you log on to Citrix, and if you also use "Cache in RAM with overflow on disk" you can potentially fill your page file disk.

Citrix Virtual Apps and Desktops

You need to be sure how to set it up: the C: drive must be large enough to handle the amount of cached data every user will download, and you must set "Delete cache on logoff", otherwise a single user can end up downloading their cache to multiple Citrix servers as they log off and on. That also means your users will download the cached data every time they log on, which might not be the best experience you had in mind when implementing the solution. There is, however, a solution to this: you can redirect the cached data to another server, but if you do, it is highly recommended to place it on fast disks and in a high-availability setup.
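
To my understanding, the "delete cache on logoff" behavior corresponds to the ClearCacheOnLogoff registry value; a hedged sketch:

    # Hedged example: clear the local Cloud Cache when the user logs off, so cached
    # data does not pile up on every Citrix server a user has ever logged on to.
    Set-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' `
        -Name ClearCacheOnLogoff -Value 1 -Type DWord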

 

Summary

All in all this is a really nice feature that adds a lot to the product, but you need to assess it before activating Cloud Cache to see if it's suitable for you and your environment. In the right scenario this could really improve the experience for your users and your IT department. If you are curious about the product, please don't hesitate to contact me at jonas.agblad@xenit.se, or leave a comment below!

 

You can also find more information about FSLogix with my previous posts here:

Convert Citrix UPM to FSLogix Profile Containers

Teams in your multi-user environment done right!

Outlook Search index with FSLogix – Swedish

FSLogix Profile Container – Easy and fast Profile management – Swedish

Office 365 with FSLogix in a Multi-user environment – Swedish

 

 



Querying Microsoft Graph with PowerShell, the easy way

Microsoft Graph is a very powerful tool for querying organization data, and it's really easy to do using Graph Explorer, but that tool isn't built for automation.
While the concept I'm presenting in this blog post isn't entirely new, I believe my take on it is more elegant and efficient than what I've seen other people use.

So, what am I bringing to the table?

  • Zero dependencies on Azure modules; .NET Core & Linux compatible!
  • Recursive/paged processing of Graph data (without the need for -FollowRelLink, which is currently only available in PowerShell 6.0)
  • Authenticates using an Azure AD application/service principal
  • REST compatible (GET/PUT/POST/PATCH/DELETE)
  • Supports JSON batch jobs
  • Supports automatic token refresh, used for extremely long paging jobs
  • Accepts the application ID & secret as a PSCredential object, which allows the use of credential stores in Azure Automation or Get-Credential instead of writing credentials in plaintext

Sounds great, but what do I need to do in order to query the Graph API?

First things first: create an Azure AD application, register a service principal and delegate Microsoft Graph/Graph API permissions.
Plenty of people have documented this, so I won't provide an in-depth guide. Instead, we're going to walk through how to use the functions line by line.

Once we have an Azure AD application, we need to build a credential object using the service principal's application ID and secret.
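
As a sketch, without my helper functions (the application ID below is a placeholder):

    # Build a PSCredential where the username is the application (client) ID and
    # the password is the application secret.
    $AppId  = '00000000-0000-0000-0000-000000000000'
    $Secret = Read-Host -AsSecureString -Prompt 'Application secret'
    $Credential = New-Object System.Management.Automation.PSCredential ($AppId, $Secret)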

Then we acquire a token. Here we need a tenant ID in order to let Azure know the context of the authorization token request.
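
Without the helper function, the token request boils down to a client credentials call against the Azure AD token endpoint; a hedged sketch (the tenant ID is a placeholder):

    $TenantId = 'contoso.onmicrosoft.com'   # the tenant GUID works as well

    $Body = @{
        grant_type    = 'client_credentials'
        client_id     = $Credential.UserName
        client_secret = $Credential.GetNetworkCredential().Password
        scope         = 'https://graph.microsoft.com/.default'
    }

    $Token   = Invoke-RestMethod -Method Post -Body $Body `
        -Uri "https://login.microsoftonline.com/$TenantId/oauth2/v2.0/token"
    $Headers = @{ Authorization = "Bearer $($Token.access_token)" }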

Once a token is acquired, we are ready to call the Graph API, so let's list all users in the organization.
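
My function isn't shown in this extract, but stripped down to a plain Invoke-RestMethod call it is essentially:

    $Response = Invoke-RestMethod -Method Get -Headers $Headers `
        -Uri 'https://graph.microsoft.com/v1.0/users'

    # The users are returned in the value property
    $Response.value | Select-Object displayName, userPrincipalName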

In the response, we see a value property which contains the first 100 users in the organization.
At this point some of you might ask, why only 100? Well, that's the default page size for Graph queries, but it can be raised by adding a $top parameter to the URI, which allows you to query up to 999 users at a time.

The cool thing about my function is that it detects when your query doesn't return all the data (i.e. the response has a follow link) and writes a warning to the console.

So, we just add $top=999 and use the recursive parameter to get them all!
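
Under the hood, recursive paging boils down to following @odata.nextLink until it is gone; a minimal sketch without the helper function:

    $Uri      = 'https://graph.microsoft.com/v1.0/users?$top=999'
    $AllUsers = @()

    do {
        $Page      = Invoke-RestMethod -Method Get -Uri $Uri -Headers $Headers
        $AllUsers += $Page.value
        $Uri       = $Page.'@odata.nextLink'   # $null when there are no more pages
    } while ($Uri)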

What if I want to get $top=1 (wat?) users, but recursive? Surely my token will expire after 15 minutes of querying?

Well, yes. That’s why we can pass a tokenrefresh and credentials right into the function and never worry about tokens expiring!

What if I want to delete a user?

That works as well. Simply change the method (Default = GET) to DELETE and go!
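
Stripped down to a raw call, and with a hypothetical user, that would be something like:

    # Requires an application permission such as User.ReadWrite.All
    Invoke-RestMethod -Method Delete -Headers $Headers `
        -Uri 'https://graph.microsoft.com/v1.0/users/john.doe@contoso.com'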

Deleting users is fun and all, but how do we create a user?

Define the user details in the body and use the POST method.
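
A minimal sketch with plain Invoke-RestMethod (the user details are of course placeholders):

    $NewUser = @{
        accountEnabled    = $true
        displayName       = 'Jane Doe'
        mailNickname      = 'jane.doe'
        userPrincipalName = 'jane.doe@contoso.com'
        passwordProfile   = @{
            forceChangePasswordNextSignIn = $true
            password                      = 'S0me-Initial-Passw0rd!'
        }
    } | ConvertTo-Json -Depth 3

    Invoke-RestMethod -Method Post -Headers $Headers `
        -Uri 'https://graph.microsoft.com/v1.0/users' `
        -Body $NewUser -ContentType 'application/json'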

What about json-batching, and why is that important?

JSON batching basically lets you combine up to 20 individual queries in a single call. Many organizations have thousands, if not hundreds of thousands, of users, and that adds up, since many of the queries need to be run against individual users. And that takes time. With JSON batching, jobs that used to take 1 hour now take about 3 minutes to run, and 8-hour jobs now take about 24 minutes. If you're not already sold on JSON batching, then I have no idea why you're still reading this post.

This can be used statically, by creating a body with embedded queries, or dynamically, as in the example below. We have all users flat in a $users variable. We then determine how many times we need to run the loop, build a $body JSON object with 20 requests per call, run the query against the $batch endpoint with the POST method and collect the results in a $responses array, and tada! We've made the querying of Graph 20x more efficient.
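
The original example isn't included in this extract, so the following is a hedged reconstruction of the idea; the per-user call (memberOf) is only an illustration:

    # Assumes $Users already holds the flat user list from a previous query
    $Responses = @()

    for ($i = 0; $i -lt $Users.Count; $i += 20) {
        $End  = [Math]::Min($i + 19, $Users.Count - 1)
        $Body = @{
            requests = @(
                $Users[$i..$End] | ForEach-Object {
                    @{
                        id     = "$($_.id)"
                        method = 'GET'
                        url    = "/users/$($_.id)/memberOf"
                    }
                }
            )
        } | ConvertTo-Json -Depth 4

        $Responses += (Invoke-RestMethod -Method Post -Headers $Headers `
            -Uri 'https://graph.microsoft.com/v1.0/$batch' `
            -Body $Body -ContentType 'application/json').responses
    }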

Sounds cool, what more can I do?

Almost anything related to the Office 365 suite. Check out the technical resources and documentation for more information. Microsoft is constantly updating and expanding the API functionality. Scroll down for the functions; they should work on PowerShell 4 and up!

Technical resources:

Creating an Azure AD application
https://www.google.com/search?q=create+azure+ad+application

Graph API
https://docs.microsoft.com/en-gb/graph/use-the-api

About batch requests
https://docs.microsoft.com/en-gb/graph/json-batching

Known issues with Graph API
https://docs.microsoft.com/en-gb/graph/known-issues

Thanks to:
https://blogs.technet.microsoft.com/cloudlojik/2018/06/29/connecting-to-microsoft-graph-with-a-native-app-using-powershell/
https://medium.com/@mauridb/calling-azure-rest-api-via-curl-eb10a06127

Functions



New features in Azure Blueprints

Over the past couple of weeks I have seen new features being released for Azure Blueprints. In this short post I will cover the updates to Definition location and Lock assignment.

New to Azure Blueprints?

Azure Blueprints lets you define a repeatable set of Azure resources that follows your organization's standards, patterns and requirements. This allows for more rapid deployment of new environments while making it easy to keep compliance at the desired level.

Artifacts:

An Azure blueprint is a package or container used to implement an organization's standards and patterns for Azure cloud services. To achieve this, we use artifacts.

Artifacts available today are:

  • Role Assignments
  • Policy Assignments
  • Resource Groups
  • ARM Templates

The public preview of Blueprints was released during Ignite in September last year, and it's still in preview.

Read more about the basics of Azure Blueprints here

Definition location

This is where in your hierarchy you place the blueprint, and we think of it as a hierarchy because after creation the blueprint can be assigned at the current level or below. Until now, the only option for definition location has been management groups. With the newly released support for subscription level, you can now start using Blueprints even if you have not adopted management groups yet.

Note that you need Contributor permissions to be able to save your definition to a subscription.

If you are new to management groups, I recommend you take a look at them, since they are a great way to control and apply your governance across multiple subscriptions.

Read more about Management groups here

Definition location for Blueprints

Lock Assignment

During assignment of a Blueprint we are given the option to lock the assignment.

Up until recently we only had Lock or Don't Lock. If we chose to lock the assignment, all resources were locked and could not be modified or removed, not even by a subscription owner.

Now we have the option to set the assignment to:

  • Don't Lock – The resources are not protected by Blueprints and can be deleted and modified.
  • Read Only – The resources can't be changed in any way and can't be deleted.
  • Do Not Delete – This is a new option that protects our resources from deletion but still gives us the option to change them.

Lock assignment during assignment of Blueprint

Removing lock states

If you need to modify or remove your lock assignments, you can either:

  • Change the assignment lock to Don't Lock
  • Delete the blueprint assignment.

Note that there is a cache so changes might take up to 30 minutes before they become active.

You can read more about resource locking here

Summary

With "Do Not Delete" I think we will see better use of lock assignment, since we get the flexibility to make changes to our resources without the possibility of deleting them. And with the definition location set to a subscription, we can start using Blueprints without management groups, which I can see being useful in environments where management groups have not yet been introduced.

Good luck with your blueprinting!

You can reach me at Tobias.Vuorenmaa@xenit.se if you have any questions.



Create Azure Policies based on Resource Graph queries

If you have used Resource Graph to query resources, you might have realized that it comes in very handy when creating Azure Policies. For example, you might check the SKUs of virtual machines before you create a policy to audit specific VM sizes, or even prevent their creation. (If you haven't used Azure Resource Graph yet, check out my previous post – https://tech.xenit.se/azure-resource-graph/)

Let’s take it further and actually create a Policy based on our Resource Graph query.

In my example below I query all storage accounts that allow connections from all virtual networks and where the environment tag is set to Prod.

I am running all commands in Cloud Shell with the Azure CLI, but you could just as well use PowerShell.

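The original CLI snippet isn't reproduced here; as a hedged illustration, a roughly equivalent query with the Az.ResourceGraph PowerShell module could look like this (the property path and tag name are assumptions based on the description above):

    # Find storage accounts that allow traffic from all networks and are tagged Prod
    $Query = "Resources " +
        "| where type =~ 'microsoft.storage/storageaccounts' " +
        "| where properties.networkAcls.defaultAction =~ 'Allow' " +
        "| where tags.environment =~ 'Prod'"

    Search-AzGraph -Query $Query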

The query looks for the setting below, which can be found under Firewalls and virtual networks on your storage accounts.

Creating the policy

To create the policy, I am using the GraphToPolicy tool. The tool and instructions can be found here: http://aka.ms/graph2policy

Follow the instructions for the tool, and once you have imported it into your Cloud Shell environment you are ready to go.

I am using the same query as before and create a policy to audit all storage accounts that allow connections from all virtual networks and have the environment tag set to Prod.


After creation, the policy is ready for assignment. I assigned it to my test subscription, and as you can see in my example, one of my storage accounts is non-compliant.

Summary

Resource Graph is a handy tool, and as you might have understood, it's very useful when looking for specific properties or anomalies in your resources. Together with GraphToPolicy, it's easy to create Azure Policies based on your Resource Graph queries.

Credit for the tool goes to robinchapas https://github.com/robinchapas/ConvertToPolicy

If you have any questions you can reach me at tobias.vuorenmaa@xenit.se



Deploy CoreOS with VSTS Agent container using ARM template

In this blog post, I’ll describe how to deploy CoreOS using an ARM Template and auto start the Docker service as well as create four services for the VSTS Agent container.

Container Linux by CoreOS (now part of the Red Hat family) is a Linux distribution that comes with the minimal functionality required to deploy containers. One feature that is really handy when it comes to deploying CoreOS in Azure is Ignition, a provisioning utility built for the distribution. This utility makes it possible (for example) to configure services to auto-start from an Azure Resource Manager (ARM) template.

Before we begin, you will also be able to download what I describe in this post here.

First off, we need to describe the service.
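
The original unit file isn't included in this extract, but a rough sketch of what one of the four agent services could look like is shown below; the service name, image tag and Docker socket mount are assumptions on my part.

    # /etc/systemd/system/vsts-agent-1.service (sketch)
    [Unit]
    Description=VSTS Agent 1
    After=docker.service
    Requires=docker.service

    [Service]
    Restart=always
    ExecStartPre=-/usr/bin/docker rm -f vsts-agent-1
    ExecStart=/usr/bin/docker run --rm --name vsts-agent-1 \
      -e VSTS_ACCOUNT=<account> \
      -e VSTS_TOKEN=<pat-token> \
      -e VSTS_POOL=ubuntu-16-04-docker-17-12-0-ce-standard \
      -v /var/run/docker.sock:/var/run/docker.sock \
      microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce
    ExecStop=/usr/bin/docker stop vsts-agent-1

    [Install]
    WantedBy=multi-user.target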

Note: VSTS_ACCOUNT and VSTS_TOKEN will be dynamic in the ARM Template and defined using parameters passed to the Ignition configuration dynamically at deployment. I’m using a static pool name ubuntu-16-04-docker-17-12-0-ce-standard.

When we know that the service works, we add it to the Ignition configuration:

Note: In the same way as in the service description, we will be dynamically adding the VSTS_ACCOUNT and VSTS_TOKEN during the deployment.

Now that we have the Ignition configuration, it's just a matter of adding it to the ARM template. One thing to note is that you will need to escape backslashes, turning \n into \\n in the template.

The ARM Template can look like this: (note that variable coreosIgnitionConfig is a concatenated version of the json above)

Note: I’ve also created a parameter file which can be modified for your environment. See more info here.

After deployment, you’ll have a simple VM with four containers running – and four agents in the agent pool:



Change OS disk on server using Managed disk in Azure

Recently a new capability was released for Azure Virtual Machines using Managed disks.

We have been missing the ability to change the OS disk of VMs using managed disks; until now that has only been possible with unmanaged disks. Before the release of this feature, we were forced to recreate the virtual machine if we wanted to restore a snapshot to a managed disk.

This feature comes in handy when performing updates and/or changes to the OS or applications, where you might want to roll back to a previous state on the existing VM.

As of today, Azure Backup only supports restoring to a new VM. With this capability we can hope to see that change in the future, but for now we can use PowerShell to change the OS disk of a VM and restore an older version of that OS disk on the existing VM.

In the example below we:

  • Initiate a snapshot
  • Create a managed disk from the snapshot, using the same name as the original disk plus the creation date
  • Stop the VM – the server must be in a stopped (deallocated) state
  • Swap the OS disk of the existing VM
  • Start the VM
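
The original script isn't included in this extract; a hedged sketch of those steps with the AzureRM cmdlets could look like this (resource group, VM and disk names are placeholders):

    $rg     = 'rg-demo'
    $vmName = 'vm-demo'
    $vm     = Get-AzureRmVM -ResourceGroupName $rg -Name $vmName

    # 1. Snapshot the current OS disk
    $osDisk   = Get-AzureRmDisk -ResourceGroupName $rg -DiskName $vm.StorageProfile.OsDisk.Name
    $snapCfg  = New-AzureRmSnapshotConfig -SourceUri $osDisk.Id -Location $osDisk.Location -CreateOption Copy
    $snapshot = New-AzureRmSnapshot -ResourceGroupName $rg -SnapshotName "$($osDisk.Name)-$(Get-Date -Format yyyyMMdd)" -Snapshot $snapCfg

    # 2. Create a managed disk from the snapshot, named <original disk>-<date>
    #    (this could equally be an older snapshot you want to roll back to)
    $diskCfg = New-AzureRmDiskConfig -SourceResourceId $snapshot.Id -Location $osDisk.Location -CreateOption Copy
    $newDisk = New-AzureRmDisk -ResourceGroupName $rg -DiskName "$($osDisk.Name)-$(Get-Date -Format yyyyMMdd)" -Disk $diskCfg

    # 3. The VM must be stopped (deallocated) before the swap
    Stop-AzureRmVM -ResourceGroupName $rg -Name $vmName -Force

    # 4. Swap the OS disk on the existing VM
    Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $newDisk.Id -Name $newDisk.Name
    Update-AzureRmVM -ResourceGroupName $rg -VM $vm

    # 5. Start the VM again
    Start-AzureRmVM -ResourceGroupName $rg -Name $vmName
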
Source: https://azure.microsoft.com/en-us/blog/os-disk-swap-managed-disks/



New features on the way to RDS 2016

Earlier this autumn, Microsoft presented news about what is coming to Remote Desktop Services (RDS) 2016. There are some big changes on the way that are important to know about, and this post summarizes some of the new features expected shortly.

Infrastructure

In a traditional RDS infrastructure, all servers in the deployment must be members of the domain. This means that the RD Gateway and Web Access servers are both domain joined and directly exposed to the internet, which makes them vulnerable to attack.

With the new infrastructure design that Microsoft presents, the Gateway, Web Access and the other roles are no longer domain joined. Communication from the domain to the infrastructure happens solely through outbound traffic on port 443. Besides increasing security, this makes it possible for organizations to host several different environments on the same RDS infrastructure. You no longer need one RDS environment per domain; the infrastructure can be set up once to serve several environments and let users connect to their respective domains and session hosts.

Microsoft is also introducing a new role within Remote Desktop Services: Diagnostics, whose task is to collect information about the deployment and which can be used to troubleshoot connection problems.

Azure

Integration with Azure Active Directory (AAD) is almost here. With AAD, Multi-Factor Authentication, Intelligent Security Graph and other Azure services can be used in the RDS environment. Azure AD is something many organizations already use, if they consume Office 365 services.

 

If the RDS environment is set up in Azure, organizations can install the RDS roles as Platform as a Service (PaaS) services. This means that a VM is no longer required for each role in the infrastructure; administrators avoid managing each VM individually and get access to the convenient scalability Azure offers. This setup also supports hybrid solutions, so session hosts can remain on-premises while the rest of the infrastructure runs in Azure.

There is still no ETA on when these features will be made available. For more information and a demo of some of them, see the post from Microsoft.



Azure Archive Storage – Manage access tier on all blobs in a container

Last week, Archive blob storage went into general availability. If you haven't checked it out, you can find some info here: Announcing general availability of Azure Archive Storage.

After some testing we realized that you can't change the access tier for an entire container or storage account from the portal; the access tier has to be set blob by blob, as shown in the picture.

Here is an easy way to set the access tier with PowerShell on all blobs in a specific container. This can be helpful if you have a lot of blobs that could benefit from the new Archive access tier.
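
The original snippet isn't reproduced here; a minimal sketch using the Azure.Storage cmdlets could look like this (storage account, key and container names are placeholders):

    $ctx = New-AzureStorageContext -StorageAccountName 'mystorageaccount' `
        -StorageAccountKey '<storage-account-key>'

    # Set the Archive tier on every block blob in the container
    Get-AzureStorageBlob -Container 'archive-container' -Context $ctx |
        ForEach-Object { $_.ICloudBlob.SetStandardBlobTier('Archive') }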

After successfully running the code above, we could see that all our blobs had changed access tier to "Archive".

Our example is very simple, but with some imagination you can take it further and, for example, change the access tier only for blobs with certain properties.



NetScaler HA heartbeats in Azure

When using a NetScaler with multiple NICs in Azure, heartbeats will not be seen on any interface other than the one the NSIP is configured on.

To resolve this, disable heartbeats on the other interfaces. In my case the NSIP is on 0/1, so heartbeats are disabled on 1/1 and 1/2.
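
From the NetScaler CLI that would be something along these lines (a hedged sketch; verify the parameter against the documentation for your firmware version):

    set interface 1/1 -HAHeartBeat OFF
    set interface 1/2 -HAHeartBeat OFF
    save ns config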