Category: Microsoft

Automating delimiter selection when working with Csv cmdlets

Recently I walked past a colleague with an error on his PowerShell console, which piqued my interest. He quickly noted that he had chosen the wrong delimiter for the CSV he imported, which caused the errors in his code. I jokingly said ”Why don’t you just use the ‘Get-CsvDelimiter’ cmdlet?” and we had a quick laugh.
Thirty minutes later the ”Get-CsvDelimiter” function was born. But first,

Let’s dive into how Import-Csv, ConvertFrom-Csv & delimiters really work

Used correctly, Import-Csv and ConvertFrom-Csv create an array in which each row becomes a PSCustomObject with properties named after the headers.

Import-Csv has two parameter sets: one that takes Path together with Delimiter, and one that takes Path together with UseCulture.
If you omit the Delimiter parameter, the Path/Delimiter set is used, and the delimiter (which the documentation presents as a required choice) is filled in automatically. Instead of Delimiter we can use the ”UseCulture” switch, which takes the current culture’s list separator as the delimiter:

”To find the list separator for a culture, use the following command: (Get-Culture).TextInfo.ListSeparator.” –docs.microsoft

Going by the documentation you would think a delimiter choice is required, but it is not really: we can pass only the -Path parameter and the delimiter is filled in automatically.

Assume we’re using the following CSV as input ($File).
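
The original sample file isn’t reproduced here, but a comma-delimited file along these lines (hypothetical contents) illustrates the point:

# Hypothetical sample data - the post's original file contents are not shown here
$File = "$env:TEMP\sample.csv"
$Data = @'
Name,Department,City
Anna,IT,Stockholm
Erik,Finance,Gothenburg
'@
Set-Content -Path $File -Value $Data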

The command ”Import-Csv $File” will output a valid object. In my opinion it should fail: my culture specifies semicolon (;) as the default list separator, yet forcing ”-UseCulture” cannot produce a correct CSV object from this file. This basically means that UseCulture is not a required parameter, and that the default delimiter, unless specified, is always a comma (,).

The exact same goes for ConvertFrom-Csv.

Presenting Get-CsvDelimiter

The function below searches a CSV file and returns the most probable delimiter used in it.
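
The author’s full implementation isn’t reproduced here; the following is only a minimal sketch of the idea (count which candidate character appears a consistent number of times on every sampled line). The parameter names and the candidate list are my assumptions, not necessarily the original ones:

function Get-CsvDelimiter {
    [CmdletBinding(DefaultParameterSetName = 'Path')]
    param (
        # Path to a CSV file on disk
        [Parameter(Mandatory, ParameterSetName = 'Path')]
        [string]$Path,

        # Raw CSV content (a single string or an array of lines)
        [Parameter(Mandatory, ParameterSetName = 'Content')]
        [string[]]$InputObject
    )

    # Candidate delimiters to test - extend the list if you use something more exotic
    $candidates = ',', ';', "`t", '|', '>', ':'

    # Sample the first few lines of the file or the supplied content
    $lines = if ($PSCmdlet.ParameterSetName -eq 'Path') {
        Get-Content -Path $Path -TotalCount 10
    } else {
        ($InputObject -split '\r?\n') | Where-Object { $_ } | Select-Object -First 10
    }

    $best      = $null
    $bestCount = 0

    foreach ($candidate in $candidates) {
        # Count how many times the candidate occurs on each sampled line
        $counts = foreach ($line in $lines) {
            [regex]::Matches($line, [regex]::Escape($candidate)).Count
        }

        # A plausible delimiter occurs the same number of times on every line
        # (header and data rows alike); prefer the one that occurs most often
        $distinct = @($counts | Sort-Object -Unique)
        if ($distinct.Count -eq 1 -and $distinct[0] -gt $bestCount) {
            $best      = $candidate
            $bestCount = $distinct[0]
        }
    }

    return $best
}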

Let’s see how it works.

$File is a CSV file with the data below.
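
Again, the original data isn’t shown; for the examples that follow, assume a semicolon-delimited file along these lines:

# Hypothetical semicolon-delimited sample (the post's actual data is not shown)
$File = "$env:TEMP\sample2.csv"
$Data = @'
Name;Department;City
Anna;IT;Stockholm
Erik;Finance;Gothenburg
'@
Set-Content -Path $File -Value $Data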

If we run Import-Csv $File we’ll get an incorrect table object: every row ends up with a single property containing the whole row as one string.

If we instead specify a delimiter as shown below, we calculate the most probable delimiter and use it to produce a correct CSV table object, without having to inspect the CSV culture or assume anything.
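
With the function available (either the author’s version or the sketch above), the call might look like this:

Import-Csv -Path $File -Delimiter (Get-CsvDelimiter -Path $File)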

And to prove it works, the code below outputs only the Name column of the CSV.
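
For example, assuming the data has a Name column as in the hypothetical sample above:

Import-Csv -Path $File -Delimiter (Get-CsvDelimiter -Path $File) |
    Select-Object -ExpandProperty Name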

If we have a string object and need to convert it, that is also possible.
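
A sketch of the string variant, assuming the function accepts raw CSV text through an -InputObject parameter as in the sketch above:

# Read the whole file as one string (any CSV-formatted string works the same way)
$CsvString = Get-Content -Path $File -Raw
$CsvString | ConvertFrom-Csv -Delimiter (Get-CsvDelimiter -InputObject $CsvString)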

With this method we can use virtually any delimiter we want, as long as the CSV itself is consistently formatted. Again we use the same CSV input, but we change the delimiter to a ”greater than” (>) symbol and see how the function performs.

Now, when we run the function standalone, we get a greater-than symbol as the return value.
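
With a hypothetical ”>”-delimited version of the same sample data, that looks like this:

# Hypothetical sample again, now delimited with '>'
$File = "$env:TEMP\sample3.csv"
$Data = @'
Name>Department>City
Anna>IT>Stockholm
Erik>Finance>Gothenburg
'@
Set-Content -Path $File -Value $Data

Get-CsvDelimiter -Path $File   # returns: >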

And finally, running the previous code gives the same output as when we were using a semicolon as the delimiter.

 

Get-CsvDelimiter



Mapped network printers unavailable due to SMB1 being obsolete

INTRODUCTION

As most of us are familiar with, printers are one of those peculiar areas of IT. Implementing them in an environment is often self-explanatory, but when they do not cooperate, the issue can stem from a single obscure root cause, or from a whole string of them that has to be checked one by one.

Recently, I encountered a particular printer issue that I found interesting enough to share. The root cause, in summary, was that the network protocol SMB1 (Server Message Block version 1) is deprecated and no longer enabled by default in recent Windows releases.
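
The full troubleshooting isn’t reproduced here, but a quick way to check the SMB1 state on a recent Windows machine is:

# Is the SMB1 optional feature installed at all? (Split into client/server parts on newer builds.)
Get-WindowsOptionalFeature -Online -FeatureName SMB1Protocol

# Is the SMB server side still willing to speak SMB1?
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol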



Run Windows Defender inside a sandbox

Last week Microsoft announced that they have added a new feature to Windows Defender Antivirus: the ability to run within a sandbox.

Other antivirus vendors have offered the possibility to open files in a sandbox environment before, but what Windows Defender now offers is that the virus scans themselves run inside a virtual sandbox. The biggest benefit shows when the antivirus engine scans a malicious file: code that would normally be executed to exploit a vulnerability in the engine now only affects the sandbox, not the actual computer’s resources.

This new feature shows that Microsoft is making a real effort to improve Defender’s reputation. There is a debate about whether this is the best way to go, but it is a good sign that Microsoft keeps developing Defender Antivirus with features like this.

Microsoft describes the feature in the following way on their blog:

Running Windows Defender Antivirus in a sandbox ensures that in the unlikely event of a compromise, malicious actions are limited to the isolated environment, protecting the rest of the system from harm. This is part of Microsoft’s continued investment to stay ahead of attackers through security innovations. Windows Defender Antivirus and the rest of the Windows Defender ATP stack now integrate with other security components of Microsoft 365 to form Microsoft Threat Protection. It’s more important than ever to elevate security across the board, so this new enhancement in Windows Defender Antivirus couldn’t come at a better time.

It sure sounds interesting, so why don’t we try it out?

Requirements:

  • Windows 10, version 1703 and above.

How to activate the sandbox feature:

  • Open an elevated ‘Command Prompt’
  • Type ‘setx /M MP_FORCE_USE_SANDBOX 1’ and press Enter (a PowerShell alternative is shown after this list)
  • Restart your computer and that should be it. Make sure you actually restart it; the feature won’t be activated by a regular shutdown/start.
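
If you prefer PowerShell over setx, the same machine-wide variable can be set like this. The process check is only a rough verification; MsMpEngCP.exe is the content-process name mentioned in Microsoft’s announcement:

# Equivalent to the setx command above: create the machine-wide variable
[Environment]::SetEnvironmentVariable('MP_FORCE_USE_SANDBOX', '1', 'Machine')

# After the restart, the sandboxed content process should be running
Get-Process -Name MsMpEngCP -ErrorAction SilentlyContinue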

How to inactivate the sandbox feature:

  • Open the ‘Control Panel’
  • Navigate to ‘System’ and click ‘Advanced System Settings’
  • Click ‘Environment Variables’, go to ‘System variables’ and remove the ‘MP_FORCE_USE_SANDBOX’ variable (or use the PowerShell one-liner shown after this list).
  • Restart the computer.
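
The PowerShell equivalent of removing the system variable:

# Deleting the machine-wide variable turns the feature off again after a restart
[Environment]::SetEnvironmentVariable('MP_FORCE_USE_SANDBOX', $null, 'Machine')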

The feature is not enabled by default yet, so it might not be a good idea to use it in a production environment, but it could be a good time to give Windows Defender a new chance. Have you had time to try it out yet? What do you think about it?



Teams in your Multi-user environment done right!

Microsoft Teams is on the rise; more and more businesses are seeing the potential of Teams and want a piece of the action.

Unfortunately, Microsoft Teams is not ideally designed for a multi-user environment like Citrix XenApp or Microsoft Remote Desktop Services. It is installed entirely in the user’s profile, and it’s quite big: a clean installation of Teams is roughly 600 MB and quickly grows. You know what that means… You guessed it: super long logon times, since logging on to a multi-user environment often means the profile has to be downloaded to the session host before you are properly logged on. The users will not be happy! And on top of that, the latest sizing recommendation per Teams installation is 3 GB…

There are, however, rumors indicating that a business version addressing this very issue will be released soon! But if you are anything like me and simply can’t wait, there is a solution if you are willing to pay a small price, and at the same time you will get access to tons of other great stuff.

FSLogix Profile Container

FSLogix Profile Container is a great product that basically removes the profile-size problem entirely. It is a small agent you install on your session hosts and configure with an ADMX template; you also need a file share with enough space for some big profiles. FSLogix is in the business of so-called filter drivers, which, simply put, lie to Windows. For example, when you install a 32-bit application on your 64-bit Windows system, Windows uses its own filter driver to make it work; it is the same technology, efficient and simple. In FSLogix’s case it lies to Windows about the profiles: Windows thinks it is a local profile and does not know that the entire profile is in fact contained in a VHD file mounted to the server. Because it is a virtual disk attached to the server, there is only a single SMB handle, so it does not put the huge load on the network that you often see when roaming profiles.
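
Configuration is normally done through the supplied ADMX template, but the minimal setup boils down to a couple of registry values on each session host. A sketch, where the share path is a placeholder for your own profile share:

# Placeholder share - replace with your own profile location
$ProfileShare = '\\fileserver\FSLogixProfiles$'

# Enable Profile Container and point it at the share
New-Item -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Name 'Enabled' -PropertyType DWord -Value 1 -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\FSLogix\Profiles' -Name 'VHDLocations' -PropertyType MultiString -Value $ProfileShare -Force | Out-Null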

Install Teams

When you have FSLogix Profile Container in place, you can install Teams in your environment. In early October Microsoft released a new version of Teams with some new features for deploying Teams to all users in an organization; we are going to use parts of that to install Teams in our environment!

 

  1. Download the latest version of the Teams MSI file (x64) here!
  2. If you would like to disable auto-start of Teams, use the install string shown after this list (otherwise just install without the option).
    This places an installer under ”C:\Program Files”, and when a user logs on it automatically installs Teams for that user.
  3. You do not need to keep the MSI updated to the latest version; Teams automatically downloads and installs pending updates at the user’s next logon.
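
The install string Microsoft documents for suppressing auto-start looks like this; the MSI file name is simply whatever you downloaded in step 1:

msiexec /i Teams_windows_x64.msi OPTIONS="noAutoStart=true"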

There you go: now your users can benefit from the full Teams experience in your multi-user environment, with one exception. If you are using Citrix, you have the ”Skype for Business Optimization Pack” to utilize local client resources for the best quality in Skype meetings and calls; there is no such support for Teams as of now, although it should be available soon. With that said, I wouldn’t uninstall Skype for Business just yet.

Other Great stuff

As mentioned above, there are a lot of benefits to using FSLogix Profile Container. For a long time, Citrix User Profile Manager has been the best way to reduce the size of profiles while still keeping the most important settings saved in the profile. But that is still just a trade-off: you throw away caches and settings that impact profile logon while trying to keep the best experience for the user, and sometimes those goals collide, forcing you to choose between longer logon times and full functionality of a certain application.

With FSLogix Profile Container you no longer need to worry about large profiles; there is no trade-off to make! A lot of applications save a ton of settings and files in the profile, and you can now install them without impacting the user experience, which opens up a great deal of opportunities. You can, for example, install OneNote with its (potentially) gigantic cache, CAD applications with thousands of files in the user profile, and much more.

 

If you find this interesting and would like a trial of FSLogix Profile Container to see if it fits your organization’s needs, please contact us. It is easily installed and does not require additional servers or infrastructure!

 



Create Azure Policies based on Resource Graph queries

If you have used Resource Graph to query resources, you might have realized it comes in very handy when creating Azure Policies. For example, you might check the SKUs of virtual machines before you create a policy to audit specific sizes of virtual machines, or even prevent creation of them. (If you haven’t used Azure Resource Graph yet, check out my previous post – https://tech.xenit.se/azure-resource-graph/)

Let’s take it further and actually create a Policy based on our Resource Graph query.

In my example below I query all storage accounts that allow connections from all virtual networks and where the environment tag is set to Prod.

I am running all commands in Cloud Shell with the CLI, but you could just as well use PowerShell.

CLI
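
The original CLI snippet is not reproduced here. A PowerShell equivalent using the Az.ResourceGraph module might look like the query below; the property path for the firewall setting and the tag name are assumptions based on the description above:

$query = @'
where type =~ 'Microsoft.Storage/storageAccounts'
| where tostring(properties.networkAcls.defaultAction) =~ 'Allow'
| where tostring(tags.environment) =~ 'Prod'
| project name, resourceGroup, subscriptionId
'@
Search-AzGraph -Query $query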

The query looks for the setting below; it can be found under ”Firewalls and virtual networks” on your storage accounts.

Creating the policy

To create the Policy, I am using the tool GraphToPolicy. The tool and instructions can be found here http://aka.ms/graph2policy

Follow the instructions for the tool, and when you have it imported into your Cloud Shell environment you are ready to go.

I am using the same query as before and create a policy to audit all storage accounts that allow connections from all virtual networks and have the environment tag set to Prod.

CLI

Output:

CLI

Same policy as above but query in variable

After creation, the policy is ready for assignment. I assigned it to my test subscription, and as you can see in my example it shows that one of my storage accounts is non-compliant.

Summary

Resource Graph is a handy tool, and as you might have understood it is very useful when looking for specific properties or anomalies in your resources. Together with GraphToPolicy it is easy to create Azure Policies based on your Resource Graph queries.

Credit for the tool goes to robinchapas https://github.com/robinchapas/ConvertToPolicy

If you have any questions you can reach me at tobias.vuorenmaa@xenit.se



Monitoring vDisk Rebalance Enabled

In a recent use case that I stumbled across, I wanted to monitor a few different things in a Citrix environment with Provisioning Services technology.

In this specific blog post I’ll show you how I configured monitoring of whether ”Rebalance Enabled” is configured for the active vDisk, using the Provisioning Services (PVS) PowerShell snap-in.



Monitoring vDisk Replication

In a recent use case that I stumbled across, I wanted to monitor a few different things in a Citrix environment with Provisioning Services technology.

In this specific blog post I’ll show you how I configured monitoring of vDisk replication using the Provisioning Services (PVS) PowerShell snap-in.



Azure Resource Graph

During Ignite 2018 Microsoft released a couple of new services and features in public preview for Azure. I will try to cover the governance parts in upcoming posts.

Let’s start with Resource Graph.

If you have been working with Azure Resource Manager, you might have realized its limitations for accessing resource properties. The resource fields we have been able to work with are resource name, ID, type, resource group, subscription and location. If we want to find other properties, we need to query each resource separately, and we might end up with quite complicated scripts to complete what started as a simple task.

This is where Resource Graph comes in. Resource Graph is designed to extend Azure Resource Management with a query language based on Azure Data Explorer (Kusto).

With Resource Graph it is now easy to query all resources across different subscriptions, and to get properties of all resources without advanced scripts that query each resource separately. I’ll show how in the examples below.

All Resources

The new “All resources” view in the portal is based on Resource Graph, and if you haven’t tried it out yet, go check it out. It is still in preview, so you have to “opt in” to try it.

Get started

To get started with Resource Graph you can use the CLI, PowerShell or the Azure portal.

In the examples below I am using Cloud Shell and Bash, but you could just as well use PowerShell:

#Add Resource Graph Extension, needs to be added first time.
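
The comment above refers to the Azure CLI’s resource-graph extension (added with az extension add --name resource-graph). Since the author notes PowerShell works just as well, the sketches that follow use the Az.ResourceGraph module instead; the one-time setup for that route is installing the module:

# One-time setup for the PowerShell route
Install-Module -Name Az.ResourceGraph -Scope CurrentUser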

#Displays all virtual machines, OS and versions
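
The original Bash command isn’t shown; a PowerShell version of such a query might look like this (the paths under properties.storageProfile are my assumption about where the OS information lives):

$query = @'
where type =~ 'Microsoft.Compute/virtualMachines'
| project name,
          os      = tostring(properties.storageProfile.osDisk.osType),
          sku     = tostring(properties.storageProfile.imageReference.sku),
          version = tostring(properties.storageProfile.imageReference.version)
'@
Search-AzGraph -Query $query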

Example output from above query

# Display all virtual machines that starts with “AZ” and ends with number.
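
A hedged PowerShell sketch; the regular expression is my interpretation of ”starts with AZ and ends with a number”:

$query = @'
where type =~ 'Microsoft.Compute/virtualMachines'
| where name matches regex @'^[Aa][Zz].*\d$'
| project name
'@
Search-AzGraph -Query $query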

# Display all storage accounts that have the option to “Allow Access from all networks”
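
Assuming the ”Allow access from all networks” option corresponds to defaultAction being Allow in the storage account’s network rules, a PowerShell sketch could be:

$query = @'
where type =~ 'Microsoft.Storage/storageAccounts'
| where tostring(properties.networkAcls.defaultAction) =~ 'Allow'
| project name, resourceGroup
'@
Search-AzGraph -Query $query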

# Display linux VMs with OS version 16.04
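
One possible way to express this, assuming the Ubuntu version can be read from the image reference SKU (for example 16.04-LTS):

$query = @'
where type =~ 'Microsoft.Compute/virtualMachines'
| where tostring(properties.storageProfile.osDisk.osType) =~ 'Linux'
| where tostring(properties.storageProfile.imageReference.sku) contains '16.04'
| project name, sku = tostring(properties.storageProfile.imageReference.sku)
'@
Search-AzGraph -Query $query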

For more info about the query language check this site:
https://docs.microsoft.com/en-us/azure/governance/resource-graph/concepts/query-language

If you have any specific scenario feel free to contact me and we can try to query your specific needs.

You can reach me at tobias.vuorenmaa@xenit.se if you have any questions.



HTML5 Web Client for Remote Desktop Services 2016

Microsoft recently announced that the new HTML5 client for Remote Desktop Services has reached general availability. The new web client lets users access the Remote Desktop infrastructure using a modern browser that supports HTML5.

Requirements & Installation

Microsoft has a great article explaining the requirements and how to get started with the new client at the following link. It is important to note that if you run any previous version of the client and want to update to the latest release, it first has to be uninstalled from the Web Access servers.
The client can be installed and run alongside your old RDWeb page; they are simply accessed through different URLs. To access the new client, the URL https://<FQDN>/RDWeb/webclient/ is used.
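
In short, installation is done with the RDWebClientManagement PowerShell module on the RD Web Access server, roughly along these lines (the certificate path is a placeholder; follow Microsoft’s article for the details):

# On the RD Web Access server
Install-Module -Name RDWebClientManagement
Install-RDWebClientPackage

# Import the RD Broker certificate (exported as a .cer file)
Import-RDWebClientBrokerCert 'C:\Temp\broker.cer'

# Publish the client so it becomes available under /RDWeb/webclient/
Publish-RDWebClientPackage -Type Production -Latest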

Using the new client

The client, first released earlier this year, has now reached version 1.0.0 and with it comes a new sign-in experience and SSO to the applications. Below is what the much-improved login screen looks like:

Web Client login screen

After logging in, the apps are presented, and right away you can see the much-improved design compared to the old and very outdated default RDWeb page:

New updated application menu

The great thing about the HTML5 client is that it doesn’t require any software to run, just a browser that supports HTML5, which most browsers do these days. So this is good news for tablet and thin-client users.
The applications are contained within the browser window. You can only have one browser window open at a time, and opening multiple applications creates tabs within that window:

 

Applications running

Printing and copy/paste are available from within the session. Printing downloads the job as a PDF file to your local computer.

Some features are still missing before it can completely replace the old client, but Microsoft will be releasing updates and adding more features as time goes by, so keep an eye out.



Deploy CoreOS with VSTS Agent container using ARM template

In this blog post, I’ll describe how to deploy CoreOS using an ARM Template and auto start the Docker service as well as create four services for the VSTS Agent container.

Container Linux by CoreOS (now part of the Red Hat family) is a Linux distribution that comes with the minimal functionality required to run containers. One feature that is really handy when deploying CoreOS in Azure is Ignition, a provisioning utility built for the distribution. This utility makes it possible, for example, to configure services to auto-start from an Azure Resource Manager (ARM) template.

Before we begin, you will also be able to download what I describe in this post here.

First off, we need to describe the service:

Note: VSTS_ACCOUNT and VSTS_TOKEN will be dynamic in the ARM template and defined using parameters passed to the Ignition configuration at deployment. I’m using the static pool name ubuntu-16-04-docker-17-12-0-ce-standard.

When we know that the service works, we add it to the Ignition configuration:

Note: In the same way as in the service description, we will be dynamically adding the VSTS_ACCOUNT and VSTS_TOKEN during the deployment.

Now that we have the Ignition configuration, it’s just a matter of adding it to the ARM template. One thing to note is that you will need to escape backslashes, turning \n into \\n in the template.

The ARM template can look like this (note that the variable coreosIgnitionConfig is a concatenated version of the JSON above):

Note: I’ve also created a parameter file which can be modified for your environment. See more info here.
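
The deployment itself can be done however you normally deploy ARM templates, for example with the Az PowerShell module (the resource group and file names below are placeholders for the template and parameter file described above):

# Deploy the template with your own parameter values
New-AzResourceGroupDeployment -ResourceGroupName 'rg-vsts-agents' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterFile '.\azuredeploy.parameters.json'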

After deployment, you’ll have a simple VM with four containers running – and four agents in the agent pool: