Keep your FSLogix VHD-files optimized

Background

When using either or both of the FSLogix products Office 365 Container and Profile Container, you will end up with quite large VHD files. You can limit the size somewhat, for example by specifying the Outlook cache limit for Office 365 Container, but the containers will nonetheless require a fair amount of storage. Since the standard and recommended way of creating these VHD files is as dynamic disks, things get complicated if and when you run out of space. Let me explain:

A dynamic disk expands automatically when needed, ensuring each disk only reserves as much space as its content requires, which is good. It will, however, not shrink automatically. This means the disk keeps the size it had when it contained the most data, which no longer reflects the actual size of the content inside. Over time this can become an issue, or at the very least a waste of disk space.

Writing a script that shrinks the disks is complicated, and there is a risk of corrupting them. Instead, we will focus on maintaining efficient use of the stored data to keep the disks from growing in size.

Solution

There is, however, no solution from FSLogix to tackle this yet, so we need to focus on what we can do with the VHD files, which are essentially standard virtual disks. While searching for a good long-term solution to this problem, I found a great script created by David Ott that optimizes whichever disks are available at the time you run it.

How it works

The script checks whether the VHD files are available (if the user is logged on, the disk is locked), then proceeds with the available ones: it mounts them, runs an optimization job, closes them, and mails a complete report of the result. The best way to use this script is to schedule it to run after office hours (preferably after the session hosts have restarted) to maintain the most efficient disk size. This minimizes the growth of the disks and saves you space in the long run.
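As an outline, the core of such a maintenance job could look roughly like the sketch below. This is an illustrative sketch, not David Ott's original script: the share path is an assumption, the mail-report step is omitted, and it requires the Hyper-V PowerShell module for Mount-VHD/Optimize-VHD.

```powershell
# Sketch: optimize every FSLogix profile disk on the share that is not
# currently locked by a logged-on user. Share path is an assumption.
$shareRoot = '\\fileserver\FSLogixProfiles$'

Get-ChildItem -Path $shareRoot -Recurse -Include *.vhd, *.vhdx | ForEach-Object {
    try {
        # If the user is logged on, the disk is locked and this exclusive
        # open fails, so the file is simply skipped.
        $handle = [System.IO.File]::Open($_.FullName, 'Open', 'ReadWrite', 'None')
        $handle.Close()

        Mount-VHD -Path $_.FullName -ReadOnly     # Optimize-VHD wants a read-only mount
        Optimize-VHD -Path $_.FullName -Mode Full # reclaim unused space inside the disk
        Dismount-VHD -Path $_.FullName
        Write-Output "Optimized $($_.FullName)"
    }
    catch {
        Write-Warning "Skipped $($_.FullName): $($_.Exception.Message)"
    }
}
```

The original script also collects the results and mails a report; see David Ott's post for the full version.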

Where to find it

As mentioned above, the script was created by David Ott, and you can find his original post with the script here!

If you want to know more about FSLogix, you can email me at jonas.agblad@xenit.se or check out my earlier posts here:

Convert Citrix UPM to FSLogix Profile Containers

Teams in your multi-user environment done right!

Outlook Search index with FSLogix – Swedish

FSLogix Profile Container – Easy and fast Profile management – Swedish

Office 365 with FSLogix in a Multi-user environment – Swedish



New generation of Imprivata appliances

With every major release of Imprivata’s OneSign product, they also release an updated version of the appliance the product is running on. With the release of Imprivata OneSign & Confirm ID version 6.1 this December, they also released a new generation of the appliance. This version, called generation 3 (G3), is only available as a virtual appliance.

The G3 appliance provides an updated operating system and database, together with the latest security patches.

What does this mean?

Version 6.1 and all subsequent versions of the OneSign product will only be supported on this new appliance. This means that if you want to update Imprivata OneSign to version 6.1 or later, you must first migrate to the new G3 appliance. You must purchase the new virtual appliances, along with new licenses, to replace the existing generation 2 appliances.

Generation 2 (G2) appliances will continue to receive patches and hotfixes until their respective EOL (end of life) dates. These are set to December 31, 2020 for the virtual appliances and April 30, 2019 for the physical appliances. After these dates, Imprivata will no longer provide product updates or patches.

What are the requirements?

The G2 appliances and all Imprivata agents must run version 5.2 or later before beginning the migration to G3. The G3 appliance is not backwards compatible with agents running 5.1 SP1 or earlier. So, if you haven't already, it may soon be time to start thinking about upgrading.

Feel free to contact us if you need any assistance with the migration.



Wireless Networking in Windows Server 2019

The other day I installed a NUC with an integrated wireless NIC. I installed Windows Server 2019 on the NUC and installed the wireless networking drivers from Intel's website. The problem was that after I had installed the drivers, they did not work. After a lot of trial and error I discovered that you cannot use wireless NICs without the "Wireless-Networking" feature installed. Install it by running the command below in PowerShell.
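In my case the following did the trick (a minimal sketch; Wireless-Networking is the feature name, shown as "Wireless LAN Service" in Server Manager):

```powershell
# WLAN support is off by default on Windows Server; add the feature
Install-WindowsFeature -Name Wireless-Networking

# A reboot is required before the wireless NIC starts working
Restart-Computer
```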

 

 

I restarted the computer and after that everything started to work as expected.

 



What is ReasonML and why should you care?

ReasonML is an alternate syntax for OCaml, invented at Facebook to be more familiar to programmers coming from JavaScript. OCaml is a more than 20-year-old general-purpose language that is both expressive and safe. It belongs to the ML family of languages, which means it has a strong type system that guides you toward writing fewer bugs. Other languages in the ML family that you might have heard of are F# and Haskell.
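To give a taste of the syntax, here is a small illustrative example (not from our codebase). The variant type and the exhaustive switch are where the type system catches bugs: add a new shape and miss a case, and the compiler warns you.

```reason
/* A variant type and an exhaustive pattern match */
type shape =
  | Circle(float)
  | Rect(float, float);

let area = shape =>
  switch (shape) {
  | Circle(r) => 3.14159 *. r *. r
  | Rect(w, h) => w *. h
  };

/* With BuckleScript this compiles to plain JavaScript */
Js.log(area(Circle(2.0)));
```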

Because ReasonML is just an alternate syntax for OCaml, everything that is possible with one can be done with the other. ReasonML can be compiled to native binaries and bytecode with the standard compiler, and there are two ways to compile to JavaScript: Js_of_ocaml (JSOO) and BuckleScript. The natively compiled binaries are often really fast, especially compared to the Node-based JavaScript that powers both web servers and desktop apps. esy is a package manager that brings an npm-like workflow to native development; right now its main goal is to be the best package manager for OCaml and Reason, but it can in theory be used with any language.

At Xenit we’re using ReasonML together with React, via ReasonReact, to build the frontend of our Identity Provider. It gives us ease of refactoring and a higher degree of safety when writing our code. We’re building some other internal tools with ReasonReact, and we’re also exploring native application development with ReasonML. This includes both UI applications, using a framework called Revery that aims to replace Electron as a simple way to create desktop applications, and microservice backends.

This is just an introduction, and there will be more interesting posts about the Reason universe in the future.



How to handle pinned start menu apps in Windows 10

I have been customizing Windows 10 for a while now, and it has never worked against me as much as this time. Sometimes Windows does have its ways of working against you, but challenges like these give you the opportunity to spend a lot of time coming up with a solution. So this blog post is about my battle with the start menu of Windows 10 Professional. If you are here for the quick solution, skip to the TL;DR section at the bottom.

The Problem:

I have been able to customize the start menu of Windows 10 with ease since version 1511 using the Export-StartLayout / Import-StartLayout cmdlets. But this time I got a request to remove all the pinned apps on the right side of the start menu. I discussed this with a colleague, who told me he had built a similar solution inside a Citrix Virtual Desktop and had spent quite some time on it. Still, I thought this would be much easier than it turned out to be. The requested start menu should in the end look something like the upcoming picture, with the following demands:

  • No pinned apps on the right side of the start menu
  • In the task bar, have Chrome & Explorer pinned. 

This was the requested layout

To begin with, I created an XML file with just Chrome & Explorer pinned in the task bar, and set <DefaultLayoutOverride LayoutCustomizationRestrictionType="OnlySpecifiedGroups">. My thought was that this would give me a clean start menu, but this was my first failed attempt. The colleague of mine who previously had a similar issue in a Citrix environment had during his research come across this post, containing a script called "Pin-Apps". The script contains an Unpin function which turned out to be very helpful, so I started adapting my work around it. But this is where I hit my second setback: I was not able to have this script and the Import-StartLayout script in the same logon script, nor to have one script run at startup and one at login, so I had to think of another way to configure this in my isolated lab environment.

Luckily, I have been working a lot with OS deployment, so I created a task sequence containing the Import-StartLayout script, which ran successfully together with my login script containing the Pin-Apps script. But here I came across my third setback, which by far had the most impact and was the one I spent the most time struggling with. For some reason I was not able to remove bloatware such as Candy Crush, Minecraft, etc. The script ran successfully, but every time the outcome looked like this:

Some applications would not be removed

I could not understand why these applications would not be removed. I have had to deal with bloatware before, but then it was simply a matter of removing it with the Appx cmdlets. I checked Get-AppxPackage and Get-AppxProvisionedPackage, and ran Remove-AppxPackage and Remove-AppxProvisionedPackage several times, but these apps were not removable and did not show up until I manually selected them, at which point they started downloading (as shown on the application in the top right corner of the picture). So apparently they were links or shortcuts to the Windows Store rather than installed packages. Removing them that way works if you are using Windows 10 Enterprise.

This is where I started going deep. The apps were all published in the Windows Store, so I started looking for any kind of possibility, with the help of PowerShell, to force-download all apps in the Windows Store. I spent a lot of time on this, but without any success, so I had to rethink my plan. There was no way to force the bloatware applications to be downloaded, no way to remove them with the Appx cmdlets, and no way to get a clean start menu with an XML file. This gave me the idea: if you can't beat them, join them. There was no way to actively remove all the applications from the start menu of Windows 10 Professional, but replacing them worked.

The solution:

As I have yet to find any other way of removing the superfluous applications, creating a new XML that replaces the start menu contents with some random default applications was the only successful way for me. To list these applications, go to shell:AppsFolder or shell:::{4234d49b-0245-4df3-b780-3893943456e1} in File Explorer.

Applications can be found here

I chose to pin some of the applications that were default on my start menu and that I knew were definitely removable, and exported these to a new XML, which turned out to look like this:
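An exported layout of this shape looks roughly like the sketch below. The tile and the Chrome shortcut path are placeholders; you would use the AppUserModelIDs and link paths from your own export.

```xml
<LayoutModificationTemplate
    xmlns="http://schemas.microsoft.com/Start/2014/LayoutModification"
    xmlns:defaultlayout="http://schemas.microsoft.com/Start/2014/FullDefaultLayout"
    xmlns:start="http://schemas.microsoft.com/Start/2014/StartLayout"
    xmlns:taskbar="http://schemas.microsoft.com/Start/2014/TaskbarLayout"
    Version="1">
  <LayoutOptions StartTileGroupCellWidth="6" />
  <DefaultLayoutOverride LayoutCustomizationRestrictionType="OnlySpecifiedGroups">
    <StartLayoutCollection>
      <defaultlayout:StartLayout GroupCellWidth="6">
        <start:Group Name="Apps">
          <!-- Illustrative tile; replace with entries from your export -->
          <start:Tile Size="2x2" Column="0" Row="0"
                      AppUserModelID="Microsoft.WindowsCalculator_8wekyb3d8bbwe!App" />
        </start:Group>
      </defaultlayout:StartLayout>
    </StartLayoutCollection>
  </DefaultLayoutOverride>
  <CustomTaskbarLayoutCollection PinListPlacement="Replace">
    <defaultlayout:TaskbarLayout>
      <taskbar:TaskbarPinList>
        <taskbar:DesktopApp DesktopApplicationID="Microsoft.Windows.Explorer" />
        <taskbar:DesktopApp DesktopApplicationLinkPath="%ALLUSERSPROFILE%\Microsoft\Windows\Start Menu\Programs\Google Chrome.lnk" />
      </taskbar:TaskbarPinList>
    </defaultlayout:TaskbarLayout>
  </CustomTaskbarLayoutCollection>
</LayoutModificationTemplate>
```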

From here I had to modify the Pin-Apps script to make it work on a Swedish operating system, and added a registry key so it would not run more than once per user. If you want to lock down the right side of the start menu, set or create the LockedStartLayout registry value, located under Software\Policies\Microsoft\Windows\Explorer in both HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER, with the value 1.
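Setting that value from PowerShell could look like this sketch (same value name and data as described above, written to both policy hives):

```powershell
# Lock down the right side of the start menu via the LockedStartLayout policy
$keys = 'HKLM:\Software\Policies\Microsoft\Windows\Explorer',
        'HKCU:\Software\Policies\Microsoft\Windows\Explorer'

foreach ($key in $keys) {
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    New-ItemProperty -Path $key -Name 'LockedStartLayout' -Value 1 `
        -PropertyType DWord -Force | Out-Null
}
```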

If you are running an OS language other than Swedish or English, you can find the verb for unpinning by saving an application name to the variable $appname (as an example I will use Windows PowerShell) and running the following part:
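A rough reconstruction of that part (the CLSID is the Applications folder mentioned above, and the COM calls are the same ones the Pin-Apps script relies on):

```powershell
# List the shell verbs available for an application in the Applications folder;
# the unpin verb is localized, e.g. "Unpin from Start" on an English OS
$appname = 'Windows PowerShell'

((New-Object -ComObject Shell.Application).NameSpace(
    'shell:::{4234d49b-0245-4df3-b780-3893943456e1}').Items() |
    Where-Object { $_.Name -eq $appname }).Verbs() | Select-Object Name
```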

This will give you all the verbs which are applied to this application. In this case “Unpin from Start” is present.

After modifying the necessary bits I added it to a PowerShell logon script GPO with the parameter -UnpinAll, with the .ps1 file located inside the GPO repository, making sure it’s accessible for everyone.

 

TL;DR: 

If you are running Windows 10 Professional, you need to replace the applications in the start menu before removing them; for example, run a task sequence of some kind that sets the default start menu layout, and then have a GPO run the PowerShell script stated above.

If you are running Windows 10 Enterprise, just use the logon script GPO and you will be fine. If you still have some unwanted applications, run a script that removes built-in apps (for example Invoke-RemoveBuiltinApps).

If you have any questions or thoughts about this post, feel free to email me at johan.nilsson@xenit.se



Create Threat Exceptions for specific traffic

At some point you might encounter a false-positive threat that you want to make an exception for. If you know a file is safe when it is downloaded from a specific place, but you don't want other files classified with the same threat ID/name to be whitelisted, you can create a separate security profile.

Start by identifying the traffic and where it is blocked. In this example the file was blocked by the Vulnerability Protection profile.

Click on the magnifying glass to see more detailed information and find the threat ID.

If we look in the detailed section, we can see that the threat ID for this threat name is 39040.

Go to Objects > Security Profiles > Vulnerability Protection. Since we want to specify what traffic this is whitelisted for, we need to create a separate profile so the current security policies are unaffected.

Clone the profile that is currently used for this kind of traffic and rename it appropriately. Go to the Exceptions tab and select "Show all signatures". Type the threat ID, press Enter and enable the signature.
Press the current action (default (alert)) and change it to allow, or leave it at default. In this example I will select default (alert) since I still want the traffic to be logged.

When this is done we can either add it to a new Security Profile Group or add it directly to a new Security Policy. Here we will add it directly to a security policy.

Create a new Security Policy above the one that blocked the file.

Specify your source and destination addresses.
In the Actions tab, select Profile Type: Profiles, and under Vulnerability Protection select the profile you created.

Commit and verify that the traffic hits the correct Security Policy and is logged with alert.

Be very cautious when you create exceptions and always make sure you only allow the traffic you intended. Always make sure you look at alternative ways before creating an exception.

The same method can be applied on different security profiles.

 



Smart Check – Monitor Your Citrix Sites

Citrix Smart Check is a service with an agent that installs on a Citrix Delivery Controller, collects diagnostic data and sends it to your Citrix Cloud account, where it is analyzed and presented on the Citrix Cloud website. The information helps Citrix administrators prevent and resolve issues before they happen or impact users, gives recommendations on fixes, and helps keep the Citrix environment stable.

The service helps Citrix administrators who do not have their own monitoring setup, or who are unable to monitor their sites for other reasons, by presenting the results in a webpage overview. Administrators can also get scheduled, summarized mail reports with errors, warnings and information regarding the state of their different sites.

Citrix Cloud Smart Tools

Smart Check – Sites Overview

What Smart Check provides

  • Overview of the Citrix sites and products used, site by site
  • Extensive diagnostic and health checks for the different sites and services
  • Scheduled health checks of Delivery Groups, StoreFronts, Delivery Controllers, Machine Catalogs, Provisioning and License Servers
  • Recommendations on what administrators should do to keep the site up-to-date and stable
  • Simplified troubleshooting, helping pin down where an issue may be impacting users
  • Upload of diagnostic data to Citrix Insight Services (CIS)

Smart Check – Overview

How to get started

First, you need a Citrix Cloud account; register one at https://smart.cloud.com. After you have created an account, you can log in, click Add Site and download the Smart Check software. The software should be installed on a Delivery Controller in the site, and comes with a one-time signed JSON Web Token (JWT) that is used to connect your site to the Citrix Cloud – Smart Tools service.

Smart Tools - Add Site

Smart Check – Steps to take


Add Site – CitrixSmartToolsagent.exe

Once the Smart Check agent is installed, the site will show up on the Citrix Cloud – Smart Check webpage as Site Discovered. You will need to click Complete Setup and provide a domain user account that is a member of the local Administrators group on the Delivery Controller and has a full administrator role in Citrix Studio. PowerShell 3.0 or greater needs to be installed on the Delivery Controllers, and outbound internet access on port 443 must be enabled to be able to upload to Citrix Cloud.


Smart Check – Site Discovered


Smart Check – Enter Credentials

For the VDAs, the following must be enabled:

  • File and Printer Sharing
  • Windows Remote Management (WinRM)
  • Windows Management Instrumentation (WMI)

For a full list of requirements and supported site components, visit Citrix Product Documentation – Smart Check requirements.

Smart Checks

Below is a list of the checks that are available as of this post. More will probably be added:

  • Site Health
  • Citrix Optimizer
  • Citrix Provisioning
  • Delivery Controller Configuration
  • License Server
  • LTSR Compliance
  • Product LifeCycle
  • StoreFront
  • VDA Health

Each category contains several checks. You can read an excerpt of the different checks performed below.

Site Health Checks

Site Health Checks provide a comprehensive evaluation of all the FMA services including their database connectivity on your Delivery Controllers. Citrix recommends you run these checks at least once daily. Site Health Checks verify the following conditions:

  • A recent site database backup exists
  • Citrix broker client is running for environment test
  • Citrix Monitor Service can access its historical database
  • Database connection of each FMA service is configured
  • Database can be reached by each FMA service
  • Database is compatible and working properly for each FMA service
  • Endpoints for each FMA service are registered in the Central Configuration service
  • Configuration Service instances match for each FMA service
  • Configuration Service instances are not missing for each FMA service
  • No extra Configuration Services instance exists for each FMA service
  • Service instance published by each FMA Service matches the service instance registered with the Configuration service
  • Database version matches the expected version for each FMA service
  • Each FMA service can connect to Configuration Logging Service
  • Each FMA service can connect to Configuration Service

Citrix Provisioning Checks

Citrix Provisioning Checks verify Citrix Provisioning status and configuration. The following checks are performed:

  • Installation of Provisioning Server and Console
  • Inventory executable is running
  • Notifier executable is running
  • MgmtDaemon executable is running
  • StreamProcess executable is running
  • Stream service is running
  • Soap Server service is running
  • TFTP Service is running
  • PowerShell minimum version check
  • Database and Provisioning server availability
  • License Server connectivity
  • Provisioning Update Check
  • PXE service is running
  • TSB service is running

StoreFront Checks

StoreFront Checks validate the service status, connectivity to Active Directory, the Base URL setting, the IIS application pool version and the SSL certificates for StoreFront, and verify the following conditions:

  • Citrix Default Domain Services is running
  • Citrix Credential Wallet services is running
  • The connectivity from the StoreFront server to port 88 of AD
  • The connectivity from the StoreFront server to port 389 of AD
  • Base URL has a valid FQDN
  • Can retrieve the correct IP address from the Base URL
  • IIS application pool is using .NET 4.0
  • Certificate is bound to the SSL port for the host URL
  • Whether or not the certificate chain is incomplete
  • Whether or not certificates have expired
  • Whether or not certificate(s) will expire within one month

VDA Health Checks

VDA Health Checks help Citrix administrators troubleshoot VDA configuration issues. This check automates a series of health checks to identify possible root causes for common VDA registration and session launch issues.

  • VDA software installation
  • VDA machine domain membership
  • VDA communication ports availability
  • VDA services status
  • VDA Windows firewall configuration
  • VDA communication with each Controller
  • VDA registration status

For Session Launch:

  • Session launch communication ports availability
  • Session launch services status
  • Session launch Windows firewall configuration
  • Validity of Remote Desktop Server Client Access License

Closing words

You can run checks manually, but it is also possible (and recommended) to schedule the different health checks and get a summarized report daily or weekly at a designated time of day. The summary is mailed to the registered Citrix Cloud account, and to view more information you need to log on to the Smart Cloud website.

It is possible to view previous reports of the Smart Check runs and hide alerts that have been previously acknowledged:

Smart Check Health Alerts

Smart Check – Health Check Runs History

Under Site Details you can view components or add new ones. If needed it is also possible to Edit Site Credentials, Sync Site Data or Delete the Site:

Smart Check - Site Details

Smart Check – Site Details

Smart Check supports sites both on-prem and in the Citrix Cloud environment.
It is easy to set up and brings a great deal of value. You should try it out! Let me know how it went in the comments down below.

Smart Tools contains Smart Check and Smart Scale. Smart Scale helps reduce your XenApp and XenDesktop resource costs on Azure Cloud, but that will be covered in another post.

Source: https://docs.citrix.com/en-us/smart-tools/whats-new.html



HOW TO: Configure BGP between Arista and Palo Alto using loopback-interfaces

In this example I will show you how to configure BGP between Arista and Palo Alto. The setup has two Arista core switches configured with MLAG and a Palo Alto Networks firewall.

The goal is to use iBGP between the Arista-switches and eBGP between the Arista-switches and Palo Alto.

We will also be using a specific VRF in this example; if you have more than one VRF, the same configuration method can be applied again.

We will assume that all linknet interfaces are already configured on each device.

The topology is shown below.

Start by adding your route distinguisher and activating routing for your VRF on the Arista switches.
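Roughly like this; the VRF name, RD and all addresses in this post are illustrative, and the exact syntax varies slightly between EOS versions:

```
vrf definition vrf-01
   rd 65001:1
!
ip routing vrf vrf-01
```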

Configure the loopback-interfaces and create static routes between them.
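On arista-core1 this could look like the sketch below (loopbacks 10.255.0.1/.2/.3 for core1/core2/pa-fw01 and /31 linknets are assumptions for illustration):

```
interface Loopback1
   vrf forwarding vrf-01
   ip address 10.255.0.1/32
!
! Loopbacks of arista-core2 and pa-fw01, reached via the linknets
ip route vrf vrf-01 10.255.0.2/32 10.0.0.5
ip route vrf vrf-01 10.255.0.3/32 10.0.0.1
```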

Next we will configure BGP on both Arista switches. Both switches will have the same BGP router ID but will be distinguished by "local-as". In this example we also redistribute connected and static routes; adjust this depending on your needs.
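A sketch for arista-core1; AS numbers and addresses are illustrative, and ebgp-multihop is needed because the eBGP session runs between loopbacks rather than the directly connected linknet:

```
router bgp 65001
   router-id 10.255.0.1
   vrf vrf-01
      local-as 65001
      ! iBGP to arista-core2 and eBGP to pa-fw01, both via the loopbacks
      neighbor 10.255.0.2 remote-as 65001
      neighbor 10.255.0.2 update-source Loopback1
      neighbor 10.255.0.3 remote-as 65100
      neighbor 10.255.0.3 update-source Loopback1
      neighbor 10.255.0.3 ebgp-multihop 2
      redistribute connected
      redistribute static
```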

Verify that the neighbor Arista switch is in the established state with the command below.
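On either switch, the neighbor should be listed with state Estab:

```
show ip bgp summary vrf vrf-01
```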

Next we will configure the Palo Alto-firewall with BGP. For simplicity we will call the Virtual Router “vrf-01” here as well.

Start by creating your loopback-interface.
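From the PAN-OS CLI this corresponds roughly to the following (interface unit, VR name and address are illustrative; the same can of course be done in the GUI under Network > Interfaces):

```
set network interface loopback units loopback.1 ip 10.255.0.3/32
set network virtual-router vrf-01 interface loopback.1
```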

Then create your static-routes and enable ECMP to be able to use both paths.
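As a sketch in CLI form, with illustrative route names and next-hops pointing at each switch's linknet address:

```
set network virtual-router vrf-01 routing-table ip static-route to-core1 destination 10.255.0.1/32 nexthop ip-address 10.0.0.0
set network virtual-router vrf-01 routing-table ip static-route to-core2 destination 10.255.0.2/32 nexthop ip-address 10.0.0.2
set network virtual-router vrf-01 ecmp enable yes
```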

Next we will create a redistribution profile to decide what routes will be redistributed. As on the Arista-switches we will redistribute connected and static routes.

As a final step we will configure BGP on the VR. This can be configured in several different ways depending on your needs and this example is kind of slim but enough to distribute the routes.

Verify that BGP is established to both arista-core1 and arista-core2 under the Virtual Router's runtime statistics:

You should see that both "peer-arista-core1" and "peer-arista-core2" are established.

Also verify the established neighbors (there should be two) on the Arista switches with the command below:
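For example, filtering the neighbor detail down to the state line:

```
show ip bgp neighbors vrf vrf-01 | include BGP state
```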

At this point, the only routes added by BGP should be the linknets that are not directly connected.

For example, on arista-core1:
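The BGP-learned routes can be listed with:

```
show ip route vrf vrf-01 bgp
```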

As seen in the topology, 10.0.0.2/31 sits between arista-core2 and pa-fw01, and arista-core1 routes this traffic via the linknet IP on arista-core2.

Feel free to send me any questions to petter.vikstrom@xenit.se or add your question in the comments.



Palo Alto introduces new feature to support Terminal Service (TS) Agent on Windows Server 2016

In the latest release of Palo Alto Networks Terminal Service Agent 8.1.1, we were introduced to a new feature where it is now supported to install the agent on Windows Server 2016.

This is a very welcome feature that a lot of us have been waiting for. There are no other features added to this version or the one before.

This release is also compatible with all the PAN-OS versions that Palo Alto Networks still support.

For more information see:

Where Can I Install the Terminal Service (TS) Agent?

Release Notes – Terminal Service Agent 8.1



Chrome – Certificate warning – Invalid Common Name

Users of Google Chrome version 58 (released March 2017) and later will receive a certificate warning when browsing to HTTPS sites if the certificate only uses the Common Name field and does not include any Subject Alternative Name (SAN) values. The deprecation of Common Name has been ignored for many years, during which the field was used exclusively. The Chrome developers finally had enough of the field that refuses to die: in Chrome 58 and later, the Common Name field is ignored entirely.


Chrome – Certificate warning – NET::ERR_CERT_COMMON_NAME_INVALID

One reason for this is to prevent homograph attacks, which exploit characters that are different but look similar. The lookalike characters can be used for phishing and other malicious purposes. For instance, the Latin letter "a" looks identical to the Cyrillic "а", but from a computer's point of view they are encoded as two entirely different letters. This allows domains to be registered that look just like legitimate domains.

Some organizations with an internal or private PKI have been issuing certificates with only the Common Name field. Many do not know that the Common Name field of an SSL certificate, which contains the domain name the certificate is valid for, was phased out via RFC nearly two decades ago (RFC 2818 was published in 2000). Instead, the SAN (Subject Alternative Name) field is the proper place to list the domain(s), and the rules that all publicly trusted certificate authorities must abide by have required the presence of a SAN since 2012.

Publicly trusted SSL certificates have supported both fields for years, ensuring maximum compatibility with all software, so you have nothing to worry about if your certificate came from a trusted CA like DigiCert.
Below is an example of a correctly issued certificate with both Common Name and Subject Alternative Name.


tech.xenit.se – Common Name


tech.xenit.se – Subject Alternative Name
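If you run your own PKI, a quick way to check how a SAN-bearing certificate looks is to issue a throwaway one and inspect it with openssl (a sketch requiring OpenSSL 1.1.1 or later; the hostname is just an example):

```shell
# Issue a self-signed test certificate that carries a SAN
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/key.pem -out /tmp/cert.pem \
  -days 1 -subj "/CN=tech.example.se" \
  -addext "subjectAltName=DNS:tech.example.se,DNS:www.tech.example.se"

# Inspect the SAN extension - this is the field Chrome 58+ actually validates
openssl x509 -in /tmp/cert.pem -noout -ext subjectAltName
```

The same `openssl x509 -noout -ext subjectAltName` inspection works on any certificate file, so you can use it to audit existing internal certificates for a missing SAN.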

RFC 2818 – Common Name deprecated by Google Chrome 58 and later

“RFC 2818 describes two methods to match a domain name against a certificate: using the available names within the subjectAlternativeName extension, or, in the absence of a SAN extension, falling back to the commonName.

…

The use of the subjectAlternativeName fields leaves it unambiguous whether a certificate is expressing a binding to an IP address or a domain name, and is fully defined in terms of its interaction with Name Constraints. However, the commonName is ambiguous, and because of this, support for it has been a source of security bugs in Chrome, the libraries it uses, and within the TLS ecosystem at large."

Source: https://developers.google.com/web/updates/2017/03/chrome-58-deprecations