Category: Citrix

FSLogix Profile Containers – Enkel och snabb profilhantering

FSLogix offers an interesting product called Profile Containers, which takes care of the headaches that profiles often cause in multi-user environments. Setting up a multi-user environment that offers a good user experience is complex today. One of the big challenges is the user's login time, since profile size is often a major factor. A lot of time goes into excluding as much as possible to keep the profile size down, and those exclusions have to be maintained, for example when new applications are introduced into the environment. These are also very standardized solutions that do not take each person's unique needs into account, which degrades the experience further.

In fact, Office 365 Containers, which I have written about previously, is a "light" version of Profile Containers that solves some of the biggest problems related to Office 365 in multi-user environments. Profile Containers works almost exactly like its lighter sibling, Office 365 Containers, which was created specifically to make some of the biggest advantages of Office 365 usable in a multi-user environment.

Just as with Office 365 Containers, a personal VHD file is created for each user, preferably placed on highly available storage. The VHD file is attached to the user's session and the entire profile becomes available to the system without anything being copied over, which is a major advantage. Whether the profile is 100 MB or 5 GB, attaching the VHD file to the session always takes the same amount of time, so the login time stays constant at around 15 seconds. We no longer need to create complex rules for what should be in the profile; the user can keep everything and thus retains all settings and data. Below you can see the difference between FSLogix and other methods of redirecting the profile.
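The agent is configured through a handful of registry values (or the matching GPO/ADMX settings). A minimal sketch, assuming the agent is already installed and `\\fileserver\profiles` is your highly available share – the share name is an example, and you should check the FSLogix documentation for the full list of values:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\FSLogix\Profiles]
"Enabled"=dword:00000001
"VHDLocations"="\\\\fileserver\\profiles"
```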

To read more about FSLogix Profile Containers and their other products, visit their official site.

If you want to know more about this product, do not hesitate to contact us for a more detailed description of how it can help you!

Print drivers and Microsoft Update KB3170455

Typically, users get their printers mapped by Group Policy or Group Policy Preferences. Especially in Citrix environments, users should not have the right to add their own printers or drivers that are not approved for multi-user environments. On July 12th 2016, Microsoft released a security update (KB3170455) to protect clients and print servers against Man-in-the-Middle (MITM) attacks. An updated version was then released on September 12th 2017.

Users could encounter the dialog boxes below if the driver did not meet Microsoft's requirement that drivers be packaged and signed with a certificate:

Scenario 1

For non-package-aware v3 printer drivers, the following warning message may be displayed when users try to connect to point-and-print printers:


Do you trust this printer?

Scenario 2

Package-aware drivers must be signed with a trusted certificate. The verification process checks whether all the files that are included in the driver are hashed in the catalog. If that verification fails, the driver is deemed untrustworthy. In this situation, driver installation is blocked, and the following warning message is displayed:


Connect to Printer

Even if you enabled Point and Print Restrictions in a GPO and specified which servers clients could get drivers from, users could encounter an installation prompt asking for administrator privileges.

For most printers this is not an issue, as long as there is an up-to-date driver that is compliant. However, some manufacturers do not always provide printer drivers that are both packaged and signed. The first thing you should do is update the driver to one that is both signed and packaged. Drivers from the manufacturer are usually signed according to Microsoft Windows Hardware Quality Labs (WHQL), but may not be packaged correctly, and users then get prompted for administrator credentials when the printer is added to the client computer or in the remote desktop session.

Since KB3170455 we need to enable Point and Print Restrictions and specify our print servers in the GPO. For most printers there are no issues, but since the update a couple of printers will not be pushed out by Group Policy Preferences, even though the print server is listed in the Point and Print GPO. Browsing the print share and trying to connect the printer manually results in the ”Do you trust this printer” pop-up, which then prompts for administrator credentials to install the driver. Looking at Print Management on the server in question shows that the problem printer drivers have a ”Packaged” status of False.


If you are pushing out printers via Group Policy or Group Policy Preferences and the drivers are of the non-packaged type, you will always get an install prompt that ignores the Point and Print GPO, causing the install to fail. A workaround is a registry edit on the print server – test and verify this before putting it into production:

  • HKLM\System\CurrentControlSet\Control\Print\Environments\Windows x64\Drivers\<…>\<Driver name>\PrinterDriverAttributes

Change the value from 0 to 1 and restart the Print Spooler service and/or reboot the server. The value for other print drivers may not be 1; to make this work, the value needs to be set to an odd number. For example, if the value is 4, change it to 5. Only make these changes if you have no other means of getting a valid driver or of swapping the printer. In RDS/Citrix environments you could pre-install the printer driver on the host, if that is viable and you only have a few session hosts.
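The same change can be scripted with `reg.exe`. A sketch, assuming the driver lives under the usual `Version-3` subkey (substitute your actual driver name, adjust the data to your existing value plus one, and test outside production first):

```
:: Set PrinterDriverAttributes to an odd number (here: 0 -> 1; if the current
:: value is 4, use /d 5 instead so existing attribute bits are kept).
reg add "HKLM\System\CurrentControlSet\Control\Print\Environments\Windows x64\Drivers\Version-3\<Driver name>" /v PrinterDriverAttributes /t REG_DWORD /d 1 /f

:: Restart the Print Spooler for the change to take effect.
net stop spooler && net start spooler
```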

Back in Print Management you will see that the Packaged status has changed to True, and the printer should deploy. If you can find packaged print drivers, use those – but some manufacturers have not bothered supplying them.


PrintManagement – Packaged True


RfWebUI idle timeout

There seems to be an issue with the idle timeout in RfWebUI (verified in NetScaler version 12.0), so I've created a workaround until it is solved.

It is all based on a JavaScript snippet that checks whether the user is logged on; if so, it starts a timer and logs the user out when the timer expires.

Change the parameter at the top, ”var timeout = xyz”, where xyz is the timeout in seconds. Because I wasn't able to insert this script only when the user is logged in (it always required a refresh), I chose to add a check that looks every five seconds for the cookie NSC_AAAC, which is created at logon and removed at logout. In this case we reset the timer if the mouse is moved, a page is loaded or a key is pressed. This can be changed based on your requirements (for example, remove document.onclick = resetTimer; if you don't want a click to reset the idle timer).
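The original script did not survive the migration of this post, but the idea it describes can be sketched as follows. The cookie check is a plain helper; the timeout value, the poll interval and the `/cgi/logout` path are assumptions to adjust for your environment:

```javascript
var timeout = 900; // idle timeout in seconds ("var timeout = xyz" from the text)

// Check whether a named cookie exists in a cookie string.
function hasCookie(cookieString, name) {
    return cookieString.split(';').some(function (part) {
        return part.trim().indexOf(name + '=') === 0;
    });
}

// Browser-only wiring; guarded so the helper above stays usable on its own.
if (typeof document !== 'undefined') {
    var idleSeconds = 0;

    function resetTimer() { idleSeconds = 0; }

    document.onmousemove = resetTimer;
    document.onkeypress = resetTimer;
    document.onclick = resetTimer; // remove this line if clicks should not reset the timer

    setInterval(function () {
        // Only count idle time while the logon cookie exists.
        if (!hasCookie(document.cookie, 'NSC_AAAC')) {
            idleSeconds = 0;
            return;
        }
        idleSeconds += 5;
        if (idleSeconds >= timeout) {
            window.location.href = '/cgi/logout'; // assumed logout path – adjust to your setup
        }
    }, 5000); // check every five seconds, as described above
}
```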

When using it in Netscaler, add it like this:

If you are using the NetScaler Web UI to create the rewrite, the action expression will look like this:
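Both the CLI commands and the GUI screenshot were lost when this post was migrated. As an illustration only, a rewrite that injects such a script into the logon pages generally has this shape – the action name, body size, insertion point and URL match are my assumptions, not the exact original:

```
add rewrite action rw_act_rfwebui_idle insert_before 'HTTP.RES.BODY(131072).AFTER_STR("</body>")' '"<script>/* idle-timeout script goes here */</script>"'
add rewrite policy rw_pol_rfwebui_idle 'HTTP.REQ.URL.PATH.CONTAINS("logon")' rw_act_rfwebui_idle
bind vpn vserver <vserver-name> -policy rw_pol_rfwebui_idle -priority 100 -type RESPONSE
```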


Remove ”Password 2” from RfWebUI


Update: it turns out the first method also removes a password field from the change-password dialog, which it shouldn't:

Original post:

Have you had an issue with RfWebUI where you need to remove the ”Password 2” field when, for example, using RADIUS as the primary (challenge-based) authentication source and LDAP as secondary?

As always, the great Sam Jacobs has the answer on Citrix Discussions.

If you don’t want to edit any files yourself or create a new theme, you can use a rewrite to do this for you (I’m editing style.css and not theme.css):
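The rewrite commands from the original post were lost in migration. As an illustration of the approach, a rewrite that appends a CSS rule to style.css might look like the sketch below – the action name and especially the `#passwd2` selector are hypothetical, so inspect the RfWebUI markup in your NetScaler version before using anything like this:

```
add rewrite action rw_act_hide_passwd2 insert_after 'HTTP.RES.BODY(131072)' '"#passwd2 { display: none; } /* hypothetical selector for the Password 2 field */"'
add rewrite policy rw_pol_hide_passwd2 'HTTP.REQ.URL.PATH.ENDSWITH("style.css")' rw_act_hide_passwd2
```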

It should now look as expected:

Publishing XenMobile Self-Help Portal via NetScaler AAA

In our deployments of XenMobile we always recommend that our customers use the Two Factor option for enrollment – requiring a username and password as well as a PIN created in the administration GUI of the appliance – for the added security. In larger organizations this can take a toll on the administrators, which is why there is a self-help portal for the users. This portal allows users to create their own PIN and also comes with the added benefit of allowing them to wipe or lock lost and stolen devices themselves, cutting down on the time between theft and action taken.

However, this comes with a drawback. There is no built-in way in XenMobile to enforce any kind of second factor for the login to this portal, which effectively renders the MFA for enroll useless. To prevent this, we can put NetScaler AAA ahead of the login to enforce a second factor for logon to the portal.

I’ll assume that you already have AAA set up, with an authentication profile we can use. In my examples, ours is called AAA-AUPL-EXT_OTP.

Starting with some standard boilerplate:
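The boilerplate block itself was lost in migration; it would typically just make sure the needed NetScaler features are on, something like:

```
enable ns feature AAA LB SSL REWRITE RESPONDER
```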

Add the traffic actions, policies as well as the form-fill action:
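The command block here did not survive migration either. As a sketch of what the traffic and form-fill pieces look like on the CLI – the object names are mine, and the field names posted by /zdm/login_xdm_uc.jsp should be verified against your XenMobile version:

```
add tm formSSOAction FSSO-ACT-XM-SHP -actionURL "/zdm/cxf/login" -userField login -passwdField password -ssoSuccessRule true
add tm trafficAction TRF-ACT-XM-SHP -SSO ON -formSSOAction FSSO-ACT-XM-SHP
add tm trafficPolicy TRF-POL-XM-SHP true TRF-ACT-XM-SHP
```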

/zdm/cxf/login is the actual path to where the logon method from /zdm/login_xdm_uc.jsp is posted to. You can view the logon script at and see this for yourself.

Depending on how your AAA is set up, we might also need an authorization policy. Here I do it via a policy label:

A few rewrites:

These are necessary since we need the correct referrer to be allowed to login. We also need to tell the client to move from to once we have performed the login.

Create a service group for your nodes:
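The original commands are gone, but creating the service group is standard fare, something along these lines (the port and protocol depend on how your XenMobile nodes are set up; 8443 is an assumption):

```
add serviceGroup SG-XM-SHP SSL
bind serviceGroup SG-XM-SHP <node-ip> 8443
```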

Finally, create a vServer and bind the policies:
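As with the other blocks, the exact commands were lost. A sketch of the vServer and its bindings, with placeholders for the VIP, certificate and the traffic policy created earlier:

```
add lb vserver LB-XM-SHP SSL <vip> 443 -authnProfile AAA-AUPL-EXT_OTP
bind lb vserver LB-XM-SHP SG-XM-SHP
bind lb vserver LB-XM-SHP -policyName <your-traffic-policy> -priority 100
bind ssl vserver LB-XM-SHP -certkeyName <certkey>
```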

Please note that this still leaves logon through unprotected, since you can still browse directly there, so you should take care not to expose these pages to the outside. One way, if using SSL offload, is to block it with a responder policy:
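A sketch of such a responder policy, assuming the unprotected logon lives under /zdm/login (verify the path in your deployment):

```
add responder policy RSP-POL-BLOCK-SHP 'HTTP.REQ.URL.PATH.STARTSWITH("/zdm/login")' DROP
bind lb vserver <mdm-lb-vserver> -policyName RSP-POL-BLOCK-SHP -priority 100 -type REQUEST
```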

Do note that this needs to be bound to your LB for MDM, not the vServer we set up above.

Office 365 with FSLogix in a multi-user environment

Since Microsoft is betting heavily on the cloud and Office 365, moving an on-prem Exchange to the cloud – Exchange Online in Office 365 – has long been a natural step to take advantage of the many benefits it offers. But in some cases it has meant a worse experience for end users. With Exchange now residing in Office 365 (Azure), response times have generally increased, in many cases by a factor of 10. This makes navigating between emails (with the preview pane enabled) sluggish; it can take a second or so for the newly selected email to appear. The Outlook experience becomes noticeably worse, and day-to-day productivity suffers.

Outlook Cached Mode

One solution to the problem has been to simply turn on Cached Mode and get rid of the latency issue entirely, which works very well for everyone with a ”fat client” and local disk space. In a multi-user environment it gets more complicated. To run Cached Mode in a multi-user environment such as Citrix or Microsoft Remote Desktop Services (RDS), you first have to be able to handle large amounts of data, since more and more users have very large mailboxes; profiles also become very large, which among other things affects login times negatively. To get around this, the Outlook cache (the *.OST file) is redirected to a storage area (usually a file server) that has capacity for it, so that it does not put load on the multi-user environment.

Many who arrived at this solution saw very good results in their tests: in a test group of, say, 20 people it works very well, and deployment into production follows. Unfortunately, it does not end there. What can be hard to predict is how much the constant indexing of the OST file loads the CPU, and how much network traffic the SMB protocol generates with such frequent updates of the OST file – not to mention what Windows Search does when searching your mailbox. All of this load now lands on the file server, which is rarely dimensioned for it the way an Exchange server is. So when the successful pilot goes into production with 100+ users, the result is even worse than before.

This is a big problem that many experience and have wanted a solution to for years, but there has never been one from Microsoft. This is where FSLogix Office 365 Containers comes into the picture.

FSLogix Office 365 Containers

In short, FSLogix is a company that took matters into its own hands. They have developed a product that solves all of the problems above very neatly. The installation procedure is simple: you install an agent on every server your users log on to, add the ADMX template to your Group Policy manager for control via GPO, and point out the file share where you want the cache files to be stored.

The agent will now automatically redirect all cached files to this share (regardless of what you define in Outlook). What FSLogix does is create a vDisk for each user, which is attached to the session at logon; this reduces network load considerably compared to SMB against a file server. They have also built intelligence around the OST file itself, which at its core is a database that is constantly updated. A kind of intermediate layer handles updates of the OST file more efficiently and quickly, so the CPU load becomes a fraction of the alternative. Another much-awaited piece of news from this spring is that it now also supports Windows Search, so your cached OST file is searchable in Outlook.
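Like the Profile Containers product, the agent is driven by GPO/registry settings. A minimal sketch of the Office 365 Container configuration, assuming `\\fileserver\o365cache` as the share – exact value names can vary between agent versions, so verify against the FSLogix documentation:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\FSLogix\ODFC]
"Enabled"=dword:00000001
"VHDLocations"="\\\\fileserver\\o365cache"
"IncludeOneDrive"=dword:00000001
```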

FSLogix is a very good complement to Exchange Online in multi-user environments!


The same license also gives you access to their OneDrive caching; it lives in the same vDisk as the Outlook cache, and since OneDrive is becoming a good product that more and more companies are starting to use, this is a very nice bonus.



If you want to read more about FSLogix Office 365 Containers, click here!

Prepopulate username with NetScalers RfWebUI

We’ve been seeing an issue with AAA in front of ADFS where credentials entered at the service provider (Office 365, for example) don’t prepopulate the username in the NetScaler login, which does work with pure ADFS. This isn’t the biggest issue, but it makes AAA annoying to use instead of pure ADFS. We were able to do this just fine with the cookie NSC_NAME (or even query-based) before, when not using RfWebUI. But because RfWebUI is the latest and greatest, as well as responsive, most people want to use it.

I’ve been looking into how to solve this using RfWebUI and may not have found the best solution in the world, but it works reliably and is easy to implement. A big thanks to Sam Jacobs, who helped me out with the JavaScript parts – I hadn’t worked with it before, so his help was crucial to tying up the issue.

The first thing I had to figure out was how to extract the username that Office 365 sends to ADFS. We can see the username in the query to ADFS as follows:

Note: It is URL Encoded which means the @ will be presented as %40.

The thing is that when using AAA, a new redirect is made directly inside NetScaler to the AAA vserver for authentication. I considered adding ”Set-Cookie: UserNameCookie=<email>” somewhere here, but rewrites don’t always work on internal redirects, and I would have had to add yet another redirect to an already long chain – which may cause issues for some browsers. What I did find was a cookie named NSC_TASS containing a long string of seemingly random letters, numbers and symbols. After some experimenting, I was able to decode it by first converting it from URL-encoded format and then from Base64. Doing this, I could see the original ADFS URL containing the query with the username. I then had to run the following to get the email/username in the correct format for the NetScaler login:

In other words, we do the following: grab the value of NSC_TASS and decode the URL encoding. After that, decode it using Base64. Then typecast it to an HTTP URL, grab the value of the query parameter username, and decode the URL encoding again (converting %40 to @ in my case).
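The NetScaler expression itself was lost from this post, but the decode chain is easy to illustrate. A JavaScript sketch of the same steps (the cookie value below is fabricated for the example, not real NSC_TASS contents):

```javascript
// Decode chain: URL-decode -> Base64-decode -> parse URL -> URL-decode the
// "username" query parameter. Mirrors the steps described above.
function usernameFromNscTass(cookieValue) {
    var base64 = decodeURIComponent(cookieValue); // undo the URL encoding
    var url = atob(base64);                       // undo the Base64 encoding
    var match = url.match(/[?&]username=([^&]*)/);
    return match ? decodeURIComponent(match[1]) : null; // %40 -> @
}

// Example: a fabricated cookie built the same way NetScaler would see it.
var fakeUrl = 'https://adfs.example.com/adfs/ls/?username=user%40example.com&wa=wsignin1.0';
var fakeCookie = encodeURIComponent(btoa(fakeUrl));
usernameFromNscTass(fakeCookie); // → "user@example.com"
```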

Now to the part where I needed some help actually inserting the username into the form. The solution works fine, but if you have a better way of doing it, please share! We had to add a small loop-and-wait to make sure the input field exists before the username can be inserted. The result looks like this:

We wait for the window to load, but for some reason that doesn’t mean the input field ”login” exists (yet), and that’s where the setInterval loop comes in. Without the loop it worked most of the time on computers, but rarely on phones. To make sure this only happens when being redirected, we verify that the cookie NSC_TASS exists and that the referrer length is greater than or equal to 1. After that we verify that the ”login” element has been created, insert the username/email, and move focus to the password input.
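The script itself did not survive migration, but based on the description above it can be sketched roughly like this – `UserNameCookie` and the `passwd` element id stand in for however your setup surfaces the decoded username and names the password field, so treat those details as assumptions:

```javascript
// Return the value of a named cookie from a cookie string, or null.
function cookieValue(cookieString, name) {
    var parts = cookieString.split(';');
    for (var i = 0; i < parts.length; i++) {
        var p = parts[i].trim();
        if (p.indexOf(name + '=') === 0) {
            return p.substring(name.length + 1);
        }
    }
    return null;
}

// Browser-only wiring; guarded so the helper stays usable on its own.
if (typeof window !== 'undefined' && typeof document !== 'undefined') {
    window.addEventListener('load', function () {
        // Only run when we arrived via a redirect with the NSC_TASS cookie set.
        if (cookieValue(document.cookie, 'NSC_TASS') === null) { return; }
        if (document.referrer.length < 1) { return; }

        // Assumed source of the decoded username – adapt to your setup.
        var username = cookieValue(document.cookie, 'UserNameCookie');
        if (!username) { return; }

        // The "login" field may not exist yet when load fires; poll until it does.
        var poll = setInterval(function () {
            var login = document.getElementById('login');
            if (login) {
                clearInterval(poll);
                login.value = decodeURIComponent(username);
                var passwd = document.getElementById('passwd'); // assumed id of the password field
                if (passwd) { passwd.focus(); }
            }
        }, 100);
    });
}
```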

Now it’s just a matter of using a rewrite to insert this:

If you are using the GUI, the rewrite part looks like this:

I hope this can help some people out there making their end users happier! If you find a way of doing this easier, please share!

Scout 3.0 released with XenApp 7.14

When the XenApp/XenDesktop 7.14 media became available on 2017-06-09, Scout v3.0 was included in the media.

Scout GUI


Scout is a tool that lets Citrix administrators easily collect diagnostics and log files from a Delivery Controller or VDA. The log files can then be uploaded to Citrix Insight Services (CIS) for analysis and health checks, and to get recommendations for fixes or improvements.

The tool is also used when contacting Citrix Support, to simplify troubleshooting and shorten the time it takes to resolve incidents or problems. At present, Scout v3.0 is not available via a separate download link the way Scout v2.23 is.

Scout v3.0 is only included in the media used for installing and upgrading XenApp/XenDesktop environments.

Worth noting is that the earlier version of Scout (2.x) supports:

  • XenApp 6.x
  • XenDesktop 5.x
  • XenApp/XenDesktop 7.1 upp till 7.14

Version 3.0 only supports XenApp/XenDesktop 7.14 and later.

If you for some reason deselected the Citrix Telemetry Service when installing the VDA, or removed the service, you can install it manually by running the installer found in the media under ”Citrix XenApp and XenDesktop 7.14.1\x64\Virtual Desktop Components\TelemetryServiceInstaller_x64.msi”.

The new version improves security, performance and the user experience.

Other improvements include:

  • Capture Always-on Traces (AOT)
    • AOT eliminates the need to reproduce problems, since the logon traces can be sent securely to Citrix with the tool.
  • Collection of unlimited diagnostics data (depending on available resources)
    • Earlier versions defaulted to 10 machines per collection.
  • Support for Citrix Cloud
  • Scheduling of Call Home
  • PowerShell Call Home cmdlets on all machines where the Telemetry Service is installed
    • Previously you had to use CMD on the local machine.

To read more about how to use the new Scout v3.0, visit the following link.


Configure Google Chrome in a multi-user environment

Installing and configuring Google Chrome in a multi-user environment can be anything but easy. More and more users are switching from Internet Explorer to a much more convenient browser, and they expect to use it at work too. In this post I will provide a short tutorial on how I usually install and configure Google Chrome for a seamless, popup-free experience for your end users.

Installing Google Chrome is a basic next-next-next installation using the MSI file provided here. The problem with configuring Chrome is that there are several ways to set different kinds of settings. Sometimes you can configure the same type of setting in several places, and sometimes there is only one place for a given setting. There are mainly three ways of configuring settings – policy-based (ADMX templates), master preferences, and flags on the shortcut that launches Chrome. I will cover the first two in this post. I always try to set as many settings as possible in a Group Policy Object (GPO) using the ADMX templates. Why? Because it is much easier to update a GPO than to update a file on each session host.

Google Chrome is an application that configures and does many things in the background. Do you really want all users to be prompted to check the default browser, get a first-run introduction and have shortcuts created on the desktop? Although this is a standard procedure that most users are familiar with, it is much more convenient (and enterprise-friendly) to get no popups at all. Below is what I usually add to the “master_preferences” file. I have not found a convenient way to see a full list of configurable settings, but this is the closest I have come.


notepad “C:\Program Files (x86)\Google\Chrome\Application\master_preferences”
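The contents of the file were lost when this post was migrated. As a reference point, a typical first-run-suppressing master_preferences looks something like the sketch below – these keys are documented for Chrome's master/initial preferences mechanism, but verify them against the version you deploy, and the homepage URL is just an example:

```json
{
  "homepage": "https://intranet.example.com",
  "homepage_is_newtabpage": false,
  "distribution": {
    "suppress_first_run_default_browser_prompt": true,
    "do_not_create_desktop_shortcut": true,
    "do_not_create_quick_launch_shortcut": true,
    "do_not_launch_chrome": true,
    "do_not_register_for_update_launch": true,
    "make_chrome_default": false,
    "make_chrome_default_for_user": false,
    "suppress_first_run_bubble": true
  },
  "sync_promo": {
    "show_on_first_run_allowed": false
  },
  "first_run_tabs": []
}
```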


After installing Google Chrome and adding the “master_preferences” file, I usually proceed by downloading the ADMX templates from here. Download and install the ADMX templates in your central store. Browsing through the settings, you should notice three things.

  1. All settings are applied to the computer, meaning that the configured settings do not affect user login time.
  2. There are two folders of policies. In “Google Chrome – Default Settings” the user may override all the configured defaults. In the other folder the user may not override the configured defaults.
  3. The settings we configured using “master_preferences” are not overridable.


Google Chrome GPO

Please browse through each setting in the group policy and configure the settings to your liking.

Google Chrome saves a lot of settings and files in the user profile. If you are using roaming profiles, the profiles will soon begin to fill up and users will get longer login times. There are two different approaches we can take. If your roaming profile system allows you to include and exclude files and directories, you may use the first one below.


Include files:
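The original list was lost in migration. As an illustration, with a profile solution such as Citrix Profile Management the idea is to exclude the bulky `User Data` directory and sync back only the small files that matter – the paths below are examples, so adapt them to your environment:

```
Exclusion list – directories:
  AppData\Local\Google\Chrome\User Data

Files to synchronize:
  AppData\Local\Google\Chrome\User Data\Default\Bookmarks
  AppData\Local\Google\Chrome\User Data\Default\Preferences
  AppData\Local\Google\Chrome\User Data\Default\History
```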


The second approach is to configure the policy “Set user data directory” to your home directory. I prefer the second one because it is much easier to manage.
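The policy accepts path variables, so a redirected user data directory can look like the following sketch – the file server path is an example, while `${user_name}` is one of the variables Chrome's policy templates document:

```
Set user data directory: \\fileserver\home\${user_name}\ChromeData
```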

Google Chrome GPO User data directory

DirectAccess with Teredo Protocol requires ICMP traffic to be allowed

With Microsoft DirectAccess (DA) you have three different protocols you can utilize: 6to4 and Teredo are the primary ones, and IP-HTTPS is the fallback if both primary ones fail or are not configured correctly (6to4 is attempted before Teredo). One thing that is different with the Teredo protocol is that the DirectAccess server sends a one-time ping (ICMP) to any DA-configured address that the DirectAccess client tries to connect to (link).

Example of this with NetScaler load balancing:
In a real-life case, a customer was load balancing internal websites on a NetScaler and wanted these websites to be accessible through DA, but it was not working for unknown reasons – until we figured out the ICMP requirement and added an exception for it on the NetScaler. By default, when we configure a NetScaler, we set it up to block most ICMP traffic.

Below is an example access control list (ACL) on NetScaler that will allow ICMP traffic (by default a NetScaler has no ACLs configured and allows all traffic, in which case this is not needed). Replace ”” with the IP of your DA server, and ensure the ACL priority is a lower value than your deny ACLs (if you have any).
add ns acl A-DA-ANY_ICMP ALLOW -sourceIP = -protocol ICMP -priority 100
apply ns acls

The ICMP message is of Type 8, Code 0, which you can specify in the ACL if you want a more specific rule. Picture taken from a Wireshark trace:
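To restrict the rule to exactly that message type, the ACL above can be tightened along these lines (same placeholder for the DA server IP as before):

```
add ns acl A-DA-ICMP_ECHO ALLOW -sourceIP = -protocol ICMP -icmpType 8 -icmpCode 0 -priority 100
apply ns acls
```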

In case you are not fronting your internal servers with a load balancer (such as Netscaler) and you are using Windows Firewall on your internal hosts, you might need to allow ICMP traffic in your Windows Firewall for these hosts.

Is Teredo used in my environment/scenario?
One way to check whether Teredo is being utilized in your DA setup is to log on to your DA server and run the command below. If a Teredo interface shows up, it is at least configured, and you can then check how much traffic, if any, is passing through it (‘Bytes In’, ‘Bytes Out’). As we see in this picture, Teredo is being utilized.
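The command from the original screenshot was lost in migration. One built-in command that at least shows whether Teredo is configured on the server is the netsh query below; the per-interface byte counters from the screenshot came from a different view, so treat this as a partial substitute:

```
netsh interface teredo show state
```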

Disabling 6to4 or Teredo can be done either on the DA server or on the DA clients by disabling the relevant network interfaces (Teredo, for example), or by preventing the DA clients from using specific protocols via the DirectAccess GPO settings (see the link further down).

To get a better understanding of the three possible protocols with DirectAccess, and their advantages and disadvantages, I recommend the blog posts below:
Richard Hicks: DirectAccess IPv6 Transition Protocols Explained
Richard Hicks: Disable 6to4 IPv6 Transition Protocol for DirectAccess Clients
Microsoft Technet: DirectAccess and Teredo Adapter Behaviour

If you have any questions or feedback on above content, feel free to email me at