Category: Microsoft

Deploy separate Intune workloads to different collections (Co-management)

I was looking for a way to deploy a Co-management policy with only the Windows Update policies workload to a specific collection, in order to transition a smaller number of computers (that are not members of the already existing Pilot group) to being controlled via Intune instead. In the Configuration Manager console I was not able to create multiple Co-management policies, so I thought this was not possible. But then I found a great article describing the exact scenario I had, so I went ahead and tried it in my environment, and it worked like a charm.

All credit goes to Cody Mathis and his original article about this topic.

Co-management – Multiple Pilot Policies


So what do I need to do to make this possible?

We need to use Powershell to create a new Co-management policy with the cmdlet New-CMCoManagementPolicy. We can then rename and deploy the policy to whatever collection we want. Isn’t that awesome?

In this example we will create a policy with the WufbWorkloadEnabled parameter, which will move only the Windows Update policies workload to Intune for the specific collection of our choice.

Other workloads can be set by using the following parameters.

  • CAWorkloadEnabled = Compliance policies
  • RAWorkloadEnabled = Resource access policies
  • WufbWorkloadEnabled = Windows Updates Policies
  • EPWorkloadEnabled = Endpoint Protection
  • Office Click-to-Run apps = Doesn't have its own parameter, so you need to configure that via an XML instead. This is very well described in Cody's article (link above), so I won't cover it in this post.

Start Powershell from within the console and run the following commands (please note that the commands differ depending on the version you are running):
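Roughly like this (a sketch only – the cmdlet's parameters differ between ConfigMgr versions, so verify the exact commands against Cody's article first):

# Sketch: create a co-management policy with only the Windows Update workload enabled
$Policy = New-CMCoManagementPolicy -AutoEnroll $true -CAWorkloadEnabled $false -RAWorkloadEnabled $false -EPWorkloadEnabled $false -WufbWorkloadEnabled $true
# The policy can then be renamed (e.g. to CoMgmtSettingsPilot-WUFB) and deployed to
# the collection of your choice - Cody's article shows the exact rename and
# deployment commands for each version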


If done correctly, the policy should now be deployed to the collection you defined in the commands above, and you should see it as in the picture below.

On the computer you can now see that the new Co-management policy (CoMgmtSettingsPilot-WUFB) has been applied in the Configurations tab (control smscfgrc). Please note that you may see multiple CoMgmtSettings entries depending on your configuration.

We can also see that the Intune policies have been applied to the computer (Settings > Update & Security > View configured update policies > Policies set on your device).


If you have any questions, feel free to email me at tobias.sandberg@xenit.se or comment down below. I will try to answer you as soon as possible.


Other articles about Configuration Manager and Intune:

Move Software Updates to Intune with Co-management

Device cleanup rules for Microsoft Intune

Intune – Administrative Templates (Preview) are here

App Protection Policies for managed and unmanaged devices in Intune

 



Windows 7 license key is “not genuine” and activation fails after installing KB971033

INTRODUCTION

After installing the KB971033 update, some clients have an issue where the KMS license key is reported as not genuine. This is a known issue for Microsoft; you can find more information here: https://support.microsoft.com/en-us/help/4480970/windows-7-update-kb4480970

SOLUTION

Microsoft's solution for being able to activate Windows again is to uninstall the patch, rebuild the activation-related files and then activate Windows.

  1. Start by uninstalling the patch: go to Control Panel > Windows Update > View update history > Installed Updates, right-click the update (KB971033), and select Uninstall.
  2. Restart the computer.
  3. Now that the patch is no longer installed, we can rebuild the activation-related files and activate Windows. Start CMD as administrator and run the following commands:

net stop sppuinotify
sc config sppuinotify start= disabled
net stop sppsvc
del %windir%\system32\7B296FB0-376B-497e-B012-9C450E1B7327-5P-0.C7483456-A289-439d-8115-601632D005A0 /ah
del %windir%\system32\7B296FB0-376B-497e-B012-9C450E1B7327-5P-1.C7483456-A289-439d-8115-601632D005A0 /ah
del %windir%\ServiceProfiles\NetworkService\AppData\Roaming\Microsoft\SoftwareProtectionPlatform\tokens.dat
del %windir%\ServiceProfiles\NetworkService\AppData\Roaming\Microsoft\SoftwareProtectionPlatform\cache\cache.dat
net start sppsvc
cscript c:\windows\system32\slmgr.vbs /ipk <edition-specific KMS client key>
cscript c:\windows\system32\slmgr.vbs /ato
sc config sppuinotify start= demand

 

You can find the KMS client setup keys at the following link: https://docs.microsoft.com/sv-se/windows-server/get-started/kmsclientkeys

 



Citrix Virtual Apps and Desktops 1903

Citrix announced their new release, Virtual Apps and Desktops 1903, on the 28th of March, and it contains a lot of interesting changes in all categories along with a long list of fixed issues. I will cover two of the changes I found particularly interesting in this blog post, and I would recommend you look into them as well!

Director

Citrix Director has been given some love and has received a few changes in the user interface. It has also been announced that similar changes to improve the user experience are to be expected in the coming releases.

A profile processing duration counter has also been added to the logon duration chart, to make troubleshooting of profile-related matters easier.

Virtual Delivery Agent

DPI matching on Windows Server 2016/2019, which allows your session to match your client's DPI. Requires a minimum version of Citrix Workspace App on your client.

Pen functionality support with Windows Ink-based applications on Microsoft Surface products. Requires Windows 10 and Citrix Workspace App 1902 at a minimum.

Deprecation and removal

With change comes deprecation, and Virtual Apps and Desktops release 1903 is no exception. In this release Citrix announced or removed the following components:

  • Announced in 1903 – To be removed
    • Smart Check for Virtual Apps and Desktops
  • Removed in 1903
    • Linux VDA – Support on Red Hat Enterprise Linux/CentOS 7.5
    • Citrix Receiver for Web classic experience
    • Support for Framehawk – Also removed option to enable from VDA installation
    • Delivery Controller options for end-of-life products (VDI-in-a-Box, and XenMobile < 9.0)

A full list of changes can be found here:
https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/whats-new.html

If you have any questions regarding Citrix Virtual Apps and Desktops, feel free to email me at robert.skyllberg@xenit.se or comment down below.



Microsoft Teams devices

So maybe you’ve read my article on Microsoft Teams Rooms? Those solutions are just one part of Teams devices, which offer smarter ways to connect and work together in the ever-changing workplace.

First of all, Teams devices are certified to work with Teams (and Skype for Business, for that matter). They also offer the best-in-class performance and the crisp sound and picture that the certification requires.

Room Systems – check this article out.

Room phones – these are for smaller rooms which don’t need a complete Room System. These devices actually run Android and have the Teams client installed, so essentially the device is logged in as the room itself. This way you can quickly book the room and join the meeting from the room phone. You don’t have to log in with your personal credentials; it can be a shared room account which is always logged on. Here’s a sneak peek at how it looks:

Personal devices – these devices are your personal ones. For example the Jabra 710, which has a Teams button/LED that flashes if you have a missed call; when you press it, it takes you to the missed calls list in the Teams client.

Desk phones are still used by many. The left one below, for example, is the Plantronics Elara 60, which is a mobile dock. Just put your mobile phone in the dock for wireless charging and it will pair itself with the dock. You get hard buttons for calling, and also a Teams button which flashes if you have missed calls in Teams, brings you to the missed calls list on your mobile phone and reminds you when you have meetings.

The right one is a Yealink phone with a large touchscreen, running Android and the Teams app. This means you can easily make and receive Teams calls directly on the phone. You can have it as a companion to your computer, with your daily meeting schedule open on the device at all times. For the IT pro, this also means you will be able to manage these phones from the Teams admin center, since the device itself is enrolled into Azure AD as Azure AD registered.

And of course there are the headsets, which come in various models and sizes. At Xenit we use Jabra, which has a large portfolio of different models.

But seriously, what’s wrong with any high-quality Bluetooth headset out there, won’t it work? Well, to be honest – it might. My personal experience is that you can definitely pair your headset with your phone and Windows 10 client. You might miss out on some special functionality like the busy-light or call control, and you might not get the crisp sound quality you otherwise would, because the built-in Bluetooth in some laptops is simply not manufactured with sound quality in mind. When I tried a high-quality Jabra Bluetooth headset with the built-in Bluetooth in my laptop, it did not work well: it worked 9 times out of 10, but I experienced some unplanned disconnections during meetings which I didn’t get with the Jabra dongle. That’s sad, since the USB dongle really annoys me.

So before you go shopping, make sure you check out the list of certified devices at http://office.com/teamsdevices.



What is FSLogix Cloud Cache?

Background

Last year FSLogix released its award-winning (at Citrix Synergy) technology Cloud Cache, and I for one was very curious about what this meant and what I could use it for. The fact that it was included in the license for Office 365 Container and Profile Container was a really nice surprise, but I was somewhat confused about what it actually does. I mean, has FSLogix developed their own cloud service? It sure sounds like it; that was however not the case. In short, this is a technology that makes your profiles or Outlook cache easily available cross-platform, with a kind of built-in high availability, so you don’t have to load-balance or create a fail-over file cluster. There are some things you should take into consideration before implementing this in your environment, but first let me explain what Cloud Cache really is and what the intended benefits are!

What is Cloud Cache, really?

As I mentioned, you might think that it has something to do with the cloud, or cloud services. That’s wrong, at least regarding the technology itself. Cloud Cache primarily contains 3 features:

  1. Automatic Replication
  2. Cache of “hot” data from your container
  3. Use of Azure blob storage as VHD location

Automatic Replication

Before Cloud Cache you could set multiple paths for the VHD files in FSLogix, and it would automatically check the second path specified if the first was unavailable. The problem was that you needed to set up the replication between the two file locations yourself, which was complicated since the VHD disks are locked during use. It was also hard to do an incremental copy, since the changed data resides within the VHD file, so the replication could potentially take a lot of time and load the network considerably.

With Cloud Cache that issue is solved; it is now built into the product, which automatically copies the data between the two locations. The pretty neat part of the solution is that the replication begins when the user logs on to their environment and copies the incremental part of the container, since it’s now open, and it all happens automatically. As you can figure out, this is also a great way of migrating your containers to a new location: just add a new location, wait a couple of days and then remove the old path. Really smooth, no hassle, no downtime, no late-night service windows.
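Configuration-wise, you point FSLogix at the locations with the CCDLocations registry value instead of VHDLocations. Something like this (a sketch with example paths; remove VHDLocations when you switch over):

# Example only - replace the UNC paths with your own file shares
$RegPath = 'HKLM:\SOFTWARE\FSLogix\Profiles'
$Locations = 'type=smb,connectionString=\\FileServer1\Profiles;type=smb,connectionString=\\FileServer2\Profiles'
Set-ItemProperty -Path $RegPath -Name 'CCDLocations' -Value $Locations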

Cache of hot data from your container

It’s known that FSLogix solves the high CPU issue (on the file server) you would normally see if you redirected the OST file to a file share, but it still demands quite fast disks and causes some network load. With FSLogix Cloud Cache you are now able to place your containers in Microsoft Azure, which is cool, but there are two fundamental issues with this approach: 1. Azure bills on consumption, and 2. there is high latency when accessing the data. FSLogix has solved this by caching the hottest data from the containers on the actual server/client you reside on. This minimizes the cost in Azure and the load on the network, and is ideal if you use your FSLogix container on different platforms (on your client and in a VDI solution) or in a VDI environment where the cache is kept and not downloaded again.

Client profile management

Before Cloud Cache, if you wanted to manage the profiles of clients with FSLogix you would have some issues, since it requires the client to be online all the time. Fortunately, with Cloud Cache you are no longer affected by offline sessions: the client continues with the cached data, and as soon as it is online again it updates the original VHD with the changes that happened offline.

 

What to consider before using Cloud Cache

Now that you know what Cloud Cache is and what makes it good, you should also know what to consider in some scenarios. The first thing to consider is the cached data: how much will it cache? That is a good question, and a question I have not yet received an answer to. From what I have gathered this cannot be specified, meaning you cannot control the amount of data it caches, and therefore you cannot control the size of the cached data on, for example, a Citrix server. In some environments this can be a really risky approach. Below are some examples of when you really need to weigh the value against the risk regarding Cloud Cache:

Citrix Provisioning Services with Citrix Virtual Apps and Desktop

When using Cloud Cache in this setup you will have issues. The cache is supposed to be persistent on the location where you are, which it will not be when using PVS with Citrix Virtual Apps and Desktops. In this setup your cache will be downloaded every time you log on to Citrix, and if you are also using “Cache on RAM with overflow on disk” you can potentially fill your page file disk.

Citrix Virtual Apps and Desktop

You need to be sure how to set it up: the C: drive must be large enough to handle the amount of cached data every user will download, and you must set “Delete Cache on logoff”, otherwise one user can potentially download his/her cache to multiple Citrix servers during logoffs and logons. That also means your users will download the cached data every time they log on, which might not be the best experience you had in mind when implementing the solution. There is however a solution to this: you can redirect the cached data to another server, but if you do, it is highly recommended to place it on fast disks and in a high-availability setup.

 

Summary

All in all, this is a really nice feature and adds a lot to the product. But you need to assess it before activating Cloud Cache to see if it is suitable for you and your environment. In the right scenario this could really improve the experience for your users and your IT department. If you are curious about the product, please don’t hesitate to contact me at jonas.agblad@xenit.se, or leave a comment below!

 

You can also find more information about FSLogix in my previous posts here:

Convert Citrix UPM to FSLogix Profile Containers

Teams in your multi-user environment done right!

Outlook Search index with FSLogix – Swedish

FSLogix Profile Container – Easy and fast Profile management – Swedish

Office 365 with FSLogix in a Multi-user environment – Swedish

 

 



Microsoft Teams Rooms for modern meetings

How easy is it at your company to start a Teams or Skype meeting online in your conference room without technical difficulties? Maybe you have a very large (and expensive) video conference system in your board room, but you wish you could also equip the smaller huddle rooms with such systems? Then you should look into Microsoft Teams Rooms, which is the new name for Skype Room Systems.

You can’t argue with the trend of moving to a more modern and mobile workplace. In a few years, more and more employees will probably not be stationed at a certain office or desk. This requires better tools and services, and a big part of this is digital meetings. During the last 3 years we have seen massive growth, installing more video conference rooms than in the last 30 years, and we have seen a switch from proprietary (and expensive) solutions to standardized and more affordable systems, so even the smallest huddle room can get one…

In its simplest form, you book the room in Outlook as you have done for years, and you choose whether it should be a Teams or a Skype meeting:

When you enter the conference room, the control unit on the table lights up and shows you the upcoming meetings:

All you have to do is click Join on your meeting, and within a few seconds the meeting is started and all participants are joined, no matter if they join via the Teams/Skype client, the web client, the app on their phone, or have dialed in to the number in the invitation. You see the participants on the control unit and on the big screen at the front of the room, and of course their video if they share it. From the control unit you can mute/unmute and instantly add participants to the meeting from the directory or call them.

Want to share your screen? Simple, just plug the HDMI cable into your laptop and it will output to the big screen, but also share it in the meeting with the remote participants. Of course, remote participants can also share their screen in the meeting.

It’s the simplicity – one click to join and the meeting is started. You no longer need to be a technician to get a meeting started, choosing the correct input on the big screen or the right speaker and mic.

Microsoft Teams Rooms come from different partners (Logitech, HP, Lenovo, Crestron, Polycom, Yealink) which have certified systems in different sizes – from the smallest 4-person huddle room to the largest boardroom. A few examples:

Xenit has used Skype Room Systems for a long time and we are extremely happy with how they work.

So what about the tech and for IT?

Compared to other proprietary systems, Microsoft Teams Rooms runs on Windows 10 with a Windows app. This means you can use your current tools for deploying and managing it as you would for any other Windows client, except that you need to make sure not all policies apply to the system. On-premises AD join, Azure AD join and Workgroup are all supported. The app itself, which only installs on certified devices (so you can’t do this DIY), is automatically updated through the Windows Store. So for us at Xenit, there has been almost no support needed for this system since it was first set up – except for some occasional hardware issues where someone was “smart” enough to disconnect the HDMI cabling to connect it directly to their laptop.

Of course, Microsoft has done some work to cloud-enable these devices if you want.

For example, you can use Azure OMS (Operations Management Suite) to monitor these devices, since they log a lot of information to the event log. Among other things, you can get information regarding:

  • Active / Inactive Devices
  • Devices which experienced hardware / applications issues (disconnected cables anyone?)
  • Application versions installed
  • Devices where the application had to be restarted

All of this can be alerted upon, so you can hopefully solve problems before someone calls them in.

In a few months, Microsoft Teams Rooms will light up in the Teams admin center for additional functionality. For example, if you enroll many of these devices, the admin center will enable you to enroll them more quickly with a profile containing the settings you want. It will also make inventory management, updates, monitoring and reporting easier.

Here’s a short demo:

Let us know if you want to discuss or even get a personal demo at our office.



Easily analyse your memory dumps

Recently I stumbled upon a great application for debugging your system while trying to examine a memory dump. The application is named WinDbg Preview; it is distributed by Microsoft themselves and serves several purposes for debugging Windows operating systems.

WinDbg Preview is a modernized version of WinDbg and extremely easy to use! With WinDbg Preview you can for example do the following:

  • Debug executables
  • Debug dump and trace files
  • Debug app packages
  • Debug scripts

WinDbg Preview

In my use case I wanted to quickly analyse a memory dump file which had been generated. A minute and about five clicks later I had an analysis (the classic !analyze -v) which gave me all the information I needed. I was also told which commands to use as I went, without having to think.

Attaching memory dump file

Analysis result

WinDbg Preview is available from the Windows Store, and you can read more about it here.

If you have any questions, feel free to email me at robert.skyllberg@xenit.se or comment down below.



Changing default ADFS Decrypt/Signing Certificate lifetime from 1 year to X years

ADFS 2.0 and above versions have a feature called AutoCertificateRollover that automatically updates the Decrypt and Signing certificates in ADFS; by default these certificates have a lifetime of 1 year. If you have federations (Relying Party Trusts) configured and the Service Provider (SP) is not using the ADFS metadata file to keep their configuration updated when ADFS changes occur, then the ADFS administrator will have to notify these Service Providers of the new Decrypt/Signing certificate thumbprints each time the ADFS servers automatically renew the certificates.

To minimize the frequency of the above task, you can change the default lifetime of the Decrypt and Signing certificates so that you only have to do it every X years instead of every year.

Below is the ADFS 3.0 Powershell configuration you can run to change the default lifetime to 5 years.
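Something along these lines (the duration is set in days, so 5 years is roughly 1827 days):

# Set the certificate lifetime to 5 years
Set-AdfsProperties -CertificateDuration 1827
# Generate new Decrypt/Signing certificates with the new lifetime. Without the
# -Urgent switch they are added as Secondary certificates and activated later
Update-AdfsCertificate -CertificateType Token-Decrypting
Update-AdfsCertificate -CertificateType Token-Signing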

 

See below for how it should look with new Secondary certificates created with a lifetime of 5 years. When the date 3/23/2019 is reached, the ADFS server will automatically activate the (currently) Secondary certificates and update its metadata file accordingly. For any federations that do not use the ADFS metadata file, the SPs will have to update the decrypt/signing certificate thumbprints on their side on this particular date (and at a specific hour, to minimize any downtime of the federation trust).

If you have any questions or comments on above, feel free to leave a message here or email me directly at rasmus.kindberg@xenit.se.

 



Simplify removal of distributed content with the help of Powershell

Begin

TLDR; Go to the Process block.

Ever since I was first introduced to Powershell, I have tried to come up with ways to include, facilitate and apply it in my everyday tasks. But for me, using Powershell in combination with SCCM has never been the ultimate combination: the built-in cmdlets don’t always do it for me, and the GUI is most of the time easier to understand.

So when I got a request to simplify removal of distributed content on all distribution points or all distribution point groups, it left me with two options: to create a script that did the desired job, or to create a function that would cover all the possible scenarios. So I thought, “Why don’t I take matters into my own hands and create what I actually desire?” That is why I created a script that helps you find the content you want removed, and then removes the distributed content from every Distribution Point or Distribution Point Group.

Let’s say that you have 10 Distribution Points, you have distributed content to 5 out of 10, and you have not been using a Distribution Point Group. The way to go would then be to repeat the removal steps in the console for each Distribution Point (roughly: open the package’s properties, open the Content Locations tab, select the Distribution Point and click Remove).


Doing these steps for every distribution point would just take forever. Using one Distribution Point Group would of course be more effective and the ideal way to go, but what if you have distributed the content to multiple Distribution Point Groups? That has already been thought of, and that is why this script was created. Even if you have distributed content to some distribution points and some distribution point groups, it will all be removed.

Process

But how does it work? In this demonstration, I have two packages distributed with similar names. One of them has been sent to a Distribution Point Group, and the other one to 2 Distribution Points. I would like to have both of them removed from whatever they have been distributed to.
1. Start by launching Powershell, and import the script by running “. .\Remove-CMAllSiteContent.ps1”

2. Run the script with the required parameters. As shown in the picture below, I searched for ‘TestCM’, which returned multiple results. The search is done with wildcards, so everything similar to the stated PackageName will be found. All parameters are described in more detail in the script’s comment-based help.

  • The search can be done with either the parameter -PackageName or -PackageID.
  • The parameter -PackageName searches with wildcards both at the beginning and at the end of the stated name. This should be used when you are not sure of the PackageID, or want to remove multiple packages.
  • The parameter -PackageID is the unique ID of the specific package you want to remove from the distribution point(s) or group(s). This should be used when you are sure of what you would like to remove.
  • The parameter -CMSiteCode is mandatory and must be specified.

3. In this case, I would like to remove both of the displayed packages, so I choose 0 for ‘All’, followed by a confirmation (Y/N, not case sensitive).

4. After it has been confirmed, the script will check the following:

  • Whether the content is distributed to Distribution Point Group(s) as an Application,
  • If not, whether it is distributed to Distribution Point Group(s) as a Package,
  • If neither, whether the content is distributed to each Distribution Point as an Application,
  • If not, whether the content is distributed to each Distribution Point as a Package.

At the beginning of the script, the content is validated as distributed; if it isn’t, it will not be shown. The four steps above cover all distribution scenarios, roughly as in the sketch below.
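Under the hood, the removal itself boils down to Remove-CMContentDistribution with different parameter sets, roughly like this (a simplified sketch with example names; the real script wraps these calls in the search, menu and validation):

# 1-2. Remove from Distribution Point Group(s), as Application or as Package
Remove-CMContentDistribution -ApplicationName 'TestCM-App' -DistributionPointGroupName 'All DPs' -Force
Remove-CMContentDistribution -PackageId 'ABC00123' -DistributionPointGroupName 'All DPs' -Force
# 3-4. Otherwise, remove from each Distribution Point individually
Remove-CMContentDistribution -ApplicationName 'TestCM-App' -DistributionPointName 'DP01.contoso.com' -Force
Remove-CMContentDistribution -PackageId 'ABC00123' -DistributionPointName 'DP01.contoso.com' -Force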

5. When finished, we can see that the distributed content has been successfully removed.

Please read the comment-based help to get a better understanding of what is actually running in the background.

End

This can of course be modified with more choices in every step, but at the moment I did not see the need for it.

If anyone has any questions or just wants to discuss their point of view regarding this post, I would be more than happy to have a dialogue. Please email me at johan.nilsson@xenit.se or comment below.



Querying Microsoft Graph with Powershell, the easy way

Microsoft Graph is a very powerful tool for querying organization data, and it’s also really easy to do using Graph Explorer, but Graph Explorer isn’t built for automation.
While the concept I’m presenting in this blogpost isn’t entirely new, I believe my take on it is more elegant and efficient than what I’ve seen other people use.

So, what am I bringing to the table?

  • Zero dependencies on Azure modules – .NET Core & Linux compatibility!
  • Recursive/paging processing of Graph data (without the need for FollowRelLink, which is currently only available in Powershell 6.0)
  • Authenticates using an Azure AD Application/service principal
  • REST compatible (Get/Put/Post/Patch/Delete)
  • Supports JSON batch jobs
  • Supports automatic token refresh, used for extremely long paging jobs
  • Accepts the Application ID & secret as a PSCredential object, which allows the use of credential stores in Azure Automation, or Get-Credential, instead of writing credentials in plaintext

Sounds great, but what do I need to do in order to query the Graph API?

First things first: create an Azure AD application, register a service principal and delegate Microsoft Graph/Graph API permissions.
Plenty of people have done this before, so I won’t provide an in-depth guide. Instead, we’re going to walk through how to use the functions line by line.

When we have an Azure AD application, we need to build a credential object using the service principal AppID and secret.
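For example (example IDs; in Azure Automation you would pull this from a stored credential instead):

# Build a PSCredential where the username is the Application (client) ID
# and the password is the application secret
$AppId = '00000000-0000-0000-0000-000000000000'
$Secret = ConvertTo-SecureString 'app-secret-goes-here' -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential ($AppId, $Secret)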

Then we acquire a token. Here we need a TenantID in order to let Azure know the context of the authorization token request.
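Under the hood this is a standard client credentials request; a minimal equivalent of what the function does (TenantID is an example):

# Request an access token for Microsoft Graph using the client credentials flow
$TenantID = 'contoso.onmicrosoft.com'
$Body = @{
    client_id     = $Credential.UserName
    client_secret = $Credential.GetNetworkCredential().Password
    scope         = 'https://graph.microsoft.com/.default'
    grant_type    = 'client_credentials'
}
$Token = (Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/$TenantID/oauth2/v2.0/token" -Body $Body).access_token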

Once a token is acquired, we are ready to call the Graph API. So let’s list all users in the organization.
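At its core this is a GET against the users endpoint with the token in the Authorization header (a minimal equivalent of what the function does):

# List the users in the organization
$Response = Invoke-RestMethod -Method Get -Uri 'https://graph.microsoft.com/v1.0/users' -Headers @{ Authorization = "Bearer $Token" }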

In the response, we see a value property which contains the first 100 users in the organization.
At this point some of you might ask: why only 100? Well, that’s the default limit on Graph queries, but it can be expanded by using a $top filter on the URI, which allows you to query up to 999 users at a time.

The cool thing with my function is that it detects if your query doesn’t return all the data (has a follow link) and gives a warning in the console.

So, we just add $top=999 and use the recursive parameter to get them all!
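Internally, the recursion is just a loop that follows the @odata.nextLink property until it runs out. A minimal sketch:

# Page through all users, 999 at a time
$Uri = 'https://graph.microsoft.com/v1.0/users?$top=999'
$AllUsers = @()
while ($Uri) {
    $Page = Invoke-RestMethod -Method Get -Uri $Uri -Headers @{ Authorization = "Bearer $Token" }
    $AllUsers += $Page.value
    # Graph includes @odata.nextLink on every page except the last one
    $Uri = $Page.'@odata.nextLink'
}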

What if I want to get $top=1 (wat?) users, but recursive? Surely my token will expire after 15 minutes of querying?

Well, yes. That’s why we can pass a tokenrefresh and credentials right into the function and never worry about tokens expiring!

What if I want to delete a user?

That works as well. Simply change the method (Default = GET) to DELETE and go!
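As a plain REST call it looks like this (example UPN):

# Delete a user - only the method and URI change
Invoke-RestMethod -Method Delete -Uri 'https://graph.microsoft.com/v1.0/users/jane.doe@contoso.com' -Headers @{ Authorization = "Bearer $Token" }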

Deleting users is fun and all, but how do we create a user?

Define the user details in the body and use the POST method.
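As a plain REST call, that looks something like this (example account details):

# Create a user by POSTing the account details as JSON
$NewUser = @{
    accountEnabled    = $true
    displayName       = 'Jane Doe'
    mailNickname      = 'jane.doe'
    userPrincipalName = 'jane.doe@contoso.com'
    passwordProfile   = @{
        forceChangePasswordNextSignIn = $true
        password = 'S3cretP@ssw0rd!'
    }
} | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri 'https://graph.microsoft.com/v1.0/users' -Headers @{ Authorization = "Bearer $Token" } -Body $NewUser -ContentType 'application/json'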