Category: Microsoft

HTML5 Web Client for Remote Desktop Services 2016

Microsoft recently announced that the new HTML5 client for Remote Desktop Services has reached general availability. The new web client lets users access the Remote Desktop infrastructure using a modern browser that supports HTML5.

Requirements & Installation

Microsoft has a great article explaining the requirements and how to get started with the new client at the following link. It’s important to note that if you run any previous version of the client and want to update to the latest release, it first has to be uninstalled from the Web Access servers.
The client can be installed and run alongside your old RDWeb page; they just use different URLs. To access the new client, the URL https://<FQDN>/RDWeb/webclient/ is used.
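As a quick sketch (the full steps are in Microsoft’s article), the web client is installed from PowerShell on the RD Web Access server using the RDWebClientManagement module; the certificate path below is just a placeholder:

    # Install the management module for the RD web client
    Install-Module -Name RDWebClientManagement

    # Import the RD Connection Broker certificate (exported as .cer, placeholder path)
    Import-RDWebClientBrokerCert 'C:\Temp\broker.cer'

    # Download and publish the latest version of the web client
    Install-RDWebClientPackage
    Publish-RDWebClientPackage -Type Production -Latest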

Using the new client

The client, first released earlier this year, has now reached version 1.0.0 and with it comes a new sign-in experience and SSO to the applications. Below is what the now much improved login screen looks like:

Web Client login screen

After logging in, the apps are presented, and right away you can see the much improved design compared to the old and very outdated default RDWeb page:

New updated application menu

The great thing about the HTML5 client is that it doesn’t require any software to run, just a browser that supports HTML5, which most browsers do these days. So this is good news for tablet and thin-client users.
The applications are contained within the browser window. You can only have one browser window open at a time, and opening multiple applications creates tabs within that window:


Applications running

Printing and copy/paste are available from within the session. Printing will download the job as a PDF file to your local computer.

Some features are still missing before it can completely replace the old page, but Microsoft will be releasing updates and adding more features over time, so keep an eye out.



Deploy CoreOS with VSTS Agent container using ARM template

In this blog post, I’ll describe how to deploy CoreOS using an ARM template, automatically start the Docker service and create four services for the VSTS agent container.

Container Linux by CoreOS (now part of the Red Hat family) is a Linux distribution that comes with the minimal functionality required to deploy containers. One feature that is really handy when deploying CoreOS in Azure is Ignition, a provisioning utility built for the distribution. This utility makes it possible to, for example, configure services to auto start from an Azure Resource Manager (ARM) template.

Before we begin: everything I describe in this post can also be downloaded here.

First off, we need to describe the service:
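The exact unit file is included in the download linked above; as a rough sketch (the unit name, the account/token placeholders and the image tag are my assumptions), a systemd service running one VSTS agent container could look like this:

    [Unit]
    Description=VSTS Agent 1
    Requires=docker.service
    After=docker.service

    [Service]
    Restart=always
    ExecStartPre=-/usr/bin/docker rm -f vsts-agent-1
    ExecStart=/usr/bin/docker run --name vsts-agent-1 \
      -e VSTS_ACCOUNT=<account> \
      -e VSTS_TOKEN=<token> \
      -e VSTS_POOL=ubuntu-16-04-docker-17-12-0-ce-standard \
      -v /var/run/docker.sock:/var/run/docker.sock \
      microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard
    ExecStop=/usr/bin/docker stop vsts-agent-1

    [Install]
    WantedBy=multi-user.target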

Note: VSTS_ACCOUNT and VSTS_TOKEN will be dynamic in the ARM template, defined using parameters passed to the Ignition configuration at deployment. I’m using the static pool name ubuntu-16-04-docker-17-12-0-ce-standard.

When we know that the service works, we add it to the Ignition configuration:
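Again only a sketch (the complete configuration with all four units is in the download); an Ignition config that enables such a unit looks roughly like this, with the unit contents collapsed into an escaped string:

    {
      "ignition": { "version": "2.2.0" },
      "systemd": {
        "units": [
          {
            "name": "vsts-agent-1.service",
            "enabled": true,
            "contents": "[Unit]\nRequires=docker.service\nAfter=docker.service\n\n[Service]\nRestart=always\nExecStart=/usr/bin/docker run -e VSTS_ACCOUNT=<account> -e VSTS_TOKEN=<token> -e VSTS_POOL=ubuntu-16-04-docker-17-12-0-ce-standard -v /var/run/docker.sock:/var/run/docker.sock microsoft/vsts-agent:ubuntu-16.04-docker-17.12.0-ce-standard\n\n[Install]\nWantedBy=multi-user.target"
          }
        ]
      }
    }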

Note: In the same way as in the service description, we will be dynamically adding the VSTS_ACCOUNT and VSTS_TOKEN during the deployment.

Now that we have the Ignition configuration, it’s just a matter of adding it to the ARM template. One thing to note is that you will need to escape the backslashes, turning \n into \\n in the template.

The ARM template can look like this (note that the variable coreosIgnitionConfig is a concatenated version of the JSON above):

Note: I’ve also created a parameter file which can be modified for your environment. See more info here.
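Deploying the template together with the parameter file can then be done with Azure PowerShell, for example (the resource group name, location and file names below are placeholders):

    # Create a resource group and deploy the ARM template with its parameter file
    New-AzureRmResourceGroup -Name 'rg-vsts-agents' -Location 'West Europe'
    New-AzureRmResourceGroupDeployment -ResourceGroupName 'rg-vsts-agents' `
        -TemplateFile .\azuredeploy.json `
        -TemplateParameterFile .\azuredeploy.parameters.json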

After deployment, you’ll have a simple VM with four containers running – and four agents in the agent pool:



Duplicate SRV records are causing domain join workflows to fail

Have you ever had problems with duplicate SRV records in your environment? Googling it shows that this is a fairly common phenomenon, but without any real solution (at least none that I could find). Some environments are not affected by this at all, but I recently ran into a specific situation where some workflows in Nutanix failed because of duplicate SRV records.

Symptoms:

  • Duplicate SRV records, one in lower case and one in upper case, are causing some workflows in Nutanix to fail.
  • When the older record is deleted, the duplicate is simply recreated after some period of time (roughly 30 minutes).

So what’s causing this? In this specific case we managed, together with Microsoft support, to isolate the issue and found two main causes of this behaviour, listed below.

Causes:

  • Some Domain Controller names were in lower case, others in upper case.
  • With a mixture of DNS servers running Windows Server 2012 and 2016, the way machine names are registered differs between those Windows versions.

So how do we solve this? The preferred solution from Microsoft was to rename all domain controllers to lowercase, but since all Domain Controllers except one were in uppercase in this case, we renamed that specific DC to uppercase instead. The following steps were performed on the server:

    1. Demote the DC
    2. Rename it to uppercase
    3. Promote the DC
    4. Delete all duplicate SRV records in DNS
    5. If the issue is still happening (see the sketch after this list):
      1. Stop the Netlogon service
      2. Delete C:\Windows\System32\config\netlogon.dnb
      3. Start the Netlogon service
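A minimal PowerShell sketch of step 5:

    # Stop Netlogon, remove the cached DNS registration file and start Netlogon again
    Stop-Service -Name Netlogon
    Remove-Item -Path "$env:windir\System32\config\netlogon.dnb"
    Start-Service -Name Netlogon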

After doing this, the duplicate SRV records stopped being recreated in the environment.

Resolution:

  • The preferred way to solve the issue is to rename all domain controllers to lowercase (or uppercase, which works too).

If you have any questions, feel free to email me at tobias.sandberg@xenit.se or comment down below. I will try to answer you as soon as possible.



SCCM 1806 – News and features

Once more it was time to upgrade our SCCM environment, this time to the newest release, 1806. As it was not yet released for everyone, I had to run the Fast-Ring script to allow the update to present itself. I found this update very interesting as it comes with some exciting new features, and there are a lot of them. These are the ones that I am most excited about:

  • Ability to PXE boot without WDS
  • CMTrace installed as default on clients
  • Ability to exclude Active Directory containers from discovery
  • High availability on Site Server
  • CMPivot
  • Boundary group for peer downloads
  • Enhanced HTTP site system
  • Improvements to OS deployment
  • Third-party software updates

…and much more. You can read about all the new features here on Microsoft docs.

Since there is a lot that is new, I have chosen to cover the two features in this release that I am most excited about.

CMPivot

Configuration Manager is a very helpful tool for gathering information, and CMPivot takes it to the next step by letting you query clients in real time, gathering a lot of information instantly. The feature uses the Azure Log Analytics query language.

CMPivot is located under Assets and Compliance > Overview > Device Collections; you can find this new feature in the top ribbon bar.

Location of CMPivot

An example is to find BIOS information about the Dell computers that are currently online. From the output you can easily create a collection (the members will be added as Direct Members) or export the result to CSV or the clipboard.
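As a rough illustration of such a query, using CMPivot’s built-in Bios entity (treat this as a sketch rather than the exact query from the screenshot):

    Bios
    | where Manufacturer contains 'Dell'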


PXE Without WDS

It is exciting to have a new way of deploying over PXE. Since Windows Deployment Services has been around for a long time, it feels fitting to get an updated way of deploying clients. When WDS is replaced, the distribution point instead runs the ConfigMgr PXE Responder service. If you plan on using multicast, however, you are stuck with WDS for now.

This setting can be found under Administration > Overview > Distribution Points; right-click the distribution point you would like to modify and apply the setting shown below.

After applying this setting, Windows Deployment Services will automatically be disabled. Be advised that if you are monitoring this service, it will be reported as stopped.

SCCM PXE Without WDS

Do you have questions, thoughts or anything you would like to discuss? Send an email to Johan.Nilsson@xenit.se and I will be more than glad to talk about these topics.



Datetime and RFC 3339 compliance in PowerShell – a deep dive

A colleague of mine asked if there is a way to output an RFC 3339 compliant datetime (https://www.ietf.org/rfc/rfc3339.txt) in PowerShell without manually formatting in the T and Z in the middle and at the end to comply with the ISO standard and imply UTC +00:00.


Before I start with the how, I’d like to address the why.

If you’ve ever done some coding you’re sure to have encountered issues with datetimes, and possibly errors and incidents caused by the format of a datetime string.
For example, in the US time is commonly written in month-day-year format, which during the first 12 days of each month is indistinguishable from the European day-month-year format.
This carries over into code. In PowerShell my locale is Swedish and the ”Get-Date” cmdlet returns ”den 1 augusti 2018 16:35:24”, which is easy and readable for a human.
However, if I convert it to a string it comes out in US format, even though my culture settings in PowerShell are set to Swedish.
In my opinion this behavior is wrong, as I expect to be given an ISO standard universal format, or at least a culture-appropriate format. Instead I am given a US format.

With that said, when developing automation and tools for global customers, a standard format is much needed when we write to logs.

The How

After a short time on Google it seemed no one had done this properly in PowerShell. I also found out that the XML datetime format is RFC compliant.

How did I do it?
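A sketch of the trick: the .NET XmlConvert class serializes datetimes in the RFC 3339 profile of ISO 8601, so the conversion becomes a one-liner:

    # Convert the current date and time to an RFC 3339 / ISO 8601 string in UTC
    [System.Xml.XmlConvert]::ToString((Get-Date), [System.Xml.XmlDateTimeSerializationMode]::Utc)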

Returns:
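Something along these lines (the exact value will of course differ):

    2018-08-01T14:35:24.8514671Z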

Great! Now let’s put it into some real code.

Example 1: Writing current date into a logfile
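A sketch of what this could look like; the log path and message are placeholders:

    # Store an RFC 3339 timestamp in $now and write a log line with it
    $now = [System.Xml.XmlConvert]::ToString((Get-Date), [System.Xml.XmlDateTimeSerializationMode]::Utc)
    "$now - Job started" | Out-File -FilePath 'C:\Logs\job.log' -Append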

The output becomes an RFC compliant string, stored in the $now variable and used in an Out-File log operation.

Example 2: Writing a job deadline datetime
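Again only a sketch, assuming a deadline 20 hours from now:

    # Create a datetime 20 hours ahead and convert it to an RFC 3339 string
    $deadline = (Get-Date).AddHours(20)
    $RFCDeadline = [System.Xml.XmlConvert]::ToString($deadline, [System.Xml.XmlDateTimeSerializationMode]::Utc)
    $RFCDeadline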

Here we create a datetime object, add 20 hours and then convert it to an RFC compliant datetime string, stored in the $RFCDeadline variable.

Hope this helps someone!



Azure AD Connect and .NET Framework 4.7.2

Introduction

Last week a discussion erupted on the Microsoft forums regarding Azure AD Connect, due to its monitoring agent using all free CPU resources on the servers. These issues were caused by .NET Framework updates, and a lot of administrators spent time uninstalling and blocking these patches to resolve the CPU usage on their servers. On Saturday Microsoft released an update (KB4340558) which contains a collection of several patches, where one of the earlier mentioned .NET Framework updates was included. For more information, see this link.

Microsoft has recently published an article regarding this issue. In addition, Microsoft also published a new version of the health agent in which they state that the issue is resolved; it can be downloaded from here. The new health agent version is set to be included in the next version of Azure AD Connect, which will be published for Automatic Upgrade (Auto Upgrade). The following patches have been identified as causing Azure AD Connect’s monitoring agent to use huge amounts of CPU:

Auto Upgrade

In version 1.1.105.0 of Azure AD Connect, Microsoft introduced Auto Upgrade. However, not all updates are published for Automatic Upgrade. Whether a version is eligible for automatic download and installation is announced on Microsoft’s version-history page for Azure AD Connect.

You can verify whether your Azure AD Connect installation has Auto Upgrade enabled by either using PowerShell or viewing the configuration in its GUI.


Graphical User Interface of Azure AD Connect
PowerShell-command for determining whether Auto Upgrade is enabled or not.
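A sketch of the PowerShell check, using the ADSync module that is installed together with Azure AD Connect:

    # Run on the Azure AD Connect server
    Import-Module ADSync
    Get-ADSyncAutoUpgrade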

This command will return either Enabled, Disabled or Suspended, where the Suspended state can only be set by the system itself. Newer installations of Azure AD Connect enable Auto Upgrade by default, provided your installation meets Microsoft’s recommendations. For more information, see this link.

Enabling Auto Upgrade

If you have an installation of Azure AD Connect older than 1.1.105.0 (February 2016), Auto Upgrade will be disabled unless you have enabled it manually. If desired, the function can be enabled with the PowerShell command below.
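A sketch of the command:

    # Run on the Azure AD Connect server to turn Auto Upgrade on
    Set-ADSyncAutoUpgrade -AutoUpgradeState Enabled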

If you have any questions, feel free to email me at robert.skyllberg@xenit.se



Enable Exchange Mailbox Auditing for all users

Enabling mailbox auditing as an Exchange administrator has for a long time been something you have needed to do manually.

Yesterday, Microsoft announced that they will be enabling mailbox auditing by default for all user mailboxes in Office 365 and Exchange Online. This is a welcome change: you no longer need to manually enable mailbox auditing on new users or use a script that enables it for all users in Office 365 and Exchange Online.

For on-premises Exchange environments there is no such feature (hopefully it will come with a future Cumulative Update), so you still need to change it manually. Either you add this as a step in the process of creating a new mailbox, or you can use a PowerShell script as a Scheduled Task on your Exchange Server that automatically enables auditing.

Here’s an example of what such a script can look like, and you can find it as a download here.
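The downloadable script is not reproduced here, but a minimal sketch of the idea, run from the Exchange Management Shell, could look like this (the 90-day audit log age limit is just an example value):

    # Enable mailbox auditing on all user mailboxes that don't have it enabled yet
    Get-Mailbox -ResultSize Unlimited -RecipientTypeDetails UserMailbox |
        Where-Object { -not $_.AuditEnabled } |
        Set-Mailbox -AuditEnabled $true -AuditLogAgeLimit 90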



Exchange Server and .NET Framework 4.7.2

Yesterday Microsoft released a new version of .NET Framework, 4.7.2, and it’s showing up as an important update in Windows Update.

For Exchange servers it’s important that you don’t install this update, as this version is, at this time, not part of the support matrix for Exchange Server:

The full list of supported .NET Framework versions is available at Exchange Server Supportability Matrix – Microsoft .NET Framework.

To block the installation of .NET Framework 4.7.2 from Windows Update, you can run the following command:
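A sketch in PowerShell, assuming the BlockNetFramework472 value that Microsoft documents for temporarily blocking this release:

    # Block .NET Framework 4.7.2 from being offered through Windows Update
    New-Item -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\WU' -Force | Out-Null
    New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\WU' `
        -Name 'BlockNetFramework472' -PropertyType DWord -Value 1 -Force | Out-Null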

This will add the following registry key:
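Assuming the value name above:

    HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\WU
        BlockNetFramework472    REG_DWORD    0x1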

To unblock the installation once it’s supported, you can run the following command:
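Again as a sketch, assuming the same value name:

    # Remove the block so the update is offered again through Windows Update
    Remove-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\WU' `
        -Name 'BlockNetFramework472'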


This will remove the registry value from the computer and the update will be available once again from Windows Update.



Device cleanup rules for Microsoft Intune

As an IT administrator you want to keep your IT environment clean and tidy, and the same goes for Microsoft Intune.

By default, all devices that have been inactive or stale and haven’t checked in for over 270 days will automatically be removed from the console.

In the latest update for Microsoft Intune, dated July 2, Microsoft included a new feature, Device cleanup rules:

New rules are available that let you automatically remove devices that haven’t checked in for a number of days that you set.


You will find it in the Intune pane: select Devices, and then select Device cleanup rules:

By default this is not enabled, so you need to change it to Yes and specify a number of days between 90 and 270 that suits your company’s policy and requirements.

If nothing is changed, or you leave it set to No, the default of 270 days will be used:



App Protection Policies for managed and unmanaged devices in Intune

In the latest update of Microsoft Intune, you now have the option to target App protection policies for mobile apps based on whether the device is Intune managed or unmanaged.

The two options that are available for now, if you choose not to target all app types, are:

  • Apps on unmanaged devices
    Unmanaged devices are devices where Intune MDM management has not been detected.
  • Apps on Intune managed devices
    Managed devices are managed by Intune MDM and have the IntuneMAMUPN app configuration settings deployed to the app.

With this new update, you are able to create required settings for devices that are fully managed by Intune and a separate policy for devices not managed by Intune.
For example, you could allow saving files locally on devices managed by Intune, and only allow saving to OneDrive or SharePoint (which are protected by App protection policies) on devices not managed by Intune.

If you are interested in learning more about App Protection Policies, you can read more on docs.microsoft.com or drop a comment below!