Category: Applications

5 Things I check after I’ve installed Microsoft Edge Dev (Chromium)

Many of you probably already know that Microsoft has released its new Microsoft Edge, built on Chromium, as a Dev build for the public so we can try it out.

As usual with new things, we want to personalize them, so I would like to share the first 5 things I customize in this new version of Edge.
Before we begin, remember that this is a Dev build (version 76.0.152.0), so the things I mention below might change before the final release.

Also worth mentioning is that this release is not built into Windows 10, which means you have to install the browser like any other application out there.
You can find the download link at the bottom of this page.

 



The best features in Varonis 7.4

The big new update to Varonis (7.4) was released about a month ago. Now that I have been upgrading and using it for a while, I’m starting to get a feel for the new features, and it feels like a great time to talk a bit about what was released.

The first big thing I want to show you is the new dashboards, which help you get a good overview of the status of your environment. You can use the predefined dashboards or create your own.

Active Directory dashboard:

GDPR dashboard, where you can see whether you are compliant with the regulation and whether you have control over your sensitive data.

The next thing I really like is that it is now possible to search through the logs via the web interface. It is more responsive and the user interface looks great. The reason it is more responsive is that, as of this version, Solr is used under the hood. Varonis promises significant performance improvements and lightning-fast investigations with Solr, and I can definitely agree. Searching and investigating alerts in the web interface works perfectly.

Another interesting feature that has been added to the web interface is the integrated incident response playbooks that can be used when handling incidents from DatAlert. As you can see in the picture below, you get detailed information about what happened and which steps to take next.

Varonis Edge has had multiple new threat models added to DatAlert, so you can now, among other things, find out if data has been exfiltrated via DNS tunneling, if DNS cache poisoning has occurred, or if data has been uploaded to external websites.

Varonis Edge is a product used to analyze metadata from perimeter systems like DNS, VPN and web proxies. These kinds of devices often write their logs in very different ways, and it can be very hard to obtain interesting and useful data from them. Edge filters out only the interesting metadata from the perimeter devices and presents the events in a more readable way for the user. With the help of Varonis Edge you can, for example, find out whether a user accessed the network from their usual location, whether sensitive data was accessed, whether the event occurred during the user’s normal time window, and more.

If you want to know more about the features in the latest version, or are interested in Varonis products, don’t hesitate to send me an email at rickard.carlsson@xenit.se



Virtual attendance to Microsoft Build 2019

New features and cool stuff – Microsoft 365, Office 365, Azure, Edge, Windows 10, and everything else Microsoft

There are so many cool things you can do with new types of disruptive technology that were not even imaginable a decade ago. Impressive progress has been made across several disciplines within IT, and it doesn’t look like it will slow down at all. Automation, augmented reality and analytics, AI-driven development, and digital twins are just a few areas that come to mind as examples of groundbreaking new tech trends, courtesy of Gartner’s report Top 10 Strategic Technology Trends for 2019. All of these new technology trends are possible thanks to extremely talented researchers, mathematicians, and developers, to name a few. A lot of this new tech is built on or with technology from Microsoft – that’s why Microsoft Build is such an interesting conference.

Even though my daily work revolves around project management, end user computing and the operations side of digital infrastructure, I’m always curious about what’s to come and try to find the next big thing or cool features that can improve the EUC experience for all of our current and future customers.

One impressive technology, albeit rather old, is virtual presence and live online streaming. That’s something I’m very thankful for on a day like this. Last evening and night was the first day of Microsoft’s annual developer conference Build in Seattle, WA, and I was able to watch a few hours of presentations from my couch instead of having to go to the US. Even though attending in person would have been a bit more exciting and fun, my couch is much better than nothing at all. 😃

Being able to listen to Microsoft’s vision and plans for the future, and learn about the latest new features, from the couch might not sound reasonable to everyone, but it is a completely logical move to me.

After a good night’s sleep, I have tried to put together a list of the most interesting parts of the presentations I saw last night, from an EUC standpoint. Obviously, there will be lots more neat new features and product updates presented during the conference, but that might be for another blog post.

Microsoft Edge Chromium

Three major updates were announced for Microsoft Edge last night. Thanks to Microsoft’s decision to move to a fork of the open source browser Chromium, my bet is that we will see a lot more news around the browser in the months to come.

If you would like to try the new public version of the Edge Chromium browser you can do so here!

  1. IE Mode

This is a big one for EUC enthusiasts like myself. There has always been a push-pull struggle to decide which browser to use for end users in an enterprise environment, and that usually, though not always, has to do with compatibility.

Microsoft’s announcement last night hopefully means that we won’t have to choose between new features in modern browsers and being able to work effectively in old, legacy LOB applications. I think we can all agree that most larger enterprises have a handful of “extremely important” old web apps that won’t disappear in the foreseeable future.

What Microsoft announced is the possibility for Edge Chromium to load an old web app straight into the new browser, but with the old Internet Explorer rendering engine. Previously, Edge started a separate IE process and users had to switch between the two browsers; this announcement means that you can have IE tabs and Edge Chromium tabs within the same browser. Really neat.

IE Enterprise Mode works well, but I think this will be much much better. We’ll see!

  2. Collections

Another cool feature presented during the keynote was Collections. In summary, I’d say it is the next generation of the old Favorites feature. You will be able to create collections of links, pictures, text, and other information within the browser.

If you want to, you can then export or share a collection with your co-workers via Excel or Word. The Edge Chromium browser generates good-looking files with headers, aligned pictures, and URLs/sources.

  3. Privacy

You will be able to select one of three predefined privacy configurations: Unrestricted, Balanced, or Strict. Strict mode blocks most trackers, but sites might break. Unrestricted is the complete opposite, and Balanced is what we Swedes call lagom – not too much, not too little tracking.

The World’s Computer (Azure)

It’s no surprise to see that there’s a lot of focus on Microsoft Azure during the conference. Some announcements that I’d say are of extra interest to the EUC community are these:

Of course, there are loads of other new features, but I found these to stand out.

To see all Microsoft Azure announcements, check out this link.

Windows Terminal

WOW! Finally, the old terminal will be replaced with something new! The new terminal will support shells like Command Prompt, PowerShell, and WSL.

To get a glimpse of the amazing future of Windows Terminal, check this out.

The new console is open source, and you can build, run, and test the app right now. The repo can be found here.
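
If you want to poke around in the source yourself, getting a local copy is a one-liner. A minimal sketch assuming you have git installed; the build itself requires Visual Studio and the workloads described in the repo’s README:

    # Clone the Windows Terminal repository from GitHub
    git clone https://github.com/microsoft/terminal.git
    Set-Location .\terminal

    # Locate the solution file to open in Visual Studio
    Get-ChildItem *.sln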

Key features according to Microsoft are:

  • Multiple tabs
  • Beautiful text (GPU accelerated DirectWrite/DirectX-based rendering, emojis, powerline, icons, etc.)
  • Lots and lots of configuration possibilities (profiles, tabs, blur/transparency/fonts… you name it)

So, get a new graphics card and get started working in the new terminal 😃

Office 365 Fluid Framework

A new framework called the Fluid Framework was announced. The new framework will let users work together in what feels like real time: charts and presentations are updated in an instant, and translations into loads of languages happen live.

During the keynote, the presenter wrote in a document at the same time as others did, and it really looked like there was no latency. The live translation part was really cool, and I recommend watching it in action to understand why this might be of real interest for your business.

Watch it in action here.

Windows Hello, FIDO2 certification

Windows Hello is now FIDO2 certified. What does that mean?

Without digging into the details, the new certification hopefully means that more websites and online services will be able to allow other forms of authentication than just username/password. Passwordless authentication is widely regarded as more secure than passwords alone, and with Microsoft adhering to the new specification it will be easier to offer user-friendly authentication methods like fingerprint and face recognition.

FIDO2 is the overarching term for FIDO Alliance’s newest set of specifications. FIDO2 enables users to leverage common devices to easily authenticate to online services in both mobile and desktop environments. The FIDO2 specifications are the World Wide Web Consortium’s (W3C) Web Authentication (WebAuthn) specification and FIDO Alliance’s corresponding Client-to-Authenticator Protocol (CTAP).

Windows Subsystem for Linux 2 (WSL 2)

The new version of WSL will run a full open source Linux kernel that Microsoft builds themselves. There are probably hundreds of reasons why Microsoft is doing this, but one of them is performance. The kernel version will be 4.19, which is the same version used in Azure.

The new WSL version will make it possible to run containers natively, which means that locally hosted virtual machines won’t be necessary anymore.

Like before, WSL itself won’t ship any userspace binaries, which means that we will still be able to select which distribution flavor we want to run.

The first public versions of WSL2 will be available sometime this summer.
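
When WSL 2 becomes available, converting an existing distribution should only take a couple of wsl.exe commands. A minimal sketch, assuming the command line Microsoft has described and using Ubuntu as an example distribution:

    # List installed distributions and which WSL version they run on
    wsl --list --verbose

    # Convert an existing distribution (here Ubuntu) to WSL 2
    wsl --set-version Ubuntu 2

    # Make WSL 2 the default for distributions installed from now on
    wsl --set-default-version 2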

Honorable mentions or too cool not to mention

  • Mixed reality services within Teams and HoloLens, for example the live spatial meetings using AR
  • HoloLens 2 and the Mittel presentation
  • Cortana updates where the AI bot is integrated and helps even further with scheduling and assisting you during your workday
  • All news regarding containers, Docker, and Kubernetes/AKS
  • Microsoft’s new Fluent Design System
  • Xbox Live for new devices (Android and iPhone) and new collaborations with game studios
  • Some kind of Minecraft AR game for mobile phones being released on May 17

Psst. Did you know that you can watch loads of presentations and also the keynote here?

What do you think? Have I missed anything obvious?



Install OneDrive (and soon Teams) on Local Machine

One of the most requested features for OneDrive and Teams has been to install the programs on the local machine instead of in each user’s profile. Microsoft has finally released a OneDrive client that supports this. As of version 19.043.0304.0003, OneDrive can be installed per machine by running the installer with the switch shown below.
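
A minimal sketch of the install command, assuming you have downloaded the new OneDriveSetup.exe and that the documented /allusers switch is what triggers the per-machine installation:

    # Run the OneDrive installer in per-machine mode
    # (run from the folder where the new OneDriveSetup.exe was downloaded)
    .\OneDriveSetup.exe /allusers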

 

 

This makes a huge difference in a multi-user (Virtual Apps and Desktops) environment. If you wanted to use OneDrive before, you had to install the OneDrive client into every user’s profile. This can be very time-consuming, especially if something goes wrong with the installation and/or the program files stored in each user’s profile.

 

It seems that Microsoft has finally caved to the community. Christiaan Brinkhoff also states on Twitter that a per-machine Teams installer is in progress.

 

 

This will be a very welcome change for those of us who are passionate about multi-user environments.

 

 



Easily analyse your memory dumps

Recently, while trying to examine a memory dump, I stumbled upon a great application for debugging your system. The application is named WinDbg Preview, is distributed by Microsoft themselves, and serves several purposes when debugging Windows operating systems.

WinDbg Preview is a modernized version of WinDbg and extremely easy to use! With WinDbg Preview you can for example do the following:

  • Debug executables
  • Debug dump and trace files
  • Debug app packages
  • Debug scripts

WinDbg Preview

In my use case, I wanted to quickly analyse a memory dump file that had been generated. A minute and about five clicks later, I had an analysis that gave me all the information I needed. The tool also suggested which commands to run along the way, without me having to think about it.
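
If you prefer starting from the command line, the same thing can be done manually. A sketch under two assumptions of mine: that WinDbg Preview’s executable (WinDbgX.exe) accepts the classic WinDbg arguments, and that the dump sits at the example path below:

    # Open a memory dump in WinDbg Preview from PowerShell
    # (-z is the classic 'open crash dump' argument)
    WinDbgX.exe -z 'C:\Windows\MEMORY.DMP'

    # Inside the debugger, the detailed crash analysis is produced with:
    #   !analyze -v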

Attaching memory dump file

Analysis result

WinDbg Preview is available from the Windows Store, and you can read more about it here.

If you have any questions, feel free to email me at robert.skyllberg@xenit.se or comment down below.



Are you able to spot phishing emails?

Phishing is an attack concept where an attacker contacts a victim while pretending to be from a trustworthy source, in order to obtain information they would not have gotten using their real identity.
When an attacker targets specific individuals or groups within an organization, the method is called spear phishing. According to Symantec’s ISTR report volume 23 from 2018, the majority of organized security breaches used spear phishing as the infection vector.

One of the reasons these attacks are so effective and commonly used is that they are built to exploit people’s feelings. It also requires less effort to write an email pretending to be from a supplier and trick a victim into clicking a link or opening an attachment than to put in the time and effort to find a way through a firewall or other security solution. Malware is usually spread with these emails in the form of malicious links or attachments; when the user clicks the link or opens the attachment, the malicious code is executed on the victim’s system.

This has been a common problem for years now, and many users are aware that you shouldn’t open files from unknown sources, but are you equally careful when clicking links? If you find the description interesting, you will most likely just click the link without actually reading the domain name first, and that is another weakness an attacker can exploit.

Example of link manipulation
Let’s say that you work for the company xyz and that your website is ‘xyz.com’. An attacker could then create a malicious website with a similar name, for example ‘secure-xyz.com’, or use a legitimate domain with a redirect to a malicious site:

  • http://www.secure-xyz.com
  • http://www.xyz.com/amp/http://www.badsite.com

They could also encode the URL to make it harder to read, or shorten it:

  • http://www.xyz.com%2Fexit.asp%3FURL%3Dhttp%3A%2F%2Fwww.badsite.com
  • https://bit.ly/2TZB50k

Generally, you should pay attention to links that look odd, and if you are not sure where a link leads you shouldn’t visit it. It is better to be safe than sorry, and today there are great tools available online that can scan a URL for malicious content.

To use one of these scanners, you just enter a URL and press Enter. Multiple anti-malware engines will then scan the URL.

For this test, we can see that no engines detected our URL ‘https://www.xenit.se’ as malicious.

These kinds of tools are great, but the best way to reduce the risk of falling victim to this type of attack is to arrange regular awareness training for all employees. Below you will find a link to a quiz that puts your ability to identify phishing emails to the test. You inspect a number of emails, decide whether you think each one is malicious or not, and afterwards you get a good explanation of why or why not.

Link to quiz:
https://phishingquiz.withgoogle.com/

Were you able to identify all the phishing emails? Please leave a comment with your result, or if you want to discuss phishing further.

 



Simplify removal of distributed content with the help of PowerShell

Begin

TLDR; Go to the Process block.

Ever since I was first introduced to PowerShell, I have tried to come up with ways to include and apply it in my everyday tasks. But for me, PowerShell in combination with SCCM has never been the ultimate combination; the built-in cmdlets don’t always do it for me, and the GUI is most of the time easier to understand.

So when I got a request to simplify the removal of distributed content on all distribution points or all distribution point groups, it left me with two options: create a script that did the desired job, or create a function that would cover all the possible scenarios. So I thought, “Why don’t I take matters into my own hands and create what I actually want?” That is why I created a script that finds the content you want to remove and removes the distributed content from every Distribution Point or Distribution Point Group.

Let’s say you have 10 Distribution Points, you have distributed content to 5 out of the 10, and you have not been using a Distribution Point Group. The way to go would be to repeat the following steps for each of them:


Doing these steps for every distribution point would just take forever. Of course, using a single Distribution Point Group would be more effective and the ideal way to go, but what if you have distributed the content to multiple Distribution Point Groups? That has already been thought of, and that is why this script was created. Even if you have distributed content to some distribution points and some distribution point groups, it will all be removed.

Process

How does it work? In this demonstration, I have two packages distributed with similar names. One of them has been sent to a Distribution Point Group and the other one to 2 Distribution Points, and I would like both of them removed from wherever they have been distributed.
1. Start by launching PowerShell and import the script by running “. .\Remove-CMAllSiteContent.ps1”.

2. Run the script with the required parameters. As shown in the picture below, I searched for ‘TestCM’, which returned multiple results. The search is done with wildcards, so everything similar to the stated PackageName will be found. All the parameters are described in more detail in the script itself, and a usage sketch follows the parameter list below.

  • The search can either be done with the parameter -PackageName or -PackageID,
  • The parameter -PackageName is searching with wildcards both at the beginning and the end of the stated name. This should be used when you are not sure of the PackageID, or want to remove multiple packages, 
  • The parameter -PackageID is the unique ID for the specific package you want to remove from the distribution point(s) or group(s). This should be used when you are sure of what you would like to remove,
  • The parameter -CMSiteCode is mandatory and must be specified. 
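
To make step 2 concrete, here is a sketch of what a typical invocation could look like. It assumes that the dot-sourced file exposes a function with the same name as the script; the package name and the site code ‘PS1’ are just examples:

    # Import the function into the current session
    . .\Remove-CMAllSiteContent.ps1

    # Search by name (wildcards are added around the value by the script)
    Remove-CMAllSiteContent -PackageName 'TestCM' -CMSiteCode 'PS1'

    # Or target one specific package by its unique PackageID
    Remove-CMAllSiteContent -PackageID 'PS100123' -CMSiteCode 'PS1'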

3. In this case, I would like to remove both of the displayed packages, so I choose 0 for ‘All’, followed by a confirmation (Y/N, not case sensitive).

4. After it has been confirmed, the script will check the following:

  • Whether the content is distributed to Distribution Point Group(s) as an Application,
  • If not, whether it is distributed to Distribution Point Group(s) as a Package,
  • If neither matches, whether the content is distributed to each Distribution Point as an Application,
  • If not, whether the content is distributed to each Distribution Point as a Package.

At the beginning of the script, the content is validated as being distributed; if it is not, it will not be shown. The four steps above cover all distribution scenarios.

5. When finished, we can see that the distributed content has been successfully removed.

Please read the comment-based help to get a better understanding of what is actually running in the background.

End

This can of course be modified with more choices in every step, but at the moment I did not see the need for it.

If anyone has any questions or just wants to discuss their point of view regarding this post, I would be more than happy to have a dialogue. Please email me at johan.nilsson@xenit.se or comment below.



Smart Check – Monitor Your Citrix Sites

Citrix Smart Check is software and a service: an agent installs on a Citrix Delivery Controller, collects diagnostic data and sends it to your Citrix Cloud account, where it is analyzed and presented on the Citrix Cloud website. The information helps Citrix administrators prevent and resolve issues before they happen or impact users, gives recommendations on fixes, and helps keep the Citrix environment stable.

The Smart service helps Citrix administrators who do not have their own monitoring setup, or who are unable to monitor their sites for other reasons, by presenting everything in a web page overview. Administrators can also get scheduled summary mail reports with errors, warnings and information regarding the state of the different sites.

Citrix Cloud Smart Tools

Smart Check – Sites Overview

What Smart Check provides

  • Overview of the Citrix sites and products used, site by site
  • Extensive diagnostics and health checks for the different sites and services
  • Scheduled health checks of Delivery Groups, StoreFronts, Delivery Controllers, Machine Catalogs, Provisioning and License Servers
  • Recommendations on what administrators should do to keep the site up to date and stable
  • Simplified troubleshooting that helps pin down where an issue may be impacting users
  • Upload of diagnostic data to Citrix Insight Services (CIS)

Smart Check – Overview

How to get started

First, you need a Citrix Cloud account. Register an account at https://smart.cloud.com. After you have created an account you can log in, click Add Site and download the Smart Check software. The software should be installed on a Delivery Controller in the site and comes with a one-time signed JSON Web Token (JWT) that is used to connect your site to the Citrix Cloud – Smart Tools service.

Smart Tools - Add Site

Smart Check – Steps to take


Add Site – CitrixSmartToolsagent.exe

Once the Smart Check agent is installed, the site will show up on the Citrix Cloud – Smart Check webpage as Site Discovered. You will need to click Complete Setup and provide a domain user account that is a member of the local Administrators group on the Delivery Controller and has the Full Administrator role in Citrix Studio. PowerShell 3.0 or greater needs to be installed on the Delivery Controllers, and outbound internet access on port 443 must be enabled to be able to upload to Citrix Cloud.
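
A quick way to verify those two prerequisites from a Delivery Controller is shown below. It is only a sketch; smart.cloud.com is used as the test endpoint here, and the actual upload endpoints may differ:

    # Check that PowerShell is version 3.0 or greater
    $PSVersionTable.PSVersion

    # Check that outbound HTTPS (port 443) works from the Delivery Controller
    Test-NetConnection -ComputerName 'smart.cloud.com' -Port 443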


Smart Check – Site Discovered


Smart Check – Enter Credentials

For the VDAs, the following must be enabled (a PowerShell sketch for enabling them follows the list):

  • File and Printer Sharing
  • Windows Remote Management (WinRM)
  • Windows Management Instrumentation (WMI)
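
These can typically be switched on with a few built-in cmdlets. A minimal sketch, assuming the default English-language firewall rule group names; run it in an elevated PowerShell session on each VDA:

    # Enable the required inbound firewall rule groups on the VDA
    Enable-NetFirewallRule -DisplayGroup 'File and Printer Sharing'
    Enable-NetFirewallRule -DisplayGroup 'Windows Management Instrumentation (WMI)'

    # Enable and configure Windows Remote Management (WinRM)
    Enable-PSRemoting -Force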

For a full list of requirements and supported site components, visit Citrix Product Documentation – Smart Check requirements.

Smart Checks

Below is a list of the checks that are available as of this post. There are probably more to come:

  • Site Health
  • Citrix Optimizer
  • Citrix Provisioning
  • Delivery Controller Configuration
  • License Server
  • LTSR Compliance
  • Product LifeCycle
  • StoreFront
  • VDA Health

Each category contains several checks. You can read an excerpt of the different checks performed below.

Site Health Checks

Site Health Checks provide a comprehensive evaluation of all the FMA services including their database connectivity on your Delivery Controllers. Citrix recommends you run these checks at least once daily. Site Health Checks verify the following conditions:

  • A recent site database backup exists
  • Citrix broker client is running for environment test
  • Citrix Monitor Service can access its historical database
  • Database connection of each FMA service is configured
  • Database can be reached by each FMA service
  • Database is compatible and working properly for each FMA service
  • Endpoints for each FMA service are registered in the Central Configuration service
  • Configuration Service instances match for each FMA service
  • Configuration Service instances are not missing for each FMA service
  • No extra Configuration Services instance exists for each FMA service
  • Service instance published by each FMA Service matches the service instance registered with the Configuration service
  • Database version matches the expected version for each FMA service
  • Each FMA service can connect to Configuration Logging Service
  • Each FMA service can connect to Configuration Service

Citrix Provisioning Checks

Citrix Provisioning Checks verify the Citrix Provisioning status and configuration. The following checks are performed:

  • Installation of Provisioning Server and Console
  • Inventory executable is running
  • Notifier executable is running
  • MgmtDaemon executable is running
  • StreamProcess executable is running
  • Stream service is running
  • Soap Server service is running
  • TFTP Service is running
  • PowerShell minimum version check
  • Database and Provisioning server availability
  • License Server connectivity
  • Provisioning Update Check
  • PXE service is running
  • TSB service is running

StoreFront Checks

The StoreFront Check validates service status, connectivity to Active Directory, the Base URL setting, the IIS application pool version and the SSL certificates for StoreFront, and verifies the following conditions:

  • Citrix Default Domain Services is running
  • Citrix Credential Wallet services is running
  • The connectivity from the StoreFront server to port 88 of AD
  • The connectivity from the StoreFront server to port 389 of AD
  • Base URL has a valid FQDN
  • Can retrieve the correct IP address from the Base URL
  • IIS application pool is using .NET 4.0
  • Certificate is bound to the SSL port for the host URL
  • Whether or not the certificate chain is incomplete
  • Whether or not certificates have expired
  • Whether or not certificate(s) will expire within one month

VDA Health Checks

VDA Health Checks help Citrix administrators troubleshoot VDA configuration issues. This check automates a series of health checks to identify possible root causes for common VDA registration and session launch issues.

  • VDA software installation
  • VDA machine domain membership
  • VDA communication ports availability
  • VDA services status
  • VDA Windows firewall configuration
  • VDA communication with each Controller
  • VDA registration status

For Session Launch:

  • Session launch communication ports availability
  • Session launch services status
  • Session launch Windows firewall configuration
  • Validity of Remote Desktop Server Client Access License

Closing words

You can run checks manually, but it is also possible (and recommended) to schedule the different health checks and get a summary report daily or weekly at a designated time of day. The summary is mailed to the registered Citrix Cloud account, and to view more information you need to log on to the Smart Cloud website.

It is possible to view previous reports from the Smart Check runs and hide alerts that have previously been acknowledged:

Smart Check Health Alerts

Smart Check – Health Check Runs History

Under Site Details you can view components or add new ones. If needed it is also possible to Edit Site Credentials, Sync Site Data or Delete the Site:


Smart Check – Site Details

Smart Check is supported both on-premises and in Citrix Cloud environments.
It is easy to set up and brings a great deal of value. You should try it out! Let me know how it went in the comments down below.

Smart Tools contains Smart Check and Smart Scale. Smart Scale helps reduce your resource costs for XenApp and XenDesktop on Azure. But that will be covered in another post.

Source: https://docs.citrix.com/en-us/smart-tools/whats-new.html



HTML5 Web Client for Remote Desktop Services 2016

Microsoft recently announced that the new HTML5 client for Remote Desktop Services has reached general availability. The new web client lets users access the Remote Desktop infrastructure using a modern browser that supports HTML5.

Requirements & Installation

Microsoft has a great article explaining the requirements and how to get started with the new client at the following link. It’s important to note that if you run any previous version of the client and want to update to the latest release, it first has to be uninstalled from the Web Access servers.
The client can be installed and run alongside your old RDWeb page; they are just accessed through different URLs. To access the new client, the URL https://<FQDN>/RDWeb/webclient/ is used.
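
For reference, the web client is managed through a PowerShell module on the RD Web Access server. A sketch of the typical installation steps, assuming the cmdlet names from Microsoft’s documentation and that the RD Broker certificate has been exported to the example path below:

    # Install the management module for the web client
    Install-Module -Name RDWebClientManagement

    # Import the RD Connection Broker certificate (exported as a .cer file)
    Import-RDWebClientBrokerCert 'C:\Temp\broker.cer'

    # Download and publish the latest web client package
    Install-RDWebClientPackage
    Publish-RDWebClientPackage -Type Production -Latest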

Using the new client

The client, which was first released earlier this year, has now reached version 1.0.0, and with it comes a new sign-in experience and SSO to the applications. Below is what the now much improved login screen looks like:

Web Client login screen

After logging in, the apps are presented, and right away you can see the much improved design compared to the old and very outdated default RDWeb page:

New updated application menu

The great thing about the HTML5 client is that it doesn’t require any software to run, just a browser that supports HTML5, which most browsers do these days. So this is good news for tablet and thin-client users.
The applications are contained within the browser window. You can only have one browser window open at a time, and opening multiple applications at the same time creates tabs within that window:

 

Applications running

Printing and copy/paste are available from within the session. Printing downloads the job as a PDF file to your local computer.

Some features are still missing before it can completely replace the old client, but Microsoft will be releasing updates and adding more features as time goes by, so keep an eye out.



Chrome – Certificate warning – Invalid Common Name

Users of Google Chrome version 58 (released March 2017) and later receive a certificate warning when browsing to HTTPS sites if the certificate only uses a Common Name and does not contain any Subject Alternative Name (SAN) values. The deprecation of Common Name was ignored for many years, and the field was often used exclusively. The Chrome developers finally had enough of the field that refuses to die: in Chrome 58 and later, the Common Name field is ignored entirely.


Chrome – Certificate warning – NET::ERR_CERT_COMMON_NAME_INVALID

The reason for this is to prevent homograph attacks, which exploit characters that are different but look similar. The lookalike characters can be used for phishing and other malicious purposes. For instance, the Latin letter “a” looks identical to the Cyrillic “a”, but from a computer’s point of view they are encoded as two entirely different letters. This allows domains to be registered that look just like legitimate domains.
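
You can see the difference for yourself by converting a lookalike hostname to the Punycode (IDNA) form that is actually used on the wire. A small sketch using .NET from PowerShell; the domain below is made up for illustration:

    # Build a lookalike of 'xyz-a.com' where the 'a' is the Cyrillic letter U+0430
    $lookalike = 'xyz-' + [char]0x0430 + '.com'

    # Convert the Unicode hostname to its Punycode form
    $idn = New-Object System.Globalization.IdnMapping
    $idn.GetAscii($lookalike)   # returns an 'xn--...' domain, clearly not xyz-a.com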

Some organizations with an internal or private PKI have been issuing certificates with only the Common Name field. Many do not know that the Common Name field of an SSL certificate, which contains the domain name the certificate is valid for, was phased out via RFC nearly two decades ago (RFC 2818 was published in 2000). Instead, the SAN (Subject Alternative Name) field is the proper place to list the domain(s), and the rules that all publicly trusted certificate authorities must abide by have required the presence of a SAN since 2012.

Publicly trusted SSL certificates have included both fields for years, ensuring maximum compatibility with all software, so you have nothing to worry about if your certificate came from a trusted CA like DigiCert.
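
If you issue certificates from an internal PKI or just need something for a lab, the key is simply to always include the DNS names as SAN entries. A minimal sketch using the built-in New-SelfSignedCertificate cmdlet; the host names are examples, and a self-signed certificate is of course only suitable for testing:

    # Create a test certificate where every DNS name ends up in the SAN field
    # (the first -DnsName value is also used as the subject Common Name)
    New-SelfSignedCertificate -DnsName 'tech.xenit.se', 'www.xenit.se' -CertStoreLocation 'Cert:\LocalMachine\My'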
Below is an example of a correctly issued certificate with Common Name and Subject Alternative Name.


tech.xenit.se – Common Name


tech.xenit.se – Subject Alternative Name

RFC 2818 – Common Name deprecated by Google Chrome 58 and later

“RFC 2818 describes two methods to match a domain name against a certificate: using the available names within the subjectAlternativeName extension, or, in the absence of a SAN extension, falling back to the commonName.

/…

The use of the subjectAlternativeName fields leaves it unambiguous whether a certificate is expressing a binding to an IP address or a domain name, and is fully defined in terms of its interaction with Name Constraints. However, the commonName is ambiguous, and because of this, support for it has been a source of security bugs in Chrome, the libraries it uses, and within the TLS ecosystem at large.

Source: https://developers.google.com/web/updates/2017/03/chrome-58-deprecations