Microsoft Defender ATP for Mac now available in Public Preview

Yesterday Microsoft released Microsoft Defender ATP for Mac in public preview, and it is now available for download and installation through the Microsoft Defender Security Center.

In the onboarding section in Microsoft Defender Security Center, if you have preview features selected, you will see how to onboard macOS machines.

You will have the option to download a standalone package or package for Mobile Device Management / Microsoft Intune.

System Requirements

Before you install Microsoft Defender ATP on macOS, make sure you meet the following system requirements [1]:

  • macOS version: 10.14 (Mojave), 10.13 (High Sierra), 10.12 (Sierra)
  • Disk space: 1GB
  • No other third-party endpoint protection software installed

Manual deployment

If you want to manually deploy Microsoft Defender ATP to your macOS devices, Microsoft has created the following guide:
https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-antivirus/microsoft-defender-atp-mac-install-manually

Microsoft Intune

If you use Microsoft Intune as a Mobile Device Management solution for your macOS devices, you can configure it to automatically onboard and deploy Microsoft Defender ATP. Microsoft's guide on how to do this can be found here:
https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-antivirus/microsoft-defender-atp-mac-install-with-intune

JAMF

If you use JAMF as a Mobile Device Management solution for your macOS devices, you can configure it to automatically onboard and deploy Microsoft Defender ATP. Microsoft's guide on how to do this can be found here:
https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-antivirus/microsoft-defender-atp-mac-install-with-jamf

Other MDM

If you are not using Microsoft Intune or JAMF but another third-party Mobile Device Management solution for your macOS devices, Microsoft has also published a guide on how to automatically onboard and deploy Microsoft Defender ATP for Mac, which can be found here:
https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-antivirus/microsoft-defender-atp-mac-install-with-other-mdm

 

Keep in mind that Microsoft Defender ATP for Mac is in Public Preview, so make sure you verify and test it before rolling it out at full scale in production!

[1] https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-antivirus/microsoft-defender-atp-mac#system-requirements



Citrix Synergy 2019

This year, Robert Skyllberg and I got to go to Citrix Synergy 2019 in Atlanta. There is quite a lot of news, so I will try to put together a short summary of the different parts.

Image from Synergy

Citrix CEO David Henshall opened by describing what has been delivered since Synergy 2018:

  • Citrix Workspace App
  • Citrix Mobile Apps
  • Citrix Casting
  • Citrix Workspace Self-Service with ServiceNow
  • Citrix Analytics
  • Citrix SD-WAN Service for Managed Service Providers
  • Citrix SD-WAN + Azure
  • Citrix Intelligent Traffic Management
  • New Citrix Endpoint Management Capabilities
  • Citrix Cloud App Control
  • Cloud Portability
  • Autoscale for Google Cloud
  • Microsoft Azure Government Support
  • Windows Server 2019 Day 1 Support

In addition to the above, they presented a long list of other features and products that have become available since then:

Citrix ADC for Azure DNS Private Zones, Citrix Workspace with Citrix Cloud Control Plane, Citrix SD-WAN for Microsoft Network Performance, Citrix MDM/MAM with EMS/Intune Enlightened, Citrix Analytics with Microsoft Security Graph, Citrix Workspace App on Google Android, Citrix ADC VPX on Google Cloud, Power Management Support for Google Cloud on Citrix Cloud, Citrix SD-WAN on GCP Marketplace, Citrix ADC on GCP Marketplace, Citrix ADC integration with Google Kubernetes, Launch partner for Google Anthos, Citrix Workspace AutoScaling integration for AWS, AWS Quick Start for Citrix Resource Locations, Citrix support for AWS Dedicated Hosts, Citrix support for AWS Identity and Access Management (IAM), Citrix Virtual Apps and Desktops integration with NVIDIA Quadro Virtual Workstation for Microsoft Azure, Citrix Virtual Apps and Desktops integration with NVIDIA Quadro Virtual Workstation for Google Cloud, Cisco HyperFlex for Citrix Cloud, HPE SimpliVity integration for Citrix Cloud, Citrix Workspace App for Samsung DeX (Galaxy S10, Note9 and Tab S4 devices), Citrix Workspace App enabling Dual Screen on Samsung tablets (Samsung Tab + monitor), Citrix Endpoint Management support for Samsung Knox (KPE/KPS), EFOTA and other APIs

After presenting what has been delivered, they moved on to how Citrix will make life easier for employees in the future, both for the business and for IT. They showed what a workday can look like today, with multitasking, context switching and the time employees have to spend on work that does not really create value but simply consumes time. They call it "employee disengagement" and stress how many employees are not allowed to, or simply cannot, focus on what they are best at and instead end up doing a lot of other things.

Citrix's vision is to build a bridge between employees and technology, strip away the superfluous and make it possible to get as much as possible done from one place.

With this, Citrix announced the next step in its work to let the business focus on what is important and minimize the time spent on what matters less: Citrix Intelligent Workspace.

In short, Citrix has integrated its acquisition of Sapho to build micro-apps directly into Citrix Workspace (formerly Citrix Receiver) and to give customers the ability to create these micro-apps themselves. From the start, Citrix will support micro-apps connected to a number of large vendors:

Micro Apps

Beyond connecting Citrix Workspace to the services above out of the box to handle everyday tasks, a micro-app builder will be released that makes it easy to build your own integrations, either entirely without code or in a more advanced way for those who need it.

In the future it will also be possible to launch local applications from Workspace, which should make it possible to reach everything you need from a single place as quickly as possible. The idea is that Workspace becomes the surface all work starts from, so the business no longer has to hunt in different places for the systems it needs, while the workflows for as many systems as possible are simplified.

Intelligent Workspace

Support is also being released for blocking keyloggers for applications in Citrix Workspace, as well as for redacting screenshots.

Anti keylogger

The announcements made around Workspace:

  • Citrix Workspace intelligent experience
    • Mobile productivity
    • Out-of-the-box micro-apps
    • Micro-app builder
  • Local application support in Citrix Workspace
  • Citrix Managed Desktops on Azure
  • Access Control Service for hybrid deployments
  • Expanded security capabilities for Citrix Workspace
  • Flexible identity support in Citrix Workspace including Google Identity
  • G Suite integration in Citrix Workspace
  • Citrix support for Intune Conditional Access APIs
  • Citrix Virtual Apps and Desktops for VMware Cloud
  • Machine Creation Services for Google Cloud

Announcements around Analytics:

  • Citrix Analytics for Performance
  • Citrix Analytics partner integrations
  • Citrix Analytics availability in Europe and Asia-Pacific

Announcements around Networking:

  • Citrix support for Windows Virtual Desktops on Azure
  • HDX Optimization for Microsoft Teams
  • Office 365 optimization for Citrix SD-WAN
  • Citrix SD-WAN for Citrix Managed Desktops and Windows Virtual Desktops
  • Citrix ADC High Availability for Google Cloud
  • Citrix ADC BLX

We have already covered a great many features and announcements, and hopefully it is interesting for some of you to know what is already here, what has just been released and what is coming.

Personally, I am very excited about Citrix ADC BLX, which now gives us NetScaler on Linux. With this, more and more possibilities start to open up and I believe, without writing too much, that this is the future. Definitely something everyone interested in ADC / NetScaler should keep an eye on.

There is quite a bit more, but for everyone's sake I think this is enough.



Office Cloud Policy Service – Preview Feature

Earlier this year Microsoft announced a new cloud-based service that allows administrators to create and manage policies for Office ProPlus users in their tenant. The service is called "Office Cloud Policy Service", or "OCPS" for short. These policies are created and managed via a web-based portal and are then enforced on members of an Azure Active Directory security group.

The settings that you can apply in your OCPS policies include many of the same settings that you find in the traditional user-based settings in Group Policy. The best thing about OCPS is that it doesn't require any on-premises or MDM infrastructure to work; it's all cloud based!
Even though it's completely cloud based, you shouldn't see OCPS as a replacement for Group Policy, but rather as an extension. That's because OCPS policies apply to devices even if they aren't domain joined or MDM enrolled (where Group Policy can't be applied); they apply to all devices where the user is logged in to Office ProPlus. Note that OCPS only applies user-based settings, not machine-based settings like Group Policy does.

What are the requirements for getting started?

The requirements for getting this to work are neither many nor complex:

  • Office ProPlus must be at least version 1808 (a quick way to check the installed version is sketched after this list)
  • Users must sign in to Office ProPlus with an Azure AD account. This account can be either synced or cloud-only.
  • Security groups in Azure AD that contain the appropriate users that you want to apply a policy to. The groups can be synced or cloud-only as well.
  • In order to manage OCPS you must be either a Global Administrator, a Security Administrator or a Desktop Analytics Administrator
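As a side note, if you want to verify that your clients meet the first requirement before you start targeting them with OCPS, here is a minimal PowerShell sketch (assuming a Click-to-Run installation of Office ProPlus; the registry path below does not exist for MSI-based installs) that reads the reported version:

```powershell
# Read the installed Office ProPlus (Click-to-Run) version from the registry.
# The path only exists for Click-to-Run installations, not for MSI-based Office.
$c2r = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration' -ErrorAction SilentlyContinue

if ($c2r) {
    [PSCustomObject]@{
        VersionToReport = $c2r.VersionToReport   # e.g. 16.0.10730.x corresponds to version 1808
        UpdateChannel   = $c2r.CDNBaseUrl        # URL indicating which update channel the client follows
    }
}
else {
    Write-Warning 'No Click-to-Run installation of Office was found on this machine.'
}
```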

 

How to create your first policy

Creating a policy in the web-based portal is very simple and straightforward.

1: Start by signing in to the OCPS portal at https://config.office.com/officesettings and choose "Go to Office policy management"

2: Click on “Create”

3: You will now see the following fields that need to be specified

4: After you’ve specified a name for your policy you can go ahead and click the “Select group” button to be able to specify the group this policy will apply to. Note that you can search for specific groups in the search box, or just choose a group from the list. Note that you can only select one group per policy.

5: After you’ve selected your group you can go ahead and click the “Configure Policies” button to actually start applying setting for the policy

6: There is a search function to easily find the settings you are interested in. For example, I've searched for "Outlook" because I'm interested in preventing the attachment preview functionality, so I can click on the setting to start configuring it

When we click on a setting we get a description that will tell us what the setting controls and what will happen if we change the configuration of the setting.

7: After I’ve configured all the setting I want to be a part of my policy, I can go ahead and click “Create” on top of the policy wizard

Managing a policy

After you’ve created a policy it will show up in a list so you can easily edit, delete, copy or reorder its priority.

If you edit a policy you can see which settings have been configured by filtering the "Status" column to "Configured".

This will show you only the settings that are currently configured in this policy, so you can easily modify the configuration and also verify which settings actually apply to your users.
In my example I only have the attachment preview setting we configured earlier.

Note that the status is set to “Configured”

So what is OCPS good for and when should it be used?

Like I previously mentioned, OCPS is a way for administrators to control the behavior and configuration of Office ProPlus on all devices a user logs into. It doesn't have to be a domain-joined or MDM-enrolled device. The policies are applied once a user logs in and activates Office ProPlus.

I believe that OCPS is a very good fit primarily for cloud-only organizations that don't have, or don't need, an on-premises server infrastructure with Active Directory and Group Policy management, but still want the ability to secure and control their users' Office ProPlus installations.
It's also a good tool for organizations that already have Group Policy in place but want to apply similar configuration to devices that are neither domain joined nor MDM enrolled.
For example, if a CEO has several corporate and private devices he or she logs into Office with, we might want to enforce some settings for the Office applications on those devices that would normally be out of our control. This might be because the CEO could be targeted by different types of harmful attacks, for example through macro-enabled Word documents.
This could be prevented on the CEO's personal devices as well by setting up an OCPS policy that disables macros in Word.

 

If you have any questions or would like to know more about Office Cloud Policy Service (OCPS)
feel free to contact me at oliwer.sjoberg@xenit.se or by leaving a comment.



New baseline policies available in Conditional Access

Last week Microsoft started rolling out three new baseline policies in Conditional Access.

  • Baseline policy: Block legacy authentication (Preview)
  • Baseline policy: Require MFA for Service Management (Preview)
  • Baseline policy: End user protection (Preview)

Baseline policies in Conditional Access are part of baseline protection in Azure Active Directory (Azure AD), and the goal of these policies is to ensure that you have at least a baseline level of security enabled in Azure AD.

Conditional Access is normally part of a Premium SKU (P1 or P2) for Azure AD, but baseline protection is available for all editions of Azure AD, including Free.

Here is a walk-through of all the available baseline policies that Microsoft offers and how they protect your organization.

Require MFA for admins

This policy requires Multi-Factor Authentication (MFA) for accounts that are members of directory roles with more privileges than a normal account. This policy also blocks legacy authentication, such as POP, IMAP and older Office desktop clients.

The directory roles that are covered by this policy are:

  • Global administrator
  • SharePoint administrator
  • Exchange administrator
  • Conditional access administrator
  • Security administrator
  • Helpdesk administrator / Password administrator
  • Billing administrator
  • User administrator

Block legacy authentication

This policy blocks all sign-ins using legacy authentication protocols that don't support Multi-Factor Authentication, such as:

  • IMAP, POP, SMTP
  • Office 2013 (without registry keys for Modern Authentication)
  • Office 2010
  • Thunderbird client
  • Legacy Skype for Business
  • Native Android mail client

However, this policy does not block Exchange ActiveSync.

Require MFA for Service Management

This policy requires users logging into services that rely on the Azure Resource Manager API to perform multi-factor authentication (MFA). Services requiring MFA include:

  • Azure Portal
  • Azure Command Line Interface (CLI)
  • Azure PowerShell Module

End user protection

This policy protects users by requiring multi-factor authentication (MFA) during risky sign-in attempts to all applications. Users with leaked credentials are blocked from signing in until a password reset.

Once the policy is enabled, users are required to register for MFA within 14 days of their first login attempt. The default method of MFA registration is the Microsoft Authenticator App.

Recommendations

Here are a few recommendations before you enable these policies:

  • If you have privileged accounts that are used in scripts or Azure Automation, you should replace them with managed identities for Azure resources or service principals with certificates (see the sketch after this list). You could also exclude specific user accounts from the baseline policy, but that should only be a temporary workaround.
  • Make sure you exclude the emergency-access / break glass account(s) from these policies
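To illustrate the first recommendation, here is a minimal sketch of how a script could authenticate with a certificate-based service principal instead of a privileged user account; the application ID, tenant ID and certificate thumbprint below are placeholders you would replace with your own values:

```powershell
# Sketch: sign in to Azure as a certificate-based service principal instead of a
# privileged user account, so the MFA baseline policies never apply to the script.
$appId      = '00000000-0000-0000-0000-000000000000'     # application (client) ID - placeholder
$tenantId   = '11111111-1111-1111-1111-111111111111'     # Azure AD tenant ID - placeholder
$thumbprint = '0123456789ABCDEF0123456789ABCDEF01234567' # certificate in the machine store - placeholder

Connect-AzAccount -ServicePrincipal `
                  -ApplicationId $appId `
                  -TenantId $tenantId `
                  -CertificateThumbprint $thumbprint

# From here the script runs in the service principal's context, for example:
Get-AzSubscription
```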

Read more about baseline protection and baseline policies on docs.microsoft.com



5 Things I check after I’ve installed Microsoft Edge Dev (Chromium)

Many of you probably already know that Microsoft has released its new Microsoft Edge built on Chromium as a Dev build for the public so we can try it out.

As usual when it comes to new things we want to personalize it, so I would like to share my first 5 things that I customize in this new version of Edge.
Before we begin, remember that this is a Dev-build (Version 76.0.152.0) so the things that I mention below might change when it is fully released.

Also worth mentioning is that this release is not built into Windows 10, which means that you have to install the browser like any other application out there.
You can find the download link at the bottom of this page.

 



Building native multi-platform tools with ReasonML, esy and Azure Pipelines

In this post we will go over how I implemented multi-platform packages for Reason with the help of esy and Azure Pipelines, and how they are published on npm. The high-level overview of the process is this:

  1. Create an npm release for each platform via esy release
  2. Copy all the different releases into a folder
  3. Create a package.json for the whole package and a postinstall.js file
  4. Copy README and LICENSE
  5. Create a placeholder file that we can replace in postinstall
  6. Package up and release to npm

If you just want to get this setup for free, you can clone and use hello-reason as a base, or use the excellent esy-pesy tool to bootstrap your project.

What is esy?

esy is a package management tool that has a similar workflow to npm but is built from the ground up to be fast, handle project isolation and work with native code. It is currently optimized for Reason and OCaml but can in theory be used for any language; there are multiple examples of C/C++ projects packaged as esy packages.

I previously wrote a short post about ReasonML and esy that you can find here.

What is Azure Pipelines?

Azure Pipelines is part of Microsoft's DevOps offering. It provides both CI and CD via builds and releases. Using different "agents" it's possible to build on the three major platforms, so there is no need for multiple CI services. It's free for open source and has about 30 hours free for private projects, and you can always host your own agent to have unlimited free builds.

It has something they call jobs, which can be used either to run tasks that don't depend on each other in parallel or, more interestingly for this use case, to build on multiple platforms in a single pipeline. Jobs can also depend on each other, and one job can provide input to the next.

There is a concept of artifacts: something produced by the continuous integration pipeline that can be consumed by another job, picked up by a release, or downloaded. Another useful feature is that it's possible to split up the build definition into multiple files using templates. A template can almost be seen as a reusable function; you can break out some part of a job into a template and then pass parameters to it.

So how does the setup really work?

The rest of this blogpost will be going over the ins and outs of the setup I created for reenv, a dotenv-cli and dotenv-safe replacement written in Reason and compiled natively for Windows, macOS and Linux. It’s installable via npm but because it’s compiled to a native binary it’s about 10-20 times faster than the node equivalent.

This is the first part of the build definition. We start off by giving it a name and declaring when it should trigger a build; in this case any push to master, a tag or a PR will trigger one. Then we use the template feature to declare the same job three times, once for each platform.

Building the package

Let’s go over the template that we use to build the different platforms.

First we declare default parameters, in this case we just use macOS as default. Then we set the name of the current job and the agent OS from the parameters.

The steps we go through to build each platform are: make sure the agent has a recent Node version installed and install esy via npm; install all dependencies with esy; run esy pesy to make sure the build config is correct; and then build with the command specified in package.json. We then create the docs, copy them, and publish the generated artifacts. We will go over the testing template in more detail next. The last step is to create the npm release for the current platform and upload the artifact. The release is done via the esy release command, which bundles the whole sandbox that esy has for the project and creates relocatable dlls and executables.

Running tests

We run esy test, which runs the command we have declared in package.json, and tell it to continue even if there is an error. Then we save the path to the junit.xml in a variable that we can use to publish a test report. Sadly, there is some difference in where the report ends up on macOS/Linux versus Windows, so there are two different publish steps, one for when it's not Windows and one for when it is.

It will then generate a beautiful report like this, and yes, the tests are really that fast.

Creating the release

If this was a PR we would be done, as everything builds and we don't have to create a release for it. But if this was either a push to master or a tag, we want to build a release that anyone can install. I use tags to create actual releases to npm; master will generate the package, but it will not be pushed anywhere.

This job is also declared in the main file that was posted as the first image. We run it on an Ubuntu machine and it has a condition to not run on pull requests. It also depends on the previous steps, as we need the released artifact that we create for each platform to make a combined package.

First we make sure we have a recent node version installed. Then we create a folder to put all the releases in.

Then for each platform we download the artifact that was created, create a separate folder inside the previously created release folder and copy the needed files into the newly created folder.

We then run a node script that creates a package.json without dependencies by reading your project's package.json. It also copies your LICENSE and README.md if they exist and copies a postinstall.js file. This script figures out what platform the package is installed on, copies the correct files into place and runs the esy-generated postinstall script for the correct platform. The script that esy generates will replace placeholder paths so that they are actual paths on the consumer's machine. The last step is to create a placeholder file and make it executable.

This script is interesting on its own as it's both bash and batch at the same time; to learn more about it you can read this. The placeholder just echoes "You need to have postinstall enabled". As it's both bash and batch at the same time, it's runnable on all three platforms without modification. At first I had a JS file, but npm was "smart" and decided that my package should be run with Node, which broke the binary.

Testing the release before releasing it to npm

If the build was triggered because of a tag we want to test that our release will work on all the platforms before releasing it on npm. This is done for all the platforms.
First we make sure we have a recent version of Node and then we download the packaged release artifact. We then install the package from the tarball and run the binary; the pipeline will fail if the command exits with a non-zero exit code.

Releasing to npm

At the moment of writing, Azure Pipelines doesn't support the release workflow in YAML, which includes release gates and other powerful features. I have a setup that I did through the UI that takes the package and basically runs npm publish on it. I will write a follow-up to this post when the YAML feature is released, but getting it set up through the UI is pretty straightforward.



The best features in Varonis 7.4

The big new update to Varonis (7.4) was released about a month ago. Now that I have been upgrading and using it for a while, I'm starting to get a feel for the new features, and it feels like a good time to talk a bit about what was released.

The first big thing I want to show you is the new dashboards that help you get a good overview of the status of your environment. You can use the predefined dashboards or create your own.

Active directory dashboard:

GDPR dashboard, where you can see if you are compliant with the regulation and if you have control over your sensitive data:

The next thing I really like is that it is now possible to search through the logs via the web interface. It is more responsive and the user interface looks great. The reason it's more responsive is that this version uses SOLR. Varonis promises significant performance improvements and that investigations will go lightning-fast with SOLR, and I can definitely agree with that. Searching and investigating alerts in the web interface works perfectly.

Another interesting feature that has been added to the web interface is the integrated incident response playbooks that can be used when handling incidents from DatAlert. As you can see in the picture below, you get detailed information about what happened and which next steps to take.

Varonis Edge has had multiple new threat models added to DatAlert, so you can now, among other things, find out if data has been exfiltrated via DNS tunneling, if DNS cache poisoning has occurred, or if data has been uploaded to external websites.

Varonis Edge is a product that is used to analyze metadata from perimeter systems like DNS, VPN and web proxies. These kinds of devices often write their logs in very different ways, and it can be hard to obtain interesting and useful data from them. Edge is used to filter out only the interesting metadata from the perimeter devices and present the events in a more readable way for the user. With help from Varonis Edge you can, for example, find out whether a user was accessing the network from their usual location, whether sensitive data was accessed, whether the event occurred during the user's normal time window, and more.

If you want to know more about the features in the latest version or are interested in Varonis products don’t hesitate to send me an email at rickard.carlsson@xenit.se



Resource Graph Explorer and common use cases

This is my third post about Resource Graph and this time I will cover the new Explorer in the Azure portal and some use cases where I have found Resource Graph really helpful.

You can find my previous posts here:

Azure Resource Graph – Get started

Azure Resource Graph – Azure Policy

Resource Graph Explorer

The new Resource Graph Explorer gives us the opportunity to create, save and pin the queries we make in Resource Graph. In the Explorer you use the Kusto query language directly, so there is no need to use PowerShell or the CLI.

More info about Kusto can be found here

In the explorer we can build queries by just browsing and clicking the resources and properties we are looking for.

In the example below I first added virtual machines and then vmSize under hardwareProfile. Then I simply added the size I was looking for, in my case Standard_B1ls.
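For reference, the query the Explorer builds ends up looking roughly like the sketch below (the exact text generated by the portal may differ slightly). It can be pasted straight into the Explorer, or run through the Az.ResourceGraph PowerShell module:

```powershell
# Requires the Az.ResourceGraph module: Install-Module Az.ResourceGraph
# Sketch: all virtual machines of size Standard_B1ls across your subscriptions.
$query = @"
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| where tostring(properties.hardwareProfile.vmSize) =~ 'Standard_B1ls'
| project name, resourceGroup, subscriptionId
"@

Search-AzGraph -Query $query
```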

It also gives us an easy way to save and reuse our queries, with the save and open query options shown in the picture above. No need to memorize queries or save them elsewhere.

Another nice feature is that you can pin your results to visualize them on your dashboard.

View from dashboard

Use cases

Housekeeping

It's so easy to create resources in Azure today, which is great! But when we are cleaning our environments of retired resources, we tend to focus on the ones that generate the most cost. This leaves us with orphaned resources that might no longer be in use. Let's take availability sets as an example: we create them to keep our SLAs up for virtual machines, but after the virtual machines are retired, the availability sets might still be there.

Let’s use Resource graph explorer to find all Availability sets and show the property of virtual machines.

This can be done through, for example, PowerShell, but the scripts tend to become quite advanced just to get a property of resources across subscriptions; this is not the case with Resource Graph.
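A sketch of such a query could look like this (assuming the properties.virtualMachines array that availability sets expose; sets with a count of zero are the orphans we are after):

```powershell
# Sketch: list availability sets and how many virtual machines each one still references.
$query = @"
Resources
| where type =~ 'microsoft.compute/availabilitysets'
| project name, resourceGroup, subscriptionId,
          vmCount = array_length(properties.virtualMachines)
| order by vmCount asc
"@

Search-AzGraph -Query $query
```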

Evaluate Policy Impact

Perhaps one of the most common use cases for Resource Graph is to evaluate policy impact before even creating the policy. We can query our resources in the same way our policy would evaluate them. Take storage accounts as an example: perhaps we would like to deploy a policy that denies creation of storage accounts allowing connections from "All networks". Before doing this, we can run a query in Resource Graph and see the result for already created resources.
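As an illustration, a query along these lines (assuming that a defaultAction of Allow on networkAcls corresponds to the "All networks" setting) shows which existing storage accounts such a policy would have denied:

```powershell
# Sketch: storage accounts whose firewall default action is Allow, i.e. open to all networks.
$query = @"
Resources
| where type =~ 'microsoft.storage/storageaccounts'
| where tostring(properties.networkAcls.defaultAction) =~ 'Allow'
| project name, resourceGroup, subscriptionId
"@

Search-AzGraph -Query $query
```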

If you are using PowerShell or the CLI, it's easy to create a policy based on your Resource Graph query; take a look at the posts I linked to at the start of this post.

Get Resource changes

A recently released preview feature makes it possible to find changes made to your resources. To do this we can use the REST API, the Activity log or the compliance view in Azure Policy. All of the approaches to change history below are provided through Resource Graph. A change is triggered when a property of a resource is added, removed or modified.

Activity log

Change history is available as a preview in the Activity log.

From the Activity log you can drill down to an event and then click “Change History (Preview)”.

In my example below I changed the setting on a storage account to allow access from "Selected networks".

Azure Policy

From the compliance view in Azure Policy, go to your initiative or definition and then select the resource you would like to view change history for. From here we get a similar experience to the one described for the Activity log, and we can easily determine which properties have changed.

Choose detection time to show exactly what changed for the resource.

API

It's also possible to get resource changes through the API. I won't cover all the details in this post, but follow the link below to get started.

Through the API, the process can be broken down into three steps (a rough sketch of the first two calls follows the list).

  1. Enter an interval and resource ID (find when changes were detected).
  2. Use the change ID to get what properties changed (see what properties changed).
  3. The response is JSON formatted with two configurations, one snapshot from before the change and one from after. Compare both to determine what properties changed for the given change ID.
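Purely as a sketch of what the first two steps can look like from PowerShell (the endpoint, api-version and payload shape below are assumptions based on the preview documentation and may change while the feature is in preview):

```powershell
# Sketch: call the Resource Graph change history preview API.
$resourceId = '/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<name>'
$baseUri    = 'https://management.azure.com/providers/Microsoft.ResourceGraph'
$apiVersion = '2018-09-01-preview'   # assumed preview api-version
$headers    = @{ Authorization = "Bearer $((Get-AzAccessToken).Token)" }  # requires a recent Az.Accounts module

# 1. Find when changes were detected within an interval.
$body = @{
    resourceId = $resourceId
    interval   = @{ start = '2019-05-01T00:00:00Z'; end = '2019-05-07T00:00:00Z' }
} | ConvertTo-Json -Depth 5

$changes = Invoke-RestMethod -Method Post -Uri "$baseUri/resourceChanges?api-version=$apiVersion" `
                             -Headers $headers -ContentType 'application/json' -Body $body

# 2. Use a changeId from the response to fetch the before/after snapshots.
$detailBody = @{
    resourceId = $resourceId
    changeId   = $changes.changes[0].changeId   # response shape assumed from the preview docs
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "$baseUri/resourceChangeDetails?api-version=$apiVersion" `
                  -Headers $headers -ContentType 'application/json' -Body $detailBody
```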

To get a better understanding of how to use the API for resource changes, take a look here.

Summary

Change History

I think the new resource change view is great, and it gives a simple way to see how your resources have been modified. The possibility to do so both through the Activity log and the API makes it flexible and useful for the most common scenarios, and through the compliance view it's a great way to track changes that might have affected your compliance level.

Resource Graph Explorer

With the new Explorer we get a similar experience to the one we are familiar with from Log Analytics, and the language is Kusto-based just like in Log Analytics. Build a library of commonly used queries and pin them to your dashboard to keep track of the resources and properties that are important in your environment.

If you have any questions or scenarios you would like to discuss you can reach me at

Tobias.Vuorenmaa@Xenit.se



Virtual attendance to Microsoft Build 2019

New features and cool stuff – Microsoft 365, Office 365, Azure, Edge, Windows 10, and everything else Microsoft

There are so many cool things you can do with new types of disruptive technology that were not even imaginable a decade ago. Impressive progress has been made across several disciplines within IT, and it doesn't look like it will slow down at all. Automation, augmented reality and analytics, AI-driven development, and digital twins are just a few areas that come to mind as examples of groundbreaking new tech trends, courtesy of Gartner's report Top 10 Strategic Technology Trends for 2019. All of these new technology trends are possible thanks to extremely talented researchers, mathematicians, and developers, to name a few. A lot of this new tech is built on or with technology from Microsoft, and that's why Microsoft Build is such an interesting conference.

Even though my daily work revolves around project management, end user computing and the operations side of digital infrastructure, I'm always curious about what's to come and try to find the next big thing or cool features that can improve the EUC experience for all of our current and future customers.

One impressive technology, albeit rather old, is virtual presence and live online streaming. That's something I'm very thankful for on a day like this. Last evening and night was the first day of Microsoft's annual developer conference Build in Seattle, WA, and I was able to watch a few hours of presentations from my couch instead of having to go to the US. Even though attending in person would have been a bit more exciting and fun, my couch is much better than nothing at all. 😃

Being able to listen to Microsoft's vision and plans for the future, and to learn about the latest features from the couch, might not sound reasonable to everyone, but it is a completely logical move to me.

After a good night’s sleep I have been trying to come up with a list of the most interesting parts from the presentation I saw last night, from a EUC standpoint. Obviously, there will be lots and lots of more neat new features and updates to products presented during the conference but that might be for another blog post.

Microsoft Edge Chromium

Three major updates were announced for Microsoft Edge last night. Thanks to Microsoft's decision to move to a fork of the open source browser Chromium, my bet is that we will see a lot more news around the browser in the months to come.

If you would like to try the new public version of the Edge Chromium browser you can do so here!

  1. IE Mode

This is a big one for EUC enthusiasts like myself. There has always been a push-pull struggle when deciding which browser to use for end users in an enterprise environment, and that usually, though not always, has to do with compatibility.

Microsoft’s announcement last night hopefully means that we won’t have to trade off new features in modern browsers and being able to work effectively in old and legacy LOB applications. I think we all can agree on the fact that most bigger enterprises have a handful of “extremely important” old web apps that won’t disappear in the forseeable future.

What Microsoft announced is the ability for Edge Chromium to load an old web app straight into the new browser, but with the old Internet Explorer rendering engine. Previously Edge started a separate IE process and users had to switch between the two browsers; this news means that you can have IE tabs and Edge Chromium tabs within the same browser, really neat.

IE Enterprise Mode works well, but I think this will be much much better. We’ll see!

  2. Collections

Another cool feature presented during the keynote was Collections. In summary, I'd say it is the next generation of the old favorites feature. You will be able to create collections of links, pictures, text, and other information within the browser.

If you want to it’s then possible to export/share that collection with your co-workers via Excel or Word. The Edge Chromium browser generates good looking files with headers, aligned pictures, and URLs/sources.

  3. Privacy

You will be able to select one of three predefined privacy configurations: unrestricted, balanced or strict. Strict mode blocks most trackers, but sites might break. Unrestricted is the complete opposite, and balanced mode is what we Swedes call lagom: not too much, not too little tracking.

The World’s Computer (Azure)

It’s no surprise to see that there’s a lot of focus on Microsoft Azure during the conference. Some interesting news that might be of extra interest for the EUC community I’d say are these:

Of course, there are loads of other new features, but I found these to stand out.

To see all Microsoft Azure announcements, check out this link.

Windows Terminal

WOW! Finally, the old terminal will be replaced with something new! The new terminal will support shells like Command Prompt, PowerShell, and WSL.

To get a glimpse of the amazing future of Windows Terminal, check this out.

The new console is open source, and you can build, run, and test the app right now. Their repo can be found here.

Key features according to Microsoft are:

  • Multiple tabs
  • Beautiful text (GPU accelerated DirectWrite/DirectX-based rendering, emojis, powerline, icons, etc.)
  • Lots and lots of configuration possibilities (profiles, tabs, blur/transparency/fonts… you name it)

So, get a new graphics card and get started working in the new terminal 😃

Office 365 Fluid Framework

A new framework called the Fluid Framework was announced. The new framework will let users work together in real time: charts and presentations will be updated in an instant, and translations into loads of languages will happen live.

During the keynote, the presenter wrote in a document at the same time as others did, and it really looked like there was no latency. The live translation part was really cool, and I recommend watching it in action to understand why this might be of real interest for your business.

Watch it in action here.

Windows Hello, FIDO2 certification

Windows Hello is now FIDO2 certified. What does that mean?

Without digging into the details, the new certification hopefully means that more websites and online services will be able to allow other forms of authentication than just username and password. Passwordless authentication has proven secure, and with Microsoft adhering to the new specification it will be easier to allow user-friendly authentication methods like fingerprint and face recognition.

FIDO2 is the overarching term for FIDO Alliance’s newest set of specifications. FIDO2 enables users to leverage common devices to easily authenticate to online services in both mobile and desktop environments. The FIDO2 specifications are the World Wide Web Consortium’s (W3C) Web Authentication (WebAuthn) specification and FIDO Alliance’s corresponding Client-to-Authenticator Protocol (CTAP).

Windows Subsystem on Linux 2 (WSL2)

The new version of WSL will run on a completely open-source Linux kernel that Microsoft will build themselves. There are probably hundreds of reasons why Microsoft is doing this, but one of them is performance. The kernel version will be 4.19, which is the same version used by Azure.

The new WSL version will make it possible to run containers natively, which means that locally hosted virtual machines won't be necessary anymore.

Like before there won’t be any userspace binaries within WSL which means that we will still be able to select which flavor we want to run.

The first public versions of WSL2 will be available sometime this summer.

Honorable mentions or too cool not to mention

  • Mixed reality services within Teams and Hololens, for example, the live Spatial meetings using AR
  • Hololens 2 and the Mittel presentation
  • Cortana updates where the AI Bot is integrated and helps even further with scheduling and assisting you during your workday
  • All news regarding containers, Docker, and Kubernetes/AKS
  • Microsoft's new Fluent Design System
  • Xbox Live for new devices (Android and iPhone) and new collaborations with game studios
  • Some kind of Minecraft AR game for mobile phones being released on May 17

Psst. Did you know that you can watch loads of presentations and also the keynote here?

What do you think? Have I missed anything obvious?



FSLogix and Microsoft – When and How!

Since Microsoft acquired FSLogix in November there has been some uncertainty regarding licenses and most importantly when it will be available through Microsoft.

Ever since Microsoft acquired FSLogix there has not been much information about what's happening. We knew Microsoft had their eyes on the Office 365 Container solution and potentially Profile Containers as well, but what would happen with the rest of the suite, such as App Masking and Java Redirection? Would they disappear, or would Microsoft continue the support and development of the entire suite?

Then Microsoft released their new Windows Virtual Desktop to public preview and, at the same time, announced their intentions for the FSLogix suite!

As you are now probably aware, FSLogix will be a part of Windows Virtual Desktop, but it does not stop there; see below for when you are entitled to use the FSLogix suite.

Licensing

FSLogix will be available at no additional cost if you have one of the following Microsoft licenses:

  • Microsoft 365 F1, E3 or E5 licensing
  • A3 and above for educational and non-profit
  • Windows 10 Enterprise E3 or E5
  • or even if you have RDS CALs

 

Where and when can I use it?

The really good news here is that it's not only available in Azure; you can use it wherever you want, even on-prem! You cannot acquire the license just yet, it will be available in June, but you can request a trial which will give you all the functionality and features in the meantime. Don't hesitate to contact me if you would like a trial to start benefiting from this amazing product today!

Which FSLogix apps are included?

  • Office 365 Containers
  • Profile Containers
  • Java Redirection
  • App Masking

 

This is really good news, since this is a solid product that solves real headaches. I'm looking forward to this, and so should you! If you are looking to implement this solution in your environment, don't hesitate to contact me at Jonas.Agblad@Xenit.se or leave a comment.

 

Don’t miss my earlier posts about FSlogix for more information:

What is FSLogix Cloud Cache?

Keep your FSLogix VHD-files Optimized!

Convert Citrix UPM to FSLogix Profile Containers

Teams in your multi-user environment done right!

Outlook Search index with FSLogix – Swedish

FSLogix Profile Container – Easy and fast Profile management – Swedish

Office 365 with FSLogix in a Multi-user environment – Swedish