Configuration Management of Non-Azure VMs with Azure Automation – Part 3

We’ve now reached the final article in this three-part series covering Configuration Management in Azure Automation.  In Part 1, I discussed the Inventory tool and how to onboard an AWS EC2 virtual machine to Azure.  Part 2 covered Change Tracking and how to monitor changes to various resources on the AWS instance.  In this article, Part 3, I will cover Azure State configuration (DSC) and how to register an AWS VM as a DSC node to apply a desired state.

An Overview of State Configuration (DSC) Components
Azure State configuration builds upon Powershell’s Desired State Configuration (DSC), which was introduced in version 4.0 of Powershell.  These are the main components of Azure DSC, along with a brief description of each:

  • Configuration files – Powershell scripts written in a declarative syntax that define how a target should be configured.  These contain building blocks that define the nodes to be configured and the resources to be applied (see the example configuration after this list).
  • Resources – These are Powershell modules that contain the code that determines the state of a machine.
  • Local Configuration Manager (LCM) – This runs on the target nodes.  It serves as the engine that consumes and applies DSC configurations to the target machine.  It also has settings that determine how often a node checks for new configs from the Pull server in Azure Automation and what action to take when a node drifts from its desired state.
  • DSC Metaconfigurations – These files define the settings for the Local Configuration Manager (LCM).
  • Pull Server – Located in Azure Automation, it functions as a repository for the configuration files that are applied to DSC target nodes.
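
To make the first bullet more concrete, here is a minimal sketch of a configuration file.  The configuration name, node name and resource are illustrative only (this is not the configuration used later in this article); it simply declares that the Windows Time service should be running on the target node:

Configuration EnsureTimeService
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        # Declare the desired state rather than the steps to get there
        Service W32Time
        {
            Name  = 'W32Time'
            State = 'Running'
        }
    }
}

# Running the configuration produces a MOF file for the node in .\EnsureTimeService
EnsureTimeService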

Why Azure State configuration (DSC)
Leveraging DSC as a service in Azure eliminates the need to deploy and maintain an on-premises Pull server, along with all of its supporting pieces such as SSL certificates.  Also, State configuration (DSC) data can be forwarded to Azure Monitor, which provides search capabilities with Log Analytics and alerting on compliance failures.  Furthermore, if you’re doing the DevOps thing and implementing agile practices, State configuration fits seamlessly into continuous deployments with Azure DevOps.

Prerequisites
To complete the tasks in this example, the following will be needed:
1. An Azure automation account
2. An AWS EC2 VM
3. WMF 5.1 – on the AWS VM
4. WinRM – on the AWS VM

Steps to Onboard the AWS EC2 VM to Azure
Even though DSC is a part of Azure’s Configuration Management, any machine that it manages has to be onboarded separately from Inventory and Change Tracking.  There are a few ways to onboard an AWS instance to State configuration (DSC):

  1. The AWS DSC Toolkit created by the Powershell Team – this method involves installing the toolkit using PSGet, then running commands to log in to your Azure subscription and register the AWS EC2 instance.  Go here to get more details.
  2. A Powershell DSC script – involves running a Powershell DSC configuration script locally on the AWS EC2 instance.  The script will generate a Metaconfig file that contains settings for the Local Configuration Manager (LCM). This Microsoft doc explains more.
  3. Azure Automation Cmdlets – a quick method to produce the DSC Metaconfig file using AzureRm cmdlets in Windows Powershell on the AWS VM.

Options 1 and 3 will allow you to register the node in Azure without having to write a Powershell DSC configuration script to generate the Metaconfiguration.  This is fine as long as you’re comfortable with the default LCM settings. In this case, I will use Option 3 since it’s simple and this is only for demonstration purposes.  Here are the steps to be executed on the AWS VM:

  1. In a Powershell console, install the AzureRm module from the Powershell Gallery using Install-Module AzureRM
  2. Log in to Azure using Login-AzureRmAccount
  3. Download the Powershell DSC Metaconfigurations from Azure Automation with the following Powershell command:
    $Params = @{
        ResourceGroupName     = 'myautogroup'
        AutomationAccountName = 'myautoacct1'
        ComputerName          = @('IP address or computer name') # this will be the Private IP, not the Public
        OutputFolder          = "$env:UserProfile\Desktop\"
    }
    Get-AzureRmAutomationDscOnboardingMetaconfig @Params
    
  4. Run Set-DscLocalConfigurationManager -Path "$env:UserProfile\Desktop\"
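
Before checking the portal, you can also confirm locally that the LCM picked up the new settings.  This is just a quick sanity check, not part of the onboarding itself:

# Confirm the LCM is now set to pull its configuration from Azure Automation
Get-DscLocalConfigurationManager | Select-Object RefreshMode, ConfigurationMode, ConfigurationDownloadManagers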

To verify that the AWS VM has been successfully onboarded, go to the Automation account in the Azure portal.  Under Configuration Management, click on State configuration (DSC) to go to the main page, where it shows the EC2 instance and its configuration status.

The next step is to assign a configuration to the node.

Add and Assign State Configurations
Now that the AWS VM is registered as a DSC node in Azure, configuration files can be assigned to it.  This can be done by composing your own configuration files and uploading them to Azure, or by using the ones available in the Gallery.

If you select “Configurations”, there’s an option to upload a configuration file or compose one in the Portal.  I will click “Add” and upload a previously created config file called “WindowsFeatureSet.ps1” that will ensure the IIS and Web Server features are enabled.
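
I’m not reproducing my exact script here, but a configuration along these lines is the general shape of it (the resource names are illustrative; adjust them to the features you need):

Configuration WindowsFeatureSet
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        # Ensure the Web Server (IIS) role is installed
        WindowsFeature WebServer
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

The WindowsFeature resource ships with the built-in PSDesiredStateConfiguration module, so nothing extra needs to be installed on the node.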

Next, browse to the location of the configuration script file and click “OK” to import it.

I’m not going to walk through the steps of writing a DSC script in detail, but only provide an overview of assigning it to a node.  Once the import is complete, the config file will be available under “Configurations”.

Before it’s assigned to a node, the uploaded file needs to be compiled into a MOF document by a compilation job in Azure Automation.  To begin this process, select the imported config file and click on the “Compile” button.

Once complete, the file will show as compiled, and the resulting node configuration will be named in the format configurationname.nodename.  At this point, the configuration can be assigned to the node.

To assign the configuration, select the DSC node from the Nodes screen and click on the “Assign node configuration” button.

Confirm the assignment by clicking “Ok”.

The following figure shows that the configuration file has been assigned to the DSC node.

Here I remoted into the AWS VM and verified that IIS has been installed.
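
Another quick way to verify is from a Powershell console on the AWS VM:

# Verify the Web Server (IIS) role is now installed
Get-WindowsFeature -Name Web-Server | Select-Object Name, DisplayName, InstallState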

Powershell commands to manage State configuration (DSC)
The following Powershell script and commands can be used to complete some of the above tasks that were done in the Portal.

Script to upload config file and compile it:

Import-AzureRmAutomationDscConfiguration `
    -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 `
    -SourcePath "$env:UserProfile\Desktop\AzureAutomationDsc\WindowsFeatureSet.ps1" `
    -Published -Force

$jobData = Start-AzureRmAutomationDscCompilationJob `
    -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 `
    -ConfigurationName WindowsFeatureSet

$compilationJobId = $jobData.Id

Get-AzureRmAutomationDscCompilationJob `
    -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 `
    -Id $compilationJobId

Command to view DSC Nodes

Command to view the status of a DSC compilation job

Command to view a DSC configuration
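
The screenshots of those commands aren’t reproduced here, but the corresponding AzureRm cmdlets look roughly like this, using the same resource group and account names as above:

# View the DSC nodes registered with the Automation account
Get-AzureRmAutomationDscNode -ResourceGroupName myautogroup -AutomationAccountName myautoacct1

# View the status of a DSC compilation job ($compilationJobId comes from the script above)
Get-AzureRmAutomationDscCompilationJob -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 -Id $compilationJobId

# View a DSC configuration stored in the Automation account
Get-AzureRmAutomationDscConfiguration -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 -Name WindowsFeatureSet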

Troubleshooting – Notes from the Field
During the process of onboarding the AWS VM to Azure, I ran into a couple of errors when running the Set-DscLocalConfigurationManager command.

Problem 1 – Here’s a screenshot of the first error:

Fix 1 – I had to log into the AWS console and re-configure the Security group that manages traffic to the virtual machine.  The inbound traffic rules needed to allow WinRM over HTTPS (Powershell remoting) on port 5986.

Problem 2 – when I ran the Set-DscLocalConfigurationManager command again after allowing Powershell remoting in the EC2 Security group, I got this error:

Fix 2 – I had to open the Local Group Policy editor on the AWS VM, enable trusted hosts for the WinRM client and add the source as a Trusted host.  Open gpedit.msc and go to Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRM Client.  From there, enable Trusted Hosts and add the source server.
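
If you prefer to make that change from an elevated Powershell console instead of gpedit (and Group Policy isn’t enforcing a conflicting value), something like this works; replace the placeholder with your actual source machine name:

# Add the source machine to the WinRM client's trusted hosts list
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'SourceComputerName' -Force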

Summary
This concludes the third article in the series on Configuration Management of a non-Azure VM.  Azure State configuration is a service that can be used to configure Azure, on-premises and other non-Azure VMs, and to keep them in a desired state.

 

 

 

 

 


Configuration Management of Non-Azure VMs with Azure Automation – Part 2

In part 1 of this series, I discussed the Inventory tool that is part of Azure Automation’s config management and how to on-board an AWS VM for management.  In this article, I will cover Change Tracking.  With Inventory, you get a report on the Windows files, registry and services, as well as the installed software, for the machines being monitored.  However, Change Tracking takes it a step further and provides a notification whenever there is a change to anything that’s being tracked on the machine.  It also provides the capability to perform queries against the change logs.  Let’s take a look and see how it works.

Since I previously enabled Inventory, there’s nothing that needs to be done to enable Change Tracking.  If you go to your Automation account in the Azure Portal, look under the Configuration Management section and click on “Change Tracking“.  On the main screen, there’s a graphical layout of changes made to Windows services, registry, files, Linux daemons and software being tracked on the AWS VM that was previously on-boarded.  In this case, there was 1 software change and a large number of changes to Windows services.

Below the graph, there’s a tabular layout indicating the resource name, resource type, source machine and time of change.

You can click on one of the resources to see more details about how it was modified.  If I select one of the Windows Update changes, more details are provided about exactly what changed.  Below it shows that the service state changed from “Running” to “Stopped”.

Configure Change Tracking
Change Tracking is customizable by allowing you to configure which Registry keys, services or files are tracked.  To configure Change Tracking, click on the “Edit Settings” button at the top of the main screen.  This will bring you to the Workspace Configuration page for Change Tracking.

Here you will see sections for each type of resource that can be tracked.  Under “Windows Registry” in the below screenshot, you will see a list of recommended keys to track.  They’re disabled by default.  To enable a key, just click on it and set Enabled to “True”.  You can also add a registry key by clicking on the “Add” button.

Under “Windows Files” you can view the files being tracked.  These have to be added manually.  For example, I have added the “c:\windows\system32\drivers\etc” folder path.  If a change is made to the Hosts file or any other file in this location, it will be recorded in Change Tracking.  Also, the process to enable tracking for Linux files is the same.

Another thing you can do under Workspace Configuration is enable the content of modified files to be saved in a Storage account.  To get started, click on the “File Content” tab.  Enabling this feature will generate a Shared Access Signature (SAS) URI that can be used to access the stored data.  I won’t go into the details on how this works, but it’s very helpful if you would like to provide others access to the change data.

Lastly, let’s see what can be done under “Windows Services”.  Tracking for Windows services is enabled by default, but you can adjust how frequently changes to them are collected.  The default setting is every 30 minutes, and it can be set anywhere from 10 seconds up to that.  Here, I set the collection frequency to 10 minutes.

Querying Change Tracking Logs
Now I am going to discuss my favorite feature in Change Tracking – Logs search and query.  Click on “Log Analytics” to open the page where you can execute queries against the logged changes.

In the left-side pane, there’s a schema built on the Workspace called “Trackchangesspace” that Log Analytics uses for Change Tracking.  It contains two databases, ChangeTracking and LogManagement, which hold records of the data collected by the tracking tool.  The ChangeTracking database can be expanded to show its tables, ConfigurationChange and ConfigurationData.  Queries are executed in the main window, which is also where the results are shown.  When you first open the Log Analytics page, all the changes collected in the last 24 hours are displayed by default in a tabular view.

By clicking on the “>” symbol next to an item, you can get more details.  For example, below are details about one of the software modifications.  It shows the SoftwareName, Computer, ChangeCategory, Previous and Current states.  In this case, an update for Windows Defender Antivirus was added.

What if you want to run a query of all Windows services that are stopped?  In the top pane, you can write a KQL query that searches the ConfigurationChange table for all services whose SvcState is equal to Stopped, in the following format:
ConfigurationChange | where SvcState == "Stopped".  I was pleasantly surprised to discover that the query editor has built-in tab completion and IntelliSense.  This is very helpful for lousy typists like me, or if you are used to IDEs like VS Code or the Powershell ISE.  One thing to note: the query is case-sensitive.  If you type "stopped" with a lowercase "s", the query will not yield any results.  If you are a Powersheller, you are used to comparison values not being case-sensitive.
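
As a quick illustration of that difference for Powershell folks (just a sketch of the comparison behavior, nothing you need for the query itself):

# Powershell string comparison is case-insensitive by default, so both of these return True
'Stopped' -eq 'Stopped'
'Stopped' -eq 'stopped'
# KQL's == operator, by contrast, is case-sensitive, so "stopped" will not match "Stopped"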

Here’s another time saver for building queries:  In the below figure, I went back to the results pane and expanded one of the Windows services to get the detailed information.  By placing the cursor next to “SvcState”, two buttons will appear:  a plus and minus.  Clicking on the plus button will add the search logic for all Windows services where the SvcState is stopped.  Then just click “Run” to get the results.  Nice!

Change Alerting
Lastly, Change Tracking provides the ability to configure alert rules if you would like to receive notifications on certain changes.  In the Logs window, click on “New alert rule” to set this up.  In the screen below, set the resource as the Workspace used by Change Tracking, add the appropriate logic for the condition, and define how to receive notifications under Action groups.  You can have the alerts emailed or sent via text message.

Supplemental resources
The following links are to articles that discuss topics that are related to material covered in this article:
1. https://docs.microsoft.com/en-us/azure/kusto/query/ – an overview of Kusto Query Language (KQL)
2. https://docs.microsoft.com/en-us/azure/automation/automation-change-tracking – an overview of Change tracking
3. https://docs.microsoft.com/en-us/azure/automation/change-tracking-file-contents –  how to view contents of tracked files

This concludes Part 2 of 3 on Configuration Management using Azure Automation.  In summary, this article explained how to utilize Change Tracking to monitor particular resources on an AWS virtual machine, how to perform queries against the logged changes, and how to set up alerts for those changes.  The final article, Part 3, will cover Azure State configuration (DSC).

 

 

 

 

 

 

 

 

 

 


Configuration Management of Non-Azure VMs with Azure Automation – Part 1

One of the first questions asked whenever a system or application goes down is “what changed recently?”  Ill-planned or unplanned changes are often the underlying cause of failures.  And if you live on the Operations side of the IT fence like me, a large portion of your existence is dedicated to mitigating the negative impact of these age-accelerating events.

To help in this ongoing battle, you can leverage Azure Configuration Management.  This feature is composed of three tools:  Inventory, Change Tracking and State configuration (DSC).  Inventory gives you a view of installed software, Windows services, registry keys, files and Linux daemons that are present on your machine.  Change Tracking monitors and reports changes to these items.  And DSC allows you to compile configuration files to maintain a desired state for these monitored sources.  Configuration Management can be used for machines that are in Azure of course, but also in an on-premises data center or other cloud services.  For non-Azure machines, the data is gathered by an installed agent and reported back to Azure, where it is stored in a Log Analytics Workspace.

This article will be the first in a three-part series that covers the components of Azure Configuration Management and how they can be used with a virtual machine running in AWS EC2.  In part 1, I will show you how to get started using Inventory and provide an overview of its capabilities.  Part 2 will cover Change Tracking and Part 3 will outline State configuration (DSC).

Prerequisites
Here’s what is needed:

  • An Automation account in Azure
  • A Log Analytics Workspace
  • A non-Azure VM (AWS)

I’m not going to outline the steps of creating the above resources.  Here, I will just focus on the process of enabling and using Inventory.

How to Enable Inventory
In the Azure portal, go to your Automation account and select “Inventory” under Configuration Management.  On the popup screen that appears, select your subscription and the Log Analytics Workspace in the drop-down menus, and click on the enable button. This step will also enable Change Tracking.

The next step is to configure the virtual machine to communicate with Log Analytics.

Configure Non-Azure Machine
This Microsoft doc explains the following in more detail.  For this task, I will be using an AWS t2.micro instance of Windows 2016.  The first thing that needs to be done is to add registry subkeys that enable TLS 1.2 and configure the .Net Framework to use strong cryptography.  The Microsoft Monitoring Agent (MMA) that will be installed uses TLS 1.2 to report changes to the Log Analytics service in Azure.  Here are the Registry keys for TLS 1.2 and .NET 4.6 that need to be added (a scripted version follows the list):

  1. HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client – you will need to create the TLS 1.2 and Client subkeys.  And in the Client subkey, create the following DWORD values:
    • Enabled [Value = 1]
    • DisabledByDefault [Value = 0]
  2. HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319 – no subkeys need to be created; create the following DWORD value: SchUseStrongCrypto with a value of 1.
  3. HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319 – no subkeys need to be created; create the DWORD value SchUseStrongCrypto with a value of 1.
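
If you’d rather script these edits than click through regedit, a sketch along these lines covers the same three keys and values (run it in an elevated Powershell console):

# Enable TLS 1.2 for the Schannel client
$tls12Client = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client'
New-Item -Path $tls12Client -Force | Out-Null
New-ItemProperty -Path $tls12Client -Name Enabled -Value 1 -PropertyType DWord -Force | Out-Null
New-ItemProperty -Path $tls12Client -Name DisabledByDefault -Value 0 -PropertyType DWord -Force | Out-Null

# Tell .NET Framework 4.x to use strong cryptography (64-bit and 32-bit registry views)
$netKeys = 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
           'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319'
foreach ($key in $netKeys) {
    New-ItemProperty -Path $key -Name SchUseStrongCrypto -Value 1 -PropertyType DWord -Force | Out-Null
}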

Now that the Registry edits have been completed, the MMA can be installed.  There are three options for installing the agent: the installation wizard, command line or Powershell DSC.  In this case, I will use the command line.  Also, there are two agent options:  a 64-bit and 32-bit version.  Once the appropriate agent has been downloaded, open an elevated command prompt, navigate to the location of the file and run the following command to extract the MSI file to a folder path.  Finally, install the agent with the extracted MSI file.
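
The original screenshots of those two commands aren’t reproduced here.  Based on the Log Analytics agent documentation, they look roughly like the following when run from an elevated Powershell console; double-check the property names against the current doc, and substitute your own extraction path, workspace ID and workspace key:

# Extract the downloaded 64-bit agent installer to a folder
.\MMASetup-AMD64.exe /c /t:C:\MMA

# Install silently and attach the agent to the Log Analytics workspace
# (the workspace ID and key are under the Workspace's Advanced settings)
C:\MMA\setup.exe /qn ADD_OPINSIGHTS_WORKSPACE=1 `
    OPINSIGHTS_WORKSPACE_ID="<your workspace id>" `
    OPINSIGHTS_WORKSPACE_KEY="<your workspace key>" `
    AcceptEndUserLicenseAgreement=1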

Once the agent is installed, you can verify the installation a couple of ways.  In Control Panel, open the Microsoft Monitoring Agent applet to check that it’s installed, as seen here.

Also, in the Azure Portal, you can run a search query to check that the agent has a heartbeat and is communicating with Log Analytics.  Go to the Logs tab on your Workspace to run the query. The following figure shows there’s a heartbeat detected from the AWS EC2 instance.

Now that the VM is communicating with Azure, we can complete the onboarding of the AWS VM.  Go back to Inventory and you will see a new screen with various options: a button to add Azure VMs, a doc on how to add a non-Azure machine, and another button to manage machines.  Click on Manage Machines.

Now you can see the AWS EC2 instance as an available machine.  You have three options to choose from for enabling Change Tracking and Inventory.  I select the default, “Enable on all available machines”, and click Enable.

Inventory
Now that the Inventory has been enabled, the agent has been installed and the machine is reporting to Azure, let’s see what we get out of the box.  First, if you go to the Inventory tab of your Automation account, it will show you the machine(s) reporting, along with details about the machine (O/S, version, platform, etc).  Additionally, there are tabs to view the software installed on the node, along with Windows services, Windows Registry keys and Linux daemons if applicable.  You will need to configure Files and Registry keys to monitor under Edit settings.  I will demonstrate that in the next article on Change Tracking.

To see the installed software, click on the “Software” tab.  It will show the software name, version, publisher, last refreshed time and number of machines it’s installed on.

Also, go to the “Windows Services” tab to see details such as the service name, status, startup type, last refreshed time and number of machines.

Lastly, the Inventory tools gives you the ability to add machines to a Machine Group.  A default group is created when a machine is added to Inventory.

A Machine group is a group of machines defined by a Log Analytics KQL query.  For instance, if you go to Machine groups and select the “MicrosoftDefaultComputerGroup“, you can see the query that was used to generate the group as well as member machines.

You can create your own Machine group as well.  Click on “Create a Machine Group” on the Inventory screen and fill in the fields on the popup window below.  Here, I’m creating a group of machines that contain “EC2” in the name.  Once you enter a query in the definition box, validate the query and then create it.  A more practical use case would be a group of machines that are missing updates.

This completes Part 1 of Azure Configuration Management.  Here, I covered how to enable Inventory, install the monitoring agent on an AWS VM, and use Inventory to view a machine’s configuration.  In Part 2 of this series I will cover Change Tracking.

 

 


C# Intro for Powershellers

Let me begin by stating that I am not a programmer or C# expert.  I am a System engineer by trade, who has become proficient in Powershell scripting.  When I first began working in IT, about 13 years ago, the first programming language that I tried to learn was Perl.  At the beginning of that process, I immediately discovered programming is not for me and this relationship will not work.  So I dumped Perl after a few “dates” and never looked back. However, that was then.  Now, after having worked with Powershell to create scripts to automate tasks, I have developed a renewed interest in coding.  According to Jeffrey Snover, the inventor of Powershell, the intent behind Powershell was for it to be a glide path to C#.  I guess you can say, mission accomplished.

In this article, I’m going to create a basic C# application in Visual Studio while using Powershell as reference to explain some of the concepts in C#.  As the saying goes, use what you already know to learn new things.  However, let’s review a few basics first.

FUNDAMENTALS
Before we dive into C# coding, it’s necessary that we briefly cover the essential elements related to the .Net Framework architecture. There are two main components of .Net Framework – Common Language Runtime (CLR) and Class Library. In a nutshell, the CLR is responsible for converting the compiled code of C# into the native code of the underlying machine.  C# code is initially compiled into a format called Intermediate Language code (IL). This IL code is consumed by the CLR and converted into a format which is comprehensible to the underlying machine. This process flow is depicted in Figure 1.

The second component which makes up the .Net Framework architecture, the Class Library, is a set of namespaces, classes, interfaces and types that are used by the application to access the system’s functionality.  Here’s a small subset of the .Net Class Library in Figure 2.


Figure 2

Due to the number of classes, they need to be organized in a manner that is based on their function.  This is the purpose of a Namespace – a container of related classes.  Figure 3 below illustrates the structure of a .Net Framework namespace called “System”.  The System namespace consists of multiple classes such as Console, DateTime, Math and many more not presented in the diagram.  Powershell commands and C# code use namespaces to produce and manipulate instantiated instances (objects) of these classes, which contain various methods and properties.

As mentioned, within each namespace are classes.  A class (Figure 4) is a container that consists of various methods and types.  Methods allow you to perform certain actions against an object referenced by a class, and a type describes its properties.  In essence, a class can be seen as the blueprint used to build objects.  For instance, if you have a class called “Car”, it may have the following type categories: make, model, color, year, etc.  Also, it will have methods which determine the actions that can be performed against a “Car” object, such as start(), drive(), stop() and so on.  The types and methods of a class are known as the class members.

To build a fully functional C# application, we must assemble all of the necessary components that work in conjunction to produce a desired result.  For instance, you may have an application that needs to output to the screen or work with a collection of data.  These tasks will require you to make the System and System.Collections namespaces accessible to your application, along with the relevant classes.  As the number of namespaces that your application uses increases, they will need to be organized into a unit called an “Assembly”.  This is basically a DLL or EXE file (Figure 5).


Figure 5  C# Assembly (DLL or EXE)

Now that we have completed our review of the .Net Framework architecture and components, let’s proceed to doing some basic C# coding.

THE CODE

For this article, I will create a C# application that will retrieve the current date/time and output the results to the console screen.

                                                    Powershell

As you already know, in Powershell this is simply done by executing the “Get-Date” command shown in Figure 6.


Figure 6

Another point of significance is to know how Powershell reaches into the .Net Framework to produce this output.  If we pipe this command to “Get-Member”, we will see the TypeName, or the namespace and class, that the Get-Date command is referencing.  In Figure 7, we see it’s using the DateTime class, which is a member of the “System” namespace.  Therefore, our C# code must also use the System.DateTime class to achieve the same result.  For non-seasoned Powershellers, I omitted additional output produced by Get-Member in the Figure 7 screenshot.  The Get-Member command also displays the methods and properties of the objectified output that is passed to it via the pipeline.


Figure 7
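
For reference, the two commands behind Figures 6 and 7 are simply:

# Figure 6: get the current date and time
Get-Date

# Figure 7: inspect the .Net type behind the output (System.DateTime)
Get-Date | Get-Member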

                                                         C#

For this example, I will create a C# application that will retrieve the current date/time and output the results to the console screen.  I will be using Visual Studio Community 2017 to accomplish this task.

First, open Visual Studio and go to File > New > Project (CTRL + Shift + N) as shown below in Figure 8.


Figure 8

In the window that appears (Figure 9), expand Visual C#, select .Net Core and then choose Console App (.Net Core).


Figure 9

Notice you are given several settings to configure:

  • Name:  Give your Project a name.  I will call mine DateApp1.
  • Location:  Choose a location to store your project (or use the default location).
  • Solution:  You can create a new solution or use an existing one.  A solution contains the project and build information.  Also, it can have multiple projects.
  • Solution Name:  By default, this will be the same name you give the Project.  You can give it a different name but I will leave the default.
  • Create new Git repository:  Check this box to create a Git repository.  I will leave it unchecked.

Once all the settings are configured, hit “OK”.  You will now see below in Figure 10, a window with a basic structure of your C# program.


Figure 10

Let me explain the above:  By default, certain namespaces are automatically added to a new program, each of which is preceded by the “using” keyword.  But remember, for our project we will only need the “System” namespace.  It contains the class (DateTime) that our application requires.  Also, notice that a namespace is created that matches the name of the project, as well as a class called “Program”.  Both can be changed to a different name.  The final piece added is a static method that can accept multiple string parameters.  It has a “void” keyword, meaning it doesn’t return a value.  A static method is one that can be called without creating an instance of the class it’s a member of.  The following Powershell examples provide further explanation:


Figure 11  Powershell static method of the
DateTime class to add 5 days to the current date

Figure 12 Powershell instance method of the
DateTime class variable ($date) to add 5 days
to the current date
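
The screenshots for Figures 11 and 12 aren’t reproduced here, but the idea they illustrate is roughly the following (the exact commands in the screenshots may have differed slightly):

# Figure 11 (roughly): using the static :: syntax against the DateTime class
# directly, without first creating an instance in a variable
[DateTime]::Now.AddDays(5)

# Figure 12 (roughly): calling an instance method on a DateTime object
# that has been stored in the $date variable
$date = Get-Date
$date.AddDays(5)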

Figure 13 shows the structure of our DateTime app after removing the unused .Net namespaces, changing the class name, and adding the code to produce the current date on the screen.  To achieve this, create a variable called “Now” and assign the DateTime.Now property to it.  In C#, a variable is created by preceding the variable name with the name of the .Net class it’s using.  And finally, add the Console.WriteLine method with “Now” as its argument.  Also, notice the “System” namespace is now highlighted, which indicates it’s being used.


Figure 13

To compile and execute the above code, press CTRL + F5.  If the build is successful, you will get a pop-up Command window showing the current date and time (Figure 14).  By default, it’s displayed in a long date and time format.


Figure 14

Visual Studio will also show output detailing the Build results.  It displays the location of the application executable file and whether the Build succeeded or failed.


Figure 15

In summary, in this article I explained the fundamentals of .Net Framework and created a basic C# application that outputs the current date and time to the console.


MyIgnite – Reflections on Microsoft Ignite 2018

I attended this year’s Microsoft Ignite conference in Orlando, FL and decided I would provide my reflections on the event.  The annual conference provides a plethora of sessions on Microsoft technology offerings and solutions related to Microsoft 365, IoT, containers, DevOps, team collaboration, Azure services and more.  Also, there’s an Expo of various IT vendors, panel discussions on diversity in IT, and hands-on labs for IT skill development.  It’s a huge event with attendees from all walks of IT, from around the world.

The conference kicked off with a Keynote address from Microsoft CEO Satya Nadella.  In his opening speech, he outlined Microsoft’s vision of the next generation in IT.  This involves solutions which revolve around an Intelligent cloud and edge that transform products (business apps, gaming, infrastructure, etc.) and how IT organizations design their operations.  Traditionally, IT has been slow to adopt new technologies due to security concerns and policies.  Also, some IT shops are still afraid of the cloud and the perceived risks that it presents to business information.  However, this posture is no longer viable as users and business partners must be given the flexibility to be productive from any device and any location.

What’s new at Microsoft?  Two new features introduced at Ignite are Ideas and a refined Microsoft Search.  Ideas is a cool feature that uses AI to predict what a user will do and can offer a set of design suggestions when creating a PowerPoint presentation.  For instance, if you are designing a PowerPoint slide, Ideas will suggest a particular graphic image based on the written text in the slide.  It can also find content inconsistencies, such as a particular word spelled differently, and offer to remediate the differences.  You must have Office ProPlus to use this feature since it leverages the AI capabilities in the cloud.  It was also announced that Microsoft Search has been expanded to search across all Office products and device types.  By using Microsoft Graph and Bing, it intelligently provides customized results based on previous activities and work.

Also, Microsoft 365 now has a new Admin center.  As an improvement to the Office 365 Admin Center, it offers a more focused and centralized workplace for managing and securing resources in Microsoft’s cloud ecosystem.  If you are a Security Administrator for your organization, the Microsoft 365 Admin Center has an HTTP endpoint called security.microsoft.com that is a custom portal for security related responsibilities such as DLP, document classification and permissions.  Also, there’s an endpoint called admin.microsoft.com for managing users, groups and resources.  This approach falls in line with the concept of Just Enough Administration (JEA).

Microsoft is clearly implementing a full court press towards wider adoption of Azure and Office 365.  A majority of the sessions were related to the Azure cloud platform and its myriad offerings.  On-premises enterprise applications such as Exchange Server may not be dead, but they are definitely on the endangered list.  At past conferences, there would have been a variety of sessions around on-premises Exchange and related features, particularly in a year which has a new release of Exchange Server.  Not so this year.  There were a couple of sessions devoted to Exchange 2019, which is currently in Preview.  Although, the handwriting has been on the wall for several years that the focal point of messaging is the cloud.

In addition to technical skill development of staff, a very important part of IT is creating a work environment that is free of sexual harassment and racial biases.  The IT field is very male dominated and an unspoken reality is that it’s often a toxic world for women.  It was good to see in the session lineup several discussions highlighting the challenges that women face in IT; how to overcome biases; or creating a more inclusive workplace.

Microsoft has announced that Ignite will be held in Orlando again in 2019.  However, it will be during the first week of November as opposed to the last week of September.  This will mark the third year in a row that Ignite will be in Orlando.  Although it’s nice to be able to visit different cities, Orlando is a great location for the conference, which has nearly 30,000 attendees.  The weather is great, it’s close to the Disney theme parks (the closing celebration was at Universal Studios) and it’s not congested like other major cities.

Those are my thoughts and takeaways.  What stood out to you about this year’s Ignite?

 

 

 

 

 


Azure AD Attribute Hide and Seek

Azure AD Connect provides organizations with the ability to synchronize their on-premises users and groups to Azure Active Directory.  When synchronizing objects to Azure, administrators have the ability to control which users or groups are synchronized to the cloud.  Furthermore, it’s also possible to select which user or group attributes are synchronized.  Some organizations may have security policies that prohibit certain information, such as phone numbers and addresses, from appearing in the cloud.  Luckily, attributes can be easily filtered by unchecking the attribute on the AD connector object in Synchronization Service Manager.  However, what if there’s an attribute that is being synced, but does not appear on the Azure AD connector as a filterable option?  Here’s an example that shows you how to deal with that.

Let’s take a look at a user called TesterB in Powershell.  Using the Azure Powershell module (or Azure Cloud Shell), we can get the user object and its properties with the following command.  Notice that the City attribute for our user is set to New York.
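
The screenshot isn’t reproduced here, but with the AzureAD module the check looks something like this (the UPN is made up for illustration):

# Check the City property on the TesterB user
Get-AzureADUser -ObjectId 'TesterB@contoso.com' | Select-Object DisplayName, City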

We don’t want location information available in Azure AD.  Let’s log on to the Azure AD Connect server and open Synchronization Service Manager to filter this attribute.  Once there, click on the Connectors button.  You will see two connectors: one for Azure AD and the other for on-premises AD.  Select the on-premises AD connector.

On the Properties window for the AD connector, click on “Select Attributes” to see the list of attributes that are available and being synchronized to Azure.

As shown below in the AD connector attributes window, there isn’t a “City” attribute.  Also, the attributes with a check mark are being synced to Azure AD.  This view shows the LDAP name for each attribute, which is not always the same as its display name – the name shown for the user property above in Powershell.  To get to the bottom of this, we will need to look at the Attribute Editor for the user object in on-premises AD.

Open the TesterB user in ADUC and go to the Attribute Editor tab.  There you will see a list of the attributes that are available.  This view shows the LDAP name for each attribute and its value, if one is set.  Here we can see that the LDAP name for City is “l”, since its value is New York.

Now if you go back to the AD connector for verification, you will notice the attribute “l” is checked.  This will need to be unchecked.

Once you uncheck it and save the change, run the following command in Powershell to remove the City information from users in Azure AD and prevent it from being synced in the future.
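
Again, the screenshot isn’t shown here; assuming the standard ADSync module on the Azure AD Connect server, kicking off a full synchronization cycle is what pushes the removal out:

# Run a full sync so the now-filtered attribute is cleared from Azure AD
Start-ADSyncSyncCycle -PolicyType Initial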

A quick look at the City property for TesterB shows the location is no longer displayed.

That’s it!  If you ever have a situation where you can’t find an attribute to filter on the Azure AD connector, remember it probably has an LDAP name that is different from the display name.

 

 

 


A Guide to Passing Azure Exam 70-533

Back in April of this year, I passed Azure exam 70-533:  Implementing Microsoft Azure Infrastructure Solutions.  To be honest, this was actually my second attempt at the exam.  I failed on my first try about three weeks earlier.  But who’s counting?   All that matters is that I persisted and eventually passed.  I’m not mentioning this to be discouraging to anyone intending to take the exam.  However, my intention is to provide encouragement if you don’t pass the first time around.  No one likes seeing the word “Fail” on the exam printout, but it’s not the end of the world.  With that being said, I thought I would write an article outlining the methods I employed to prepare for the test.

Practical Experience

First and foremost, you will need hands-on experience to pass this test.  Azure exam 70-533 is not easy and cannot be passed solely by reading books or articles.  If you do not have access to Azure through your employer or a Visual Studio subscription, Microsoft offers a 30-day free trial, which comes with a $200 credit.  The free trial allows you to create resources in Azure such as VMs, virtual networks, storage accounts, web apps, containers, etc.

Once you setup your account, it’s important to have a strategy to learning the skills that are needed to pass the exam.  Microsoft has a list of objectives and related skills that are covered by the exam.  As of this writing, the objectives were last updated on March 29, 2018.  Under each category of objectives are a number of relevant tasks or exercises.  Go to the exam site and do exercises around all the listed skill areas.  Microsoft has excellent documentation that will help you develop the skills measured by exam 70-533.  Also, it’s very important to learn how to accomplish tasks using Powershell and ARM templates, instead of only in the Portal.  For instance, learn how to deploy VM’s and related resources from a script or template.  Perform all of the tasks until you feel you have mastered them.

Training Courses

Pluralsight courses were an asset that proved to be a critical component of my training.  The site offers a number of courses that cover topics such as Azure infrastructure solutions, storage, networking, application services, ARM templates, identity management and more.  Also, there is a learning path for exam 70-533 that consists of about 7 or 8 courses.  The training material is excellent, and consists of demos and exercise files that provide some practical training.  Pluralsight courses will give you a solid foundation.  Additionally, a monthly Pluralsight subscription will cost you $29.  The site is more than worth the price.  Another site that was helpful is Cloud Ranger.  The courses are free, but many of them are now outdated since they are designed around the old Classic model.

Practice Exam

I would advise you to get the official Measureup practice exams from Mindhub.  Some of the questions are on the Classic model, however the exam was still very helpful.  The real exam is all ARM, nothing on the Classic model.  The Measureup practice exam provides the option of taking the test in Practice mode, which is a customizable format.  For instance, you can select questions from a particular objective, or only questions that you missed during the last practice exam.  A huge benefit with the practice test is that it offers explanations for why an answer is correct and the others are wrong.  Also, each answer has links to documents that are relevant to each question.  DO NOT memorize the answer; know why an answer is correct.  I retook the full Practice exam (nearly 200 questions) until I consistently passed with at least a 95%.  At this point, I moved on to taking the practice test in Exam mode.  Mindhub currently has a special that offers an exam voucher, the practice test and 2 retakes for $266.00.

Helpful Links

The Exam 70-533 reference book has not been updated for a while, but this site has tips that were extracted from the book’s content.  These bullet points are important facts that you will need to remember for the exam.  Also, make sure you know the features and pricing of App Service plans and SQL Database service tiers.

I hope the information I provided was beneficial and will contribute towards you passing exam 70-533.  Good luck!

 

 

 

 

 

 
