Azure Firewall vs Network Security Group (NSG)

An important security measure when running workloads in Azure or any cloud service is to control the type of traffic that flows in and out of resources.  The resources can be virtual machines running a SQL database, web applications or domain services.  In Azure, there are two security features that can be used to manage both inbound and outbound traffic to resources:  Azure Firewall and Network Security Groups (NSGs).  In this article, I’m going to show how the two compare to each other and how they can be used together to protect traffic to resources in Azure.

Azure Firewall and NSG Overview
Let’s start with Network Security Groups.  An NSG filters traffic at the network layer and consists of security rules that allow or deny traffic based on 5-tuple information:
1. Protocol – such as TCP, UDP, ICMP
2. Source IP address
3. Source port
4. Destination IP address
5. Destination port

You can associate an NSG with a subnet or the network interface of an Azure VM.  Fun fact – in your mother’s Azure (the old classic model), it was possible to link an NSG to a VM as well as a subnet.  In line with best practices, it’s recommended to scope an NSG at the subnet level or the network interface, not both, since applying NSGs at both levels can make it complicated to troubleshoot network issues.  Also, the same NSG can be applied to multiple subnets.
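As a quick illustration, here’s a minimal sketch of creating an NSG with a single inbound rule and associating it with a subnet using the AzureRM PowerShell module (the resource names and address ranges are made up for the example):

# Allow inbound RDP from a management subnet; all names and ranges are illustrative
$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name 'allow-rdp' `
    -Protocol Tcp -Direction Inbound -Priority 100 -Access Allow `
    -SourceAddressPrefix '10.0.0.0/24' -SourcePortRange '*' `
    -DestinationAddressPrefix '*' -DestinationPortRange 3389

$nsg = New-AzureRmNetworkSecurityGroup -Name 'app-subnet-nsg' `
    -ResourceGroupName 'network-rg' -Location 'eastus' -SecurityRules $rdpRule

# Associate the NSG with an existing subnet and save the change
$vnet = Get-AzureRmVirtualNetwork -Name 'app-vnet' -ResourceGroupName 'network-rg'
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'app-subnet' `
    -AddressPrefix '10.0.1.0/24' -NetworkSecurityGroup $nsg
$vnet | Set-AzureRmVirtualNetwork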

You can probably imagine how NSG rules can become difficult to manage in large environments that contain multiple subnets and virtual machines. Who wants to manually input rules allowing traffic to individual IP addresses?  This is where Application Security Groups (ASGs) come to the rescue.  An ASG is a logical grouping of virtual machines that allows you to apply security rules at scale.  For example, if you have a group of VMs serving a web application, the VMs can be placed in an ASG called “webappvms”.  The webappvms group can then be added to a rule within an NSG allowing HTTP (TCP) traffic over port 80.  This alleviates the need to add individual IP addresses to the security rule.
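To make that concrete, here’s a hedged sketch of the same idea in PowerShell, assuming a version of the AzureRM.Network module that supports ASGs (the group and rule names are illustrative, and the web VMs’ network interfaces would still need to be added to the ASG):

# Create the ASG and reference it from an NSG rule instead of individual IP addresses
$asg = New-AzureRmApplicationSecurityGroup -ResourceGroupName 'network-rg' `
    -Name 'webappvms' -Location 'eastus'

$httpRule = New-AzureRmNetworkSecurityRuleConfig -Name 'allow-http' `
    -Protocol Tcp -Direction Inbound -Priority 110 -Access Allow `
    -SourceAddressPrefix 'Internet' -SourcePortRange '*' `
    -DestinationApplicationSecurityGroup $asg -DestinationPortRange 80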

Azure Firewall is a highly available, managed firewall service that filters network- and application-level traffic.  It has the ability to process traffic across subscriptions and VNets that are deployed in a hub-spoke model.  Azure Firewall is billed in two parts: 1) $1.25 per hour of deployment, regardless of scale, and 2) $0.016 per GB of data processed.

Azure Firewall and NSG Comparison
An NSG is a firewall, albeit a very basic one.  It’s a software-defined solution that filters traffic at the network layer.  Azure Firewall, however, is more robust.  It’s a managed firewall service that can filter and analyze L3-L4 traffic, as well as L7 application traffic.  Azure Firewall provides the same capabilities as an NSG, plus more. The following chart offers a comparative illustration of each solution:

Here are some definitions if you’re not familiar with all of the features listed in the above chart:

  • Service Tags – these are labels that represent a range of IP addresses for particular services such as Azure Key Vault, Data Lake, Container Registry, etc.  They are managed by Microsoft and cannot be customized.  You can learn more about them here. As an example, here’s a Service tag I have configured for Event Hub in an outbound NSG rule.  This same rule can also be created in Azure Firewall.
  • FQDN Tags – represent a group of fully qualified domain names of Microsoft services such as Windows Update or Azure Backup.  Like Service tags, they are maintained by Microsoft and cannot be customized.  There are significantly fewer FQDN tags than Service tags.  Go here to see the list of FQDN tags. Here’s an example of an FQDN tag I have for Windows Update in my Azure Firewall application rule (a scripted sketch of such a rule follows this list).  This allows you to avoid creating multiple application rules for each of the numerous Windows Update endpoints.  One tag to rule them all!
  • SNAT – Source Network Address Translation is a feature of Azure Firewall. It’s possible to configure Azure Firewall with a Public IP address (PIP) that is used to mask the IP addresses of Azure resources sending outbound traffic through the firewall.
  • DNAT – Destination Network Address Translation is used to translate incoming traffic destined for the firewall’s Public IP to the private IP addresses within the VNet.
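Here’s the scripted sketch promised above: an Azure Firewall application rule that allows Windows Update traffic by FQDN tag, using the AzureRM cmdlets.  Treat it as an outline rather than a definitive implementation – the firewall, rule and collection names are illustrative, and it assumes the firewall already exists:

# Application rule that allows outbound Windows Update traffic via the FQDN tag
$azFw = Get-AzureRmFirewall -Name 'hub-firewall' -ResourceGroupName 'hub-rg'

$wuRule = New-AzureRmFirewallApplicationRule -Name 'allow-windows-update' `
    -SourceAddress '10.1.0.0/16' -FqdnTag WindowsUpdate

$appCollection = New-AzureRmFirewallApplicationRuleCollection -Name 'app-rules' `
    -Priority 200 -ActionType Allow -Rule $wuRule

$azFw.ApplicationRuleCollections = $appCollection
Set-AzureRmFirewall -AzureFirewall $azFw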

Azure Firewall and NSG in Conjunction
NSGs and Azure Firewall work very well together and are not mutually exclusive or redundant.  You typically want to use NSGs when you are protecting network traffic in or out of a subnet.  An example would be a subnet that contains VMs that require RDP access (TCP port 3389) from a jumpbox. Azure Firewall is the solution for filtering traffic to a VNet from the outside.  For this reason, it should be deployed in its own dedicated subnet and isolated from other resources.  Azure Firewall is a highly available solution that automatically scales based on its workload.  Therefore, it should be in a /26 subnet to ensure there’s space for the additional instances that are created when it scales out.

A scenario to use both would be a Hub-spoke VNet environment with incoming traffic from the outside.  Consider the following diagram:

The above model has Azure Firewall in the Hub VNet, which has peered connections to two Spoke VNets.  The Spoke VNets are not directly connected to each other, but their subnets contain a User Defined Route (UDR) that points to the Azure Firewall, which serves as a gateway device.  Also, Azure Firewall is public facing and is responsible for protecting inbound and outbound traffic for the VNets.  This is where features like Application rules, SNAT and DNAT come in handy.  If you have a simple environment, then NSGs should be sufficient for network protection.
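As a rough sketch, a spoke subnet’s UDR pointing at the firewall would look something like this with the AzureRM cmdlets (the names are illustrative and 10.0.1.4 stands in for the firewall’s private IP):

# Send all outbound traffic from the spoke subnet through the Azure Firewall
$rt = New-AzureRmRouteTable -Name 'spoke1-routes' -ResourceGroupName 'hub-rg' -Location 'eastus'

Add-AzureRmRouteConfig -RouteTable $rt -Name 'default-via-firewall' `
    -AddressPrefix '0.0.0.0/0' -NextHopType VirtualAppliance -NextHopIpAddress '10.0.1.4'

Set-AzureRmRouteTable -RouteTable $rt

The route table is then associated with each spoke subnet so that its outbound traffic hairpins through the firewall in the hub.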

The following links are to Microsoft docs that provide detailed information about Azure Firewall and Network Security Groups and were used as source material for this article:
https://docs.microsoft.com/en-us/azure/virtual-network/security-overview
https://docs.microsoft.com/en-us/azure/firewall/overview
https://docs.microsoft.com/en-us/azure/firewall/integrate-lb


Azure Monitor Logs and Kusto Query Language (KQL)

The Azure platform consists of a variety of resources that generate large volumes of activity and diagnostic log data.  The source of this data can be subscription-level events such as deallocating a virtual machine, deleting a resource group or creating a load balancer – essentially any create, update or delete operation on a resource.  It can also include resource-level activity such as VM Windows event logs, VM performance data and web app response times – logs related to resource utilization. In this article, I will provide an overview of how Azure Monitor organizes all of this data, along with some examples of how Kusto Query Language (KQL) can be leveraged to parse this information.

AZURE MONITOR LOGS OVERVIEW
Azure Monitor Logs is responsible for collecting all log and telemetry data and organizing it in a structured format.  The data is stored in a Log Analytics Workspace, which organizes it into categorical units.  Within each unit or solution are tables that contain columns for various types of data.  The data types can be string, numerical or date/time.  The graphic below shows the Schema pane within Azure Monitor logs, which gives a hierarchical view of this structure.  By default, it is scoped to a Workspace that I created which has three categories called ChangeTracking, LogManagement and SecurityCenterfree, as you can see here:

By the way,  you have the ability to change the scope to either another Workspace, Resource group or subscription.  To do this, click here and the following window will appear:

But let’s go back to the originally scoped Workspace and see what it contains.  By expanding the LogManagement unit, you can see its tables.  Some of the tables are AuditLogs, AzureActivity, AppCenterError, etc.

As I mentioned, each table has columns for various types of data.  If we look inside the AppCenterError table above, there are icons next to each column that represent the type of data it contains.  For example, the icon next to the Createdat column means it holds date/time data; the icon next to the Errorline column indicates numerical data; and a third icon indicates a column with textual data.

KUSTO QUERY LANGUAGE (KQL)
Now that we’ve gone over the Azure Monitor Logs data platform, let’s take a look at some ways to analyze all of the data it holds using Kusto Query Language.  Every KQL query has at least one table as its search base.  However, the scope of a query can span all tables, multiple tables or only one table – you have the right to choose.  Also, when executing queries in the Query editor, the default time frame is 24 hours, which can be customized. Here are some examples.

The following query searches for the keyword “compliant”  in the AzureDiagnostics table.  It retrieves all of the records in the AzureDiagnostics table and pipes it to the “search” operator.
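The screenshot of the query isn’t reproduced here, but based on the description it would look roughly like this:

AzureDiagnostics | search "compliant"

The search operator scans every column of the piped records for the keyword, which is convenient but slower than filtering on a specific column.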

 

Here’s another way to perform the previous query.  Instead of retrieving all records in the table and then filtering, the search operation is executed directly against the AzureDiagnostics table. There are times, though, when the former query is more suitable.
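Again hedging since the original screenshot isn’t shown, that form of the query would be along these lines:

search in (AzureDiagnostics) "compliant"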

 

You can further narrow the scope of the previous query to a certain column in the AzureDiagnostics table.  This command takes all data from the AzureDiagnostics table and filters it down to records whose DSCResourceStatus_s column contains the keyword Compliant.
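A reconstruction of the column-scoped query, with the column name taken from the description above (the exact operator in the original screenshot may have differed):

AzureDiagnostics | where DSCResourceStatus_s contains "Compliant"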

 

How about performing queries across multiple tables? You can perform a query across an array of tables with the following example:
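For instance, something along these lines, using table names mentioned earlier in this article:

search in (AzureDiagnostics, AzureActivity, AuditLogs) "compliant"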

If you want to reduce the size of the result set, use the Limit operator.  The Take operator can be used as well, as the two operators are synonymous. Note that both return an arbitrary subset of records.
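For example, either of the following returns ten records from the table:

AzureDiagnostics | limit 10
AzureDiagnostics | take 10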

 

By default, the maximum number of records that can be returned by the Kusto data engine is 500,000, or no more than 64 MB in size.  Therefore, if you specify a limit of, say, 550,000, the query will fail.  If you find this to be unacceptable, you can summon the Dark Side of the Force and suppress the default query limit with the notruncation option to retrieve any number of records (not recommended, however):
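A hedged example of the syntax – the set statement is a client request property placed before the query itself:

set notruncation;
AzureDiagnostics | where TimeGenerated > ago(7d)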

In conclusion, Azure Monitor Logs is a valuable tool for performing data gathering and analysis.  The cloud has elevated the importance of data and of being able to parse it for insight into your environment.  With Azure Monitor Logs comes the ability to consume log data from a variety of resources and perform Kusto queries across a multitude of data sets.


Configuration Management of Non-Azure VM’s with Azure Automation – Part 3

We’ve now reached the final article in this three part series covering Configuration Management in Azure automation.  In Part 1, I discussed the Inventory tool and how to onboard an AWS EC2 virtual machine to Azure.  Part 2 covered Change tracking and how to monitor changes to various resources on the AWS instance.  In this article, Part 3, I will cover Azure State configuration (DSC) and how to register an AWS VM as a DSC node to apply a desired state.

An Overview of State configuration (DSC) Components
Azure State configuration builds upon Powershell’s Desired State Configuration (DSC), which was introduced in version 4.0 of Powershell.  These are the main components of Azure DSC, along with a brief description of each:

  • Configuration files – Powershell scripts written in a declarative syntax that define how a target should be configured.  These contain building blocks that define the nodes to be configured and the resources to be applied (a minimal example follows this list).
  • Resources – These are Powershell modules that contain the code that determines the state of a machine.
  • Local Configuration Manager (LCM) – This runs on the target nodes.  It serves as the engine that consumes and applies DSC configurations to the target machine.  It also has settings that determine how often a node checks for new configs from the Pull server in Azure automation and what action to take when a node drifts from its desired state.
  • DSC Metaconfigurations – These files define the settings for the Local Configuration Manager (LCM).
  • Pull Server – located in Azure automation.  It functions as a repository for the configuration files that are applied to DSC target nodes.
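Here is the minimal example promised above: a configuration file that keeps IIS installed on a node.  The configuration and node names are made up for illustration:

Configuration WebServerConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # The WindowsFeature resource ensures the Web-Server (IIS) role stays installed
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}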

Why Azure State configuration (DSC)
Leveraging DSC as a service in Azure eliminates the need to deploy and maintain an on-premises Pull server and all of its necessary components, such as SSL certificates.  Also, State configuration (DSC) data can be forwarded to Azure Monitor, which provides searching capabilities with Log Analytics and alerting on compliance failures.  Furthermore, if you’re doing the DevOps thing by implementing agile practices, State configuration fits in seamlessly with continuous deployments in Azure DevOps.

Prerequisites
To complete the tasks in this example, the following will be needed:
1. An Azure automation account
2. An AWS EC2 VM
3. WMF 5.1 – on the AWS VM
4. WinRM – on the AWS VM

Steps to Onboard the AWS EC2 VM to Azure
Even though DSC is a part of Azure’s Configuration Management, any machine that it manages has to be onboarded separately from Inventory and Change Tracking.  There are a few ways to onboard an AWS instance to State configuration (DSC):

  1. The AWS DSC Toolkit created by the Powershell Team – this method includes installing the toolkit using PSGet, then running commands to login to your Azure subscription and register the AWS EC2 instance.  Go here to get more details.
  2. A Powershell DSC script – involves running a Powershell DSC configuration script locally on the AWS EC2 instance.  The script will generate a Metaconfig file that contains settings for the Local Configuration Manager (LCM). This Microsoft doc explains more.
  3. Azure Automation Cmdlets – a quick method to produce the DSC Metaconfig file using AzureRm cmdlets in Windows Powershell on the AWS VM.

Options 1 and 3 will allow you to register the node in Azure without having to write a Powershell DSC configuration script to generate the Metaconfiguration.  This is fine as long as you’re comfortable with the default LCM settings. In this case, I will use Option 3 since it’s simple and this is only for demonstration purposes.  Here are the steps to be executed on the AWS VM:

  1. In a Powershell console, install the AzureRm module from the Powershell Gallery using  install-module AzureRM
  2. Login to Azure using login-azurermaccount
  3. Download the Powershell DSC Metaconfigurations from Azure automation with the following Powershell command:
    $Params = @{
    ResourceGroupName = 'myautogroup'; 
    AutomationAccountName = 'myautoacct1'; 
    ComputerName = @('IP address or computer name'); #this will be the Private IP not Public 
    OutputFolder = "$env:UserProfile\Desktop\";
    }
    Get-AzureRmAutomationDscOnboardingMetaconfig @Params
    
  4. Run Set-DscLocalConfigurationManager -Path "$env:UserProfile\Desktop\"

To verify that the AWS VM has been successfully onboarded, go to the Automation account in the Azure portal.  Under Configuration Management, click on State configuration (DSC) to go to the main page, where it shows the EC2 instance and its configuration status.

The next step is to assign a configuration to the node.

Add and Assign State Configurations
Now that the AWS VM is registered as a DSC node in Azure, configuration files can be assigned to it.   This can be done by composing your own configuration files and uploading them to Azure; or using the ones available in the Gallery.

If you select “Configurations“, there’s an option to upload a configuration file or compose one in the Portal.  I will click “Add” and upload a previously created config file called “WindowsFeatureSet.ps1” that will ensure the IIS and Web server features are enabled.

Next, browse to the location of the configuration script file and hit “Ok” to import it.

I’m not going to walk through the steps of writing a DSC script, but only provide an overview of assigning it to a node.  Once the import is complete, the config file is now available under “Configurations“.

Before it’s assigned to a node, the uploaded file will need to be compiled into a MOF document by a compilation job in Azure automation.  To begin this process, select the imported config file and click on the “Compile” button.

Once complete, the file will show as compiled and will be in the format of filename.nodename.  At this point, the configuration can be assigned to the node.

To assign the configuration, select the DSC node from the Nodes screen and click on the “Assign node configuration” button.

Confirm the assignment by clicking “Ok”.

The following figure shows that the configuration file has been assigned to the DSC node.

Here I remoted into the AWS VM and verified that IIS has been installed.

Powershell commands to manage State configuration (DSC)
The following Powershell script and commands can be used to complete some of the above tasks that were done in the Portal.

Script to upload config file and compile it:

Import-AzureRmAutomationDscConfiguration `
    -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 `
    -SourcePath "$env:UserProfile\Desktop\AzureAutomationDsc\WindowsFeatureSet.ps1" `
    -Published -Force

$jobData = Start-AzureRmAutomationDscCompilationJob `
    -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 `
    -ConfigurationName Featureset

$compilationJobId = $jobData.Id

Get-AzureRmAutomationDscCompilationJob `
    -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 `
    -Id $compilationJobId

Command to view DSC Nodes
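A likely form of this command, reusing the resource group and automation account names from the script above:

Get-AzureRmAutomationDscNode -ResourceGroupName myautogroup -AutomationAccountName myautoacct1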

Command to view the status of a DSC compilation job
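Again reusing the names from above, with the job ID captured by the earlier script:

(Get-AzureRmAutomationDscCompilationJob -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 -Id $compilationJobId).Status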

Command to view a DSC configuration
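A likely form, using the configuration name compiled earlier:

Get-AzureRmAutomationDscConfiguration -ResourceGroupName myautogroup -AutomationAccountName myautoacct1 -Name Featureset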

Troubleshooting – Notes from the Field
During the process of onboarding the AWS VM to Azure, I ran into a couple of errors when running the Set-DscLocalConfigurationManager command.

Problem 1 – Here’s a screenshot of the first error:

Fix 1 – I had to log into the AWS console and re-configure the Security group that manages traffic to the virtual machine.  The inbound traffic rules needed to allow WinRM over HTTPS (Powershell remoting) on port 5986.

Problem 2 – when I ran the Set-DscLocalConfigurationManager command again after allowing Powershell remoting in the EC2 Security group, I got this error:

Fix 2 – I had to open the Local Group Policy editor on the AWS VM, enable trusted hosts for the WinRM client and add the source as a Trusted host.  Open gpedit.msc and go to Computer Configuration > Administrative Templates > Windows Components > Windows Remote Management (WinRM) > WinRM Client.  From there, enable Trusted Hosts and add the source server.
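If the setting isn’t locked down by Group Policy, the same change can also be made from an elevated Powershell prompt on the VM (replace the placeholder with your source machine’s name or IP):

Set-Item WSMan:\localhost\Client\TrustedHosts -Value '<source server name or IP>' -Force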

Summary
This concludes the third article in the series on Configuration management of a non-Azure VM.  Azure State configuration is a service that can be used to configure Azure, on-premises and other non-Azure VMs and maintain them in a desired state.


Configuration Management of Non-Azure VM’s with Azure Automation – Part 2

In part 1 of this series, I discussed the Inventory tool that is part of Azure Automation’s config management and how to on-board an AWS VM for management.  In this article, I will cover Change Tracking.  With Inventory, you get a report on the Windows files, registry and services, as well as installed software, for the machines being monitored.  However, Change Tracking takes it a step further and provides a notification whenever there is a change to anything that’s being tracked on the machine.  It also provides the capability to perform queries against the change logs.  Let’s take a look and see how it works.

Since I previously enabled Inventory, there’s nothing that needs to be done to enable Change Tracking.  If you go to your Automation account in the Azure Portal, look under the Configuration Management section and click on “Change Tracking“.  On the main screen, there’s a graphical layout of changes made to Windows services, registry, files, Linux daemons and software being tracked on the AWS VM that was previously on-boarded.  In this case, there was 1 software change and a large number of changes to Windows services.

Below the graph, there’s a tabular layout indicating the resource name, resource type, source machine and time of change.

You can click on one of the resources to see more details about how it was modified.  If I select one of the Windows Update changes, more details are provided about exactly what changed.  Below it shows that the service state changed from “Running” to “Stopped”.

Configure Change Tracking
Change Tracking is customizable by allowing you to configure which Registry keys, services or files are tracked.  To configure Change Tracking, click on the “Edit Settings” button at the top of the main screen.  This will bring you to the Workspace Configuration page for Change Tracking.

Here you will see sections for each type of resource that is able to be tracked.  Under “Windows Registry” in the below screenshot, you will see a list of recommended keys to track.  They’re disabled by default.  To enable a key, just click on it and set enabled to “true”.  You can also add a registry key by clicking on the “Add” button.

Under “Windows Files” you can view the files being tracked.  These have to be added manually.  For example, I have added the “c:\windows\system32\drivers\etc” folder path.  If a change is made to the Hosts file or any other file in this location, it will be recorded in Change Tracking.  Also, the process to enable tracking for Linux files is the same.

Another thing you can do under Workspace Configuration is enable the content of modified files to be saved in a Storage account.  To get started, click on the “File Content” tab. Enabling this feature will generate a Shared Access Signature (SAS) URI that can be used to access the stored data.  I won’t go into the details of how this works, but it’s very helpful if you would like to provide others access to the change data.

Lastly, let’s see what can be done under “Windows Services”.  The tracking for Windows services is enabled by default, but you can adjust the frequency that changes to them are reported.  The default setting is every 30 minutes, but you can adjust to anywhere between that and 10 seconds.  Here, I set the collection frequency to 10 minutes.

Querying Change Tracking Logs
Now I am going to discuss my favorite feature in Change Tracking – Logs search and query.  Click on “Log Analytics” to view the page to execute queries against the logged changes.

In the left side pane, there’s a schema that is built on the Workspace called “Trackchangesspace” that Log Analytics uses for Change tracking.  In here there are two databases, ChangeTracking and LogManagement, which hold records of the data collected by the tracking tool.  In the screenshot, the ChangeTracking database is expanded to show its tables, ConfigurationChange and ConfigurationData.  The main window is where queries are executed and the results are shown.  When you first open the Log Analytics page, all the changes collected in the last 24 hours are displayed by default in a tabular view.

By clicking on the “>” symbol next to an item, you can get more details.  For example, below are details about one of the software modifications.  It shows the SoftwareName, Computer, ChangeCategory, Previous and Current states.  In this case, an update for Windows Defender Antivirus was added.

What if you want to run a query of all Windows services that are stopped?  In the top pane, you can write a KQL query that searches the ConfigurationChange table for all services whose SvcState is equal to Stopped, in the following format:
ConfigurationChange | where SvcState == "Stopped".  I was pleasantly surprised to discover that the query editor has built-in tab completion and intellisense.  This is very helpful for lousy typists like me, or if you are used to IDEs like VS Code or the Powershell ISE.  Do note this: the query is case sensitive. If you type “Stopped” with a lower case “s”, the query will not yield any results.  If you are a Powersheller, you’re used to comparison values not being case-sensitive.
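If you would rather not worry about casing, KQL also has case-insensitive operators.  For example, the =~ operator matches regardless of case:

ConfigurationChange | where SvcState =~ "stopped"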

Here’s another time saver for building queries:  In the below figure, I went back to the results pane and expanded one of the Windows services to get the detailed information.  By placing the cursor next to “SvcState”, two buttons will appear:  a plus and minus.  Clicking on the plus button will add the search logic for all Windows services where the SvcState is stopped.  Then just click “Run” to get the results.  Nice!

Change Alerting
Lastly, Change Tracking provides the ability to configure alert rules if you would like to receive notifications on certain changes.  In the Logs window, click on “New alert rule” to set this up.  In the screen below, set the resource as the Workspace used by Change tracking, add the appropriate logic for the condition and define how to receive notifications under Action groups.  You can have the alerts emailed or sent via text messages.

Supplemental resources
The following links are to articles that discuss topics that are related to material covered in this article:
1. https://docs.microsoft.com/en-us/azure/kusto/query/ – an overview of Kusto Query Language (KQL)
2. https://docs.microsoft.com/en-us/azure/automation/automation-change-tracking – an overview of Change tracking
3. https://docs.microsoft.com/en-us/azure/automation/change-tracking-file-contents –  how to view contents of tracked files

This concludes Part 2 of 3 on Configuration management using Azure automation.  In summary, this article explained how to utilize Change tracking to monitor particular resources on an AWS virtual machine, how to perform queries against logged changes and how to set up alerts for those changes. The final article, Part 3, will cover Azure State configuration (DSC).


MyIgnite – Reflections on Microsoft Ignite 2018

I attended this year’s Microsoft Ignite conference in Orlando, FL and decided I would provide my reflections on the event.  The annual conference provides a plethora of sessions on Microsoft technology offerings and solutions related to Microsoft 365, IoT, containers, DevOps, Team collaboration, Azure services and more.  There’s also an Expo of various IT vendors, panel discussions on diversity in IT, and hands-on labs for IT skill development.  It’s a huge event with attendees from all walks of IT from around the world.

The conference kicked off with a Keynote address from Microsoft CEO Satya Nadella.  In his opening speech, he outlined Microsoft’s vision of the next generation in IT.  This involves solutions which revolve around an Intelligent cloud and edge that transform products (business apps, gaming, infrastructure, etc.) and how IT organizations design their operations.  Traditionally, IT has been slow to adopt new technologies due to security concerns and policies.  Also, some IT shops are still afraid of the cloud and the perceived risks that it presents to business information.  However, this posture is no longer viable as users and business partners must be given the flexibility to be productive from any device and any location.

What’s new at Microsoft?  Two new features introduced at Ignite are Ideas and a refined Microsoft Search.  Ideas is a cool feature that uses AI to predict what a user will do and can offer a set of design suggestions when creating a PowerPoint presentation.  For instance, if you are designing a PowerPoint slide, Ideas will suggest a particular graphic image based on the written text in the slide.  It can also find content inconsistencies, such as a particular word spelled differently, and offer to remediate the differences.  You must have Office ProPlus to use this feature since it leverages the AI capabilities in the cloud.  It was also announced that Microsoft Search has been expanded to search across all Office products and device types.  By using Microsoft Graph and Bing, it intelligently provides customized results based on previous activities and work.

Also, Microsoft 365 now has a new Admin center.  As an improvement to the Office 365 Admin Center, it offers a more focused and centralized workplace for managing and securing resources in Microsoft’s cloud ecosystem.  If you are a Security Administrator for your organization, there is a dedicated portal at security.microsoft.com for security-related responsibilities such as DLP, document classification and permissions.  Likewise, there’s a portal at admin.microsoft.com for managing users, groups and resources.  This approach falls in line with the concept of Just Enough Administration (JEA).

Microsoft is clearly implementing a full court press towards wider adoption of Azure and Office 365.  A majority of the sessions were related to the Azure cloud platform and its myriad of offerings.  On-premises enterprise applications such as Exchange Server may not be dead, but they are definitely on the endangered list. At past conferences, there would have been a variety of sessions around on-premises Exchange and related features, particularly in a year with a new release of Exchange Server.  Not so this year.  There were a couple of sessions devoted to Exchange 2019, which is currently in Preview.  Then again, the handwriting has been on the wall for several years that the focal point of messaging is the cloud.

In addition to technical skill development of staff, a very important part of IT is creating a work environment that is free of sexual harassment and racial biases.  The IT field is very male dominated, and an unspoken reality is that it’s often a toxic world for women.  It was good to see in the session lineup several discussions highlighting the challenges that women face in IT, how to overcome biases, and how to create a more inclusive workplace.

Microsoft has announced that Ignite will be held in Orlando again in 2019.  However, it will be the first week of November as opposed to the last week of September.  This will mark the third year in a row that Ignite will be in Orlando.  Although it’s nice to be able to visit different cities, Orlando is a great location for the conference, which has nearly 30,000 attendees.  The weather is great, it’s close to the Disney theme parks (the closing celebration was at Universal Studios) and it’s not congested like other major cities.

Those are my thoughts and takeaways.  What stood out to you about this year’s Ignite?


Azure AD Attribute Hide and Seek

Azure AD Connect provides organizations with the ability to synchronize their on-premises users and groups to Azure Active Directory.  When synchronizing objects to Azure, administrators have the ability to control which users or groups are synchronized to the cloud.  Furthermore, it’s also possible to select which user or group attributes are synchronized.  Some organizations may have security policies that prohibit certain information, such as phone numbers and addresses, from appearing in the cloud.  Luckily, attributes can be easily filtered by unchecking the attribute on the AD connector object in Synchronization Service Manager.  However, what if there’s an attribute that is being synced, but does not appear on the AD connector as a filterable option?  Here’s an example that shows you how to deal with that.

Let’s take a look at a user called TesterB in Powershell.  Using the Azure Powershell module (or Azure Cloud shell), we can get the user object and its properties with the following command.  Notice that the City attribute for our user is set to New York.
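The screenshot of the command isn’t reproduced here, but it was along these lines, assuming the AzureAD module and an illustrative UPN:

Get-AzureADUser -ObjectId 'testerb@contoso.com' | Select-Object DisplayName, City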

We don’t want location information available in Azure AD.  Let’s log on to the Azure AD Connect server and open Synchronization Service Manager to filter this attribute.  Once there, click on the Connectors button.  You will see two connectors:  one for Azure AD and the other for on-premises AD.  Select the on-premises AD connector.

On the Properties window for the AD connector, click on “Select Attributes” to see the list of attributes that are available and being synchronized to Azure.

As shown below in the AD connector attributes window, there isn’t a “City” attribute.  Also, the attributes with a check mark are being synced to Azure AD.  This view shows the LDAP name for each attribute, which is not always the same as its display name – the display name is what the user property showed above in Powershell. To get to the bottom of this, we will need to look at the Attribute Editor for the user object in on-premises AD.

Open the TesterB user in ADUC and go to the Attribute Editor tab.  There you will see a list of the attributes that are available.  This view shows the LDAP name for each attribute and its value, if one is set.  The LDAP name for City is “l”, which you can tell because its value is set to New York.

Now if you go back to the AD connector for verification, you will notice the attribute “l” is checked.  This will need to be unchecked.

Once you uncheck it and save the change, run the following command in Powershell to remove the City information from users in Azure AD and prevent it from being synced in the future.
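The command isn’t shown in a reproduced screenshot, but after changing the attribute selection on the connector, a full synchronization is what clears the already-synced values, so it was most likely the ADSync module’s:

Start-ADSyncSyncCycle -PolicyType Initial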

A quick look at the City property for TesterB shows the location is no longer displayed.

That’s it!  If you ever have a situation where you can’t find an attribute to filter on the AD connector, remember that it probably has an LDAP name that is different from the display name.


A Guide to Passing Azure Exam 70-533

Back in April of this year, I passed Azure exam 70-533:  Implementing Microsoft Azure Infrastructure Solutions.  To be honest, this was actually my second attempt at the exam.  I failed on my first try about three weeks earlier.  But who’s counting?  All that matters is that I persisted and eventually passed.  I’m not mentioning this to discourage anyone intending to take the exam.  Rather, my intention is to provide encouragement if you don’t pass the first time around.  No one likes seeing the word “Fail” on the exam printout, but it’s not the end of the world.  With that being said, I thought I would write an article outlining the methods I employed to prepare for the test.

Practical Experience

First and foremost, you will need hands-on experience to pass this test.  Azure exam 70-533 is not easy and cannot be passed solely by reading books or articles.  If you do not have access to Azure through your employer or a Visual Studio subscription, Microsoft offers a 30-day free trial, which comes with a $200 credit.  The free trial allows you to create resources in Azure such as VMs, virtual networks, storage accounts, web apps, containers, etc.

Once you set up your account, it’s important to have a strategy for learning the skills that are needed to pass the exam.  Microsoft has a list of objectives and related skills that are covered by the exam.  As of this writing, the objectives were last updated on March 29, 2018.  Under each category of objectives are a number of relevant tasks or exercises.  Go to the exam site and do exercises around all the listed skill areas.  Microsoft has excellent documentation that will help you develop the skills measured by exam 70-533.  Also, it’s very important to learn how to accomplish tasks using Powershell and ARM templates, instead of only in the Portal.  For instance, learn how to deploy VMs and related resources from a script or template.  Perform all of the tasks until you feel you have mastered them.

Training Courses

Pluralsight courses were an asset that proved to be a critical component of my training.  The site offers a number of courses that cover topics such as Azure infrastructure solutions, storage, networking, application services, ARM templates, Identity management and more.  Also, there is a learning path for exam 70-533 that consists of about 7 or 8 courses.  The training material is excellent, and consists of demos and exercise files that provide some practical training.  Pluralsight courses will give you a solid foundation.  A monthly Pluralsight subscription will cost you $29, and the site is more than worth the price.  Another site that was helpful is Cloud Ranger.  The courses are free, but many of them are now outdated since they are designed around the old Classic model.

Practice Exam

I would advise you to get the official MeasureUp practice exams from Mindhub.  Some of the questions are on the Classic model; however, the exam was still very helpful.  The real exam is all ARM, with nothing on the Classic model.  The MeasureUp practice exam provides the option of taking the test in Practice mode, which is a customizable format.  For instance, you can select questions from a particular objective, or only questions that you missed during the last practice exam.  A huge benefit of the practice test is that it offers explanations for why an answer is correct and the others are wrong.  Also, each answer has links to documents that are relevant to the question.  DO NOT memorize the answers; know why an answer is correct.  I retook the full practice exam (nearly 200 questions) until I consistently passed with at least a 95%.  At that point, I moved on to taking the practice test in Exam mode.  Mindhub currently has a special that offers an exam voucher, the practice test and 2 retakes for $266.00.

Helpful Links

The Exam 70-533 reference book has not been updated in a while, but this site has tips that were extracted from the book’s content.  These bullet points are important facts that you will need to remember for the exam.  Also, make sure you know the features and pricing of App Service plans and SQL Database service tiers.

I hope the information I provided was beneficial and will contribute towards you passing exam 70-533.  Good luck!
